Launch of the Gavowen Ninja Wiki!

Hi all, just a short note to say I’ve created a wiki! As the About page says, it’s there to record info for my own use, but hopefully it’s useful for everyone out there.

Some pages are still placeholders, but there’s already meaty information in sections such as Samba, Docker and LEDE, plus other bits and pieces.

I basically had a whole bunch of text files in folders on my computer, and realised I could spin up a wiki at no additional cost with my hosting provider. So I converted all those little text snippets into a (semi-)coherent wiki, along with some bigger guides that I’ve put up.

When I add something of note I’ll link to it from the blog. Over time I should build up a decent catalog of info. I wish I had this wiki years ago as there would be a lot more content on there by now, but better late than never!

Hopefully you find it useful. In any case, check it out and let me know in the comments if you did! (I may add commenting to the wiki eventually, but I’m a bit wary of comment spam bots, which WordPress is great at filtering.)

Upgrading ESXi from 6.0.x to 6.5

I recently upgraded my home ESXi server from 6.0.0U2 to 6.5 using the instructions at TinkerTry.

My initial attempt was a fail:

[root@emperor-esxi:~] esxcli software profile update -p ESXi-6.5.0-4564106-standard -d
 Failed to download VIB.
        url = vmkplexer-vmkplexer-6.5.0-0.0.4564106
  localfile = Unable to download VIB from any of the URLs
 Please refer to the log file for more details.

I pressed up-arrow and tried again, with the same result. Then I tried the following, and that did the trick:

esxcli network firewall ruleset set -e true -r httpClient
esxcli software profile update -p ESXi-6.5.0-4564106-standard -d

Seems like the ESXi firewall was blocking the download of that “vmkplexer” VIB.

This site seems to be very helpful in knowing what the latest patch file is:


Install TVHeadend in a Proxmox LXC Container running Ubuntu 16.04

TVHeadend is my favourite TV Server.

There weren’t too many battles setting up the install, but there were some.

Step 1, of course, is to create your CT (container). It doesn’t need much, so I just gave it the following:

root@pve01:/etc/pve/lxc# cat 102.conf 
arch: amd64
cpulimit: 4
cpuunits: 4096
hostname: media
memory: 4096
net0: name=eth0,bridge=vmbr1,gw=,hwaddr=00:0A:DE:01:02:10,ip=,ip6=auto,tag=10,type=veth
onboot: 1
ostype: ubuntu
rootfs: ssdmirror:subvol-102-disk-1,size=32G
startup: order=4,up=5,down=5
swap: 4096

4 cores, 4 gigs of RAM/swap and a 32G disk. This is because I’ll also have Plex, SABnzbd, CouchPotato and Sonarr in the same CT.

A few things tripped me up. The default apt sources aren’t optimal for me, the local timezone wasn’t set, and TVH wouldn’t auto-start.

After creating the CT, some housekeeping:

root@media:/home/hts/.hts# cat /etc/apt/sources.list
deb xenial main restricted universe multiverse
deb xenial-updates main restricted universe multiverse
deb xenial-backports main restricted universe multiverse
deb xenial partner
deb xenial-security main restricted universe multiverse

### TVH source
deb stable main

Those “au” ones are much better for Australia. You can possibly find even faster ones with this guide, although I don’t know if that works with Ubuntu. It certainly works in Debian.

The “tvheadend” source for Ubuntu is as per here: I just go with the stable branch, as I’m not after any new features and like things not to break. The stable branch is regularly updated, so I don’t feel like I’m missing out.

The timezone wasn’t set so I fixed that with:

dpkg-reconfigure tzdata
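As a quick sanity check after reconfiguring, you can ask date to render the time in the zone you picked (Australia/Sydney here, since that’s my neck of the woods; substitute your own):

```shell
# Quick check that the zone resolves; %Z prints the timezone abbreviation
# in effect for the current date.
TZ=Australia/Sydney date +%Z   # AEST or AEDT depending on daylight saving
```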

Then I went to work with apt:

apt-get update
apt-get dist-upgrade
apt-get autoremove
apt-get install tvheadend

I started tvheadend with “service tvheadend start” and all looked good. The only problem was that it didn’t start on reboot. A Google search traced this to the installer not being quite compatible with systemd, but there is a fix that works fine for me (as per this bug ID):

root@media:~# systemctl enable tvheadend.service
tvheadend.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install enable tvheadend


For Australia you can get the TV icons by:

cd /usr/src
git clone

Then prefer picons over channel names in the settings and set the path to:


Migrating from another Server

I found this post to be very handy. In a nutshell:

## the "FROM" box
sudo service tvheadend stop
sudo -s
cd /home/hts/.hts
sudo tar cvfp ../tvheadend.tar tvheadend
cd /home/hts/

## the "TO" box
sudo service tvheadend stop
sudo -s
cd /home/hts/.hts
sudo mv tvheadend tvheadend-backup
tar xvfp ~/tvheadend.tar
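If you want to rehearse that tar round-trip before touching your real config, a throwaway directory under /tmp works fine (demo paths only):

```shell
# A dry run of the same pack/unpack cycle, using /tmp instead of /home/hts/.hts
mkdir -p /tmp/tvh-demo/tvheadend
echo "dummy" > /tmp/tvh-demo/tvheadend/config
cd /tmp/tvh-demo
tar cvfp ../tvheadend-demo.tar tvheadend      # pack, preserving permissions (p)
mkdir -p /tmp/tvh-restore
cd /tmp/tvh-restore
tar xvfp /tmp/tvheadend-demo.tar              # unpack, as on the "TO" box
ls tvheadend                                  # → config
```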

I was very glad I didn’t need to manually copy all my old settings across!


Install dnsmasq in a Proxmox LXC Container running Ubuntu 16.04

dnsmasq is a very handy DHCP server for the LAN. I also use it as a DNS forwarder, so I can use hostnames for all my virtual machines under the gavowen.local domain.

I had a few little battles setting this up so thought I’d share the step by step.

Step 1, of course, is to create your CT (container). It doesn’t need much, so I just gave it the following:

root@pve01:/etc/pve/lxc# cat 100.conf
#dnsmasq DNS/DHCP server
arch: amd64
cpulimit: 1
cpuunits: 1024
hostname: dnsmasq
memory: 512
net0: name=eth0,bridge=vmbr1,gw=,hwaddr=00:0A:DE:01:00:10,ip=,ip6=auto,tag=10,type=veth
onboot: 1
ostype: ubuntu
rootfs: ssdmirror:subvol-100-disk-1,size=8G
searchdomain: gavowen.local
startup: order=1,up=10,down=5
swap: 512

1 core, half a gig of RAM/swap and an 8G disk is plenty. That’s my standard amount for smaller containers (CTs).

A few things tripped me up. The default apt sources aren’t optimal for me, the local timezone wasn’t set, and “resolvconf” was screwing up dnsmasq.

After creating the CT, some housekeeping:

root@dnsmasq:~# cat /etc/apt/sources.list
deb xenial main restricted universe multiverse
deb xenial-updates main restricted universe multiverse
deb xenial-backports main restricted universe multiverse
deb xenial partner
deb xenial-security main restricted universe multiverse

Those “au” ones are much better for Australia. You can possibly find even faster ones with this guide, although I don’t know if that works with Ubuntu. It certainly works in Debian.

The timezone wasn’t set so I fixed that with:

dpkg-reconfigure tzdata

Then I went to work with apt:

apt-get update
apt-get dist-upgrade
apt-get autoremove
apt-get remove resolvconf
apt-get install dnsmasq

The resolvconf fix I found here. If you don’t remove resolvconf, or fix the issue another way, you get something like this:

root@dnsmasq:/etc# service dnsmasq status 
* dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server
   Loaded: loaded (/lib/systemd/system/dnsmasq.service; enabled; vendor preset: enabled)
  Drop-In: /run/systemd/generator/dnsmasq.service.d
           `-50-dnsmasq-$named.conf, 50-insserv.conf-$named.conf
   Active: active (running) since Sun 2016-09-18 03:24:07 UTC; 3s ago
  Process: 910 ExecStop=/etc/init.d/dnsmasq systemd-stop-resolvconf (code=exited, status=0/SUCCESS)
  Process: 953 ExecStartPost=/etc/init.d/dnsmasq systemd-start-resolvconf (code=exited, status=0/SUCCESS)
  Process: 944 ExecStart=/etc/init.d/dnsmasq systemd-exec (code=exited, status=0/SUCCESS)
  Process: 943 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCCESS)
 Main PID: 952 (dnsmasq)
   CGroup: /system.slice/dnsmasq.service
           `-952 /usr/sbin/dnsmasq -x /var/run/dnsmasq/ -u dnsmasq -r /var/run/dnsmasq/resolv.conf -7 /etc/dnsmasq.d,.dpk

Sep 18 03:24:06 dnsmasq dnsmasq[943]: dnsmasq: syntax check OK.
Sep 18 03:24:06 dnsmasq dnsmasq[952]: started, version 2.75 cachesize 500
Sep 18 03:24:06 dnsmasq dnsmasq[952]: compile time options: IPv6 GNU-getopt DBus i18n IDN DHCP DHCPv6 no-Lua TFTP conntrack ipset au
Sep 18 03:24:06 dnsmasq dnsmasq-dhcp[952]: DHCP, IP range --, lease time 12h
Sep 18 03:24:06 dnsmasq dnsmasq[952]: using local addresses only for domain gavowen.local
Sep 18 03:24:06 dnsmasq dnsmasq[952]: no servers found in /var/run/dnsmasq/resolv.conf, will retry
Sep 18 03:24:06 dnsmasq dnsmasq[952]: read /etc/hosts - 5 addresses
Sep 18 03:24:06 dnsmasq dnsmasq[952]: read /etc/banner_add_hosts - 0 addresses
Sep 18 03:24:07 dnsmasq dnsmasq[953]: /etc/resolvconf/update.d/libc: Warning: /etc/resolv.conf is not a symbolic link to /run/resolv
Sep 18 03:24:07 dnsmasq systemd[1]: Started dnsmasq - A lightweight DHCP and caching DNS server.

The DHCP part of dnsmasq works fine, but DNS breaks like this (tcpdump):

03:38:02.900866 IP dnsmasq.gavowen.local.domain > 48699 Refused 0/0/0 (45)
03:38:03.887415 IP > dnsmasq.gavowen.local.domain: 4837+ A? (45)
03:38:03.887523 IP dnsmasq.gavowen.local.domain > 4837 Refused 0/0/0 (45)
03:38:04.076221 IP > dnsmasq.gavowen.local.domain: 59968+ A? (45)
03:38:04.076306 IP dnsmasq.gavowen.local.domain > 59968 Refused 0/0/0 (45)
03:38:05.068785 IP > dnsmasq.gavowen.local.domain: 24507+ A? (45)
03:38:05.068892 IP dnsmasq.gavowen.local.domain > 24507 Refused 0/0/0 (45)

Now for my /etc/dnsmasq.conf:

root@dnsmasq:~# cat /etc/dnsmasq.conf
# Configuration file for dnsmasq.

## SERVER ##

# listen interface and address

## DNS ##

local=/gavowen.local/  # domain(s) to search local /etc/hosts
cache-size=500 # set DNS lookup cache to 500 entries
no-negcache    # don't do negative caching
domain-needed  # never forward plain names
bogus-priv     # never forward bogus private (RFC1918) addresses

# block 'sitefinder' wildcard redirects from VeriSign and others for bogus A records

# no LDAP server for the local domain
#srv-host=_ldap._tcp.gavowen.local # no LDAP server for the local domain

# route rDNS to this server

## DHCP ##


dhcp-option=option:router, # default route
# dhcp-option=option:dns-server,,
dhcp-option=23,50 # set default IP TTL to 50

# Windows clients and Samba
dhcp-option=19,0              # option ip-forwarding off
dhcp-option=44,    # set netbios-over-TCP/IP nameserver(s) aka WINS server(s)
dhcp-option=45,    # netbios datagram distribution server
dhcp-option=46,8              # netbios node type
dhcp-option=252,"\n"          # send an empty WPAD option. Windows 7 and possibly later
dhcp-option=vendor:MSFT,2,1i  # Windows release DHCP lease when it shuts down
# These are the node types for netbios options:
#   1 = B-node, 2 = P-node, 4 = M-node, 8 = H-node

# FQDN settings for DHCP

# static leases

Touch the following DHCP leases file:

touch /var/lib/misc/dnsmasq.leases #DHCP leases file
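For what it’s worth, the leases file is plain text with one lease per line – expiry epoch, MAC, IP, hostname and client-id. A quick awk makes it readable (the sample data here is made up; on the live box point it at /var/lib/misc/dnsmasq.leases):

```shell
# dnsmasq keeps one lease per line:
#   <expiry-epoch> <MAC> <IP> <hostname> <client-id>
# Sample data only; substitute the real leases file on the live box.
cat > /tmp/dnsmasq.leases.sample <<'EOF'
1474166400 00:0a:de:01:02:10 192.168.10.20 media 01:00:0a:de:01:02:10
1474166500 00:0a:de:01:00:10 192.168.10.2 dnsmasq 01:00:0a:de:01:00:10
EOF
awk '{print $4, $3}' /tmp/dnsmasq.leases.sample   # hostname, then IP
```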

Also update /etc/hosts so you can easily ping hosts on your network:

root@dnsmasq:~# cat /etc/hosts
localhost
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# --- BEGIN PVE ---
dnsmasq.gavowen.local dnsmasq
# --- END PVE ---
.69.10.40 pve01.gavowen.local pve01
crashplan.gavowen.local crashplan
media.gavowen.local media
sarlacc.gavowen.local sarlacc

Much easier than IP addresses:

root@dnsmasq:~# ping sarlacc
PING sarlacc.gavowen.local ( 56(84) bytes of data.
64 bytes from sarlacc.gavowen.local ( icmp_seq=1 ttl=64 time=0.333 ms
64 bytes from sarlacc.gavowen.local ( icmp_seq=2 ttl=64 time=0.234 ms
64 bytes from sarlacc.gavowen.local ( icmp_seq=3 ttl=64 time=0.214 ms
--- sarlacc.gavowen.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.214/0.260/0.333/0.053 ms
root@dnsmasq:~# ping media
PING media.gavowen.local ( 56(84) bytes of data.
64 bytes from media.gavowen.local ( icmp_seq=1 ttl=64 time=0.283 ms
64 bytes from media.gavowen.local ( icmp_seq=2 ttl=64 time=0.031 ms
64 bytes from media.gavowen.local ( icmp_seq=3 ttl=64 time=0.027 ms
--- media.gavowen.local ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1998ms
rtt min/avg/max/mdev = 0.027/0.113/0.283/0.120 ms

Last but not least, start up dnsmasq:

service dnsmasq start
service dnsmasq status

Hopefully now you are up and away with a nifty little DNS and DHCP server.

iPhone 7 vs Samsung Galaxy Note 7: Australian Phone Bands

After the underwhelming iPhone 7 launch and the lack of dual SIM, I’m now well in the market for a dual SIM Android phone, and the Samsung Galaxy Note 7 is currently in the lead – after the exploding battery recall is complete!

Before purchasing though, I thought I’d look at the mobile bands to make sure the dual SIM grey-market import is compatible with Australian carriers – particularly Vodafone and Telstra, which will be the carriers I’ll use (one SIM from each).

I googled “dual SIM Samsung Galaxy Note 7”, found the model ID #N930FD (“D” for “dual”), and a great website that lists all the supported frequencies. Apple of course lists the details in the iPhone 7 specs.

As for details of what’s used in Australia, the Mobile Network Guide was very handy in putting this together, as was the Whirlpool article on the Australian Mobile Networks.

Putting the Aussie frequencies in a table:

Tech | Band | iPhone 7 Plus | Galaxy Note 7 | Carrier* | Notes
2G | 900 (E-GSM) | Y | Y | (T), (O), (V) | Telstra to shut down by end of 2016; Optus by April 2017. Optus and Vodafone have been refarming to 3G in regional and rural areas. Vodafone had not announced shutdown plans. Update: Vodafone to exit 2G by 1 October 2017.
2G | 1800 (DCS) | Y | Y | retired | Australian carriers have all refarmed this band to 4G
3G | B1 (2100) | Y | Y | T, O, V | Optus refarmed to 4G in some areas
3G | B5 (850) | Y | Y | T, (V) | Telstra “NextG” rural coverage. Vodafone have deployed predominantly in metro areas but are refarming to 4G
3G | B8 (900 GSM) | Y | Y | O, V | Optus “YesG”. Predominantly used in rural areas, according to Vodafone
4G | B1 (2100) | Y | Y | T, O | Telstra in a handful of sites. Optus in select areas such as Cairns, Darwin, Hobart and the Sunshine Coast
4G | B3 (1800+) | Y | Y | T, O, V | All three carriers well established in this band. Ex-2G spectrum
4G | B5 (850) | Y | Y | V | Vodafone in early stages of roll-out, as they phase out 3G 850 MHz
4G | B7 (2600) | Y | Y | T, O | Telstra and Optus in early stages of roll-out. TPG has spectrum. Optus in selected regional centres, holiday towns, and busy areas in major cities
4G | B8 (900) | Y | Y | T | Telstra in a handful of sites using ex-2G spectrum. They have since stopped expanding this band, focusing instead on B28 (700 APT)
4G | B28 (700 APT) | Y | Y | T, O | Telstra and Optus in early stages of roll-out. Ex-TV spectrum. Capital cities plus major regional and holiday destinations, with Telstra’s reach extending further into the bush. Good indoors. Telstra call their rollout “4GX” as they have twice the spectrum of Optus
4G | B40 (TD 2300) | Y | Y | O | Optus Plus (Vivid wireless spectrum). Big cities and select metro areas. Not in NBN fixed wireless areas
4G | B42 | N | N | O | Optus, NBN (trials)

*Carrier: T = Telstra, O = Optus, V = Vodafone

The FrequencyCheck page on Australia is yet to list Telstra’s use of B8 (900).

Happy to see that the Samsung covers all the bands that I need. 🙂

Global Roaming

FrequencyCheck can do a comparison between the two phones. The only bands missing from the Samsung are as follows:
LTE B27 (800 SMR) – doesn’t seem to be used anywhere currently.
LTE B29 (700 de) – doesn’t seem to be used anywhere currently.
LTE B30 (2300 WCS) – only used by a few carriers in the USA.

So in terms of global roaming, I don’t look to be at a disadvantage vs the iPhone 7.

Carrier Info

Telstra – with some extra info here


Vodafone – They say they only do 4G in metro areas, so for regional/rural you’d get 900 or 2100 MHz 3G, or 900 MHz 2G only, although based on the coverage map that info seems to be out of date. They do VoLTE, which is nice.

Installing Ubuntu Server 16.04 in FreeNAS 9.10 bhyve

Emulation Be Gone!

Well, I’d had enough of trying to update the CrashPlan plugin and get it working with CrashPlan 4.7. I even tried installing a standard jail and using FreshPorts to get it installed. I got close: I had CrashPlan downloaded, but the make script wanted “jdk-8u92-linux-i586.tar.gz” even though that has known vulnerabilities, and it wouldn’t take the latest version “jdk-8u102-linux-i586.tar.gz”. I forced the old Java version (security issues and all) and got everything built, but then ran into problems with kernel modules. Given a month of Sundays I might have resolved those, but I only wanted to devote half a Sunday to this, so I backed out when I realised I was at that cul-de-sac dead end of frustration I’m sure you know all too well.

bhyve to the Rescue

So I backed out of the CrashPlan plugin jail, gave up on the CrashPlan standard jail, and have gone in a whole new direction with FreeBSD’s “bhyve”, which is accessible under FreeNAS 9.10. It’s a hypervisor with kernel support, so you could say it’s type 1, though it probably emulates a lot of stuff; just how “type 1” it really is, I’ll leave for others to say. In any case, it seems to perform flawlessly for me, so I’m a happy camper.

Emulating the Linux ABI on FreeBSD always felt to me like shoving a square peg through a round hole. Why emulate parts of Linux to run CrashPlan, when you can instead virtualise a whole Linux instance and run CrashPlan natively? This will surely keep compatibility problems to a minimum. Not only that, but I can move my TVHeadend to it as well, plus anything else I need to run on a Linux server. I can’t see myself ever going back to the plugin setup for CrashPlan.

Setting up the bhyve Environment

It was reasonably straightforward to set up Ubuntu Server 16.04 (“Xenial Xerus”), which I’ll use to host my CrashPlan server, my TVHeadend server, and anything else I really need Ubuntu for.

Speedy Alias – “iohyve” becomes “io”

You can configure bhyve directly, but you’re far better served by the “iohyve” scripts. Now here’s the thing – I hate typing. I’m also a clumsy typist with bent fingers, and find “iohyve” particularly annoying to type. You can do what I do and alias “iohyve” to “io” to make things easier. Run “which io” to make sure that “io” isn’t already used in your path, and then add the alias to your “~/.bashrc” if you’re using bash:

sarlacc# which io  #make sure that 'io' isn't used for any other commands
sarlacc# cat ~/.bashrc | grep iohyve
alias io='iohyve'    # the alias I added to ~/.bashrc

After adding the alias, log out and log back in, or just source the rc file: . ~/.bashrc

All my subsequent “iohyve” commands will just show “io”.

Initial Parameters

bhyve needs to know three things:

  • Where to store its files
  • Which NIC to bridge to
  • Whether it should load the kernel modules (yes… yes it should!)

Configure the answer to those three questions with the following:

io setup pool=<ZFS pool> kmod=1 net=<bridged NIC>    #kmod=1 means yes, 0 means no.

io setup pool=volume1 kmod=1 net=vlan10
Setting up iohyve pool...
On FreeNAS installation.
Checking for symbolic link to /iohyve from /mnt/iohyve...
Symbolic link to /iohyve from /mnt/iohyve successfully created.
Loading kernel modules...
bridge0 is already enabled on this machine...
Setting up correct sysctl value... 0 -> 1

Some older docs say that on FreeNAS you need to “ln -s /mnt/iohyve /iohyve”, but as you can see above that’s already handled. If you add the symlink manually, you’ll create a weird circular symlink.

Files and Folders

Run this to check that the folder structure is set up:

sarlacc# zfs list | grep iohyve
volume1/iohyve                                              21.4G  2.46T   140K  /mnt/iohyve
volume1/iohyve/Firmware                                      140K  2.46T   140K  /mnt/iohyve/Firmware
volume1/iohyve/ISO                                           771M  2.46T   151K  /mnt/iohyve/ISO
volume1/iohyve/ISO/FreeBSD-10.3-RELEASE-amd64-bootonly.iso   116M  2.46T   116M  /mnt/iohyve/ISO/FreeBSD-10.3-RELEASE-amd64-bootonly.iso
volume1/iohyve/ISO/ubuntu-16.04.1-server-amd64.iso           655M  2.46T   655M  /mnt/iohyve/ISO/ubuntu-16.04.1-server-amd64.iso
volume1/iohyve/ubusrv16                                     20.6G  2.46T   140K  /mnt/iohyve/ubusrv16
volume1/iohyve/ubusrv16/disk0                               20.6G  2.48T  2.66G  -

You should just have the first three paths – the rest is stuff I’ve set up later in this guide.

The Kernel Modules

You can check that the kernel modules are loaded with this:

sarlacc# kldstat
Id Refs Address            Size     Name
 1   94 0xffffffff80200000 18b4000  kernel
 2    1 0xffffffff81d9f000 ffd8c    ispfw.ko
 3    1 0xffffffff82021000 f947     geom_mirror.ko
 4    1 0xffffffff82031000 46a1     geom_stripe.ko
 5    1 0xffffffff82036000 ffca     geom_raid3.ko
 6    1 0xffffffff82046000 ec6a     geom_raid5.ko
 7    1 0xffffffff82055000 574f     geom_gate.ko
 8    1 0xffffffff8205b000 4a33     geom_multipath.ko
 9    1 0xffffffff82060000 5718     fdescfs.ko
10    1 0xffffffff82066000 89d      dtraceall.ko
11   10 0xffffffff82067000 3ad67    dtrace.ko
12    1 0xffffffff820a2000 4638     dtmalloc.ko
13    1 0xffffffff820a7000 225b     dtnfscl.ko
14    1 0xffffffff820aa000 63d7     fbt.ko
15    1 0xffffffff820b1000 579a4    fasttrap.ko
16    1 0xffffffff82109000 49cb     lockstat.ko
17    1 0xffffffff8210e000 162f     sdt.ko
18    1 0xffffffff82110000 d8d8     systrace.ko
19    1 0xffffffff8211e000 d494     systrace_freebsd32.ko
20    1 0xffffffff8212c000 4da3     profile.ko
21    1 0xffffffff82131000 7fdf     ipmi.ko
22    1 0xffffffff82139000 b3c      smbus.ko
23    1 0xffffffff8213a000 1a62a    hwpmc.ko
24    1 0xffffffff82155000 2b80     uhid.ko
25    2 0xffffffff82158000 2b32     vboxnetflt.ko
26    2 0xffffffff8215b000 45320    vboxdrv.ko
27    1 0xffffffff821a1000 41ca     ng_ether.ko
28    1 0xffffffff821a6000 3fd4     vboxnetadp.ko
29    1 0xffffffff821aa000 3567     ums.ko
30    1 0xffffffff821ae000 a684     linprocfs.ko
31    1 0xffffffff821b9000 670b     linux_common.ko
32    1 0xffffffff821c0000 1b140b   vmm.ko
33    1 0xffffffff82372000 2ebb     nmdm.ko
34    1 0xffffffff82375000 1fe1     daemon_saver.ko

If vmm.ko and nmdm.ko are there, you’re golden.

MTU – Danger Will Robinson!

Now the “bridged NIC” is the physical or logical NIC that carries the IP address of the network you want your virtual machine to bridge to – not the bridge interface itself. For my home setup I share VLAN10 (data) and VLAN99 (management) on a single physical interface – bge0. Why do I do this? Well, my switches and routers only have management IPs on VLAN99, and my computer is the only one on VLAN99, so that’s added security. Plus I do it because I am a network engineer, and because I can 🙂

Now when you have VLAN interfaces you can run into MTU problems, unless you up the MTU to account for the extra 4 bytes of VLAN tag overhead. In the FreeNAS GUI, I set “mtu 1504” on any interface I run VLANs on, so that the VLANs themselves get a 1500-byte MTU.

The automatically created bridge0 interface inherits this MTU:

sarlacc# ifconfig bridge0          
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1504
        description: iohyve-bridge
        ether 02:f3:f6:80:91:00
        nd6 options=1<PERFORMNUD>
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 9 priority 128 path cost 2000000
        member: epair2a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 12 priority 128 path cost 2000
        member: epair1a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000
        member: epair0a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000
        member: vlan10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 6 priority 128 path cost 20000
sarlacc# ifconfig tap0 
 description: iohyve-ubusrv16
 ether 00:bd:1b:3e:01:00
 media: Ethernet autoselect
 status: active
 Opened by PID 3694

That tap0 is originally created by iohyve with a 1500-byte MTU, and fails to be added to bridge0 because of the MTU mismatch. To get it into bridge0, I had to do this:

ifconfig tap0 mtu 1504
ifconfig tap0 promisc       # not sure if this was necessary but added anyway
ifconfig bridge0 addm tap0
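Instead of hard-coding 1504, you could pull the MTU straight out of bridge0’s ifconfig output and apply that to tap0. A small sketch, parsing a sample line (on the live box you’d feed in “ifconfig bridge0” instead of the hard-coded string):

```shell
# Pull the MTU out of an ifconfig line instead of hard-coding it.
# Sample line below; live: line=$(ifconfig bridge0 | head -1)
line='bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1504'
mtu=$(echo "$line" | awk '{for (i = 1; i < NF; i++) if ($i == "mtu") print $(i + 1)}')
echo "$mtu"    # → 1504
# then: ifconfig tap0 mtu "$mtu" && ifconfig bridge0 addm tap0
```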

Surviving Reboots

You want these settings to survive reboots, so add these in the GUI under “System” > “Tunables”:

iohyve_enable
iohyve_flags

Unfortunately I haven’t worked out how to make the tap0 MTU fix persistent just yet, so for now I’m applying it manually after each reboot. I’d like this to be “fixed” by iohyve, but if all else fails I could add a pre- or post-init script that just runs the commands.

Installing Ubuntu 16.04 “Xenial Xerus”

Either fetch the install media over the network, or copy in a local ISO:

io fetch
io cpiso /mnt/volume1/files/software/ISOs/Ubuntu/ubuntu-16.04.1-server-amd64.iso

Once downloaded or copied see that it’s listed:

sarlacc# io isolist
Listing ISO's...

Now create the VM and set its parameters (I call my VM ubusrv16 for Ubuntu Server 16.x):

sarlacc# io create ubusrv16 20G
sarlacc# io set ubusrv16 loader=grub-bhyve os=d8lvm ram=2G cpu=1 con=nmdm1
sarlacc# io list
Guest     VMM?  Running  rcboot?  Description
ubusrv16  NO    NO       NO       Sun Jul 24 11:04:26 AEST 2016

Use “os=debian” if not using LVM; if using LVM, use “os=d8lvm”.

I just give it one CPU and 2 GB of RAM. The console will be nmdm1 if it’s the first VM.

Do the install, and use another SSH session to attach to the console:

io install ubusrv16 ubuntu-16.04.1-server-amd64.iso
io console ubusrv16  #handy to do this in another window

Configuring VM to Start at Reboot

One criticism I’ve heard of VirtualBox is that you can’t start VMs on reboot (I haven’t verified this, though). The good thing with bhyve is that you can start a VM on boot:

io set ubusrv16 boot=1


Let me know how you go in the comments.

Setting up a Crashplan FreeNAS Plugin Jail


I back up about 400 Gigabytes of photo RAW files and Lightroom (LR) catalogs to the cloud using CrashPlan. I used to have these files on a single hard drive inside my computer – dangerous!

I realised that I needed my photo files on a RAID array so that I don’t lose everything in case of a single disk failure. Instead of putting a RAID array inside my PC, or directly attaching a RAID array to it using USB or Thunderbolt (called a “DAS” for “directly attached”), I realised I didn’t need to spend money when I already have a perfectly good RAID box already – my NAS running FreeNAS!

I had a rude shock, though, when it came to backing up with CrashPlan running on my PC and my work files on a mapped network drive. CrashPlan refused to touch the files on the mapped network drive! I then took the plunge and moved the CrashPlan engine to my NAS, doing backups from there. Brilliant!

About Running CrashPlan “Headless”

CrashPlan has two basic parts – the CrashPlan application (client) and the CrashPlan engine. The engine runs continuously and backs up even when the client isn’t running. The client just checks the engine’s status and is used to configure it. The client app is designed to connect to an engine on the local machine, not on a remote machine. Luckily it uses TCP ports, so we can hack the configuration to make it connect to a remote (headless) machine.

Install the CrashPlan Plugin Jail

Set up your jail configuration, if you haven’t already. Mine is as follows:

jail configuration

Install the CrashPlan plugin jail by going to “Plugins > Available” and then highlighting “CrashPlan” and then clicking the “Install” button.

After it has installed, map the files you want to back up into the jail under “View Jails > Storage”. You’ll find detailed instructions on this on the FreeNAS documentation homepage.

My jail storage is as follows:

jail storage

“volume1” is my RAID array volume, and “lacie” is an external 12TB USB3.0 drive volume. I initially decided to use CrashPlan to back up all my files (software, multimedia and music) to my external drive, but I found that too slow. Now I just have a backup set for my “RAW files” and “Catalogs”, which are contained within my “/software/photography”, as I didn’t want to create another dataset just for those two.

It makes sense to map your source files read-only, as I have done here – no need to give CrashPlan more permissions than it needs to do its job, and it safeguards the files in case something goes drastically wrong. Mapped like this, you can only trash your backups, not the source.

Update the Plugin Jail

I usually run the following for any new jail to get it up to date:

pkg clean     # clean out old cache
pkg update    # gets the latest list of files
pkg upgrade   # updates the jail software

I also like to install bash with “pkg install bash” and then log out and back into the jail under bash:

sarlacc# jls
 JID IP Address Hostname Path
 1 - crashplan_1 /mnt/volume1/jails/crashplan_1
 2 - dnsmasq /mnt/volume1/jails/dnsmasq
 3 - plexmediaserver_1 /mnt/volume1/jails/plexmediaserver_1
 4 - sabnzbd_1 /mnt/volume1/jails/sabnzbd_1
sarlacc# jexec 1 bash
[root@crashplan_1 /]#

Configure SSH in the Plugin Jail

You’ll need SSH in order to connect your PC to the CrashPlan engine running on the NAS. This is quite straightforward:

Edit “/etc/ssh/sshd_config” and uncomment/edit as follows:

PermitRootLogin yes
PasswordAuthentication yes
AllowTcpForwarding yes

For more security you can create another user (e.g. “adduser crashplan” or “adduser backupuser”), but I don’t bother – I just use the root user and set a strong root password (in the jail) with the “passwd root” command.

Next get sshd going:

sysrc sshd_enable=YES  # allows sshd to be started as a service
service sshd keygen    # generate sshd keys
service sshd start     # start the sshd service
service sshd status    # check sshd service status - should return the process ID

Update and Start CrashPlan Engine in Plugin Jail

The current problem we face is that the plugin is only version 3.6.3_1, which is way behind the existing GUI version of 4.7, and there are compatibility issues. No problem – just manually update the jail:

su -                    # if not already root
cd /usr/pbi/crashplan-amd64/share/crashplan
wget --no-check-certificate
tar -xf CrashPlan_4.7.0_Linux.tgz
cd crashplan-install
cpio -idv < CrashPlan_4.7.0.cpi
service crashplan stop
cd ..
rm -r lib*
cp -r crashplan-install/lib* .
sysrc crashplan_enable=YES

The above assumes that 4.7 is the latest version, and that the crashplan TARGETDIR is “/usr/pbi/crashplan-amd64/share/crashplan“. Check the install vars here:

root@crashplan_1:/usr/pbi/crashplan-amd64/share/crashplan # cat install.vars 

You may have to change “JAVACOMMON=/usr/pbi/crashplan-amd64/share/crashplan/jre/bin/java” to “JAVACOMMON=/usr/pbi/crashplan-amd64/bin/java” if you get an error message in /var/log/crashplan/engine_error.log complaining about “”.

How I found the correct java:

[root@crashplan_1 /usr/pbi/crashplan-amd64/share/crashplan]# find / -name "java"

/usr/pbi/crashplan-amd64/linux-sun-jre1.7.0/bin/java    # -version: Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
/usr/pbi/crashplan-amd64/share/java                     # a directory, not a binary
/usr/pbi/crashplan-amd64/share/crashplan/jre/bin/java   # -version breaks with the issue above
/usr/pbi/crashplan-amd64/bin/java                       # -version: (build 1.7.0_51-b13)

Go to “Plugins > CrashPlan” in the left-hand tree menu to accept the Java licence agreement. This trips a lot of people up.

Now start CrashPlan:

service crashplan start

You can check that CrashPlan is running with the following:

root@crashplan_1:/mnt/lacie # sockstat -4 | grep java
root java 4859 88 tcp4
root java 4859 105 tcp4 *:*
root java 4859 108 tcp4
root java 4859 119 tcp4

Line 2 is a connection to , which is Code42 Australia, where I am backing up some files to.
Line 3 is listening on the local server for new connections.
Line 4 is an SSH port map from my Windows PC, where I run the GUI. We’ll get to that.
Line 5 is a connection to , which is Code42 (makers of CrashPlan) in the USA. Possibly a licence server.

Don’t be alarmed when you see “crashplan is not running” when issuing a “service crashplan status”. If Java is listening on port 4243 then it should be fine. 🙂
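
If you want that check in scriptable form, a tiny helper like this works (a sketch – 4243 is the engine port used throughout this setup):

```shell
# Reads sockstat output on stdin; succeeds if the engine port 4243 appears
engine_listening() {
  grep -q ':4243'
}
# On the jail you would run:
#   sockstat -4 | grep java | engine_listening && echo "CrashPlan engine is up"
```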

Configure SSH in Windows

I use a program called SecureCRT to easily set up the port map, connecting to my jail’s IP on CrashPlan port 4243, using local Windows port 4200:

Crashplan port forward secure CRT with crashplan settings

It’s handy to save a session for this, and then create a desktop shortcut to the session, so you can just double-click the icon and start it. I like to start it minimised. The target for the shortcut for me is: "C:\Program Files\VanDyke Software\SecureCRT\SecureCRT.exe" /S "CrashPlan"

NB: Check that 4243 is actually your CrashPlan engine port, with the “sockstat -4 | grep java” command above.
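
If you don’t have SecureCRT, plain OpenSSH creates the same tunnel. A sketch – the IP below is a placeholder for your jail IP, and the command is echoed so you can copy it out:

```shell
# Placeholders -- substitute your own jail IP
JAIL_IP=192.0.2.10
LOCAL_PORT=4200
ENGINE_PORT=4243
# -N holds the port-forward open without running a remote command
echo "ssh -N -L ${LOCAL_PORT}:localhost:${ENGINE_PORT} root@${JAIL_IP}"
```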

Connecting Windows CrashPlan Client to FreeNAS CrashPlan Engine

After setting up the portmap, we need to finish off by connecting the Windows client to the FreeNAS server. To do this, we need to update the file “C:\ProgramData\CrashPlan\.ui_info”, changing the port and API key.

The format is <local port>,<api key>,<IP address>. Where is the API key, you might ask? Answer – in the server’s .ui_info file. Run this on the FreeNAS box to check:

cat /var/lib/crashplan/.ui_info

You can double check the server’s port config there.
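
As a quick illustration of that <local port>,<api key>,<IP address> format (the values below are made up – not a real key):

```shell
# Sample .ui_info line -- illustrative values only
ui_info='4200,c8f5b2e0-fake-key,127.0.0.1'
# Split the three comma-separated fields
IFS=',' read -r port api_key address <<EOF
$ui_info
EOF
echo "port=$port key=$api_key address=$address"
```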

Once the Windows “.ui_info” file is saved, you should be able to start the CrashPlan application on your PC and connect to the server.

The .ui_info file reverts to its previous settings on every reboot of Windows, so it’s important to create a .bat file that reapplies the change on each reboot.

Windows Batch File

  1. Copy your newly configured “.ui_info” file to a new file called “FreeNAS.ui_info” in the same directory.
  2. Create a file called “cpcfg.bat” (short for “crashplan configuration”) in the same directory, with the following contents:
    copy C:\ProgramData\CrashPlan\FreeNAS.ui_info C:\ProgramData\CrashPlan\.ui_info
  3. Create a shortcut to that same cpcfg.bat file in the same folder.
  4. Once the shortcut has been created, right-click the file and select Cut.
  5. Press the WindowsKey+R to get to the “Run” dialog box.
  6. Type “shell:startup” in the Run dialog box and hit “OK”.
  7. Paste your “cpcfg.bat” shortcut into that folder.
  8. Right-click on the shortcut, go to “Properties > Shortcut (tab) > Advanced”, tick “Run as administrator”, then OK, Apply, OK to save.

Now every time you reboot, that file will have the correct info. If that doesn’t work, you’ll have to just run the .bat file manually.

Troubleshooting and Tips

I did the following when I was troubleshooting, just following tips on forums, as you do. I’m not sure if they made my setup work or not, but if you have trouble, then it doesn’t hurt to try the following on the server, within the CrashPlan plugin jail:

ln -s /usr/local/bin/bash /bin/bash
/usr/bin/cpuset -l 0 /usr/local/share/crashplan/bin/CrashPlanEngine restart

In the GUI you can double-click on the CrashPlan “house” logo at the top right to bring up the GUI CLI and dump the connection settings. You should see something like this:

UI Port=4243
HTTP Port=4244

These values are correct if you’re mapping local port 4200 to server port 4243.


A big tip here is to set the CPU usage to 100% (for both “user present” and “idle”) in the FreeNAS GUI. This is because FreeNAS does CPU management for jails, and 100% within the jail means about 60% overall. The more CPU you throw at it, the better.

It does help to have a very grunty box when creating local backups at speed. I found that my speeds went up the more I ramped the CPU up to 100%, so it’s definitely CPU-bound. I get about 325Mbps (bits, not bytes) to my external LaCie 12TB box over USB 3.0 (5Gbps throughput). That would definitely go up with more CPU clock cycles.


There’s no need to compress your files within the jail if you’ve already turned compression on at the dataset level – you’re just wasting time and CPU otherwise. It is good to compress when going over the Internet though, to save your network bandwidth.


This draws heavily from these two links:
Using CrashPlan On A Headless Computer
FreeNAS Forums: CrashPlan 4.5 Setup


Avoid MegaBuy

Well the old adage “you get what you pay for” is true when it comes to Internet shopping. I recently ordered from MegaBuy because a price-comparison search showed that they were one of the cheapest, if not the cheapest, and by a substantial amount. I usually order from Scorptech – but for this particular item – a WD My Passport Wireless hard drive – they were out of stock.

I proceeded to order from Megabuy. So far so good, but then I hit my first snag: their website wouldn’t let me create an account, failing with some code error. Odd and very amateurish. I emailed them and that got fixed up. OK, moving on – I then proceeded to order the WD drive, and also a 256GB SSD card. Crazily, it said the two items would come in two separate boxes, and the combined shipping cost was just over $60! I should have stopped there and then, but the cost including shipping was still just under my preferred supplier Scorptech. I pushed ahead with the order.

Well the hard drive showed up, and you can see how terribly it was packed. This is a hard drive, for God’s sake! The drive would have bounced around like a soccer ball in the bigger box, potentially damaging the components. Couriering a hard drive is something I prefer not to do regardless of how it’s packed, but packed like this it’s inexcusable. That’s why I’ve been moved to blog about it.

I paid over $30 courier charge for this:


The takeaway is – DO NOT ORDER ANYTHING FROM MEGABUY.COM.AU.  It will end up being a false economy. As always – you get what you pay for. Caveat emptor!

Improvements to my Photo Backup Procedures

Things have been getting pretty geeky in Gav’s tech ninja dojo of late. I’m really excited about my upcoming move to 10Gigabit Ethernet for my NAS to PC communications! The main answer to the question, “why?” is “because I can!” I also like to spend money on tech I don’t really need but will (a) allow me to learn something new; (b) entertain me; (c) impress my mates; (d) give me something to blog about; and (e) provide at least some performance benefit… all basically in that order. 🙂

Current Setup

Ok so here’s the basic setup. You may or may not know it but I take pretty pictures (mostly of the Australian landscape) and upload them to the Internet here. I also have a (much neglected) photography website here, which basically pulls my better shots from Flickr using a nifty WordPress plugin. So yeah – tech geek *and* photo geek.

The problem I’m facing is that I have a growing collection of photographic “raw” files – not as big as some but substantial enough to manage, at about 500 Gigabytes (half a Terabyte). What I’ve been doing is storing all the files on a single 2TB HDD  (my Windows ‘D’ drive), and syncing them to the cloud with CrashPlan. I also run the very user-friendly and reasonably priced Acronis backup software (and no they didn’t pay me for that – I pimp whatever I like!)

This is a poor strategy. On the face of it you might think it’s OK right? They say in the photo world that “your photos aren’t safe unless they exist in three places” and my stuff IS in 3 places – one of which is offsite… so what’s the problem?

Problems with Current Setup

I’ll number ’em…

  1. Well firstly my data exists in 3 places, but not one of them is archival in nature. If some files got deleted from my PC and I didn’t notice, then the deletion would sync to the cloud, and Acronis would also backup only the existing files. After a week when my backup schedule rolls over then *poof* – those deleted files are gone forever! What would be ideal is a secondary “archival” backup where I can take monthly snapshots in addition to the daily backups, and then be able to delve right back into the files at least a year into the past. More on this later.
  2. The primary location for the files doesn’t have physical redundancy – i.e. no RAID. They are just sitting on a single drive. Sooner or later that drive is going to fail. That is a certainty – it’s just a question of when. If I had some sort of +1 drive redundancy I could just swap the drive out and continue on without having to restore from backups. I could buy another drive for my main PC and RAID it up in a “mirror” with my current drive, but I already have a Redundant Array of Independent Disks (RAID)… my NAS… hmmm.
  3. Having all the files on my PC to backup to the cloud means I have to leave my PC on in order to back them up. When I’m backing up to the cloud, it could be many gigs of data to sync at a time, and since my upstream bandwidth is a paltry 800kbps at best (ADSL2+) then it’s going to take a very long time AND choke my upstream TCP acks, meaning my web browsing experience is bad (and yes I need to “priority queue” my TCP acks!) This means leaving my computer on when I’m not using it, which basically means leaving it on 24×7 in order to get the files to CrashPlan ASAP. My CrashPlan backup schedule is 9am-5am weekdays and 2am-8am weekends. Having my PC on 24×7 is a bit of power drain, and my quarterly power bill is fairly nasty. My NAS is always on 24×7, making it a good candidate to backup to the cloud from… hmmm again…
  4. My nightly Acronis backups of files on my D: drive to my NAS seems to play havoc with my CrashPlan cloud backups – touching the same files at the same time with file locks perhaps? Admittedly I could just change the scheduling of my Acronis backups and my CrashPlan backups so they don’t clash, but if I moved the RAW files *TO THE NAS* then I wouldn’t have to think about this issue as the problem would be avoided.

Well it was becoming abundantly clear that the NAS might just be a good spot to stick the RAW files, and work on them from there.

Pros for Moving the RAW Files to the NAS

Well I’ve touched on 3 benefits already (which I’ll repeat below). What else? Here’s a fairly comprehensive list:

  1. RAID speed / redundancy.
  2. Always-on NAS great for cloud backups.
  3. Eliminate backup conflicts.
  4. My NAS already has a UPS attached to it for smooth power delivery and safe shutdown in the event of an extended power failure (I get about an hour of uptime from my 1500VA UPS). Sure, I could add a UPS to my PC, but it’s an added expense to have a UPS on both.
  5. The NAS uses the enterprise-hardened ZFS file system which regularly “scrubs” the data for errors, and employs copy-on-write for file safety, even if the power failed and the NAS didn’t shut down properly. It’s arguably superior to Windows NTFS for file integrity (although NTFS is much better than the old FAT32).
  6. Having fewer spinning disks near my workspace (and my head) means more peace and quiet when I’m at the PC. Sure, I could add more SSDs to my workstation but… cost… and I already have that space on my NAS, so it makes sense to use it.


Cons for Moving the RAW Files to the NAS

There are only two I can think of right now:

  1. The extra tech work of getting CrashPlan setup on my NAS. For a geek like me this is no trouble, and in fact I enjoy the challenge (a blog post to follow when I set it up 😉
  2. The speed of Network Attached Storage (NAS) vs Directly Attached Storage (DAS). This is the main issue – the curse of the “slow” network.

Speeding up the Network

There are ways to boost the network in order to make the NAS feel like DAS:

  1. Move to 10Gig Ethernet.
  2. Use a dedicated Storage Area Network (SAN), or point-to-point link between PC and NAS, and enable 9k jumbo frames.
  3. Use the latest vendor-supplied drivers.
  4. Tweak card buffers, motherboard, and filesystem settings for optimal performance.

To wit I have ordered 10Gig optical Ethernet cards for my NAS and PC (Intel X520), along with 10Gig SFP+ transceivers, a 10m fibre patch lead and some fibre cleaning equipment. The installation and optimisation of this setup will be the subject of a later blog post. My gear should arrive all within the next couple of weeks. Stay tuned!! One more thing though…

Archival Backups

I did mention I was going to revisit this. I’ve been told that you can set CrashPlan to store all your changes, including deletions – like an Apple Time Machine in the cloud for your data. I, on the other hand, like to keep things local, and only use CrashPlan as the “backup of last resort”. FreeNAS provides great tools for doing backups in the form of rsync and ZFS snapshots, and I’ll be exploring this in another blog post, where I’ll set up my own “Time Machine” of sorts to a separate LaCie 12TB USB 3.0 HDD that I’ll have plugged in to the back of the NAS B-)
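
As a taste of what those snapshots look like, a dated ZFS snapshot is a one-liner. The dataset name tank/photos is an assumption for illustration – FreeNAS can also schedule periodic snapshots from the GUI:

```shell
# Compose a monthly "archive" snapshot name, e.g. tank/photos@archive-2016-09
SNAP="tank/photos@archive-$(date +%Y-%m)"
echo "zfs snapshot $SNAP"   # run on the NAS; list snapshots with: zfs list -t snapshot
```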

Updating Free VMware ESXi


VMware regularly patch their free ESXi bare metal hypervisor. If you have the free version, then you can do this from the CLI, with the help of the (also free) VMware vSphere Client.


  • Download patch file
  • Shut down VMs and put the Host into Maintenance mode
  • Enable SSH Server on Host
  • Copy patch file to host
  • Run the patch
  • Reboot

Download Patch File

You need to have a free VMware account to download the patch upgrade file. This is usually a zip.

To update from 6.0.0 to 6.0.0U1, you can go here to grab the bundle:

VMware patch search

When you search, you should be able to find the file and download it. Note that these instructions should also work for future versions.

Shut down VMs and put the host into Maintenance mode

Simply shut down your VMs, then right-click on the Host and select “Enter Maintenance Mode”.

Enable SSH Server on Host

Click on your Host –> Configuration (tab) –>  Software>Security Profile –> Properties…

VMware SSH enable

In the properties, go down to “SSH”. If it is “Stopped” then click on the service property “Options” (bottom right), and start it up. I just have it set to the default “Start and stop manually”, as I only go in there to patch ESXi, and so start it when I need it. I have found that after a reboot, it is stopped by default.

You should now be able to SSH to your host. If you cannot, then go to the Firewall properties (see screenshot above), and make sure that the “SSH Server” checkbox is enabled.

Copy Patch File to Host

Use the vSphere Client to copy the patch file to the host:

ESXi data store

Don’t change directories – just dump into the root of your datastore:

ESXi data store file xfer

Run the Patch

On the CLI run this:

esxcli software vib update -d /vmfs/volumes/<datastore>/<file>.zip

…where <datastore> is the name of your datastore. For me it’s datastore1, so for me it is:

esxcli software vib update -d /vmfs/volumes/datastore1/<file>.zip

It shouldn’t take long.


Reboot

When you’re done, just reboot, then take your host out of maintenance mode and start your VMs up.