Installing Ubuntu Server 16.04 in FreeNAS 9.10 with bhyve

Emulation Be Gone!

Well, I’d had enough of trying to update the CrashPlan plugin and get it working with CrashPlan 4.7. I even tried installing a standard jail and using FreshPorts to get it installed. I got close – I had CrashPlan downloaded, but the make script wanted “jdk-8u92-linux-i586.tar.gz” even though that version has known vulnerabilities, and it wouldn’t accept the latest version, “jdk-8u102-linux-i586.tar.gz”. I forced the old Java version (security issues and all) and got everything built, but then ran into problems with kernel modules. Given a month of Sundays I might have resolved them, but I only wanted to devote half a Sunday to this, so I backed out when I realised I’d reached that cul-de-sac of frustration I’m sure you know all too well.

bhyve to the Rescue

So I backed out of the CrashPlan plugin jail, gave up on the CrashPlan standard jail, and went in a whole new direction with FreeBSD’s bhyve (pronounced “beehive”), which is accessible under FreeNAS 9.10. It’s a hypervisor with kernel support, so you could call it type 1, but it probably emulates a fair amount of hardware, so just how “type 1” it really is I’ll leave for others to say. In any case, it performs flawlessly for me, so I’m a happy camper.

Emulating the Linux ABI on FreeBSD has always felt to me like shoving a square peg through a round hole. Why emulate parts of Linux to run CrashPlan when you can virtualise a whole Linux instance and run CrashPlan natively? That keeps compatibility problems to a minimum. Not only that, but I can move my TVHeadend over to it as well, along with anything else I need to run on a Linux server. I can’t see myself ever going back to the plugin setup for CrashPlan.

Setting up the bhyve Environment

Setting up Ubuntu Server 16.04 (“Xenial Xerus”) was reasonably straightforward. I’ll use it to host my CrashPlan server, my TVHeadend server, and anything else I really need Ubuntu for.

Speedy Alias – “iohyve” becomes “io”

You can drive bhyve directly, but you’re far better served by using the “iohyve” scripts. Now here’s the thing – I hate typing. I’m also a clumsy typist with bent fingers, and I find “iohyve” particularly annoying to type. You can do what I do and alias “iohyve” to “io” to make things easier. Run “which io” to make sure that “io” isn’t already taken by something in your path, and then add the alias to your “~/.bashrc” if you’re using bash:

sarlacc# which io  #make sure that 'io' isn't used for any other commands
sarlacc# cat ~/.bashrc | grep iohyve
alias io='iohyve'    # the alias I added to ~/.bashrc

After adding the alias, log out and log back in, or just source the rc file: . ~/.bashrc
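
If you prefer to do it in one hit (assuming bash is your login shell, as above), something like this appends the alias and makes it available straight away:

echo "alias io='iohyve'" >> ~/.bashrc   # add the alias to your bash startup file
. ~/.bashrc                             # source it so it takes effect immediately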

All my subsequent “iohyve” commands will just show “io”.

Initial Parameters

iohyve needs to know three things:

  • Where to store its files?
  • Which NIC to bridge to?
  • Whether it should load the kernel modules? (yes… yes it should!)

Configure the answers to those three questions with the following:

io setup pool=<ZFS pool> kmod=1 net=<bridged NIC>    #kmod=1 means yes, 0 means no.

e.g.
io setup pool=volume1 kmod=1 net=vlan10
Setting up iohyve pool...
On FreeNAS installation.
Checking for symbolic link to /iohyve from /mnt/iohyve...
Symbolic link to /iohyve from /mnt/iohyve successfully created.
Loading kernel modules...
bridge0 is already enabled on this machine...
Setting up correct sysctl value...
net.link.tap.up_on_open: 0 -> 1

Some older docs say that on FreeNAS you need to ln -s /mnt/iohyve /iohyve, but as you can see above that’s already taken care of. If you add the symlink manually you’ll end up with a weird circular symlink.
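
If you’d rather just confirm the link that iohyve created, a quick check like this should show /iohyve pointing at /mnt/iohyve:

ls -ld /iohyve    # expect a symlink: /iohyve -> /mnt/iohyve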

Files and Folders

Run this to check that the folder structure is set up:

sarlacc# zfs list | grep iohyve
volume1/iohyve                                              21.4G  2.46T   140K  /mnt/iohyve
volume1/iohyve/Firmware                                      140K  2.46T   140K  /mnt/iohyve/Firmware
volume1/iohyve/ISO                                           771M  2.46T   151K  /mnt/iohyve/ISO
volume1/iohyve/ISO/FreeBSD-10.3-RELEASE-amd64-bootonly.iso   116M  2.46T   116M  /mnt/iohyve/ISO/FreeBSD-10.3-RELEASE-amd64-bootonly.iso
volume1/iohyve/ISO/ubuntu-16.04.1-server-amd64.iso           655M  2.46T   655M  /mnt/iohyve/ISO/ubuntu-16.04.1-server-amd64.iso
volume1/iohyve/ubusrv16                                     20.6G  2.46T   140K  /mnt/iohyve/ubusrv16
volume1/iohyve/ubusrv16/disk0                               20.6G  2.48T  2.66G  -

You should just have the first three entries – the rest is stuff I set up later in this guide.

The Kernel Modules

You can check that the kernel modules are loaded with this:

sarlacc# kldstat
Id Refs Address            Size     Name
 1   94 0xffffffff80200000 18b4000  kernel
 2    1 0xffffffff81d9f000 ffd8c    ispfw.ko
 3    1 0xffffffff82021000 f947     geom_mirror.ko
 4    1 0xffffffff82031000 46a1     geom_stripe.ko
 5    1 0xffffffff82036000 ffca     geom_raid3.ko
 6    1 0xffffffff82046000 ec6a     geom_raid5.ko
 7    1 0xffffffff82055000 574f     geom_gate.ko
 8    1 0xffffffff8205b000 4a33     geom_multipath.ko
 9    1 0xffffffff82060000 5718     fdescfs.ko
10    1 0xffffffff82066000 89d      dtraceall.ko
11   10 0xffffffff82067000 3ad67    dtrace.ko
12    1 0xffffffff820a2000 4638     dtmalloc.ko
13    1 0xffffffff820a7000 225b     dtnfscl.ko
14    1 0xffffffff820aa000 63d7     fbt.ko
15    1 0xffffffff820b1000 579a4    fasttrap.ko
16    1 0xffffffff82109000 49cb     lockstat.ko
17    1 0xffffffff8210e000 162f     sdt.ko
18    1 0xffffffff82110000 d8d8     systrace.ko
19    1 0xffffffff8211e000 d494     systrace_freebsd32.ko
20    1 0xffffffff8212c000 4da3     profile.ko
21    1 0xffffffff82131000 7fdf     ipmi.ko
22    1 0xffffffff82139000 b3c      smbus.ko
23    1 0xffffffff8213a000 1a62a    hwpmc.ko
24    1 0xffffffff82155000 2b80     uhid.ko
25    2 0xffffffff82158000 2b32     vboxnetflt.ko
26    2 0xffffffff8215b000 45320    vboxdrv.ko
27    1 0xffffffff821a1000 41ca     ng_ether.ko
28    1 0xffffffff821a6000 3fd4     vboxnetadp.ko
29    1 0xffffffff821aa000 3567     ums.ko
30    1 0xffffffff821ae000 a684     linprocfs.ko
31    1 0xffffffff821b9000 670b     linux_common.ko
32    1 0xffffffff821c0000 1b140b   vmm.ko
33    1 0xffffffff82372000 2ebb     nmdm.ko
34    1 0xffffffff82375000 1fe1     daemon_saver.ko

If vmm.ko and nmdm.ko are there, you’re golden.
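
Rather than eyeballing the whole list, you can filter for the two modules that matter here:

kldstat | grep -E 'vmm|nmdm'    # both should appear if kmod=1 did its job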

MTU – Danger Will Robinson!

Now, the “bridged NIC” is the physical or logical NIC that carries the IP address of the network you want your virtual machine to bridge to – not the bridge interface itself. For my home setup I share a VLAN10 (data) and a VLAN99 (management) on a single physical interface – bge0. Why do I do this? Well, my switches and routers only have management IPs on VLAN99, and my computer is the only one on VLAN99, so that’s added security. Plus I do it because I am a network engineer, and because I can 🙂

Now, when you have VLAN interfaces you can run into MTU problems unless you bump the MTU on the parent interface to account for the extra 4 bytes of 802.1Q VLAN tag overhead. In the FreeNAS GUI, I set “mtu 1504” on any interface I run VLANs on, so that the VLANs themselves get a full 1500-byte MTU.
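
For what it’s worth, the GUI setting boils down to the equivalent of this on the parent NIC (bge0 in my case – substitute your own); the GUI just makes it persistent across reboots:

ifconfig bge0 mtu 1504       # parent interface carries the extra 4-byte 802.1Q tag
ifconfig vlan10 | grep mtu   # check that the VLAN interface reports the full 1500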

The automatically created bridge0 interface inherits this MTU:

sarlacc# ifconfig bridge0          
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1504
        description: iohyve-bridge
        ether 02:f3:f6:80:91:00
        nd6 options=1<PERFORMNUD>
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 9 priority 128 path cost 2000000
        member: epair2a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 12 priority 128 path cost 2000
        member: epair1a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 11 priority 128 path cost 2000
        member: epair0a flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 10 priority 128 path cost 2000
        member: vlan10 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 6 priority 128 path cost 20000
sarlacc# ifconfig tap0 
tap0: flags=28943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST,PPROMISC> metric 0 mtu 1504
 description: iohyve-ubusrv16
 options=80000<LINKSTATE>
 ether 00:bd:1b:3e:01:00
 nd6 options=9<PERFORMNUD,IFDISABLED>
 media: Ethernet autoselect
 status: active
 Opened by PID 3694

That tap0 is originally created by iohyve with a 1500-byte MTU, and it fails to be added to bridge0 because of the MTU mismatch. To get it into bridge0, I had to do this:

ifconfig tap0 mtu 1504
ifconfig tap0 promisc       # not sure if this was necessary but added anyway
ifconfig bridge0 addm tap0
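
You can confirm that it took:

ifconfig bridge0 | grep member    # tap0 should now be listed as a member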

Surviving Reboots

You want these settings to survive reboots, so add these tunables in the GUI under “System” > “Tunables”:

iohyve_enable="YES"
iohyve_flags="kmod=1 net=vlan10"    # match the kmod/net values you gave "io setup"

Unfortunately I haven’t worked out how to make the tap0 MTU fix persistent just yet, so for now I’m applying it manually after each reboot. I’d like this to be fixed by iohyve itself, but if all else fails I could add a pre-init or post-init script that just runs the commands.
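
If I do end up going the init-script route, a sketch would look something like this (the filename is mine and purely hypothetical), saved on the pool and added as a “postinit” script under “Tasks” > “Init/Shutdown Scripts” in the GUI:

#!/bin/sh
# fix-tap-mtu.sh - re-apply the tap0 MTU workaround after boot.
# tap0 only exists once iohyve has started the guest, so timing may matter.
ifconfig tap0 mtu 1504
ifconfig tap0 promisc
ifconfig bridge0 addm tap0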

Installing Ubuntu 16.04 “Xenial Xerus”

Either fetch the install media over FTP, or copy in an ISO you already have locally:

io fetch ftp://ftp.iinet.net.au/pub/ubuntu-releases/16.04.1/ubuntu-16.04.1-server-amd64.iso
io cpiso /mnt/volume1/files/software/ISOs/Ubuntu/ubuntu-16.04.1-server-amd64.iso
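
It doesn’t hurt to checksum the ISO before installing from it – FreeBSD has sha256(1) built in, and Ubuntu publishes the hashes alongside the release:

sha256 /mnt/volume1/files/software/ISOs/Ubuntu/ubuntu-16.04.1-server-amd64.iso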

Once it’s downloaded or copied, check that it’s listed:

sarlacc# io isolist
Listing ISO's...
FreeBSD-10.3-RELEASE-amd64-bootonly.iso
ubuntu-16.04.1-server-amd64.iso

Now create the VM and set its parameters (I call my VM ubusrv16 for Ubuntu Server 16.x):

sarlacc# io create ubusrv16 20G
sarlacc# io set ubusrv16 loader=grub-bhyve os=d8lvm ram=2G cpu=1 con=nmdm1
sarlacc# io list
Guest     VMM?  Running  rcboot?  Description
ubusrv16  NO    NO       NO       Sun Jul 24 11:04:26 AEST 2016

Use “os=debian” if you’re not using LVM; if you are using LVM, use “os=d8lvm”.

I just give it one CPU and 2 GB of RAM. The console will be nmdm1 if it’s the first VM.

Do the install, and use another SSH session to attach to the console:

io install ubusrv16 ubuntu-16.04.1-server-amd64.iso
io console ubusrv16  #handy to do this in another window

Configuring the VM to Start at Reboot

One criticism I’ve heard of VirtualBox is that you can’t start its VMs on reboot – I haven’t verified that, though. The good thing with bhyve is that you can start a VM on reboot:

io set ubusrv16 boot=1
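
A quick “io list” afterwards should show the rcboot? column flipped to YES for the guest:

io list    # rcboot? for ubusrv16 should now read YES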

🙂

Let me know how you go in the comments.

13 comments

  1. Thanks so much for your guide. I’m finally up and running. Took me a fair few hours though…
    Pretty noob with all things Linux, but there’s enough guides out there to get through.

    Issues I had:
    1. Two versions of Ubuntu would not install for me. I ran the install command and the console just got stuck at OK. This was resolved by downloading the latest Ubuntu Server version, which installed OK.

    2. During the install, at the final stage, say no to installing Ubuntu’s version of GRUB to the MBR. Pretty sure that iohyve takes care of the GRUB config.

    3. I had to learn how to mount via NFS. As mentioned above (thanks @Nello Lucchesi), the following had enough commands for me to follow:
    https://forums.freenas.org/index.php?threads/map-nfs-clients-to-servers-owner-group.45319/
    I didn’t have to worry about special users and groups, but my FreeNAS-to-CrashPlan setup is just straight NAS to cloud. I don’t back up via CrashPlan to my FreeNAS.

    4. I found the headless setup still didn’t work. Following the above to get VNC working on Ubuntu Server did the job (thanks again, Nello):
    https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-16-04
    The command for the desktop is: /usr/local/crashplan/bin/CrashPlanDesktop

    Thanks everyone. This really is the new way moving forward.

    Though moving means I’ve had to start my CrashPlan upload from scratch due to different drive mappings from my jail install, I’m relieved to have a much simpler and more native setup.

    1. Glad to hear you’re up and running. I try to avoid NFS and stick with cifs-utils to share amongst my VMs. Nothing against NFS, but I’m already running Samba on my main VM to share out my files, and I’ve also read that it’s not ideal to share the same files over both protocols because of clashing file-locking behaviour. It *might* be OK, but that’s a headache I’ve decided to avoid, so I stick with Samba / SMB / CIFS or whatever else we refer to it as 🙂
      As for the headless setup, I’ve done this several times now and each time the port on the server tends to change itself from 4243 to 4259 for some reason. After that it tends to stay on 4259, but I’m keeping an eye on it!

  2. Just wanted to say thank you! I wanted to run VMs on my NAS, and let’s just say my attempt at running FreeNAS inside Hyper-V ended in disaster (fortunately while still testing so nothing was lost).

    Everything here worked perfectly, took me all of 5 minutes to have a console to start installing my guest OS. Thanks again!

  3. Nice write-up. I did a very similar thing to get RancherOS running as a VM for Docker containers (I prefer RancherOS to boot2docker, which FreeNAS 10 will use, but that’s another story). I also use VLANs like yourself, and I use an LACP LAG. I can probably offer some help that may fix your MTU issue, as I had a similar one (I wanted my tap0 interface to have a static MAC address).

    I created the following rc.conf tunables:

    cloned_interfaces="lagg0 bridge0 tap0" (you may not need lagg0)
    ifconfig_bridge0="addm vlan1 addm tap0 SYNCDHCP" (probably replace "SYNCDHCP" with "inet 10.1.1.X netmask 255.255.255.0" if you aren't using DHCP; vlan1 is my primary VLAN)
    ifconfig_tap0="ether xx:xx:xx:xx:xx:xx up" (here is where I would add "mtu 1504" before "up")

    Note that my iohyve_flags are set to just “kmod=1” with this since I take care of setting up the bridge with these tunables.

    Also note that these tunables take care of a problem iohyve doesn’t, in that they apply the IP address to the bridge interface instead of to one of the member interfaces (this is how it’s recommended by FreeBSD; otherwise my vlan1 / your vlan10 interface would have the IP). My network interfaces configuration in FreeNAS just has the interfaces defined, but DHCP/autoconf is false and the options are simply “up”.

    I hope this helps solve your MTU issue (might also solve weird bridging issues).

    1. Hi Scott. Thanks for the info on the tunables – I’ll definitely make a mental note of that. Sorry for the late reply – I’m just building a brand new NAS + virtualisation server and will have a crack at passing a SAS controller through to FreeNAS hosted on Proxmox. Fun times. 🙂

  4. Thank you for this tutorial. It’s very clear and I now have Ubuntu installed. But …

    Where’s the tutorial for installing and configuring Crashplan to run on Ubuntu?

    Sorry for being such a noob. I don’t even know how to connect to/use this new server. Is there some way to use VNC and have a GUI. Sorry again for such a basic question.

    – nello

    1. Hi Nello. It isn’t too difficult. I just followed the instructions for Linux on the CrashPlan website. The bit about running a remote GUI is covered in my previous post http://gavowen.ninja/2016/04/setting-up-a-crashplan-freenas-plugin-jail/. I have this in my /etc/fstab:
      # My Samba shares on FreeNAS
      //10.69.10.20/Multimedia /mnt/multimedia cifs credentials=/etc/samba/users/media,noexec 0 0
      //10.69.10.20/Software /mnt/software cifs credentials=/etc/samba/users/backup,noexec 0 0

      I have that credentials file to store my username and password for the samba share. It’s important to have usernames and passwords synced with FreeNAS. I do this manually by editing /etc/passwd and /etc/group. Good luck!

      1. I followed this post on VNC and am able to use a GUI within my VM:
        https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-vnc-on-ubuntu-16-04

        I managed to get the VM to mount FreeNAS storage as an NFS share coordinating UID:GID between FreeNAS and Ubuntu:
        https://forums.freenas.org/index.php?threads/map-nfs-clients-to-servers-owner-group.45319/

        One last problem …
        CrashPlan is writing its files with UID:GID 4294967294:1001. I understand that user 4294967294 is “nobody”.

        How do I control the uid:gid that it uses to write files?

      2. Hi Nello. Thanks for the info, that looks very handy. Which files are you referring to? Files synced across from another CrashPlan install, i.e. “incoming” files? To be honest I haven’t paid much attention to which UID:GID CrashPlan writes with, because on my setup it’s read-only – my CrashPlan just syncs my local files to the cloud and that’s about it. Perhaps look at how Java is installed, as CrashPlan runs on top of the Java virtual machine. In the GUI you can double-click on the picture of the house to get to the command shell. The logs in there might be of use, if you haven’t checked those already. Apologies for the delayed response – my message alerts haven’t been coming through, so I’ll need to sort that out.

  5. Oh wow, thanks a lot! I can’t express how happy I am right now! 😀 I also went through all those struggles with the CrashPlan plugin upgrade and standard-jail stuff, and you’re the first person who came up with the iohyve solution (at least I didn’t see anyone else mention this possibility). I just spent half an hour and now I have CrashPlan up and running again, and without VirtualBox this time (you actually can start VMs on boot, just create an init script 🙂).

    Thanks!

    1. Awesome, and glad I could be of some assistance. I’d like bhyve in future to support a graphical console that I can remotely attach to with SPICE or VNC (a feature that Linux KVM/QEMU has), so that I can run the CrashPlan GUI on the bhyve’d Ubuntu too, but other than that it’s really sweet. BTW I got my TVHeadend server and CrashPlan working on the bhyve’d Ubuntu install and both work great. Saves me having to run a second server. I haven’t played with VirtualBox within FreeNAS, but bhyve suits my needs so I’m sticking with it. Now I need to learn how to snapshot bhyve guests, in case I bork my Ubuntu install drastically and need to roll it back!
