Avoid MegaBuy

Well the old adage “you get what you pay for” is true when it comes to Internet shopping. I recently ordered from http://megabuy.com.au because a search on http://staticice.com.au showed that they were one of the cheapest, if not the cheapest, and by a substantial amount. I usually order from http://www.scorptec.com.au – but for this particular item – a WD My Passport Wireless hard drive – they were out of stock.

I proceeded to order from Megabuy. So far so good, but then I hit my first snag: their website wouldn’t let me create an account, failing with some code error. Odd and very amateurish. I emailed them and that got fixed up. OK, moving on – I then proceeded to order the WD drive, and also a 256GB SSD card. Crazily, it said the two items would come in two separate boxes, and the combined shipping cost was just over $60! I should have stopped there and then, but the cost including shipping was still just under my preferred supplier Scorptec. I pushed ahead with the order.

Well the hard drive showed up, and you can see how terribly it was packed. This is a hard drive for God’s sake! The drive would have bounced around like a soccer ball in the bigger box, potentially damaging the components. Couriering a hard drive is something I prefer not to do regardless of how it’s packed, but packed like this – it’s inexcusable. That’s why I’ve been moved to blog about it.

I paid over $30 courier charge for this:

terrible_packing_megabuy

The takeaway is – DO NOT ORDER ANYTHING FROM MEGABUY.COM.AU.  It will end up being a false economy. As always – you get what you pay for. Caveat emptor!

Improvements to my Photo Backup Procedures

Things have been getting pretty geeky in Gav’s tech ninja dojo of late. I’m really excited about my upcoming move to 10 Gigabit Ethernet for my NAS-to-PC communications! The main answer to the question, “why?” is “because I can!” I also like to spend money on tech I don’t really need but that will (a) allow me to learn something new; (b) entertain me; (c) impress my mates; (d) give me something to blog about; and (e) provide at least some performance benefit… all basically in that order. 🙂

Current Setup

Ok so here’s the basic setup. You may or may not know it but I take pretty pictures (mostly of the Australian landscape) and upload them to the Internet here. I also have a (much neglected) photography website here, which basically pulls my better shots from Flickr using a nifty WordPress plugin. So yeah – tech geek *and* photo geek.

The problem I’m facing is that I have a growing collection of photographic “raw” files – not as big as some, but substantial enough to manage, at about 500 Gigabytes (half a Terabyte). What I’ve been doing is storing all the files on a single 2TB HDD (my Windows ‘D’ drive), and syncing them to the cloud with CrashPlan. I also run the very user-friendly and reasonably priced Acronis backup software (and no, they didn’t pay me for that – I pimp whatever I like!)

This is a poor strategy. On the face of it you might think it’s OK right? They say in the photo world that “your photos aren’t safe unless they exist in three places” and my stuff IS in 3 places – one of which is offsite… so what’s the problem?

Problems with Current Setup

I’ll number ’em…

  1. Well firstly, my data exists in 3 places, but not one of them is archival in nature. If some files got deleted from my PC and I didn’t notice, the deletion would sync to the cloud, and Acronis would also back up only the existing files. After a week, when my backup schedule rolls over, then *poof* – those deleted files are gone forever! What would be ideal is a secondary “archival” backup where I can take monthly snapshots in addition to the daily backups, and then be able to delve right back into the files at least a year into the past. More on this later.
  2. The primary location for the files doesn’t have physical redundancy – i.e. no RAID. They are just sitting on a single drive. Sooner or later that drive is going to fail. That is a certainty – it’s just a question of when. If I had some sort of +1 drive redundancy I could just swap the drive out and continue on without having to restore from backups. I could buy another drive for my main PC and RAID it up in a “mirror” with my current drive, but I already have a Redundant Array of Independent Disks (RAID)… my NAS… hmmm.
  3. Having all the files on my PC to back up to the cloud means I have to leave my PC on in order to back them up. When I’m backing up to the cloud, it could be many gigs of data to sync at a time, and since my upstream bandwidth is a paltry 800kbps at best (ADSL2+), it’s going to take a very long time AND choke my upstream TCP ACKs, meaning my web browsing experience is bad (and yes, I need to “priority queue” my TCP ACKs!) This means leaving my computer on when I’m not using it, which basically means leaving it on 24×7 in order to get the files to CrashPlan ASAP. My CrashPlan backup schedule is 9am–5am weekdays and 2am–8am weekends. Having my PC on 24×7 is a bit of a power drain, and my quarterly power bill is fairly nasty. My NAS is already on 24×7, making it a good candidate to back up to the cloud from… hmmm again…
  4. My nightly Acronis backups of files on my D: drive to my NAS seems to play havoc with my CrashPlan cloud backups – touching the same files at the same time with file locks perhaps? Admittedly I could just change the scheduling of my Acronis backups and my CrashPlan backups so they don’t clash, but if I moved the RAW files *TO THE NAS* then I wouldn’t have to think about this issue as the problem would be avoided.

Well it was becoming abundantly clear that the NAS might just be a good spot to stick the RAW files, and work on them from there.

Pros for Moving the RAW Files to the NAS

Well I’ve touched on 3 benefits already (which I’ll repeat below). What else? Here’s a fairly comprehensive list:

  1. RAID speed / redundancy.
  2. Always-on NAS great for cloud backups.
  3. Eliminate backup conflicts.
  4. My NAS already has a UPS attached to it for smooth power delivery and safe shutdown in the event of an extended power failure (I get about an hour of uptime from my 1500VA UPS). Sure, I could add a UPS to my PC, but it’s an added expense to have a UPS on both.
  5. The NAS uses the enterprise-hardened ZFS file system, which regularly “scrubs” the data for errors (see the example after this list), and employs copy-on-write for file safety, even if the power failed and the NAS didn’t shut down properly. It’s arguably superior to Windows NTFS for file integrity (although NTFS is much better than the old FAT32).
  6. Having fewer spinning disks near my workspace (and my head) means more peace and quiet when I’m at the PC. Sure, I could add more SSDs to my workstation but… cost… and I already have that space on my NAS, so it makes sense to use it.
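
To give a flavour of that scrub process, here’s a minimal sketch of the ZFS commands involved, assuming a pool named “tank” (FreeNAS normally schedules scrubs for you, so this is just the manual equivalent):

zpool scrub tank     # kick off a manual scrub of the whole pool
zpool status tank    # shows scrub progress and any checksum errors found/repaired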

Cons

There are only two I can think of right now:

  1. The extra tech work of getting CrashPlan set up on my NAS. For a geek like me this is no trouble, and in fact I enjoy the challenge (a blog post will follow when I set it up 😉)
  2. The speed of Network Attached Storage (NAS) vs Directly Attached Storage (DAS). This is the main issue – the curse of the “slow” network.

Speeding up the Network

There are ways to boost the network in order to make the NAS feel like DAS:

  1. Move to 10Gig Ethernet.
  2. Use a dedicated Storage Area Network (SAN), or a point-to-point link between PC and NAS, and enable 9k jumbo frames (see the sketch after this list).
  3. Use the latest vendor-supplied drivers.
  4. Tweak card buffers, motherboard, and filesystem settings for optimal performance.
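
On the jumbo frames front, here’s a rough sketch of the FreeBSD/FreeNAS side, assuming the X520 shows up as “ix0” (the interface name is an assumption – check yours first). On the Windows side the equivalent is usually set in the X520 driver’s Advanced properties.

ifconfig ix0 mtu 9000 up     # set a 9000-byte MTU on the 10Gig interface
# verify end-to-end: 8972 bytes of ICMP payload + 28 bytes of headers = 9000,
# with the don't-fragment bit set (FreeBSD ping's -D flag)
ping -D -s 8972 <pc-ip>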

To wit, I have ordered 10Gig optical Ethernet cards for my NAS and PC (Intel X520), along with 10Gig SFP+ transceivers, a 10m fibre patch lead and some fibre cleaning equipment. The installation and optimisation of this setup will be the subject of a later blog post. My gear should all arrive within the next couple of weeks. Stay tuned!! One more thing though…

Archival Backups

I did mention I was going to revisit this. I’ve been told that you can set CrashPlan to store all your changes, including deletions – like an Apple TimeMachine in the cloud for your backed-up data. I, on the other hand, like to keep things local, and only use CrashPlan as the “backup of last resort”. FreeNAS provides great tools for doing backups in the form of rsync and ZFS snapshots, and I’ll be exploring this in another blog post, where I’ll set up my own “TimeMachine” of sorts to a separate LaCie 12TB USB3.0 HDD that I’ll have plugged in to the back of the NAS B-)
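
As a taste of where that’s heading, here’s a minimal sketch of the idea, assuming the photos live in a dataset called “tank/photos” and the USB archive drive is mounted at /mnt/archive (both names are assumptions):

zfs snapshot tank/photos@monthly-2015-09     # point-in-time snapshot; deleted files stay recoverable here
zfs list -t snapshot                         # list the snapshots you've accumulated
# push a copy of the current state out to the USB archive drive
rsync -av /mnt/tank/photos/ /mnt/archive/photos-2015-09/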

Updating Free VMware ESXi

Overview

VMware regularly patch their free ESXi bare metal hypervisor. If you have the free version, then you can do this from the CLI, with the help of the (also free) VMware vSphere Client.

Steps:-

  • Download patch file
  • Shut down VMs and put the Host into Maintenance mode
  • Enable SSH Server on Host
  • Copy patch file to host
  • Run the patch
  • Reboot

Download Patch File

You need to have a free VMware account to download the patch upgrade file. This is usually a zip.

To update from 6.0.0 to 6.0.0U1, the file is called “update-from-esxi6.0-6.0_update01.zip”, and you can go here to grab it:

VMware patch search

When you search, you should be able to find the file and download it. Note that these instructions should also work for future versions.

Shut Down VMs and Put the Host into Maintenance Mode

Simply shut down your VMs, then right-click on the Host and select “Enter Maintenance Mode”.

Enable SSH Server on Host

Click on your Host –> Configuration (tab) –> Software –> Security Profile –> Properties…

VMware SSH enable

In the properties, go down to “SSH”. If it is “Stopped”, then click on the service property “Options” (bottom right), and start it up. I just have it set to the default “Start and stop manually”, as I only go in there to patch ESXi, so I start it when I need it. I have found that after a reboot, it is stopped by default.

You should now be able to SSH to your host. If you cannot, go to the Firewall properties (see screenshot above) and make sure that the “SSH Server” checkbox is enabled.

Copy Patch File to Host

Use the vSphere Client to copy the patch file to the host:

ESXi data store

Don’t change directories – just dump it into the root of your datastore:

ESXi data store file xfer
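
Alternatively, since SSH is now enabled on the host, you should be able to copy the file up with scp from your workstation instead (the host address and datastore name below are just examples):

scp update-from-esxi6.0-6.0_update01.zip root@<esxi-host>:/vmfs/volumes/datastore1/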

Run the Patch

On the CLI run this:

esxcli software vib update -d /vmfs/volumes/<datastore>/<file>.zip

…where <datastore> is the name of your datastore and <file> is the patch file you uploaded. For me the datastore is datastore1, so the command is:

esxcli software vib update -d /vmfs/volumes/datastore1/update-from-esxi6.0-6.0_update01.zip

It shouldn’t take long.

Reboot

When you’re done just reboot, then take your host out of maintenance mode and start your VMs up.
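
If you’d rather stay in the SSH session, a rough sketch of the equivalent CLI steps looks like this (the GUI works just as well):

reboot
# ...once the host is back up:
esxcli system maintenanceMode set --enable false    # exit maintenance mode
vmware -vl                                           # confirm the new version/build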


Replacing a Failed Drive on a Gen8 HP Microserver Running FreeNAS

I recently had one of my FreeNAS 9.3 boxes report the following issues:

  • CRITICAL: Device: /dev/ada2, 8 Currently unreadable (pending) sectors
  • CRITICAL: Device: /dev/ada2, 8 Offline uncorrectable sectors

unreadable sectors

After a bit of reading I decided it best to replace the drive, as I don’t want to take chances since I’m only running RAID-Z1 (for the space), instead of the preferred and safer RAID-Z2.

Then I hit a problem: What physical drive is ada2?  The Gen8 HP Microserver G2020T doesn’t have drive lights to indicate which ones are active.

What I did was take a bit of a punt that the drives are numbered left to right, and it proved correct.

What you really need to do before anything else is take a screenshot of the drives’ serial numbers. Don’t rely on the drive numbers!!! The reason I say that is because once you pull out a drive, the drive numbers get remapped! When I pulled out ada2, what was previously ada3 became the new ada2! This can get confusing and cause you to pull the wrong drive and screw your data – so concentrate on the serial numbers.
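
If you prefer the shell to a screenshot, smartctl (bundled with FreeNAS) will report each drive’s serial number directly – the device name below is just an example:

smartctl -i /dev/ada2 | grep -i serial    # print the serial number of this device
camcontrol devlist                        # list all attached drives and their device names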

I don’t need to repeat the full instructions, but will link you to them here: http://doc.freenas.org/9.3/freenas_storage.html#replacing-a-failed-drive

Here’s my screenshot of my drives. Note the serial of ada2.

Pre drive replace

I then offlined ada2 and shut down FreeNAS in order to pull the drive out, as the drives in the Gen8 HP Microserver are apparently not hot-swappable (that’s something they really should address!). I booted up again to make sure that the expected drive SERIAL NUMBER had disappeared. Notice how what was ada3 is now ada2.

Post drive pull

Happy that I had pulled the right drive, I shut down again and inserted the new 4TB Seagate NAS drive. After booting up again:

New drive inserted

OK, so far so good. I then highlighted the new ada2 as per the screenshot, and clicked on “Replace”. I then had to confirm that I wanted to replace the original ada2, but I didn’t get a screenshot of that. It then went about resilvering, which is the process of recreating the data on the redundant drive.

resilvering

For me it got up to about 5% after 10 minutes, so I figured it would take somewhere between 3 and 4 hours, and kicked it off before I went to bed. The interesting thing is that the old drive’s volume ID (8482932750830730262) is still listed in the array during the resilvering process. It’s as if you could cancel the resilvering and go back to the original drive if you so wished (if you had a failed resilver, perhaps?), but I didn’t test this theory. Once resilvering is complete, this old drive/volume reference goes away.
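
You can also keep an eye on the resilver from the shell with zpool status – the pool name “tank” below is an assumption, so substitute your own volume name:

zpool status tank
# look for a line like "scan: resilver in progress" with a percentage done and a time estimate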

Hope this helps. Happy and safe NASing!


Update: 20 August 2015. There was a file in my /tmp directory called “.smartalert” which seemed to contain the source of the alert. I deleted that file and rebooted, and the alarms cleared.


Getting Tvheadend Picons to Work in Plex

Picons are handy if you are using the Plex Tvheadend channel and you want the TV station icons to show up.

I was able to pull down all the Australian TV icons online from Beyonwiz (this is for my Ubuntu Linux Tvheadend server):

apt-get install git
cd /usr/src
git clone https://bitbucket.org/beyonwiz/picons-australia.git


Then you can set the Tvheadend (TVH) server “Configuration -> General” tab to prefer picons over channel name, and set the path to file:///usr/src/picons-australia/picon

The other thing you need to do is under “Configuration -> Access Entries”, add a new entry with the following:

Enabled: tick
Username: *
Password: *
Network prefix:  The IP address of your Plex server or Kodi player, or even local subnet if you want. e.g. "10.1.0.45/32" or "10.1.0.0/24" (or "127.0.0.1/32" if Plex and TVH are on the same box).
Streaming: tick

That should be enough access to get the icons working.

When I get time I’ll see if I can feed Kodi the picons in a similar way, as I prefer this server-side method of delivering TV channel icons, rather than client-side. For now I just point Kodi to a local directory with PNG images named the same as the channel names, which seems to work fine.

The only issue with my current picon set-up for the Plex Tvheadend Channel – and it is a minor one – is that the picons get truncated on my iPhone as they aren’t square format. They do look fine on the PC though. I’m tossing up whether it’s worth my time to create square icons for the 22 stations I make use of in Melbourne, Australia.

Other than that, I’m pretty stoked with being able to get the icons/picons to display!

Installing Tvheadend on Ubuntu

Update April 2016: You don’t have to build your own Ubuntu packages any more, as they are maintained here.

I’ve chosen to install Tvheadend (TVH) on a vanilla Ubuntu Server 14.04.2 installation. Incidentally, I have Ubuntu (the 64-bit version) set up on an ESXi 6.0 host. Here’s how you can do it too:

Install Ubuntu Server 14.04.2

I recommend installing these at install time:

ssh server
samba server

Otherwise, if you don’t install them straight up, you can do this later from the CLI:

apt-get install ssh
apt-get install samba

If the timezone is somehow messed up you can:

dpkg-reconfigure tzdata

In Ubuntu, you don’t log on as root or set the root password, but rather use sudo -i, which will get you root privs.

Finalise the install:

sudo -i
rm -rf /var/lib/apt/lists/*
apt-get update
apt-get upgrade
reboot

Install Required Libraries

apt-get install build-essential git pkg-config libssl-dev bzip2 wget
apt-get install libavahi-client-dev zlib1g-dev libavcodec-dev
apt-get install libavutil-dev libavformat-dev libswscale-dev
apt-get install libcurl4-gnutls-dev liburiparser-dev
apt-get install debhelper

Install Tvheadend

Go to your building area:

cd /usr/src/

Get a snapshot of the latest TVH:

git clone https://github.com/tvheadend/tvheadend.git

This will pull the latest development code branch. At the moment this is 4.1. If you want the stable 4.0 branch, then try this instead:

git clone --branch release/4.0 https://github.com/tvheadend/tvheadend.git

Now change into the “tvheadend” directory and list the build options:

cd tvheadend/
./configure --help

Build it with hdhomerun support, and some other goodies required for transcoding:

AUTOBUILD_CONFIGURE_EXTRA=" --enable-hdhomerun_client --enable-avahi --enable-hdhomerun_static --enable-libffmpeg_static" ./Autobuild.sh -t precise-amd64

This should create a tvheadend deb package one level up, which you can now install using the distro’s install tool “dpkg”:

cd ..
# dpkg -i tvheadend_<your freshly created package>.deb
# e.g.
dpkg -i tvheadend_4.0.7-11~g398e4fe~precise_amd64.deb

Now run it:

service tvheadend start

You should be able to browse to your TVH server on port 9981. eg. http://10.69.10.42:9981/

The login is tvhadmin/tvhadmin. You can update that once you’re in. Enjoy.

Keeping TVH Up to Date

Go into the /usr/src/tvheadend directory and run git pull. After that, do a build just as you did before, and install the new deb package, which will update TVH. Simples! You should still have the previous deb package if anything mucks up – you can apt-get remove tvheadend to remove the current version, and then re-install the older one.
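
Condensed into commands, the whole update cycle looks roughly like this (the package filename will differ with each build):

cd /usr/src/tvheadend
git pull
AUTOBUILD_CONFIGURE_EXTRA=" --enable-hdhomerun_client --enable-avahi --enable-hdhomerun_static --enable-libffmpeg_static" ./Autobuild.sh -t precise-amd64
cd ..
dpkg -i tvheadend_<your new package>.deb    # filename changes with each build
service tvheadend restart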

You can keep the base system up to date with:

apt-get upgrade
apt-get autoremove


Tvheadend vs MythTV for Kodi – TVH the Clear Winner

As you can see from my previous blogs, I’ve been playing around with MythTV of late, as a backend TV server for Kodi. Even though I was successful in setting this up, I’ve hit some frustrating limitations and have decided to pull the pin on that experiment and go back to Tvheadend (TVH for short). My major annoyances are/were:

  • I couldn’t find a way to split DVB-T TV and DVB-T radio stations, the way TVH does in Kodi. With MythTV, they all appear as TV stations.
  • The ability to set channel groups seems to be lacking in MythTV.

I was looking further and further into these issues and there was some talk in some forum somewhere about being able to do these things with SQL commands in MySQL, but I figured I didn’t want to waste any more time – I’d invested far too much already! In TVH it *just works*.  Other gripes with MythTV include:

  • Since Myth is both a front-end and a back-end, there are often parts of the config that relate to the front-end that I’ll never touch, so it’s a bit confusing when you’re only using the backend. This is especially apparent with some iPhone apps I bought, where some screens of the app are for the backend server, and some for the frontend. The benefit of TVH is that it is a pure server – there is none of this backend-vs-frontend confusion!
  • It’s pretty fiddly just to get the MythTV server running. You also have to jump through a lot of hoops to get the mythweb webserver going as well. Say goodbye to a weekend!

A fairer comparison is MythBuntu vs TVH-on-Ubuntu, rather than MythTV-on-FreeBSD. I’ve tried MythBuntu, and set-up is a bit easier than all that work I did on FreeBSD, but it’s still kludgy to my mind, and it still suffers from my major gripes above.

I have found it to be actually quite straightforward to install TVH from a vanilla Ubuntu install, and to keep it up to date with Git pulls. I’ll add a blog post soon to show how this is done. My ideal setup would be for TVH to stabilise and add native HDHomeRun support for FreeBSD (the way MythTV manages to do!), and then have the FreeBSD people update the Ports collection with this stable code. Hopefully TVH will get there in the next year or two. Then I could run TVH on my FreeNAS/FreeBSD box, and shut down my separate Linux server.