Project Gibson: Building A Home Server

“Hacking the Gibson” (MGM)

I’ve always been a bit of a media hoarder. It started in the VHS era recording various programs from television, then to downloading MP3s from Napster and burning them to CDs. Of course, there was also retro game emulation, but NES, SNES, and Genesis games were measured in kilobytes; those titles could easily fit on several floppy diskettes if I needed to back anything up. At some point, the cost of hard disk storage came down enough to make consolidating my literal binders full of backup discs a practical choice: a shoebox full of USB hard drives took up less space than the equivalent binders. Eventually, my knack for collecting and repurposing second-hand hardware led me to some discarded NAS enclosures, and I dutifully filled them with those hard drives I had previously stored in the aforementioned shoebox. Of course, this ad-hoc assemblage of networked devices could only extend so far before it became a monster–there was one NAS for my music, another for video, another for backups, one for my wife’s media–and they all lived in a noisy cubbyhole just below the living room television. The COVID-19 pandemic gave me an excuse to finally hit the reset button on the whole unwieldy, dusty, noisy mess.

Like many others during the early pandemic lockdowns, I took to reconfiguring my living situation, not only as a means to occupy myself and avoid the anxieties of the outside world, but also to improve some part of my home. The collection of NAS enclosures was hard to clean; because it was hard to clean, dust accumulated, which made cooling less effective, which made the fans work harder, which made the system noisier. Switching everything to a combined enclosure seemed like the logical first move in rebuilding my media center, so I set about planning a fully-functional server that could handle at least the 10 hard drives that made up my current NAS solution and be extensible and upgradeable enough to meet any future demands. I had built stand-alone PCs before, so the theory was familiar, but I had never tried to assemble anything on this scale. I was going to need to do some homework!

Project Magnavox before the NAS takeover

The Project Magnavox HTPC that I built back in 2014 seemed like the logical starting point. My wife and I had already upgraded to an Android-powered smart TV, making the original set-top box concept obsolete (or, at least, redundant). The motherboard, processor, and memory were still more than capable of decoding 1080p video, so basic file management should be a piece of cake. This would also offset the total cost of the project, as I wouldn’t need to purchase those parts. The bulk of the cost would be sourcing a suitable enclosure: something that could house at least ten 3.5″ drives, something with good airflow for cooling, and something that doesn’t take up a lot of space. Additionally, I would need SATA Y-adapters to attach all the drives to the motherboard I already had, and I would need to find an appropriate OS that could power the whole thing without much overhead.

For the case, I settled on a 9-bay tower from Antec that already had a couple of fans installed as well as some fairly large vents for thermal management. (On a side note: I get annoyed at how all high-performance computer parts are labeled “gamer” and usually come with superfluous LED arrays or odd geometric form factors. Is it too much to ask for subtlety? Does everything need to look like a rejected prop from an early-’00s movie hacker scene?) The MSI motherboard that I pulled from Project Magnavox only had 4 SATA ports, so I picked up a couple of 4-port PCIe SATA controllers and some power splitters to connect all the drives I was about to employ. I would also need to pick up a few “last-minute” parts from the local Micro Center (which, it would turn out, was an adventure in and of itself during the early days of the COVID-19 pandemic) as well as 3D print a few adapters to fit my 3.5″ HDDs into the case’s 5.25″ drive bays.

Inside Project Gibson
Drive serial numbers have been obscured, but I would advise labeling them to make service easier.

One of the NAS enclosures I would cannibalize contained a mount for an additional 4 drives, so that also went into the case, bringing the total up to 12 drives by the time I brought the server online! However, because I am using drives from a variety of devices and vintages, the available storage would only total some 7TB. My goal is to replace drives with larger units as they wear out and grow the available storage over time. The final part I needed to install was an internal USB port: a conventional USB-A female port attached to a USB header, allowing USB devices to be placed inside a computer case. This port hosts the USB boot drive that the FreeNAS operating system is installed on, freeing all available HDD space for storage.

Once assembled, it will be time to install the operating system and begin migrating data from the stacks of USB drives that I’m using as temporary storage!

How To Install FreeNAS

I don’t really need to go into further detail about why I’m building a server, but one of my main concerns was reliability and resilience. My multi-drive NAS enclosures were all set up as RAID 0, which provided absolutely no protection against data loss but was–of course–the most space-efficient setup at the time. Wanting to improve on that, I was looking for an OS that could handle more advanced RAID setups as well as allow me to run server-side applications such as Plex, OwnCloud, and Pi-Hole as my needs evolved. After some research, I settled on FreeNAS, a FreeBSD-based OS developed by iXsystems that seemed to suit my use case and–more importantly–is mature and popular enough to have a large community support base.

Installing FreeNAS

The first thing to do, obviously, is download the installation media from the FreeNAS website. You’ll need the version for your processor architecture, so I’m grabbing the 64-bit version. Once downloaded, I’ll burn it to a USB jump drive using Balena Etcher like I usually do with Raspberry Pi images. Of course, you can use your preferred application.
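Before burning, it’s also worth verifying that the download isn’t corrupted by checking it against the SHA256 hash published on the download page. Here’s a minimal sketch of that check; the file and hash below are illustrative stand-ins (a five-byte dummy file), not the real FreeNAS values:

```shell
# Illustrative only: create a stand-in "image" whose SHA256 we know,
# then verify it the same way you would verify the real ISO against
# the hash published on the download page.
printf 'hello' > image.iso
expected="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
actual=$(sha256sum image.iso | awk '{print $1}')
if [ "$actual" = "$expected" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2
fi
```

If the hashes don’t match, re-download the image before writing it to the drive.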

Side note: I find it funny that we still refer to the process of writing a bootable image to a USB drive or SD card as “burning” even though we’re not literally burning the information onto an optical disc. It’s one of those interesting linguistic artifacts that has outlived its origins, like “hanging up” a cellular phone or “tuning in” to a streaming broadcast.

A couple of things to note before installing: The process will require a keyboard and display connected to the system. You’ll also need to connect the system to your network. For the installation process, I actually have the tower connected to the living room television since it was the most conveniently accessible HDMI monitor. I already have my SD card boot drive installed inside the case, so just pop the installation media into a free USB port, and power on the system.

FreeNAS boot screen
I’m loving that ASCII art logo

Press enter or just wait out the autoboot timer; we’ll use the “Boot Multi User” option. The next screen should present you with the main installer menu. Highlight “Install/Upgrade” and press enter.

FreeNAS installer menu
Pretty self-explanatory options

When you’re presented with a list of connected drives, you’ll want to choose the one that you set up as the dedicated OS disk. For me, this was the USB SD card reader and 32GB MicroSD card that I installed inside the case previously (which is pretty easy to find–it’s the only 32GB drive in a list of multiple-terabyte options). For you, it should be something similar: an SD card or USB drive (don’t worry about corrupting the drive; it’s pretty simple to replace the OS), and not one of the storage drives. Once you’ve confirmed your selection, the installation process will begin. This will take several minutes, so grab a cuppa while you wait. Once completed, remove the installation media and select “Reboot System” from the main menu.

Configuring Network Settings

Once rebooted, FreeNAS will present you with the console setup menu. Towards the bottom of the screen, you should see an IP address that will allow access to the FreeNAS web interface. From a separate computer, try navigating to that address in a web browser. If it connects, congratulations! You shouldn’t need any further setup and can proceed to configuring your drives. If it doesn’t connect, choose “Configure Network Interfaces” from the menu and select your chosen interface. Since I’m connected via ethernet cable, I’m setting up the eth0 connection. The following settings should work for any direct connection to a router:

Reset network configuration? n
Configure interface for DHCP? (y/n) y
Configure IPv6? (y/n) n
Saving interface configuration: Ok

At this point, reboot the server; it will automatically renew the DHCP lease and assign a working IP address. Navigate to that IP address from another computer’s web browser, and you should be presented with the FreeNAS web interface and a prompt to set up a username and password. You should also be able to reach the web interface at the URL freenas.local. Once logged in, we can start setting up storage pools and shares.

Configuring FreeNAS

When you first log into FreeNAS, you’ll be presented with a setup wizard that will walk you through the process of setting up your storage pool–the name, drives, and various options including user names, email settings (useful for receiving notifications), RAID configuration (RAIDZ2 FTW), etc. Once the basic configuration is complete, you can add disks to the pool and you’re ready to start building a library!

Specific configuration settings are really outside the scope of this article, but the FreeNAS User Guide is extremely helpful in that regard! Much of what I write here is intended to clarify what appears in the official user guide, and should be taken as a supplement to–not a replacement for–that document.

How To Replace A Failing Disk in FreeNAS (or Increase Storage Space)

My initial design for Project Gibson was a single vdev (ZFS’s term for a group of drives acting as a single device) with 6 drives in a single pool, and I would extend it as I went. When planning vdevs, they should consist of drives of similar size, as RAIDZ2 will treat every disk in the vdev as having the same capacity as the smallest drive in the vdev. For example: a vdev containing 4TB, 3TB, and 2TB drives will treat them all as 2TB; replace the 2TB drive with another 4TB drive, and each drive will still only be seen as 3TB–the size of the new smallest member. When I set up FreeNAS, I consolidated all of my data onto the smallest drives I had and created my first vdev with the 6 largest drives available.
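The sizing rule is easy to sanity-check with a little arithmetic; this sketch just restates the mixed-size example from the text (4TB, 3TB, and 2TB drives in one vdev):

```shell
# Each member of a vdev is treated as the size of its smallest disk.
# Example from the text: one vdev containing 4TB, 3TB, and 2TB drives.
set -- 4 3 2                  # drive sizes in TB
count=$#
smallest=$1
for size in "$@"; do
  if [ "$size" -lt "$smallest" ]; then smallest=$size; fi
done
echo "each drive counts as ${smallest}TB; effective raw size: $(( smallest * count ))TB"
```

Running this prints an effective raw size of 6TB–a full 3TB less than the 9TB of disk actually installed, which is why similar-sized drives matter.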

A Word About ZFS RAID Protocols

I chose RAIDZ2 as a nice balance between space availability, read-write times, redundancy, and longevity. RAIDZ2 is a double-parity scheme that distributes data in such a way that any two drives within a vdev can fail and the dataset will still survive. Having suffered data loss from accidental disk drops, and knowing that I would be storing more critical backups on this array, I opted for a little more protection at the cost of some storage space.
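To put rough numbers on that tradeoff: RAIDZ2 spends two disks’ worth of the vdev on parity, and every member counts as the smallest disk. A back-of-the-envelope sketch, with sizes loosely based on my smaller vdev (six drives limited to 1TB each); this ignores ZFS metadata and padding overhead:

```shell
# Rough usable capacity of a RAIDZ2 vdev: every disk counts as the
# smallest member, and two disks' worth of space goes to parity.
# (Back-of-the-envelope only; ignores ZFS metadata/padding overhead.)
disks=6          # disks in the vdev
smallest_tb=1    # size of the smallest disk, in TB
raw_tb=$(( disks * smallest_tb ))
usable_tb=$(( (disks - 2) * smallest_tb ))
echo "raw: ${raw_tb}TB, usable (RAIDZ2): ${usable_tb}TB"
```

So a six-disk RAIDZ2 vdev gives up a third of its raw space for the ability to survive two simultaneous drive failures.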

Once the initial vdev was set up and the data was transferred from the smaller drives, I began to assemble a second vdev from those smaller drives. My plan is to replace failed drives within the vdevs with larger-capacity drives over time until the entire system is uniform. As storage prices come down, I can begin replacing drives with even larger units as my storage needs grow.

Extending a ZFS Pool

Once my second set of 6 drives was ready, I installed them as normal, then fired the system back up. Under the Storage > Pools dialog, click the cog icon (Settings) to open the “Pool Actions” dialog, then click “Add Vdevs”. In the Pool Manager dialog, you’ll be presented with two lists: Available Disks on the left and Data Vdevs on the right. The newly installed drives will be listed under Available Disks; select each of them and click the right arrow to add them to a new vdev.

FreeNAS Pool Manager

With the new drives added to a new vdev, you can click “Add Vdevs” to add the new vdev to the existing pool. Once back on the Pools dialog, click the cog again and click “Extend Pool” to stripe the available space across the two vdevs, adding the second vdev’s available space to the total. This procedure is covered in more detail in the FreeNAS User Guide, section 9.2, but this will serve as a general guide to the process.

Replacing (Or Upgrading) Disks In A ZFS Pool

I have two use cases for replacing disks in a pool: the first is obviously replacing a failed drive, and the second is replacing a smaller drive with a larger-capacity one. Both involve the same offline-replace-online procedure, so I’ll cover them at the same time. This is where my decision to use RAIDZ2 really pays off, as the procedure puts the vdev’s parity in a weakened state by removing a drive rather than simply connecting another drive to the motherboard (I only have 12 drive connections available). With RAIDZ1, a single mishap during the process could render the data on the vdev unrecoverable; with RAIDZ2, I retain some parity during the replacement process–just in case.

To replace a disk within a pool, first navigate to the Storage > Pools dialog in the web interface, click the cog in the right corner, then click “Status”. You will be presented with a list of physical drives within the pool.

FreeNAS Pool Status dialog
FreeNAS Pool Status dialog. Yes, my pool is named “balabushka” after George Balabushka and is probably the only thing with his name that I will ever own.

Locate the disk you need to replace from the list, then click the stacked dots (Options) to the right of the line item and then “Offline”. This will take the drive offline for replacement. I usually cross-reference the drive label with the Storage > Disks dialog to get the disk’s serial number (which I have printed on the exposed side of the physical disks in the case) to make identifying the physical disk easier. Once you have offlined the drive and noted its serial, shut down the system and replace the physical drive. Boot the system back up, reconnect to the web interface, and return to the Storage > Pools > Status dialog (under the settings menu). Click the options menu for the offline drive and click “Replace”. Select the serial number of the new disk and click “Replace Disk”. The vdev will remain in a compromised state until the resilvering process completes and the data is restored in its original form on the new disk.

The smaller vdev has experienced more failed disks than the larger array, so my upgrade plan has served me well here. As disks wear out, I am replacing them with new 4TB disks (although FreeNAS only sees them as 1TB for the moment). Eventually, the smaller vdev will consist of 6x 4TB disks to match the first, and I will finally have doubled my available storage. No telling how long that will take, but I still have plenty of space to spare for now, and I’m weighing the option of adding my stack of 2.5″ USB drives as a third vdev. More on that if and when it happens.

How To Install Pi-Hole on FreeNAS

I’ve been playing around a lot with my FreeNAS installation since assembling it last year as my “Pandemic Project” (which, of course, would become the first of many), and I’m constantly looking for new things to implement. Advertising has been a thorn in my side since the early days of the internet, so it seemed only logical that I should see what all the fuss with Pi-Hole was about!

Pi-Hole is most readily installed on a Raspberry Pi, but I’m trying to consolidate as much of my infrastructure as possible, so I thought I might have a go at getting it working on the server. Unfortunately, FreeNAS is based on BSD while Pi-Hole is written for Linux (so there’s no plugin available), which means we’ll have to install it in a virtual machine.

Installing Ubuntu Server on a Virtual Machine

The first thing we’ll need, of course, is the installation media. There’s a flavor of Pi-Hole written specifically for Ubuntu, so that seems to be the logical choice! My recommendation is to install the most compact version available, and the netboot installer image allows you to pick Ubuntu Server with minimal options. It’s a little difficult to find the correct download, so just grab the URL below:

http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/current/images/netboot/mini.iso

Of course, if Bionic Beaver is outdated, just change the /bionic directory to the current version!

Back in FreeNAS, go to the Virtual Machines menu and add a new Linux VM. Give it a name that you’ll remember (“pihole” is a solid choice) and set the virtual CPU count to 1 and the memory size to 512MiB. On the Disks page, create a new AHCI disk and set its Zvol location to /data/pihole and size to 4GiB. When you get to the options for installation media, select “Upload an installer image file” and choose the mini.iso file you downloaded earlier. Once all your settings are configured, you can boot the virtual machine and install Ubuntu. The VNC option opens a virtual terminal that will allow you to connect to and interact with the virtual machine through the installation process.

If you are prompted for DNS servers, use Google’s (8.8.8.8 and 8.8.4.4) as a default for now.

When the install completes, Ubuntu will prompt you to remove the installation media and reboot. Once you are disconnected from the VNC, stop the virtual machine and remove the installation media by deleting the CDROM device from the “Devices” list under the virtual machine options.

Setting up Pi-Hole

Restart the virtual machine and connect to the VNC. Log into Ubuntu and invoke the following commands:

sudo apt update
sudo apt upgrade
sudo apt install net-tools
sudo apt install wget

The first thing we need to do is set up a static IP address for the virtual machine. Use ifconfig to find the local IP address.

In this example, the device is called ‘enp0s4’.

We will now need to change the settings for this device by editing the netplan config. Invoke the following command:
sudo nano /etc/netplan/01-netcfg.yaml

You will need to edit the file so that it looks like the image below. Pay special attention to the number of spaces in each indentation.
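For those reading without the screenshot handy, a static-IP netplan configuration for the enp0s4 device from this example would look something like the sketch below. The address and gateway are placeholders for your own network, and the Google DNS servers from earlier serve as defaults until Pi-Hole itself is running:

```yaml
# /etc/netplan/01-netcfg.yaml (sketch; substitute your own addresses)
network:
  version: 2
  renderer: networkd
  ethernets:
    enp0s4:
      dhcp4: no
      addresses:
        - 192.168.1.250/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [8.8.8.8, 8.8.4.4]
```

YAML is whitespace-sensitive, so keep the indentation consistent (two spaces per level here) or netplan will refuse to apply the file.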

Once this is complete, reboot the VM.

After rebooting and logging back into Ubuntu, install Pi-Hole using the automatic installation script, just like you normally would.

wget -O basic-install.sh https://install.pi-hole.net

sudo bash basic-install.sh

Once the script finishes, you can access the web UI by navigating to [PIHOLEIPADDRESS]/admin. Make sure to change your password!

The last thing you’ll need to do is set up your router’s DHCP settings, but that’s best explained by Pi-Hole’s own documentation.

How To Update Plex Plugin TrueNAS

Once upon a time, during the dark ages, we had to run several shell commands–like savages–to get the Plex plugin in TrueNAS (or FreeNAS, if you go back that far) to update. One had to fetch, then unpack the tarball, then move to the right directory, change ownership, and finally run the script! It was quite a pain when Plex was coming out with a new update every week (or so it seemed), and got to be more annoying than productive.

Fortunately, we don’t have to live like animals anymore because [mstinaff] wrote a nice, simple shell script to take care of all the heavy lifting! You can even set it up as a cron task to run on schedule (for when Plex decides to start issuing updates every few days again).

Let’s start by assuming that you know how to access your TrueNAS jails. On the Jails dialog, open Plex, then DO NOT CLICK “Update”. Click “Shell”. Once you’ve got your root prompt in the shell, download the updater by invoking the following command:
fetch https://raw.githubusercontent.com/mstinaff/PMS_Updater/master/PMS_Updater.sh

From here, you can just run the shell script with sh PMS_Updater.sh

Automating Plex updates on TrueNAS with cron jobs

To set up a cron job on your TrueNAS installation, navigate to the Tasks > Cron Jobs dialog. Click the “ADD” button to create a new cron job and give it a descriptive name such as “Plex update”. Then, enter the following in the “Command” field:

/usr/local/bin/iocage exec [plexjail] /bin/sh /usr/local/PMS_Updater/PMS_Updater.sh -r -a -v

Substitute the name of your Plex jail for [plexjail]. Mine is just called “plex”. The -r flag keeps your installation clean by removing older packages before installing the new one. The -a flag automatically updates to the newest version without user intervention. Finally, the -v flag runs the script in verbose mode, so you’ll have a log available just in case anything goes wrong.

Set the “Run As User” field to root, and set your preferred schedule. I run mine weekly on Sunday nights. From here, make sure your job is enabled, and click “SAVE”. Now, you shouldn’t have to perform another manual Plex update again!
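For reference, if you use the custom schedule option instead of the presets, my weekly Sunday-night run corresponds to a standard five-field crontab expression (the 23:00 start time here is just an example; pick whatever off-peak hour suits you):

```shell
# minute  hour  day-of-month  month  day-of-week
  0       23    *             *      0          # every Sunday at 23:00
```

The day-of-week field counts from 0 (Sunday), so 0 in the last position pins the job to Sunday nights.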