January 13, 2020

Adventures in home networking

I was recently given an old Intel NUC. It's nothing special: a passively cooled Celeron N2807, 4GB of memory, and modest integrated Intel graphics. I've decided to take advantage of its USB3 port and small form factor and use it as a home media server.

OS choice and justification

The rules I've set myself on my cloud server (banff) include that all services must run in Docker on a stable distro. I'm still very much a fan of the manual approach, though, so the home server - hereafter referred to as drumheller - will run Arch, with everything running directly on the host rather than in a container. One reason I went for Arch is that I've always wanted a network-shared pacman cache, so that was the first thing to set up.

The Arch Wiki explains ways to set up such a cache ranging from trivially easy to quite involved. I went for the easy option; I don't need the cache to be writable by clients (although I do want other devices to be able to push their local caches to it over SSH), so plain read-only serving will do just fine. I followed the instructions for using nginx and symlinks to serve the cache directory on the local network, giving me:

server {
    listen 8080 ssl http2;
    server_name drumheller.warhaggis.com;
    root /var/cache/pacman/pkg;
    try_files $uri $uri/;

    ssl_certificate /etc/dehydrated/certs/warhaggis/fullchain.pem;
    ssl_certificate_key /etc/dehydrated/certs/warhaggis/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_stapling on;
    ssl_stapling_verify on;
    #ssl_dhparam /etc/nginx/dhparams.pem;

    # Security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    add_header X-Frame-Options SAMEORIGIN;
    add_header X-Content-Type-Options nosniff;
    add_header X-Xss-Protection "1; mode=block" always;

    location / {
        autoindex on;
    }
}

We then add Server = https://drumheller.warhaggis.com:8080 (matching the port nginx listens on) to /etc/pacman.d/mirrorlist on each of my local machines.
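Pacman tries servers in the order they appear, so the cache entry should sit above the regular mirrors; since the cache only holds packages and not repo databases, database requests 404 and fall through to the next server. A sketch of the client-side mirrorlist (the second mirror URL is illustrative, not my actual mirror):

```
# /etc/pacman.d/mirrorlist (client) - cache first, real mirror as fallback
Server = https://drumheller.warhaggis.com:8080
Server = https://mirror.example.org/archlinux/$repo/os/$arch
```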

The very worthwhile question you may be asking here is: why the hell is that HTTPS? The answer is simply: why not? TLS is trivial with Let's Encrypt, and a simple Python hook driving the AWS Route53 API from my favourite Let's Encrypt client (dehydrated) makes acquiring valid certificates everywhere a breeze. There's truly no valid reason to have TLS on a LAN, except that my domains are all HSTS-enabled - so for them to work on the LAN without any issues, they have to be served over TLS.

Additionally, aurutils gives me a simple way to manage a local repository of AUR packages, built in a chroot using distcc.
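A local repository for aurutils-built packages is just another section in pacman.conf; a minimal sketch along the lines of the Arch Wiki's example (the repository name and path here are illustrative, not my actual values):

```
# /etc/pacman.conf - hypothetical local repository for AUR builds
[custom]
SigLevel = Optional TrustAll
Server = file:///home/custompkgs
```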

Local DNS resolution

Using a real domain name with local subdomains poses a problem: without a local DNS solution, we would need world-wide DNS entries for the local network. That's silly and completely redundant, so we're going with local DNS. Thankfully, local DNS is trendy enough to have produced the Pi-hole project. The basic idea behind Pi-hole is a local DNS resolver that sits between the local network and real DNS, maintaining local blacklists that block ads. We can also use it as our very own homebrew DNS server, serving subdomains on a real domain name without touching real DNS. Since we're using Route53 there are modest cost considerations here too: lookups against Route53 incur charges, and what's the point in asking Amazon to point us to our own local network?

Using Pi-hole as additional local DNS is very simple; we just maintain /etc/hosts on the Pi-hole and, at the router level, assign the Pi-hole as the DNS server for the whole network. The output of host drumheller.warhaggis.com while I am at home therefore becomes:

drumheller.warhaggis.com has address 192.168.1.70
drumheller.warhaggis.com has IPv6 address 2001:56a:7190:a500:feaa:14ff:fe1c:201c
drumheller.warhaggis.com is an alias for banff.warhaggis.com.

Elsewhere though:

drumheller.warhaggis.com is an alias for banff.warhaggis.com.
banff.warhaggis.com has address 138.197.170.3
banff.warhaggis.com has IPv6 address 2604:a880:cad:d0::36:9001
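The local answers come straight from the Pi-hole's hosts file; the alias for banff comes from real DNS and is simply shadowed at home. A sketch of what the Pi-hole's /etc/hosts might contain, using the addresses from the output above:

```
# /etc/hosts on the Pi-hole - local overrides for warhaggis.com hosts
192.168.1.70                             drumheller.warhaggis.com
2001:56a:7190:a500:feaa:14ff:fe1c:201c   drumheller.warhaggis.com
```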

This also means we can run scripts on local machines that detect whether we're on the home network, as I do in a pacman hook that keeps the server's pacman cache up to date with all of its clients:

[Trigger]
Type = File
Operation = Install
Operation = Upgrade
Target = *

[Action]
Description = Sending packages to drumheller...
When = PostTransaction
Exec = /usr/bin/netcache.sh

With the contents of netcache.sh being:

#!/bin/bash

if [[ $(host drumheller.warhaggis.com) = *192.168.1.70* ]]; then
    rsync -e ssh /var/cache/pacman/pkg/* root@drumheller:/var/cache/pacman/pkg
    echo ":: Completed local pacman cache sync"
else
    echo "Not at home, skipping package cache sync"
fi
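The at-home test relies on bash's glob-style pattern matching: inside [[ ]], an unquoted right-hand side containing * matches as a pattern, so the condition passes whenever the host output contains the LAN address anywhere. A minimal sketch of that mechanism, using a canned lookup string rather than a real DNS query:

```shell
# Simulated `host` output; at home, local DNS returns the LAN address.
lookup="drumheller.warhaggis.com has address 192.168.1.70"

# [[ $str = *pattern* ]] does glob matching, not literal comparison,
# so this is true whenever the address appears anywhere in the output.
if [[ $lookup = *192.168.1.70* ]]; then
    status="at home"
else
    status="away"
fi
echo "$status"
```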

Local DNS resolution and TLS also let us run a suite of web-based services within the local network without browsers complaining about them. Eventually, plain unencrypted HTTP will disappear from modern internet infrastructure, and even our local traffic will need to be encapsulated in TLS.

TLS certificate management

I always use dehydrated for Let's Encrypt certificate management. It is the simplest client out there and, in my humble opinion, follows the UNIX philosophy more closely than any other. All it does is ask for a cert and drop it in a directory; nothing else - unless we tell it to, and that's where route53.py comes in.

Originally stolen from a gist, the hook lets dehydrated drive the Route53 API via boto. Here is the hook itself:

#!/usr/bin/env python

import os
import sys
from subprocess import call
from boto.route53 import *
from time import sleep


USAGE_TEXT = "USAGE: route53.py CHALLENGE_TYPE DOMAIN TOKEN_FILENAME_IGNORED TOKEN_VALUE [DOMAIN TOKEN_FILENAME_IGNORED TOKEN_VALUE]..."


def get_zone_id(conn, domain):
    if 'HOSTED_ZONE' in os.environ:
        hosted_zone = os.environ['HOSTED_ZONE']
        if not domain.endswith(hosted_zone):
            raise Exception("Incorrect hosted zone for domain {0}".format(domain))
        zone = conn.get_hosted_zone_by_name("{0}.".format(hosted_zone))
        zone_id = zone['GetHostedZoneResponse']['HostedZone']['Id'].replace('/hostedzone/', '')
    else:
        zones = conn.get_all_hosted_zones()
        candidate_zones = []
        domain_dot = "{0}.".format(domain)
        for zone in zones['ListHostedZonesResponse']['HostedZones']:
            if domain_dot.endswith(zone['Name']):
                candidate_zones.append((domain_dot.find(zone['Name']), zone['Id'].replace('/hostedzone/', '')))

        if len(candidate_zones) == 0:
            raise Exception("Hosted zone not found for domain {0}".format(domain))

        candidate_zones.sort()
        zone_id = candidate_zones[0][1]
    return zone_id


def wait_for_dns_update(conn, response, time_elapsed=0):
    timeout = 300
    sleep_time = 5
    st = status.Status(conn, response['ChangeResourceRecordSetsResponse']['ChangeInfo'])
    while st.update() != 'INSYNC' and time_elapsed <= timeout:
        print("Waiting for DNS change to complete... ({0}; elapsed {1} seconds)".format(st, time_elapsed))
        sleep(sleep_time)
        time_elapsed += sleep_time

    if st.update() != 'INSYNC' and time_elapsed > timeout:
        raise Exception("Timed out while waiting for DNS record to be ready. Waited {0} seconds but the last status was {1}".format(time_elapsed, st))

    print("DNS change completed")
    return time_elapsed


def route53_dns(domain_challenges_dict, action):
    action = action.upper()
    assert action in ['UPSERT', 'DELETE']

    conn = connection.Route53Connection()

    responses = []
    for domain, txt_challenges in domain_challenges_dict.items():

        print("domain: {0}".format(domain))
        print("txt_challenges: {0}".format(txt_challenges))

        zone_id = get_zone_id(conn, domain)

        name = u'_acme-challenge.{0}.'.format(domain) # note u'' and trailing . are important here for the == below

        # Get existing record set, so we can add our challenges to it.
        # It's important that we add instead of override, to support dehydrated's HOOK_CHAIN="no",
        # (in which case we as the hook can't see all changes to make upfront).
        record_set = conn.get_all_rrsets(zone_id, name=name)

        record_exists = False
        existing_quoted_txt_challenges = [] # include "" quotes already; not a set because 'DELETE' may care about order
        for record in record_set:
            if record.name == name and record.type == "TXT":
                record_exists = True
                existing_quoted_txt_challenges += record.resource_records

        if action == 'UPSERT':
            needed_quoted_txt_challenges = set('"{0}"'.format(c) for c in txt_challenges)
            all_quoted_txt_challenges = set(existing_quoted_txt_challenges) | needed_quoted_txt_challenges

            change = record_set.add_change('UPSERT', name, type='TXT', ttl=60)
            for txt_challenge in all_quoted_txt_challenges:
                change.add_value(txt_challenge)
            response = record_set.commit()
            responses.append(response)

        elif action == 'DELETE':
            if record_exists:
                change = record_set.add_change('DELETE', name, type='TXT', ttl=60)
                for txt_challenge in existing_quoted_txt_challenges:
                    change.add_value(txt_challenge)
                response = record_set.commit()
                # We don't block the hook to wait for deletion to complete.
                # responses.append(response)
            else:
                print("Challenge record " + name + " is already gone!")

    if responses != []:
        print("Waiting for all responses...")
        time_elapsed = 0
        for response in responses:
            time_elapsed = wait_for_dns_update(conn, response, time_elapsed)


def deploy_hook_args_to_domain_challenge_dict(hook_args):
    assert len(hook_args) % 3 == 0, "wrong number of arguments, hook arguments must be multiple of 3; " + USAGE_TEXT
    domain_dict = {}
    for i in range(0, len(hook_args), 3):
        domain = hook_args[i]
        txt_challenge = hook_args[i+2]
        domain_dict.setdefault(domain, []).append(txt_challenge)
    return domain_dict


if __name__ == "__main__":

    assert len(sys.argv) >= 2, "wrong number of arguments, need at least 1; " + USAGE_TEXT
    hook = sys.argv[1]

    if hook == "deploy_challenge":
        hook_args = sys.argv[2:]
        domain_challenges_dict = deploy_hook_args_to_domain_challenge_dict(hook_args)
        route53_dns(domain_challenges_dict, action='upsert')
    elif hook == "clean_challenge":
        hook_args = sys.argv[2:]
        domain_challenges_dict = deploy_hook_args_to_domain_challenge_dict(hook_args)
        route53_dns(domain_challenges_dict, action='delete')
    elif hook == "startup_hook":
        print("Ignoring startup_hook")
        exit(0)
    elif hook == "exit_hook":
        print("Ignoring exit_hook")
        exit(0)
    elif hook == "deploy_cert":
        call(["systemctl", "reload", "nginx"])
        exit(0)
    elif hook == "unchanged_cert":
        print("Ignoring unchanged_cert hook")
        exit(0)

And my corresponding /etc/dehydrated/config:

BASEDIR=/etc/dehydrated
WELLKNOWN="${BASEDIR}/acme-challenges"
DOMAINS_TXT="/etc/dehydrated/domains.txt"
CHALLENGETYPE="dns-01"
CA="https://acme-v02.api.letsencrypt.org/directory"
#CA="https://acme-staging.api.letsencrypt.org/directory"
HOOK="/etc/dehydrated/hook.py"
HOOK_CHAIN="yes"

With domains.txt simply containing *.warhaggis.com > warhaggis, running dehydrated -c grabs a wildcard certificate for *.warhaggis.com and drops it into the warhaggis certificate directory. Easy-peasy.
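Renewal is then just a matter of re-running dehydrated -c on a schedule; a hypothetical systemd service and timer pair for that (unit names and schedule are my illustration, not part of the setup described above):

```
# /etc/systemd/system/dehydrated.service (hypothetical)
[Unit]
Description=Renew Let's Encrypt certificates with dehydrated

[Service]
Type=oneshot
ExecStart=/usr/bin/dehydrated -c

# /etc/systemd/system/dehydrated.timer (hypothetical)
[Unit]
Description=Run dehydrated daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```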

Media server

The media server software of choice is Jellyfin. A free-software fork of Emby, which abandoned permissive licensing, Jellyfin provides a catch-all solution for cataloguing and streaming media via well-known streaming protocols, as well as its own Netflix-like browser player. A large external hard drive caddy on the USB3 port gives us a network-accessible media drive plugged into the NUC, with Jellyfin standing watch over the media library.

I particularly enjoy Jellyfin's fine-grained control over media metadata, which lets us quickly fix any metadata issues.

Currently I am without a decent media client. The one issue with the NUC is that its video output capabilities are lacking, even compared to a Raspberry Pi. For now we're using a PS3 with a homebrew media application; it does the job, but it doesn't interact well with Jellyfin's metadata provision.

Samba

Samba is typically a pain to configure. My ultimate dream in home networking is to have an Active Directory provider running, but in the meantime network drives will just have to do.

For a public drive that is only exposed on the local network, though, the following configuration is fine:

[public]
   path = /mnt/media
   public = yes
   only guest = yes
   writable = yes
   printable = no
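On Linux clients the share can then be mounted with cifs-utils; a hypothetical fstab entry (the mount point and uid are my assumptions for illustration):

```
# /etc/fstab on a client (hypothetical) - guest mount of the public share
//drumheller.warhaggis.com/public  /mnt/media  cifs  guest,uid=1000,_netdev  0  0
```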

Where to go from here?

I definitely consider the NUC a temporary solution. It is a venerable little machine and perfectly fine for file storage and simple tasks, but in terms of storage scalability it's dead in the water: a single USB3 port isn't enough. The external drive we're using is of considerable size, though, and as long as it survives we'll be quite happy with this setup.

I'm very happy with how things are with the little network that could and I'm excited to see where we go from here.