Installing Jupyter on macOS Mojave

I recently needed to visualise some waveforms and was struggling with the extremely limited output in Swift Playgrounds. To remedy this, I installed Jupyter on my MacBook.

I opted for a local installation inside a virtualenv as opposed to modifying the system files.

$ python --version
Python 2.7.10
$ sudo easy_install pip
$ sudo python -m pip install virtualenv
$ mkdir -p ~/local/python/jupyter
$ cd !$/.. # i.e. cd ~/local/python
$ virtualenv jupyter --no-site-packages
$ source jupyter/bin/activate
$ pip install jupyter
$ jupyter notebook

When you’re finished, close out of the environment by running deactivate in the terminal.
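
When you come back to it later, re-enter the environment and start the notebook server again (a minimal sketch, assuming the paths used above):

$ source ~/local/python/jupyter/bin/activate
$ jupyter notebook
$ deactivate # once you've stopped the notebook server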

ifconfig Output on macOS Mojave

ifconfig is a venerable tool; the manpage shipped with macOS is dated 2008. If you have a recent machine, you'll discover many more interfaces listed than you would on a typical Linux box.

Here’s the simplified output from my computer.

$ ifconfig | sed -E 's/[[:space:]:].*//;/^$/d' | sort
VHC128
XHC0
XHC1
XHC20
ap1
awdl0
bridge0
en0
en1
en2
en3
en4
en5
gif0
lo0
p2p0
stf0
utun0
utun2
utun3

Let’s walk through them individually.

lo0 is the loopback interface. This is used for the machine to refer to itself.

gif0 is the generic software tunnel interface, used to tunnel IPv4 over IPv6 and vice versa.

stf0 is the 6to4 tunnel interface, used to carry IPv6 traffic over IPv4.

p2p0 is used for AirDrop.

en0 is the WiFi interface.

en1 through en4 are the Thunderbolt interfaces, first through fourth.

bridge0 is the Thunderbolt bridge, typically used for transferring files over a cable between two Macs.

awdl0 is the Apple Wireless Direct Link, used for peer-to-peer features such as AirDrop and Instant Hotspot between iOS devices and your Mac.

ap1 is probably related to the above but I can’t confirm.

en5 is the iBridge adapter for the Touch Bar.

utunN are related to the sharing of information between devices on the same iCloud account. They can also be created by any VPN interfaces you’ve added.
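
To double-check which enN device maps to which hardware port, networksetup can list the mapping (a quick, read-only check; the ports shown and the MAC address below are illustrative and will differ per machine):

$ networksetup -listallhardwareports
Hardware Port: Wi-Fi
Device: en0
Ethernet Address: aa:bb:cc:dd:ee:ff
[...]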

XHC20, XHC0, XHC1, and VHC128 are interfaces I've never seen before in ifconfig output. I assume they are related to USB controllers. Let's look at the IOUSB registry plane for more details.

$ ioreg -p IOUSB
+-o Root [...]
 +-o AppleUSBVHCIBCE Root Hub Simulation@80000000 [...]
 +-o AppleUSBXHCI Root Hub Simulation@14000000 [...]

Here we can see two virtual USB hub simulators.

On the first, a simulated Virtual USB Host Controller Interface, we can see many iBridge devices connected, including the display, ambient light sensor, camera, microphone, keyboard/trackpad, and something called DFR brightness (DFR most likely stands for Dynamic Function Row, Apple's internal name for the Touch Bar). In any case, all the built-in Apple devices are exposed via this virtual USB hub. The AppleUSBVHCIBCE root hub sits at address 0x80000000, so its high byte of 0x80 (128 in decimal) explains the VHC128 interface in the ifconfig output.

The second, an eXtensible Host Controller Interface (xHCI) for Universal Serial Bus (USB), shows any devices you have connected via the ports on your Mac. If we dig deeper into the AppleUSBHostController,

$ ioreg -w0 -rc AppleUSBHostController

[… output truncated …]

We can see one AppleIntelCNLUSBXHCI client, which shares the same PCI bus as the four AppleUSB20XHCITypeCPort clients. Furthermore, we can see two AppleUSBXHCITR clients, each of which has two ports of the AppleUSB30XHCITypeCPort class, likely the USB 3.0 controllers. Importantly, the addresses of the clients with the AppleUSBXHCITR class are XHC1@14, XHC2@00, and XHC3@01, giving us the three devices we see in the ifconfig output (XHC20, XHC0, and XHC1) if we take the high byte from the addresses. It's likely the numbers after XHC are their sequential locations on the bus.

From this we can conclude that these are merely USB debugging interfaces, for both internal and external clients.

Using ioreg to dig into the relationships between entities wasn't as easy as I'm used to on Linux with lsusb. This leads me to believe that either ioreg is deprecated and a better utility exists, or some of this information simply isn't exposed completely in publicly available tools.

In any case, knowing the source of these interfaces certainly leaves me less perturbed.

Sources

https://apple.stackexchange.com/questions/47477/unexpected-interfaces-in-ifconfig

https://github.com/RehabMan/OS-X-USB-Inject-All

https://www.intel.com/content/dam/www/public/us/en/documents/technical-specifications/extensible-host-controler-interface-usb-xhci.pdf

https://www.intel.com/content/www/us/en/support/products/65855/software/chipset-software/intel-usb-3-0-extensible-host-controller-driver.html

https://github.com/Dunedan/mbp-2016-linux/issues/71

https://forums.developer.apple.com/thread/95380

https://developer.apple.com/library/archive/documentation/DeviceDrivers/Conceptual/IOKitFundamentals/Features/Features.html

https://developer.apple.com/library/archive/documentation/DeviceDrivers/Conceptual/IOKitFundamentals/TheRegistry/TheRegistry.html

https://www.ifixit.com/Teardown/MacBook+Pro+13-Inch+Touch+Bar+2018+Teardown/111384

https://duo.com/blog/apple-imac-pro-and-secure-storage

Setting up macOS Mojave

I recently upgraded my 2011 MacBook Air to a 2018 MacBook Pro running macOS Mojave and I must say it runs splendidly. Here are a few things I install on a fresh installation to become comfortable.

Graphical apps

  • Firefox is the default browser for me on macOS, despite it always “using significant energy.”
  • Scrivener is used for authoring posts offline, which I then copy and re-format into WordPress.
  • I use Xcode extensively to write some relatively simple C programs. I also use the Command Line Tools that come with the installation, although you don’t need to install them together.
  • Screen sharing is a handy VNC client that works better than X forwarding, at least with the default installation.
  • Notes is the preferred note taking app that syncs pretty seamlessly across my phone and computer. Usually I put quick unsorted notes here and later export them to OneNote in a finalised form for reference later.
  • Microsoft Office is the de facto application suite that I’ve been using for well over a decade. It’s required for building decks, long form writing, and handling mail that needs to be shared in a business setting. If I keep the output local to me, then I’ll stick with LaTeX.
  • VLC is the default for watching videos or listening to radio streams, such as the excellent Groove Salad on Soma.FM. I’ve tried using the Radio section on iTunes but it just seems to be too buggy compared to a bespoke .pls playlist file.

Terminal apps

I use a variety of console apps – some of which aren’t available on macOS by default. Xocite.com has a pretty good tutorial for how to get started on setting up local apps on Mac.

Screen

Screen is my favourite terminal multiplexer. Here is the config I use.

# Interaction
escape ``
screen -t default 1 bash
bindkey "^[[1;5D" prev # ctrl-left
bindkey "^[[1;5C" next # ctrl-right

# Behavior
term screen-256color
startup_message off
vbell off
altscreen on
defscrollback 30000

Vim

An improved version of the Vi editor, according to the documentation. For me, it’s my primary text editor on the console. The configuration below goes on every user account. As you can tell, I like my tabs two characters wide, with spaces. I don’t use any plugins.

filetype on
filetype plugin on
filetype plugin indent on

syntax enable
set ttyfast

set tabstop=2
set shiftwidth=2
set softtabstop=2
set smarttab
set expandtab
set autoindent
set smartindent
set cursorline
set nobackup
set nowritebackup
set nocompatible
set noswapfile
set backspace=indent,eol,start

set secure
set enc=utf-8
set fenc=utf-8
set termencoding=utf-8

LaTeX

I use the excellent LaTeX for authoring my résumé and for the diagrams in my blog posts. I primarily use MacTeX for now but would like to switch to something that doesn't need root privileges: essentially just a package I can extract into any directory and start using.

Keyboard shortcuts

Yes, believe it or not, knowing your keyboard shortcuts goes a long way on macOS, especially when the laptop is docked. I primarily use the built-in window management shortcuts.

  • ⌃↑: Mission Control, also bound to top mouse button
  • ⌃↓: Application window, also bound to bottom side mouse button
  • ⌃←: Move left a space
  • ⌃→: Move right a space

Setting up a Docker Pi-hole DNS server for wired and wireless clients

Pi-hole is a DNS resolver that prevents the resolution of common ad-hosting networks. I have a server in my household that I wanted to run as a Pi-hole server for both Ethernet and wireless clients. Here's how I did it. Keep in mind that when changing the network configuration it's wise to work incrementally and test each step, so a mistake doesn't leave you unable to troubleshoot. In addition, Pi-hole was originally designed to be the only thing installed on a Raspberry Pi, so to make the configuration less invasive on my existing system I'll be using the official Docker container. For a much simpler installation, go ahead and run the curl | bash command on their home page.

Network topology

You’ll need to get a good idea of your current network topology before continuing. In my case, I wanted to let this be opt-in for other clients on the network because I didn’t want to cache other people’s DNS requests. This means I wouldn’t alter the DNS settings on the router.
First, I mapped out my current network topology. This is pretty easy to do if you just trace the cables in the house. Your setup will probably resemble mine:

  • WAN from your internet provider connects to a DOCSIS modem.
    • This modem provides WiFi (normally 802.11ac) to your IoT devices, mobile phones, and other connected devices.
    • It may also be connected to a wireless repeater to resolve deadspots in the house.
    • It also provides wired Ethernet.
  • This wired Ethernet may be connected to a switch to reduce cables across the home.
  • It may optionally have telephone ports for VoIP.

A simpler home setup might only have wireless clients.

My configuration mirrors the above and my server is connected to the switch mentioned. Next step is to look at the current configuration according to your devices. You’ll need to gather the interface settings for your router and your server.

In my case,

  • Router
    • Connected to: WAN from internet provider
    • IP address: 192.168.0.1
    • DHCP range: 192.168.0.2 to 192.168.0.254, subnet mask: 255.255.255.0
    • Upstream DNS servers provided: 194.168.4.100 and 194.168.8.100
  • Server
    • Connected to: switch, which is connected to modem
    • IP address (Ethernet): 192.168.0.2
    • IP address (Wireless): not configured
    • DHCP settings: same as router

With this in mind, we want to configure the server to act as a wireless hotspot for the Ethernet connection while also providing DNS for both wireless and wired clients. Fortunately, this is pretty simple to do once you know which apps and files are needed. This guide uses Debian 9 (Stretch) and NetworkManager.

First, we'll configure the wireless access point and make sure clients can connect. Look at your current configuration:

$ nmcli
eno1: connected to Wired connection 1
"Intel Ethernet Connection I217-LM"
ethernet (e1000e), AA:BB:CC:DD:EE:FF, hw, mtu 1500
ip4 default
inet4 192.168.0.2/24
route4 169.254.0.0/16

wlp3s0: disconnected
"Intel Wireless"
wifi (iwlwifi), AA:BB:CC:DD:EE:FF, hw

lo: unmanaged
loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536

DNS configuration:
servers: 194.168.4.100 194.168.8.100
interface: eno1

Next, create a wireless hotspot, confirm you can connect, and then delete it.

$ sudo nmcli --show-secrets dev wifi hotspot
Hotspot password: xMNUYLGH
Device 'wlp3s0' successfully activated with '95f843c0-18b4-4133-a27f-9d3eb12be8e7'.
[.. connect to the device ..]
$ sudo nmcli connection down uuid 95f843c0-18b4-4133-a27f-9d3eb12be8e7
$ sudo nmcli connection delete uuid 95f843c0-18b4-4133-a27f-9d3eb12be8e7

Now that we’re certain we can create a hotspot we can configure it to our preferences.
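
For reference, nmcli also accepts an explicit interface, SSID, and password, so the hotspot doesn't have to use generated values (the SSID and password below are placeholders):

$ sudo nmcli dev wifi hotspot ifname wlp3s0 ssid MyHotspot password 'a-strong-passphrase'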

Pi-hole with Docker

Installing Docker is relatively simple. We'll install the packages needed to access their repository over HTTPS and then install the Community Edition of Docker.

$ sudo apt install gnupg2 curl ca-certificates apt-transport-https software-properties-common

Install their GPG key. You can verify the fingerprint by comparing the output from the below command with their official documentation [link]. Last time I checked, the fingerprint’s last 8 characters were: 0x0EBFCD88.

$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

Next, enable the stable repository for this release. In my case I’m using Debian Stretch.

$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/debian \
$(lsb_release -cs) \
stable"

Finally, download Docker.

$ sudo apt update
$ sudo apt install docker-ce

Confirm that it works.

$ sudo docker run hello-world

If this works, add yourself to the Docker group and log out and then log in.

$ sudo usermod -aG docker `whoami`

Now we can launch the Pi-hole Docker container and configure it to act as a DNS server. We’ll use the following configuration settings.

  • Published ports: 53 (TCP and UDP) for DNS, 80 for the web interface, and 443 so blocked HTTPS ad requests have somewhere to terminate
  • Upstream DNS from Cloudflare: 1.1.1.1
  • Environment variables
    • ServerIP=192.168.0.2; the IP of the server on the local network

$ docker pull pihole/pihole
$ mkdir -p ~/local/docker/pihole/pihole/etc/{pihole,dnsmasq.d}
$ docker run \
--name pihole \
-p 80:80 \
-p 53:53/tcp \
-p 53:53/udp \
-p 443:443/tcp \
-p 443:443/udp \
-v ~/local/docker/pihole/pihole/etc/pihole:/etc/pihole \
-v ~/local/docker/pihole/pihole/etc/dnsmasq.d:/etc/dnsmasq.d \
--dns=127.0.0.1 \
--dns=1.1.1.1 \
-e ServerIP=192.168.0.2 \
-e IPv6=False \
-e DNS1=194.168.4.100 \
-e DNS2=194.168.8.100 \
-e WEBPASSWORD=password \
pihole/pihole:latest

If you get some sort of error such as “Couldn’t bind to :80 because already in use”, correct the error, delete the container, and try again.

$ sudo systemctl stop apache2
$ sudo systemctl disable apache2
$ docker container list -a
$ docker container rm <container>

Now finally, connect to your container by navigating to http://<server_ip> on a different computer.

You can also check that your container has network access by:

$ docker container exec pihole ping www.google.com

Now that the Docker container is up and running, go ahead and change the DNS settings on your wired clients to point at the server's IP address.
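
On a Linux client managed by NetworkManager, that change might look like this (the connection name is the one nmcli showed earlier; adjust to your own):

$ nmcli con mod "Wired connection 1" ipv4.dns 192.168.0.2 ipv4.ignore-auto-dns yes
$ nmcli con up "Wired connection 1"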

For wireless clients, we’ll go ahead and configure the hotspot again, this time setting the DNS to use our server. Notice that due to installing Docker our networking configuration has changed.

$ sudo nmcli
docker0: connected to docker0
bridge, 02:42:FB:FA:35:DE, sw, mtu 1500
inet4 172.17.0.1/16
inet6 fe80::42:fbff:fefa:35de/64

veth9259d68: unmanaged
ethernet (veth), 72:FD:6C:AD:CE:D9, sw, mtu 1500

DNS configuration:
servers: 194.168.4.100 194.168.8.100
interface: eno1

Now we have two more interfaces: docker0 and veth9259d68. Unfortunately, on my end when I create the hotspot, clients aren’t issued an IP address. Let’s debug NetworkManager and see what routes are being created.

Create the hotspot with nmcli

$ sudo nmcli --show-secrets dev wifi hotspot

Now, we’ll use the lower level networking tools to see what’s happening.

$ ip r
default via 192.168.0.1 dev eno1 proto static metric 100
10.42.0.0/24 dev wlp3s0 proto kernel scope link src 10.42.0.1 metric 600
169.254.0.0/16 dev eno1 scope link metric 1000
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1
192.168.0.0/24 dev eno1 proto kernel scope link src 192.168.0.2 metric 100

Next, let’s look at the configuration file NetworkManager creates for the hotspot.

$ cat /etc/NetworkManager/system-connections/Hotspot
[connection]
id=Hotspot
uuid=2473d7a3-4e0f-40d9-b239-72e52c6fad63
type=wifi
autoconnect=false
permissions=

[wifi]
hidden=true
mac-address=AC:FD:CE:87:84:D0
mac-address-blacklist=
mode=ap
ssid=Hotspot-luv

[wifi-security]
group=ccmp;
key-mgmt=wpa-psk
pairwise=ccmp;
proto=rsn;
psk=ZoKpIEU4

[ipv4]
dns-search=
method=shared

[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=ignore

Here, the culprit is the [ipv4] method=shared line. In the nm-setting-ip4-config.c file, we can see the following description for this setting.

* NetworkManager supports 5 values for the #NMSettingIPConfig:method property
* for IPv4. If “auto” is specified then the appropriate automatic method
* (DHCP, PPP, etc) is used for the interface and most other properties can be
* left unset. If “link-local” is specified, then a link-local address in the
* 169.254/16 range will be assigned to the interface. If “manual” is
* specified, static IP addressing is used and at least one IP address must be
* given in the “addresses” property. If “shared” is specified (indicating that
* this connection will provide network access to other computers) then the
* interface is assigned an address in the 10.42.x.1/24 range and a DHCP and
* forwarding DNS server are started, and the interface is NAT-ed to the current
* default network connection. “disabled” means IPv4 will not be used on this
* connection.

So from this description, it seems like the problem is the DHCP and forwarding DNS server aren’t starting correctly. Let’s look at the NetworkManager logs and see if anything is awry. We’ll also stop the Pi-hole container to avoid any other issues.

$ docker stop pihole
$ sudo journalctl -u NetworkManager --since "1 hour ago"

Walking through the logs is quite enlightening. (1) We see that NetworkManager creates iptables entries for the interface, including rules to forward the DNS and DHCP ports to the local dnsmasq instance. (2) We see that dnsmasq-manager failed to create a listening socket because the address is already in use by the Docker container.

Now – before rushing ahead and trying to fix this, it’s important to restate what we’re trying to accomplish here. Approaching the problem with the mindset of “how do I fix this” is wrong and will lead you down a DuckDuckGo / StackOverflow rabbit hole. In this scenario, we’re trying to issue an IP address to clients on the wlp3s0 interface. In addition, we want these clients to use the server as the DNS server so their DNS requests go through the Pi-hole Docker container.

Modify the default settings for shared IP interfaces.

$ sudo vim /etc/NetworkManager/dnsmasq-shared.d/default.conf
# Disable local DNS server
port=0

# Use Pi-hole for DNS requests
dhcp-option=option:dns-server,192.168.0.2,194.168.4.100

Now try restarting the docker container and the wireless hotspot. Check the log for errors.

$ docker start pihole
$ sudo nmcli --show-secrets dev wifi hotspot
$ sudo journalctl --since "1 minute ago" -u NetworkManager

No errors should be seen. Connect via your wireless device and confirm that new blocked entries appear on the Pi-hole dashboard by going to your server's IP address.
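
You can also check resolution directly from any client with dig (part of dnsutils; the domain below is just an example that is typically on the default blocklists). A blocked domain should come back as 0.0.0.0 or the Pi-hole's own address, depending on the blocking mode:

$ dig @192.168.0.2 doubleclick.net +short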

So, in summary, we set up Pi-hole in Docker on Debian Stretch to block common ad-hosting networks for both wired and wireless clients on our home network. For me, this was a good test scenario to become more familiar with Docker.

Overall, I think that host-based ad-blocking won't be effective much longer as more and more content gets bundled with ads behind content delivery networks. The best practice regarding ads, in my opinion, is to only visit sites with acceptable ad practices. This means no pop-overs/pop-unders or stealing focus, and no tracking you incessantly across the web. I suspect that ad-blocking has moved, and will continue to move, client-side. A simple way to avoid the most nefarious ads is to use Mozilla's Multi-Account Containers extension, which lets you separate your online life into separate containers.

Sources

https://wireless.wiki.kernel.org/en/users/Documentation/rfkill

https://unix.stackexchange.com/questions/234552/create-wireless-access-point-and-share-internet-connection-with-nmcli

https://docs.docker.com/install/linux/docker-ce/debian/#set-up-the-repository

https://addons.mozilla.org/en-US/firefox/addon/multi-account-containers/

https://gitlab.freedesktop.org/NetworkManager/NetworkManager/tags/1.6.2

https://github.com/jwilder/nginx-proxy

Demoscene

I’ve been wanting to play around with some graphics work for a while now and while I’ve used Blender for a few renders, I’ve never sat down and set up a programming environment on my computer. What follows is a short tutorial on how to get started in OpenGL on Windows — but still using the Linux conventions that I’m familiar with.

The demoscene is something that’s fascinated me for years. If you haven’t heard of it, it’s the art of making a computer program (usually size constrained) that produces outstanding visual effects synced with music. There’s a wide variety of target platforms including Windows, Linux, MS-DOS, and even the old Amiga!

I’m surprised to see that there are still regular competitions being held around the world.

Here are some of my favourites:

  • fr-041: debris (YouTube): Very impressive cityscape
  • luma – mercury (YouTube): Stunning light effects
  • H – Immersion – Ctrl-Alt-Test (YouTube): Very believable underwater adventure

Running a Bitcoin node

Setting up a Bitcoin node can be a bit daunting, especially considering the amount of disk space required and the need for the node to be always connected. However, once configured, maintenance can be relatively hands-off. For more information about the minimum requirements please see here.

This tutorial is split into two stages: one, configuring the server itself to be relatively secure and resilient against basic attacks; and two, configuring the Bitcoin daemon on the server.

Stage one: securing the server

Let’s get the system up to date and then configure the stateful firewall.

# yum upgrade
# yum install vim iptables-services

And we’ll move SSH to a different port so we can reduce the number of login attempts considerably. As this is CentOS, SELinux will need to be informed of the change to allow the SSH daemon to bind to the new port.

# vim /etc/ssh/sshd_config
Set Port to 1234 or something non-standard
# semanage port -a -t ssh_port_t -p tcp 1234 
# systemctl reload sshd

And log back in using the new port to take a look at the network interfaces.

[user@local] $ ssh root@bitcoin -p 1234
$ ip addr 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 92:53:fb:96:86:27 brd ff:ff:ff:ff:ff:ff
    inet 128.199.93.101/18 brd 128.199.127.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 10.15.0.5/16 brd 10.15.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 2400:6180:0:d0::1f6:2001/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::9053:fbff:fe96:8627/64 scope link
       valid_lft forever preferred_lft forever

We can see there are two network interfaces: lo, the loopback interface, and eth0, the Internet-facing interface. The loopback (lo) is assigned the address 127.0.0.1/8 (IPv4) and ::1/128 (IPv6). The Ethernet interface (eth0) has four addresses: the public and private IPv4 addresses, followed by the global and link-local IPv6 addresses.

We won't be needing any networking within a private LAN, so we'll remove the internal addresses from the interface.

# ip addr del 10.15.0.5/16 dev eth0
# ip addr del fe80::9053:fbff:fe96:8627/64 dev eth0

Next we’ll enable a simple stateful firewall to prevent errant access to the box. Copy this to the root directory and use `iptables-restore < iptables` to use it. Make sure you set the correct SSH port as you’ll be needing it to log into the box.

 # iptables IPv4 simple config (bitcoin node)
 # v0.0.1
 # use at your own risk
 *filter
 # 1. Basics, loopback communication, ICMP packets, established connections
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p icmp --icmp-type any -j ACCEPT
 -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 # 2. Ensuring connections made are valid (syn checks, fragments, xmas, and null packets)
 -A INPUT -p tcp ! --syn -m state --state NEW -j DROP
 -A INPUT -f -j DROP
 -A INPUT -p tcp --tcp-flags ALL ALL -j DROP
 -A INPUT -p tcp --tcp-flags ALL NONE -j DROP
 # 3. Connections for various services, including SSH and Bitcoin
 # (5555 below stands in for the SSH port; change it to whatever you set in sshd_config, e.g. 1234)
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 5555 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 8333 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW --dport 18333 -j ACCEPT
 #4. Log problems and set default policies for anything else
 -A INPUT -j LOG --log-level 7 --log-prefix "iptables dropped: "
 -P OUTPUT ACCEPT
 -P FORWARD DROP
 -P INPUT DROP
 COMMIT

Once loaded, make sure the iptables service starts on every boot.

 # yum install iptables-services
 # systemctl start iptables
 # systemctl enable iptables
 # iptables-restore < iptables
 # iptables -L

You should now see the policies enabled. Let’s do the same for IPv6.

 *filter
 :INPUT DROP [0:0]
 :FORWARD DROP [0:0]
 :OUTPUT ACCEPT [0:0]
 -A INPUT -i lo -j ACCEPT
 -A INPUT -p ipv6-icmp -j ACCEPT
 -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
 -A INPUT -d fe80::/64 -p udp -m udp --dport 546 -m state --state NEW -j ACCEPT
 -A INPUT -p tcp -m tcp ! --tcp-flags FIN,SYN,RST,ACK SYN -m state --state NEW -j DROP
 -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,SYN,RST,PSH,ACK,URG -j DROP
 -A INPUT -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 5555 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 8333 -j ACCEPT
 -A INPUT -p tcp -m conntrack --ctstate NEW -m tcp --dport 18333 -j ACCEPT
 -A INPUT -j LOG --log-prefix "ip6tables dropped: " --log-level 7
 -A INPUT -j REJECT --reject-with icmp6-adm-prohibited
 -A FORWARD -j REJECT --reject-with icmp6-adm-prohibited
 COMMIT

Good so far. Let’s make these the default rules.

# iptables-save > /etc/sysconfig/iptables
# ip6tables-save > /etc/sysconfig/ip6tables

Stage two: configuring the Bitcoin node

Now, let’s get started with configuring the Bitcoin node. Begin by creating a local user account you’ll use to manage the service from now on.

 # adduser user
 # passwd user
 # gpasswd -a user wheel
 # visudo  # check that the wheel group is enabled on CentOS

Log in as the user, then download and verify Bitcoin.

$ curl -O https://bitcoin.org/bin/bitcoin-core-0.15.1/bitcoin-0.15.1-x86_64-linux-gnu.tar.gz
$ curl -O https://bitcoin.org/laanwj-releases.asc
$ curl -O https://bitcoin.org/bin/bitcoin-core-0.15.1/SHA256SUMS.asc
$ gpg --quiet --with-fingerprint laanwj-releases.asc
$ gpg --import laanwj-releases.asc
$ gpg --verify SHA256SUMS.asc
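
Assuming the signature checks out, verify that the tarball's hash matches the signed list (one way to do it; the grep simply pulls out the line for our file, and you should see an OK):

$ grep bitcoin-0.15.1-x86_64-linux-gnu.tar.gz SHA256SUMS.asc | sha256sum -c -
bitcoin-0.15.1-x86_64-linux-gnu.tar.gz: OK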

The blockchain will be stored on an attached 250GB storage drive. We'll format it, mount it, and configure it for hosting the blockchain. Additionally, we'll add it to fstab so it is mounted at boot.

$ sudo mkfs.ext4 -F /dev/disk/by-id/scsi-01
$ sudo mkdir -p /mnt/xbt-blockchain
$ sudo mount /dev/disk/by-id/scsi-01 /mnt/xbt-blockchain
$ sudo chown user:user /mnt/xbt-blockchain
$ echo '/dev/disk/by-id/scsi-01 /mnt/xbt-blockchain ext4 defaults 0 0' | sudo tee -a /etc/fstab

Next, we'll configure bitcoin.conf to start the daemon on testnet first.

 $ tar xf bitcoin-0.15.1-x86_64-linux-gnu.tar.gz -C ~/
 $ touch /mnt/xbt-blockchain/bitcoin.conf
 $ vim /mnt/xbt-blockchain/bitcoin.conf

 # bitcoin.conf
 # v0.0.1
 # Use at your own risk
 listen=1
 server=1
 rpcport=8332
 rpcallowip=127.0.0.1
 listenonion=0
 maxconnections=16
 datadir=/mnt/xbt-blockchain
 testnet=1
 disablewallet=1
 # if low on memory
 dbcache=20
 maxmempool=300

Let’s test the configuration.

$ ~/bitcoin-0.15.1/bin/bitcoind -datadir=/mnt/xbt-blockchain &
$ ~/bitcoin-0.15.1/bin/bitcoin-cli -datadir=/mnt/xbt-blockchain uptime

Everything should be looking good at this point. Now, let's enable the daemon to connect to mainnet: change testnet=1 to testnet=0 in the bitcoin.conf file and restart the daemon.

Congratulations — you’ve configured a full node. It will take a while to sync.
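
To keep an eye on sync progress, you can query the getblockchaininfo RPC; the verificationprogress field approaches 1.0 as the node catches up (a quick check using the paths from above):

$ ~/bitcoin-0.15.1/bin/bitcoin-cli -datadir=/mnt/xbt-blockchain getblockchaininfo | grep -E 'blocks|verificationprogress'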

Hosting my own blog

Currently, my blog is served through WordPress. Visitors either enter the site through the redirect at http://www.antonyjepson.com or by web searches. While WordPress is a simple way to maintain a blog, at times I would like to have a bit more control over my content in the event WordPress disappears or my credentials are hacked.

What follows is an investigation into self-hosting alternatives.

My baseline requirements were as follows:

  • Hosted in the EU
  • Easy off-site back up
  • Embedding of images permitted
  • At least 32GB of space for content
  • Thorough documentation of the service used
  • Basic analytics
  • Light-weight and fast loading pages
  • Reduced susceptibility to DDOS attacks
  • $10.00 / mo maximum base price
  • Supported in mobile / desktop web

My stretch requirements were:

  • Replicated across the 7 continents (in the event a post becomes popular)
  • Moderated comments
  • A/B testing on post content
  • Encrypted access via SSL

Starting with the hosting infrastructure, I considered various options.

Hosting

First was Amazon Elastic Compute Cloud, which I am very familiar with and have used for years. Referencing the EC2 instance comparison chart for the Ireland region, a small T2 instance came to $166.44 annually when reserved upfront for the year. It didn't come with storage, so a 36GB general purpose EBS volume for one year comes to $0.11 / GB-month * 12 months * 36GB = $47.52 annually. Additionally, I would need to create a snapshot every two weeks with 2 months of cumulative backups, which would cost at worst $0.05 * 8 backups * 12 months * 36GB = $172.80 annually. Network I/O for Amazon is sufficiently cheap (metered at the 10 TB / month scale) so that will not be included in the calculation. The final cost comes to $387 annually = $32 / mo. If we reduced the scale of backups to, say, twice a month, the final cost would come to $31 / mo. Clearly, this is quite a bit above my target of $10.00 / mo.
Next, I considered hosting it on Windows Azure. Looking at the virtual machine categories, the A1 instance seemed sufficient, costing $20.83 monthly. This tier comes with two disks, a 20GB operating system disk and a 70GB temporary storage disk. Evidently, the temporary storage disk would not be the best location for the blog in the event of termination or other failure.
While Windows Azure seemed tempting, I wanted to drop the price even further. The next choice was DigitalOcean, well known for their 'droplets'. Unfortunately, the full pricing scale was behind a sign-in page. On the public-facing side, $10 monthly would secure a droplet (instance) with 1GB memory, 1 core processor (very vague), 30GB SSD, and 1TB transfer. While this certainly seemed like the best option, I wanted to make sure I evaluated 5 options.
The fourth alternative considered was Google Cloud Platform. Costs can be reduced by using a custom machine type; in this case, I would opt for a 2GB ($0.00361 / GB-hr) instance with 2 vCPUs ($0.02689 / vCPU-hr), bringing the total to 744 hr * [(2 vCPU * $0.02689 / vCPU-hr) + (2 GB * $0.00361 / GB-hr)] = $45.38 / mo.
A great alternative is actually using a static site generator and hosting the website on Amazon S3. This means there are no security updates to worry about. Unfortunately, this would require me to run the site generator locally on my computer and back up my blog manually. The cost for Amazon S3 in Europe is $0.0300 / GB-month, so 32GB would run me $0.96 / mo. GET requests are charged at $0.005 per 1,000. A scenario I used to evaluate the price: if one of my posts went viral and got 30,000 viewers in one day, with each page weighing about 28kB and made up of 3 objects, the GET requests would cost (3 objects * 30,000 views * $0.005 / 1,000 requests) = $0.45. Not bad!
The final option investigated was GitHub Pages. This lets you host a website directly from your public GitHub repository. I am also familiar with GitHub. While it is free, this does not let me select the region for hosting the page. Therefore, this was not a valid option.
After considering all the options, I decided to move forward with static hosting on Amazon S3, with a backup at http://antonyjepson.wordpress.com in the event it went down or, worst case, I could no longer pay.
Now, let us look at the blogging platform choices.

Blogging platform

As much as WordPress gets a bad rap for not being a light-weight place to host content, it has millions of monthly active users. Transitioning to the new hosting engine, I wanted it to be simple, light-weight (so it could run on a cheap albeit underpowered virtual machine), and relatively secure. Furthermore, I wanted the resulting content to be performant on both mobile and desktop, with mobile being the primary form factor. Finally, as a stretch goal, commenting would be great and would attract recurring visitors to my site.
I first considered Jekyll as the site generator. At a high level, it takes a text file and processes it into a webpage. While I have a more than adequate understanding of HTML and CSS, having to deal with their finer points would definitely detract from writing quality content. Jekyll lets me write posts in a blog-optimised markup language like Textile, and internal references are updated each time the content is converted and published to the web.
The second alternative was Hugo. Hugo specialises in partial 'compilation', compared to the monolithic compilation offered by Jekyll. While this might be great if I had thousands of pages in my blog, I anticipate it growing to the low hundreds, so I don't think it makes sense to deviate from the most supported option.
Based on the above, I opted to go for Jekyll.
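
For the curious, getting a Jekyll site running locally and pushing it to S3 is only a handful of commands (a rough sketch; the bucket name is a placeholder and it assumes the AWS CLI is already configured):

$ gem install bundler jekyll
$ jekyll new blog && cd blog
$ bundle install
$ bundle exec jekyll serve    # preview at http://localhost:4000
$ bundle exec jekyll build    # writes the static site to _site/
$ aws s3 sync _site/ s3://example-blog-bucket --delete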

Moving forward

This is not a simple process and some additional voodoo will likely be required to enable SSL support and commenting support (likely with Disqus). Expect changes on http://www.antonyjepson.com over the coming weeks.

Creating a video montage with ffmpeg

With 1080p (and in some cases 2K) cameras now being standard on mobile phones, it’s easier than ever to create high quality video. Granted, the lack of quality free video editors on Windows / Linux leaves something to be desired.
I played with Blender VSE (Video Sequence Editor) to try and create a montage of my most recent motorcycle rides but the interface was non-intuitive and had a rather high learning curve.
So, I turned to the venerable ffmpeg to create my video montage.

Selecting the source content

Before jumping to the command line, you will need to gather the list of clips you want to join and have a basic idea of what you want to achieve. Using your favourite video player (VideoLAN Player, in my case), play through your captured videos and find the timeframe for trimming.
For the purposes of this tutorial, let’s assume this is my game plan:

Video effect 1: fade in from black
Audio track 1: filename "audio.mp3"
Video clip 1: filename "getting_ready.mov"; length 03:30 [mm:ss]; trim start 01:30; trim end 02:15
Text overlay 1: text "Touch Sensitive - Pizza Guy"; background partially transparent; font Arial; position lower left
Video effect 2: Cross fade
Video clip 2, filename "riding_fast.mov", length 00:50 [mm:ss], trim start 00:15, trim end 00:50
Video effect 3: Cross fade
Video clip 3, filename "going_home.mov", length 02:00 [mm:ss], trim start 00:45, trim end 01:55


Understanding ffmpeg

The ffmpeg documentation is extensive and well written. I highly recommend you spend some time familiarising yourself with the video filter section.
Let’s begin by understanding the file formats of our videos. For this tutorial, since they are all recorded by the same camera they will all share the same video / audio codecs and container.

$ ffmpeg -i getting_ready.mov
Guessed Channel Layout for Input Stream #0.1 : mono
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '.\getting_ready.mov':
  Metadata:
    major_brand     : qt
    minor_version   : 0
    compatible_brands: qt
    creation_time   : 2016-01-01 00:34:11
    original_format : NVT-IM
    original_format-eng: NVT-IM
    comment         : CarDV-TURNKEY
    comment-eng     : CarDV-TURNKEY
  Duration: 00:03:30.47, start: 0.000000, bitrate: 16150 kb/s
    Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p, 1920x1080, 14965 kb/s, 30 fps, 30 tbr, 30k tbn, 60k tbc (default)
    Metadata:
      creation_time   : 2016-01-01 00:34:11
      handler_name    : DataHandler
      encoder         : h264
    Stream #0:1(eng): Audio: pcm_s16le (sowt / 0x74776F73), 32000 Hz, 1 channels, s16, 512 kb/s (default)
    Metadata:
      creation_time   : 2016-01-01 00:34:11
      handler_name    : DataHandler

Important items to note

  • In “Stream #0:0” information, the video encoding is H.264. We will keep this codec.
  • In “Stream #0:1” information, we can see that the audio is raw audio (16 bits per sample, little endian). We will be converting this to AAC in the output.
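
If you only care about the codecs, ffprobe (bundled with ffmpeg) gives a terser view; the example below just lists each stream's type and codec:

$ ffprobe -v error -show_entries stream=index,codec_type,codec_name -of compact getting_ready.mov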

Trimming the clips

We will begin the effects by trimming the portions we need. As we will be adding effects later, we’ll leave 2 seconds on either side of the trim.

$ ffmpeg -i ./getting_ready.mov -ss 00:01:30.0 -c copy -t 00:00:47.0 ./output-1.mov
$ ffmpeg -i ./riding_fast.mov -ss 00:00:13.0 -c copy -t 00:00:39.0 ./output-2.mov
$ ffmpeg -i ./going_home.mov -ss 00:00:43.0 -c copy -t 00:01:12.0 ./output-3.mov

Applying effects

Note: if you want to speed up processing time while you get the hang of this, you can scale the videos down and then apply the effects to the full-size videos once you're satisfied with the output.

$ ffmpeg -i ./output-1.mov -vf scale=320:-1 ./output-1s.mov

Here, -1 on the scale filter means the height is determined from the aspect ratio of the input file.
First, I’ll show how to add the effects individually (at a potential loss of quality). Then we will follow up by chaining the filters together.
Let us apply the fade in/outs to the videos.

$ ffmpeg -i ./output-1s.mov -vf fade=t=out:st=45.0:d=2.0 ./output-1sf.mov
$ ffmpeg -i ./output-2s.mov -vf 'fade=in:st=0.0:d=2.0, fade=t=out:st=37.0:d=2.0' ./output-2sf.mov
$ ffmpeg -i ./output-3s.mov -vf fade=in:st=0.0:d=2.0 ./output-3sf.mov

Unfortunately, as H.264 does not support alpha transparency, we will need to use the filtergraph to let us apply alpha (for the fading) to the stream before outputting to the final video. First, let’s rebuild the above command as a filter graph.

$ ffmpeg -i ./output-1s.mov -i ./output-2s.mov -i ./output-3s.mov -filter_complex '[0:v]fade=t=out:st=45.0:d=2.0[out1];[1:v]fade=in:st=0.0:d=2.0, fade=t=out:st=37.0:d=2.0[out2];[2:v]fade=in:st=0.0:d=2.0[out3]' -map '[out1]' ./output-1sf.mov -map '[out2]' ./output-2sf.mov -map '[out3]' ./output-3sf.mov

This uses the filter_complex option to enable a filtergraph. First, we list the inputs. Each input is handled in order and can be accessed via the [n:v] operator, where 'n' is the input number (starting from 0) and 'v' means access the video stream. As you can tell, the audio was not copied from the input streams in this command. A semicolon separates parallel operations and a comma separates linear operations (operating on the same stream).
Next, let’s add the alpha effect and combine the videos into one output.

$ ffmpeg -i ./output-1s.mov -i ./output-2s.mov -i ./output-3s.mov -filter_complex '[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=1, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=1[out3];[out2][out1]overlay[out4];[out3][out4]overlay[out5]' -map [out5] out.mov

Next add the text overlay.

$ ffmpeg -i ./output-1s.mov -i ./output-2s.mov -i ./output-3s.mov -filter_complex "[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=1, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=1[out3];[out2][out1]overlay[out4];[out3][out4]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':fontcolor=white:x=(0.08*w):y=(0.8*h)" out.mov

Finally, let's have the text appear at 5 seconds and disappear at 10 seconds.

$ ffmpeg -i ./output-1s.mov -i ./output-2s.mov -i ./output-3s.mov -filter_complex "[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=1, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=1[out3];[out2][out1]overlay[out4];[out3][out4]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':x=(0.08*w):y=(0.8*h):fontcolor_expr=ffffff%{eif\\:clip(255*(between(t\,5\,10))\,0\,255)\\:x\\:2}" out.mov

At last, let’s add the audio track and fade it out.

$ ffmpeg -i ./output-1s.mov -i ./output-2s.mov -i ./output-3s.mov -i ./audio.aac -filter_complex "[0:v]fade=t=out:st=45.0:d=2.0:alpha=1[out1];[1:v]fade=in:st=0.0:d=2.0:alpha=0, fade=t=out:st=37.0:d=2.0:alpha=1[out2];[2:v]fade=in:st=0.0:d=2.0:alpha=0[out3];[out1][out2]overlay[out4];[out3][out4]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':x=(0.08*w):y=(0.8*h):fontcolor_expr=ffffff%{eif\\:clip(255*(between(t\,5\,10))\,0\,255)\\:x\\:2}" -shortest -map 3:0 -af afade=t=out:st=68:d=4 out.mov

The final command, all together. More information about PTS-STARTPTS can be found here.

ffmpeg -y -i ./output-1.mov -i ./output-2.mov -i ./output-3.mov -i ./audio.aac -filter_complex "[0:v]fade=t=out:st=10.0:d=2.0:alpha=1,setpts=PTS-STARTPTS[out1];
 [1:v]fade=in:st=0.0:d=2.0:alpha=1,fade=t=out:st=26.0:d=2.0:alpha=1,setpts=PTS-STARTPTS+(10/TB)[out2];
 [2:v]fade=in:st=0.0:d=2.0:alpha=1,fade=t=out:st=16.0:d=4.0:alpha=0,setpts=PTS-STARTPTS+(36/TB)[out3];
 [out1][out2]overlay[out4];
 [out4][out3]overlay[out5];[out5]drawtext=fontfile=/Windows/Fonts/Arial.ttf:text='Touch Sensitive - Pizza Guy':x=(0.08*w):y=(0.8*h):fontsize=52:fontcolor_expr=ffffff%{eif\\:clip(255*(between(t\,3\,8))\,0\,255)\\:x\\:2}" -map 3:0 -af afade=t=out:st=52:d=4 -shortest output.mov

Cloning Logical Volumes on Linux

I recently damaged my Windows 7 installation while upgrading to Windows 10. The root cause was my dual-boot configuration with Gentoo Linux: because the EFI partition was on a drive separate from the one containing Windows (C:\), the installation failed. The error dialog was titled "Something Happened" with the contents "Windows 10 installation has failed." This took a lot of time to debug, and unfortunately, using Bootrec and the other tools provided on the Windows 10 installation medium did not resolve the issue.
Here are the steps I followed to back up my Linux data.
Created an LVM (Logical Volume Management) snapshot volume, providing a stable image for the backup. The -L parameter specifies how much space to set aside for filesystem writes that happen during the backup.
# lvcreate -L512M -s -n bk_home /dev/mapper/st-home
Mounted the snapshot volume. As I was using XFS, I needed to specify the nouuid option or the mount would fail with a bad superblock.
# mount /dev/st/bk_home /mnt -onouuid,ro
Used tar to back up the directory and piped the output to GPG to encrypt the contents (as this will be going to an external HDD not covered by my LUKS encrypted volume). Because this backup was only stored temporarily, I opted for symmetric encryption to simplify the process.
# tar -cv /mnt | gpg -c -o /media/HDD/st-home.tar.gpg
The above was repeated for each of my logical volumes.
After the backup completed, I removed the snapshot volumes.
# umount /mnt
# lvremove /dev/st/bk_home

I then created a checksum to be used later.
$ sha1sum /media/HDD/*.tar.gpg > checksum.txt
Next, I formatted both of my hard disks and let Windows partition my SSD as appropriate. According to this Microsoft article, Windows by default will create a partition layout as follows.
1. EFI partition [> 100MB]
2. Microsoft reserved partition [16MB]
3. Placeholder for utility partitions
4. Windows partition [> 20GB]
5. Recovery tools partition [300MB]
Because I wanted both Windows and the Linux root filesystem to exist on the same drive, I added a boot partition and a large LVM partition in the placeholder, resulting in the following scheme:
512GB SSD
1. [256MB] EFI
2. [16MB] Microsoft reserved
3. [256MB] /boot
4. [192GB] LVM
5. [8GB] Free space
6. [192GB] Windows partition
7. [300MB] Recovery tools
8. Free space
Recovering my Linux configuration was as simple as booting from the Gentoo live CD, installing GRUB to the EFI partition, and restoring the logical volumes from the encrypted backups.
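
The restore itself is roughly the reverse of the backup (a sketch; it assumes the recreated logical volume is already mounted at /mnt, and strips the leading mnt/ path component that the backup recorded):

$ sha1sum -c checksum.txt
$ gpg -d /media/HDD/st-home.tar.gpg | tar -xv --strip-components=1 -C /mnt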

Google public WiFi

Short post: when you agree to the terms and conditions of Google-sponsored WiFi (e.g. at Starbucks), your DNS resolution settings are updated to point to Google's DNS servers. While this does provide some hands-off protection from malicious websites, it also enables Google to track your browsing habits and gather a large, representative sample of the habits of the people using that particular WiFi network.
In Linux, look at your /etc/resolv.conf to determine if your DNS server has changed. Google’s servers are: 8.8.8.8 and 8.8.4.4.
I recommend checking this file each time you connect to a public WiFi network.
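
A one-liner for the check (the output shown is what you would see if Google's resolvers are in use):

$ grep nameserver /etc/resolv.conf
nameserver 8.8.8.8
nameserver 8.8.4.4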