1 Shocking Trick To Add VLANs To Your Busybox ifupdown Configuration Developers Don’t Want You To Know! Number One Will Disintegrate You!

After scouring the internet for answers on how to use Busybox’s ifupdown binary and configure /etc/network/interfaces to set a VLAN on an interface, I looked in the ifupdown.c source and found no references to VLANs at all!

Don’t panic! Just because there isn’t built-in support for VLANs doesn’t mean you can’t use /etc/network/interfaces, or that you have to resort to some custom cockamamie script to set your VLANs up! The answer is right in front of your nose!

Lately I’ve been making extensive use of the ifupdown pre-up and post-up options for doing insane things like appending iptables rules, bringing other interfaces up and down, etc. The simple trick is to use pre-up (or up) to run vconfig and create your VLAN. The configuration file’s default location is /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto eth0.192
iface eth0.192 inet static
	address 192.168.0.77
	netmask 255.255.255.0
	gateway 192.168.0.1
	dns-nameservers 4.2.2.1 4.2.2.2
	pre-up ifconfig eth0 up
	pre-up vconfig add eth0 192
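
One optional refinement, purely a sketch on my part rather than something the configuration above requires: Busybox’s ifupdown should also honour a post-down hook, so a matching line inside the same eth0.192 stanza lets ifdown tear the VLAN device back down.

	post-down vconfig rem eth0.192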

I found that I had to add a second pre-up command to bring eth0 up, because eth0 itself carries no address configuration. This could be a fundamental misunderstanding on my part; however, the configuration works impressively well.

For the ifupdown scripts to work we need a service script executed on startup that runs ifup -a. Exactly how that happens is up to each project’s requirements; however, assuming the project uses a pure Busybox solution, Busybox provides its own init which, via the usual rcS startup script, walks through /etc/init.d and runs start on the scripts it finds there. Here is an example script that Buildroot provided in my rootfs skeleton.

/etc/init.d/S40Network
#!/bin/sh
#
# Start the network....
#

# Debian ifupdown needs the /run/network lock directory
mkdir -p /run/network

case "$1" in
  start)
	printf "Starting network: "
	/sbin/ifup -a
	[ $? = 0 ] && echo "OK" || echo "FAIL"
	;;
  stop)
	printf "Stopping network: "
	/sbin/ifdown -a
	[ $? = 0 ] && echo "OK" || echo "FAIL"
	;;
  restart|reload)
	"$0" stop
	"$0" start
	;;
  *)
	echo "Usage: $0 {start|stop|restart}"
	exit 1
esac

exit $?
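
For completeness, here is roughly how that script ends up being called on a pure Busybox system. The snippet below follows the usual Buildroot layout and is a sketch, not a copy of my files: Busybox init reads /etc/inittab and runs rcS at sysinit, and rcS loops over the S* scripts, which is what picks up S40Network and therefore ifup -a.

/etc/inittab (excerpt)
::sysinit:/etc/init.d/rcS

/etc/init.d/rcS (excerpt)
#!/bin/sh
# Run every S* start script in /etc/init.d in lexical order
for i in /etc/init.d/S??*; do
	[ -x "$i" ] && "$i" start
done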

You Need To Stop Using Nginx Now!

I just had the craziest experience. I was working with my rustic old Apache, getting ready to switch to Nginx. I set up php-fpm and created an application pool per user. So as not to change too many things at once, I switched Apache over to use php-fpm before setting Nginx up to use it. Once I got everything over to Nginx I decided to do a quick side-by-side page-load comparison. To my surprise, my low-dynamic-content PHP site consistently took 0.27 seconds longer to load on Nginx than on my decrepit, monolithic Apache server!
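
For reference, pointing Apache 2.4 at php-fpm amounts to a handful of lines like the following. This is a sketch of the common mod_proxy_fcgi approach rather than a copy of my config, and the socket path is illustrative; match it to whatever your php-fpm pool actually listens on.

# Apache 2.4 (mod_proxy + mod_proxy_fcgi): hand .php requests to php-fpm
<FilesMatch "\.php$">
	SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost/"
</FilesMatch>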

Come to find out, all the tests you may have seen before, the ones showing Apache being absolutely destroyed by Nginx, seem to share a single flaw: mod_php sucks, a lot. There’s no mod_php for Nginx, so those benchmarks pit Apache with mod_php against Nginx with FastCGI, which vastly exaggerates the difference between the two web servers. In my case, with both serving PHP through php-fpm, Apache delivered faster page-load times than Nginx. Next time you see speed comparisons, it may pay to look closely and make sure you’re comparing apples to apples. You just might find that the winning stack simply uses a better process model, one that isn’t necessarily incompatible with the server you’re already running.

I’m very happy the results turned out the way they did, because after I set up Nginx I had totally forgotten that it isn’t compatible with .htaccess files! WordPress loaded but wasn’t too keen on functioning properly without the precious mod_rewrite rules it had generated. While that wouldn’t necessarily be a problem for me, I couldn’t leave my users hanging, and I’m not going to rewrite their WordPress installs myself.
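
For anyone who does want to push ahead with Nginx, the .htaccess problem is normally solved by translating WordPress’s rewrite rules into the server block by hand. The snippet below is the widely published equivalent, given here as a sketch with an assumed php-fpm socket path rather than the configuration from my server.

# Nginx stand-in for WordPress's stock mod_rewrite rules
location / {
	try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
	include fastcgi_params;
	fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
	fastcgi_pass unix:/run/php-fpm/www.sock;   # assumed socket path
}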

FreeNAS And Multipath: Setting Up Multipath On Existing ZFS Pool

I might make a post about my recent endeavours into Open Media Vault and iSCSI, but this post will focus on a “weird” one: adding multipath underneath an existing ZFS pool. I must advise that following my instructions blindly and without a proper backup will very likely result in the total loss of your ZFS pool. Full disclosure: I’m also not certain there isn’t a better way.

I recently set up a FreeNAS box after trying to get ZFS on Linux to run appropriately. I have an SA120 with two SAS links to the FreeNAS box, which provide redundant active/active SAS paths to my drives. Unfortunately, when I installed FreeNAS I was under the impression it would set up my multipathing for me. While that may be the case for a fresh install, I had an existing ZFS pool on the SA120.

The issue is that my ZFS pool was created with direct access to the drives, and it seems to have chosen paths at random: some devices sit on one path and some on the other. I assume that if either of the two SAS links went down it would be game over for my data. I did some digging around the FreeNAS and FreeBSD forums, combined that with some ZFS notes I already had, and compiled this information.

The utility for manipulating multipathing in FreeBSD is gmultipath. It can be set up in manual (create) or automatic (label) mode. The issue with using label seems to be that it writes metadata to the drive, in the same place that GPT stores its metadata and, unfortunately, also where ZFS stores its own. Using label to set up multipaths over disks that already carry data is therefore discouraged; use the create argument instead.

A missing element in the forum posts I found is that they don’t address setting the multipath device up for active/active mode. Without this it will default to failover, i.e. active/passive. Active/active may not be appropriate for everyone, but I’m using SAS drives in a SAS JBOD behind a SAS host bus adapter, and only one machine writes to the drives at a time, so I’m comfortable setting the links to active/active for increased throughput.
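
To make the distinction concrete, the two modes look like this on the command line, reusing the device names from my box purely as an illustration (don’t run either yet; the actual steps follow below). label writes its identifier into the provider’s last sector, which is exactly the metadata clash described above, while create builds the device without writing anything to disk, and -A requests active/active in either form.

gmultipath label -A disk0 /dev/da0 /dev/da12     # automatic: writes on-disk metadata
gmultipath create -A disk0 /dev/da0 /dev/da12    # manual: nothing written to the disks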

The last issue to address is that my ZFS drives are live and in production. Obviously I’m taking some risk by manipulating the devices underlying the ZFS pool, but I have faith in my assumption that I can effectively upgrade the pool live, for the following reasons: one, the pool is live and has redundant connections; two, I know ZFS continuously verifies each bit of data that is read and will report immediately if there is an error; three, I had this pool imported the same way on its Linux incarnation prior to installing FreeNAS; and four, I am exporting ZFS snapshots to another ZFS volume as well as keeping backups taken locally on the hypervisors.

To begin we need to find which /dev/da* devices are paired. If you have more than two active SAS paths you will likely have triplets or more, i.e. one /dev/da entry per SAS drive per link. In manual mode you have to discover the paths yourself. Fortunately this is usually made easier by the fact that each link is enumerated in order, as is each LUN on each link. My SA120 has two active paths to my server, which gives me /dev/da0 through /dev/da23, so /dev/da0 is likely the same physical disk as /dev/da12. To verify this, run the following command…

FreeNAS# camcontrol inquiry da0 -S
3SJ1BE73    00009038RZKW
FreeNAS# camcontrol inquiry da12 -S
3SJ1BE73    00009038RZKW
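
Rather than checking pairs one at a time, a quick loop (a sketch that assumes the same camcontrol invocation as above; adjust the range to your drive count) prints every da device alongside its serial so the pairs are easy to spot:

FreeNAS# for n in $(seq 0 23); do printf 'da%s: ' "$n"; camcontrol inquiry "da$n" -S; done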

Here we see that da0 and da12 have the same serial number, so we know we can use these two device nodes to create a new multipath target with gmultipath. The only issue now is that one or the other is in use by the ZFS pool. We need to figure out which one and remove it from the pool. To do that we issue zpool status (or a similar command) to get the details:

FreeNAS# zpool status
pool: sa120_0
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
sa120_0     ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    da23    ONLINE       0     0     0
    da19    ONLINE       0     0     0
  mirror-1  ONLINE       0     0     0
    da14    ONLINE       0     0     0
    da13    ONLINE       0     0     0
  mirror-2  ONLINE       0     0     0
    da9     ONLINE       0     0     0
    da12    ONLINE       0     0     0
  mirror-3  ONLINE       0     0     0
    da16    ONLINE       0     0     0
    da20    ONLINE       0     0     0
  mirror-4  ONLINE       0     0     0
    da15    ONLINE       0     0     0
    da17    ONLINE       0     0     0
spares
  da18      AVAIL

The output shows that in mirror-2 it is the da12 device that is in use. The scary part begins now: we need to remove da12 from mirror-2, leaving da9 as the only remaining device in that vdev. To do that we issue a detach command. The following command removes the device, and the subsequent zpool status gives an output that doesn’t immediately make sense; however, it’s very logical.

FreeNAS# zpool detach sa120_0 da12
FreeNAS# zpool status
pool: sa120_0
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
sa120_0     ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    da23    ONLINE       0     0     0
    da19    ONLINE       0     0     0
  mirror-1  ONLINE       0     0     0
    da14    ONLINE       0     0     0
    da13    ONLINE       0     0     0
  da9       ONLINE       0     0     0
  mirror-3  ONLINE       0     0     0
    da16    ONLINE       0     0     0
    da20    ONLINE       0     0     0
  mirror-4  ONLINE       0     0     0
    da15    ONLINE       0     0     0
    da17    ONLINE       0     0     0
spares
  da18      AVAIL

Do not be alarmed that the output no longer shows a mirror-2. The reason is that the remaining drive, da9, is now basically functioning as a single-member mirror. This is why this part is the scariest: until we create our multipath device, if da9 is ejected from the array then the entire ZFS pool will be lost. For my build a resilver is very fast, taking about 40 minutes to complete. I have much faith in ZFS, as it’s designed to verify the data on every read, so the likelihood of an undetected failure is probably low; however, that is no reason to skip making a backup. As stated before, I have a ZFS snapshot as well as local, non-ZFS backups. If anything were to go wrong I could restore a backup until the ZFS pool was rebuilt.

The next step is to create the multipath device. As mentioned previously, we use gmultipath to create it, then use the newly created device entry to re-add the physical disk to the ZFS pool with the following commands.

FreeNAS# gmultipath create -A -v disk0 /dev/da0 /dev/da12
Done.
FreeNAS# zpool status
pool: sa120_0
state: ONLINE
scan: none requested
config:

NAME        STATE     READ WRITE CKSUM
sa120_0     ONLINE       0     0     0
  mirror-0  ONLINE       0     0     0
    da23    ONLINE       0     0     0
    da19    ONLINE       0     0     0
  mirror-1  ONLINE       0     0     0
    da14    ONLINE       0     0     0
    da13    ONLINE       0     0     0
  da9       ONLINE       0     0     0
  mirror-3  ONLINE       0     0     0
    da16    ONLINE       0     0     0
    da20    ONLINE       0     0     0
  mirror-4  ONLINE       0     0     0
    da15    ONLINE       0     0     0
    da17    ONLINE       0     0     0
spares
  da18      AVAIL

errors: No known data errors
FreeNAS# zpool attach sa120_0 da9 /dev/multipath/disk0
FreeNAS# zpool status
pool: sa120_0
state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scan: resilver in progress since Mon Jun 19 11:08:34 2017
1.81G scanned out of 318G at 88.5M/s, 1h1m to go
370M resilvered, 0.57% done
config:

NAME                 STATE     READ WRITE CKSUM
sa120_0              ONLINE       0     0     0
  mirror-0           ONLINE       0     0     0
    da23             ONLINE       0     0     0
    da19             ONLINE       0     0     0
  mirror-1           ONLINE       0     0     0
    da14             ONLINE       0     0     0
    da13             ONLINE       0     0     0
  mirror-2           ONLINE       0     0     0
    da9              ONLINE       0     0     0
    multipath/disk0  ONLINE       0     0     0  (resilvering)
  mirror-3           ONLINE       0     0     0
    da16             ONLINE       0     0     0
    da20             ONLINE       0     0     0
  mirror-4           ONLINE       0     0     0
    da15             ONLINE       0     0     0
    da17             ONLINE       0     0     0
spares
  da18               AVAIL

Here we can see mirror-2 is back in the list, with multipath/disk0 resilvering alongside da9. It is important to note that the -A flag indicates that all paths for disk0 should be active at once. The following command shows the configuration and status of your multipath devices.

FreeNAS# gmultipath list
Geom name: disk0
Type: MANUAL
Mode: Active/Active
UUID: (null)
State: OPTIMAL
Providers:
1. Name: multipath/disk0
Mediasize: 300000000000 (279G)
Sectorsize: 512
Mode: r1w1e1
State: OPTIMAL
Consumers:
1. Name: da0
Mediasize: 300000000000 (279G)
Sectorsize: 512
Mode: r2w2e2
State: ACTIVE
2. Name: da12
Mediasize: 300000000000 (279G)
Sectorsize: 512
Mode: r2w2e2
State: ACTIVE

We can see that disk0 has been created, its type is manual, and its mode is Active/Active. Its state is OPTIMAL; the provider is the new device we created, and the consumers are the constituent device paths that make up the multipath.

The process can be repeated until every drive in the array is behind a multipath device. There are two approaches to converting the whole ZFS pool: you can do the drives sequentially, da0 through da11, until they’re all done, or you can work per mirror so that at any given moment each mirror could be resilvering. I highly recommend the slow and easy approach of replacing one device at a time; the per-drive sequence is condensed below.
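
For what it’s worth, every subsequent drive follows the same four steps shown above. Here they are condensed into one place, with daX and daY standing in for the two paths to one physical disk, daZ for the other member of the same ZFS mirror, and diskN for the new multipath name; all of these are hypothetical placeholders, not literal device names.

FreeNAS# camcontrol inquiry daX -S                          # 1. confirm daX and daY report the same serial
FreeNAS# camcontrol inquiry daY -S
FreeNAS# zpool detach sa120_0 daY                           # 2. pull the in-use path out of the pool
FreeNAS# gmultipath create -A -v diskN /dev/daX /dev/daY    # 3. build the active/active multipath device
FreeNAS# zpool attach sa120_0 daZ /dev/multipath/diskN      # 4. re-attach via the multipath node
FreeNAS# zpool status                                       #    wait for the resilver before moving on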