Upgrading my SSH and SSHd settings

Thanks to the great articles by stribika on GitHub and by Aaron Toponce, these are the updated settings I am using now. For the server (sshd_config):

Protocol 2
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128,arcfour 
KexAlgorithms diffie-hellman-group-exchange-sha256
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160

and for the client (ssh_config):

Host *
    Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128,arcfour
    KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160
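Not every OpenSSH build supports every algorithm, so before committing a list like this to a config file it is worth checking what your client actually offers. OpenSSH (6.3 and newer) can be queried with the `-Q` flag:

```shell
# List the ciphers, key exchange algorithms and MACs this OpenSSH
# client was built with, to compare against the configured lists.
ssh -Q cipher
ssh -Q kex
ssh -Q mac
```

Anything present in the config but missing from this output will cause connection failures.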

How do I name my servers?

I’ve been asked this question a couple of times already! The process is straightforward now, but it wasn’t always!

At first I named my servers according to their purpose: “HOMESERVER”, “UTORRENT”, etc. This turned out not to work very well once I started to fiddle with virtualization. I needed to give them proper names that were not tied to the main software they were running, so I could eventually switch, as I did when I decided to go with FreeNAS instead of Microsoft Windows Home Server.

The solution came from watching news coverage of the US hurricane season: the storms are given names in alphabetical order. My wife suggested using stars instead, since several analogies could be made: constellations could hold servers that share a common purpose, the size of a star could reflect a server’s processing power, there are several galaxies or groups of stars, and the list goes on and on.

Right now I have the following names being actively used:

  • Andromeda – this is my low-power, always on, Proxmox Virtualization box.
  • Betelgeuse – this is my uTorrent-based downloader.
  • Capella – my FreeNAS 4x2TB ZFS Raid-Z file server.
  • Deneb – my cloud backup virtualized box: every hour it takes a snapshot of the websites I manage and also downloads and archives all Gmail messages locally; these backups are all sent to Capella for long-term storage when it comes online.
  • Elnath – my git code server virtualized box.
  • Smokeping – this is a virtualized network test machine that constantly pings several addresses to keep track of when and where network failures occur, and yes, it needs a new name.
  • Puppetmaster – this is the virtualized Puppet master machine I use to coordinate Puppet settings; it also needs a new name.

So there you have it, and before you ask, Andromeda is a galaxy but is also a constellation!

Speeding up a slow ZFS resilver on FreeNAS

A couple of months ago I began receiving constant e-mail alerts stating that my FreeNAS box was 80% full. My 2-year-old setup had four 2TB Seagate drives in a RAID-Z1 pool. After some research and tests with new firmware builds, I found out that this was not optimal, since a RAID-Z1 pool should follow the 2*n+1 formula [with n>0] (3, 5, 7, 9, … drives).

I cannot afford to rebuild the pool at this time, and one of the original drives had already failed and been replaced by a newer 4TB unit.

My approach was to replace every remaining 2TB drive in the pool with a 4TB one, but this proved to be very time-consuming: my box was taking too long to resilver the pool after each swap.

After some more research I came across Allan Jude’s “ZFS Advanced Topics” chapter, proposed for the FreeBSD documentation project, which suggests the following tunables:

sudo sysctl vfs.zfs.resilver_delay=0

sudo sysctl vfs.zfs.scrub_delay=0

These tunables reduce the wait time between each resilver and scrub I/O operation. Client performance was somewhat degraded, but getting my pool back into pristine condition was more important.
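These sysctl values do not survive a reboot. On a plain FreeBSD system they could be persisted in /etc/sysctl.conf; on FreeNAS, the web GUI’s Tunables section is the place for this, since the root filesystem is normally read-only. A sketch of the sysctl.conf entries, assuming the same tunable names:

```
# /etc/sysctl.conf -- prioritize resilver/scrub over client I/O
vfs.zfs.resilver_delay=0
vfs.zfs.scrub_delay=0
```

The progress of a running resilver can be watched with `zpool status`.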

Automatically shut down FreeNAS box when all clients are offline

I’ve been struggling with trying to keep the power consumption of my FreeNAS box to a minimum for quite a while now.

In the past months I also started playing with the Raspberry Pi, running it as a media center device, primarily using OpenELEC at first and then Raspbmc later on.

I intend to talk about those experiences in a separate post. What concerned me at first is that for my media center to work I would need to keep my file server running all the time, something I would rather not do.

My solution comes in two parts:

First, build a device to monitor my LAN, pinging known addresses (statically assigned by my local DHCP server) and sending a Wake-on-LAN packet when one of them comes online.

Second, keep monitoring the LAN from the file server, checking every minute whether those devices are still present, and if none has been seen for 30 minutes, shut the file server down.

Now imagine the following scenario: I arrive at home with my phone and laptop. My phone, upon seeing my home Wi-Fi network, will automatically connect. The first Raspberry Pi pings a list of statically defined addresses every minute, looking for a phone or laptop, mine or my wife’s.

Once it receives a response, it sends a WOL packet that turns my file server on. From then on, the first Raspberry Pi only needs to keep pinging the file server; as long as it is on, there is no need for further checks.
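Since I will only publish the actual scripts once I get a second Raspberry Pi, here is just a rough sketch of what that watcher could look like. The hostnames, the MAC address, and the wakeonlan(1) helper are all hypothetical placeholders, not my real setup:

```shell
#!/bin/sh
# Sketch of the Raspberry Pi watcher: if any known client answers a
# ping, wake the file server with a Wake-on-LAN magic packet.
# HOSTS and SERVER_MAC below are hypothetical placeholders.

HOSTS="phone.home laptop.home"
SERVER_MAC="00:11:22:33:44:55"

# The WOL magic packet is six 0xff bytes followed by the target MAC
# address repeated 16 times; build that payload as a hex string.
magic_packet_hex() {
  mac=$(echo "$1" | tr -d ':')
  payload="ffffffffffff"
  i=0
  while [ "$i" -lt 16 ]; do
    payload="${payload}${mac}"
    i=$((i + 1))
  done
  echo "$payload"
}

# Returns 0 (success) if any watched client answers a single ping.
any_client_up() {
  for h in $HOSTS; do
    if ping -c 1 "$h" >/dev/null 2>&1; then
      return 0
    fi
  done
  return 1
}

# Pass "run" to actually loop once a minute; sending the packet is
# delegated to the wakeonlan(1) utility in this sketch.
if [ "${1:-}" = "run" ]; then
  while sleep 60; do
    if any_client_up; then
      wakeonlan "$SERVER_MAC"
    fi
  done
fi
```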

On the file server, a script keeps running, pinging another list of addresses. As long as one of them answers, it does nothing. When all addresses on the list have failed to answer for the past 30 minutes, it initiates a shutdown.

And the cycle repeats.

When there is someone at home, the file server will be on; when everyone leaves, it turns itself off gracefully.

Perfect!

Since I have only ONE Raspberry Pi to do this, I will be turning my file server on manually for the time being and focusing this post on the second part.

This is the script I am using on my FreeNAS box:

#!/bin/sh

# Check interval in seconds, and how many consecutive failed checks
# (one per interval) to tolerate before shutting down.
CHECK_EVERY=60
MAX_FAIL_COUNT=30

# Returns 0 (success) if at least one known client answers a ping.
keep_on() {
  for p in htpc.home raphael-pc.home sala-tv.home teste.home; do
    if ping -c 1 "$p" >/dev/null 2>&1; then
      return 0
    fi
  done
  return 1
}

# A client must be up before starting the main loop, so the box
# never shuts down right after booting.
while sleep 5; do
  if keep_on; then
    break
  fi
done

FAIL_COUNT=0

# Main loop: reset the counter whenever a client answers; otherwise
# count the failure and power off once the limit is reached.
while sleep "${CHECK_EVERY}"; do
  if keep_on; then
    FAIL_COUNT=0
  else
    FAIL_COUNT=$((FAIL_COUNT + 1))
    echo "${FAIL_COUNT}"
  fi
  # POSIX test(1) compares numbers with -ge, not ==
  if [ "${FAIL_COUNT}" -ge "${MAX_FAIL_COUNT}" ]; then
    shutdown -p now
    exit
  fi
done

Nothing too complex. The variable CHECK_EVERY states that the checks should happen every 60 seconds, and MAX_FAIL_COUNT that after 30 failed attempts in a row the box will shut itself down.

There is one failsafe: the script only starts acting after it has received an answer from at least one device on the list. This prevents the box from turning itself off if something goes wrong with my internal DNS or if I plug it into someone else’s network. You never know…

To allow this to persist between boots, I first made the root filesystem writable with:

su
mount -uw /

Then, I saved this script as /conf/base/etc/autoshutdown.sh and added a line calling it in /conf/base/etc/rc.local:

#!/bin/sh

# run in the background so the boot process is not blocked
/conf/base/etc/autoshutdown.sh &

I also made both scripts executable.

And that is it!

When I get a hold of a second raspberry pi I’ll post the other scripts here as well.

Letting your pool sleep…

Some very good points in an article I just stumbled upon…

Mount all your filesystems/pools with noatime

This way you won’t generate writes every time a file is accessed. This was suggested to me by an episode of TechSnap, where one of the hosts mentioned they do this to avoid writes while doing reads, but I never got around to actually implementing it.

I don’t have other filesystems on my FreeNAS box, and ZFS has a property for this. Just run:

zfs set atime=off POOLNAME

Find files modified in the last day or so

A good snippet for finding these files is:

find / -mtime -1
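Here `-mtime -1` means “modified less than one day (24 hours) ago”. A quick way to see it in action on a scratch directory:

```shell
# Create a fresh file in a temporary directory and confirm that
# find(1) with -mtime -1 reports it as recently modified.
tmpdir=$(mktemp -d)
touch "$tmpdir/fresh"
recent=$(find "$tmpdir" -type f -mtime -1)
echo "$recent"    # lists $tmpdir/fresh
```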

Relocate directories and files to non-rotating media

Another great suggestion from the original article:

Get a cheap USB drive (it does not need to be big) and format it as ext4 (technically, you could set up another ZFS pool there too). Then set it to be mounted at `/var/volatile` in your fstab.

You can now move directories that contain frequently modified files there. After moving those directories, symlink them from their original locations. For example, you would move `/var/log` to `/var/volatile/log`, then create a symbolic link to `/var/volatile/log` named `/var/log`.

At this point, it would be wise to set up a cron job to back up the contents of this USB drive nightly (think `rsync -a`) to a backups directory somewhere in your pool.

If you’ve moved the most frequently modified files to `/var/volatile`, your disks will be idle unless you are actually using your file server. Now it’s time to take advantage of that idleness.
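The move-then-symlink step can be rehearsed safely in a scratch directory before touching the real `/var/log` (which requires root):

```shell
# Rehearse the relocate-and-symlink pattern in a throwaway tree.
base=$(mktemp -d)
mkdir -p "$base/var/log" "$base/var/volatile"
echo hello > "$base/var/log/messages"

mv "$base/var/log" "$base/var/volatile/log"     # relocate the directory
ln -s "$base/var/volatile/log" "$base/var/log"  # link old path to new home

cat "$base/var/log/messages"    # prints "hello" through the symlink
```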