Difficulties installing TrueCrypt on OS X Yosemite

I finally got around to doing a clean install of my MacBook; this was long overdue since the system had been piling up cruft since 2012.

I am a heavy TrueCrypt user despite the announcements the team made a couple of months back, so after installing OS X I went to the archive page on GRC to get the latest fully functional binary. Upon installing it on my Mac, I found that there was a version check.

Thanks to the fix on stefansundin.com I was able to get up and running:

First copy the package to your Desktop, then open Terminal and run:

sed -i '' 's/<installation-check .*>//' 'Desktop/TrueCrypt 7.1a.mpkg/Contents/distribution.dist'
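If you want to see what that sed pattern actually strips before running it against the real package, you can exercise it on a throwaway file (the sample `<installation-check …/>` line below is made up for illustration; also note that `-i ''` is the BSD/OS X sed form, while GNU sed takes plain `-i`):

```shell
# Write a fake one-line distribution file and run the same substitution
# (non-in-place here, so nothing is modified):
printf '<installation-check script="example_check()"/>\n' > /tmp/dist-test
sed 's/<installation-check .*>//' /tmp/dist-test   # the line comes back empty
```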

That's it!

How do I name my servers?

I’ve been asked this question a couple of times already! The process is straightforward now, but it wasn’t always!

At first I named my servers according to their purpose: “HOMESERVER”, “UTORRENT”, etc. This turned out not to work very well when I started to fiddle with virtualization. I needed to give them proper names that were not tied to the main software they were running, so I could eventually switch, as I did when I decided to go with FreeNAS instead of Microsoft Windows Home Server.

The solution came from watching news coverage of the US hurricane season: storms are given names in alphabetical order. My wife suggested using stars, since several analogies could be made: constellations could hold servers that have a common purpose, the size of a star could map to the given server’s processing power, there are several galaxies or groups of “stars”, and the list goes on and on.

Right now I have the following names being actively used:

  • Andromeda – this is my low-power, always on, Proxmox Virtualization box.
  • Betelgeuse – this is my uTorrent-based downloader.
  • Capella – my FreeNAS 4x2TB ZFS Raid-Z file server.
  • Deneb – my cloud backup virtualized box: every hour it takes a snapshot of the websites I manage and also downloads and archives all Gmail messages locally; these backups are all sent to Capella for long-term storage when it comes online.
  • Elnath – my git code server virtualized box.
  • Smokeping – this is a virtualized network test machine that constantly pings several addresses to keep track of when and where network failures occur, and yes, it needs a new name.
  • Puppetmaster – this is the virtualized machine running the Puppet master I use to coordinate Puppet settings; it also needs a new name.

So there you have it, and before you ask, Andromeda is a galaxy but is also a constellation!

Creating an Ansible-Ready Proxmox VE OpenVZ template

Alongside the instructions on creating a customized template, also:

  1. Create the user “ansible”.

    adduser ansible
    mkdir /home/ansible/.ssh
    echo "YOURSSHKEYHERE" > /home/ansible/.ssh/authorized_keys
    chown -R ansible:ansible /home/ansible/.ssh
    chmod 700 /home/ansible/.ssh
    chmod 600 /home/ansible/.ssh/authorized_keys
    
  2. Add it to the sudoer’s list:

    cat > /etc/sudoers.d/ansible <<!ENDSUDOERS 
    ansible ALL=(ALL) NOPASSWD: ALL 
    !ENDSUDOERS 
    chmod 440 /etc/sudoers.d/ansible
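If you want to preview what these steps write before touching `/home/ansible` and `/etc/sudoers.d` on a real container, the same commands can be dry-run against a scratch directory (the SSH key is a placeholder):

```shell
# Rehearse steps 1 and 2 in a temp dir: write the authorized_keys file
# and the sudoers drop-in, then print the drop-in for inspection.
d=$(mktemp -d)
mkdir -p "$d/.ssh"
echo "YOURSSHKEYHERE" > "$d/.ssh/authorized_keys"
chmod 700 "$d/.ssh"; chmod 600 "$d/.ssh/authorized_keys"
cat > "$d/sudoers-ansible" <<!ENDSUDOERS
ansible ALL=(ALL) NOPASSWD: ALL
!ENDSUDOERS
cat "$d/sudoers-ansible"
```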
    

Speeding up a slow ZFS resilver on FreeNAS

A couple of months ago I began receiving constant e-mail alerts stating that my FreeNAS box was 80% full. My 2-year-old setup had four 2TB Seagate drives in a RAID-Z1 pool. After some research and tests with new firmware builds I found out that this was not optimal, since a RAID-Z1 vdev should ideally follow the 2^n+1 formula [with n>0] (3, 5, or 9 … drives).

I could not afford to rebuild the pool at the time, and one of the original drives had already failed and been replaced by a newer 4TB unit.

My approach was to replace every remaining 2TB drive in the pool with a 4TB one, but this proved to be very time-consuming: the box was taking far too long to resilver the pool.

After some more research I came across Allan Jude’s “ZFS Advanced Topics” chapter proposed to the FreeBSD documentation project, which suggests setting:

sudo sysctl vfs.zfs.resilver_delay=0

sudo sysctl vfs.zfs.scrub_delay=0

These tunables reduce the wait time between each resilver and scrub I/O operation. Client performance was somewhat degraded, but getting my pool back into pristine condition was more important.
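On a stock FreeBSD box these could be made persistent in /etc/sysctl.conf (FreeNAS itself manages tunables through its web GUI, so treat this as a sketch for plain FreeBSD); remember to revert once the resilver finishes:

```
# /etc/sysctl.conf — speed up resilver/scrub at the cost of client I/O;
# restore the defaults (2 and 4, respectively, at the time) once the
# pool is healthy again
vfs.zfs.resilver_delay=0
vfs.zfs.scrub_delay=0
```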

Improving network performance of a new FreeBSD server

Thanks to Calomel.org for these tips. I was having network performance issues, and my throughput has now more than doubled!

Editing /etc/sysctl.conf:

# Default is fine for most networks. You may want to increase to 4MB if the
# upload bandwidth is greater the 30Mbit. For 10GE hosts set to at least 16MB
# as well as to increase the TCP window size to 65535 and window scale to 9.
# For 10GE hosts with RTT over 100ms you will need to set a buffer of 150MB and
# a wscale of 12.  Default of "2097152 = 2*1024*1024" is fine for 1Gbit, FIOS
# or slower.
# network:   1 Gbit   maxsockbuf:    2MB   wsize:  6    2^6*65KB =    4MB (default)
# network:   1 Gbit   maxsockbuf:    4MB   wsize:  7    2^7*65KB =    8MB (FIOS 150/65)
# network:  10 Gbit   maxsockbuf:   16MB   wsize:  9    2^9*65KB =   32MB
# network:  40 Gbit   maxsockbuf:  150MB   wsize: 12   2^12*65KB =  260MB
# network: 100 Gbit   maxsockbuf:  600MB   wsize: 14   2^14*65KB = 1064MB
kern.ipc.maxsockbuf=4194304  # (default 2097152)

# set auto tuning maximums to the same value as the kern.ipc.maxsockbuf above.
# Use at least 16MB for 10GE hosts with RTT of less then 100ms. For 10GE hosts
# with RTT of greater then 100ms set buf_max to 150MB. The default of
# "2097152" is fine for most networks.  
net.inet.tcp.sendbuf_max=4194304  # (default 2097152)
net.inet.tcp.recvbuf_max=4194304  # (default 2097152)
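As a sanity check on the wscale arithmetic in the comments above: the effective TCP window is the 65535-byte base window shifted left by the window-scale factor (RFC 1323), so wscale 7 should land near the 8MB figure in the table:

```shell
# wscale=7 (the FIOS row): 65535 << 7 = 8388480 bytes, i.e. ~8 MB
wscale=7
echo $(( 65535 << wscale ))
```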

Proxmox with an Intermediate cert

Proxmox testing node running with an Intermediate Certificate Authority cert

It has been a long time since my last post. My boxes have been working fine so far, and up until yesterday I had not noticed any issues. After updating Java on my machine I started to get errors about invalid certificates. I had previously installed proper new certs on my box, so that might have been the cause.

Regarding my certs: I use the built-in tools on pfSense to generate and manage all the certs I use on testing units. There I have a Root Certificate Authority set up, and its cert is installed on the machines I use to debug my test installations (to avoid paying for temporary, easily disposable certs). With it I had created a server cert for my Proxmox testing node and installed it a long time ago.

Looking for solutions, I came across a post describing how installing an intermediate certificate authority on the Proxmox node could solve this, and here is how I did it:

First I backed up all my old certs:

mv /etc/pve/pve-root-ca.pem /etc/pve/pve-root-ca.pem.bak 
mv /etc/pve/pve-www.key /etc/pve/pve-www.key.bak 
mv /etc/pve/priv/pve-root-ca.key /etc/pve/priv/pve-root-ca.key.bak 
mv /etc/pve/priv/pve-root-ca.srl /etc/pve/priv/pve-root-ca.srl.bak 
mv /etc/pve/local/pve-ssl.key /etc/pve/local/pve-ssl.key.bak 
mv /etc/pve/local/pve-ssl.pem /etc/pve/local/pve-ssl.pem.bak

Then I regenerated them and restarted all pvedaemon and pveproxy services:

pvecm updatecerts --force
service pvedaemon restart 
service pveproxy restart

I proceeded to create a new Intermediate Certificate Authority and a Server Certificate on my pfSense box, going to System > Cert Manager > CA > Add. I filled in the details, then went to Cert Manager > Certificate > Add and selected the previously created intermediate cert authority. Finally, I downloaded the server key and cert along with the authority cert.

Here came the tricky part:

  • The certificate authority cert became /etc/pve/pve-root-ca.pem;
  • The server key was copied to /etc/pve/local/pve-ssl.key; and
  • The server cert was edited to include the certificate authority cert at the bottom and copied to /etc/pve/local/pve-ssl.pem.
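The layout in that last bullet (server cert with the CA cert appended) can be sanity-checked end to end with throwaway keys before restarting anything; everything below is generated in a temp dir and none of it touches the real pfSense or Proxmox files (all names are made up):

```shell
set -e
cd "$(mktemp -d)"
printf 'basicConstraints=CA:TRUE\n' > ca.ext

# throwaway root CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Test Root CA" \
  -keyout root.key -out root.pem -days 1
# throwaway intermediate CA, signed by the root
openssl req -newkey rsa:2048 -nodes -subj "/CN=Test Intermediate CA" \
  -keyout inter.key -out inter.csr
openssl x509 -req -in inter.csr -CA root.pem -CAkey root.key \
  -CAcreateserial -extfile ca.ext -out inter.pem -days 1
# throwaway server cert, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -subj "/CN=pve.test" \
  -keyout server.key -out server.csr
openssl x509 -req -in server.csr -CA inter.pem -CAkey inter.key \
  -CAcreateserial -out server.pem -days 1

# mimic pve-ssl.pem: server cert with the intermediate appended,
# then verify the server cert chains up to the root
cat server.pem inter.pem > pve-ssl.pem
openssl verify -CAfile root.pem -untrusted inter.pem server.pem
```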

Restarted the services again and tested:

service pvedaemon restart 
service pveproxy restart

All working fine now!

Thanks to symmcom on the Proxmox forums and the maintainers of the Proxmox Wiki for some of these tips!

Cheers!

Updating Puppet on Debian 6

Some missing dependencies stopped Puppet from automatically updating on my systems.

After some research, I found the proper way to enable Puppet Labs Repos on my installs:

wget http://apt.puppetlabs.com/puppetlabs-release-precise.deb && sudo dpkg -i puppetlabs-release-precise.deb

followed by

sudo apt-get update && sudo apt-get dist-upgrade -y && sudo apt-get upgrade -y

After that, everything runs the latest version.

Automatically shutdown FreeNAS box when all clients are offline

I’ve been struggling with trying to keep the power consumption of my FreeNAS box to a minimum for quite a while now.

Over the past months I also started to play with the Raspberry Pi, running it as a media center device, primarily using OpenELEC at first and then Raspbmc later on.

I intend to talk about those experiences in a separate post. What concerned me at first is that for my media center to work I would need to keep my file server running all the time, something I would rather not do.

My solution comes in two parts:

First, build a device to monitor my LAN, pinging known addresses (statically assigned by my local DHCP server) and sending a Wake-on-LAN packet when one of them comes online.

Second, keep monitoring my LAN, checking every minute whether such devices are still present; if they are not, shut down my file server after 30 minutes.

Now imagine the following scenario: I arrive at home with my phone and laptop. My phone, upon seeing my home wifi network, will automatically connect. The first Raspberry Pi pings a list of statically defined addresses every minute, looking for a phone or laptop, mine or my wife’s.

Once it receives a response, it sends a WOL packet that turns my file server on. From then on, the first Raspberry Pi only keeps pinging the file server; as long as that is on, there is no need for further checks.

On the file server, a script keeps running, pinging another list of addresses. As long as one of them answers, it does nothing. When every address on the list has failed to answer for the past 30 minutes, it initiates a shutdown.

And the cycle repeats.

When there is someone at home the file server will be on; when everyone leaves, it turns itself off gracefully.

Perfect!

Since I have only ONE Raspberry Pi to do this, I will be turning my file server on manually for the time being and focusing this post on the second part.

This is the script I am using on my FreeNAS box:

#!/bin/sh

CHECK_EVERY=60
MAX_FAIL_COUNT=30

keep_on() {
  for p in htpc.home raphael-pc.home sala-tv.home teste.home;
  do
    if ping -c 1 $p >/dev/null 2>&1; then
      return 0
    fi
  done
  return 1
}

# Client must be up before starting main loop
while sleep 5
do
  if keep_on; then
    break
  fi
done

FAIL_COUNT=0

# main script
while sleep ${CHECK_EVERY}
do
  if keep_on; then
    FAIL_COUNT=0
  else
    FAIL_COUNT=$((FAIL_COUNT+1))
    echo $FAIL_COUNT
  fi
  if [ $FAIL_COUNT -ge $MAX_FAIL_COUNT ]; then
    shutdown -p now
    exit
  fi
done 2>&1

Nothing too complex. The variable CHECK_EVERY states that the checks should run every 60 seconds, and MAX_FAIL_COUNT that after 30 failed attempts the box shuts itself down.

There is one failsafe: the script only starts acting after it has received an answer from at least one device on the list. This prevents the box from turning itself off if something goes wrong with my internal DNS or if I plug it into someone else’s network. You never know…

To make this persist between boots, I first made the root filesystem writable with:

su
mount -uw /

Then I saved this script as /conf/base/etc/autoshutdown.sh and added a line calling it to /conf/base/etc/rc.local:

#!/bin/sh

/conf/base/etc/autoshutdown.sh

I also made both scripts executable (chmod +x).

And that is it!

When I get hold of a second Raspberry Pi I’ll post the other scripts here as well.

Letting your pool sleep…

Some very good points on an article I just stumbled upon…

Mount all your filesystems/pools with noatime

This way you won’t generate writes every time a file is read. This was suggested in an episode of TechSnap, where one of the hosts mentioned doing it to avoid writes while doing reads, but I never got around to actually implementing it.

I don’t have other filesystems on my FreeNAS box and ZFS has a property for this. Just run:

zfs set atime=off POOLNAME

Find files modified in the last day or so

A good snippet to find these files is:

find / -mtime -1
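The `-mtime -1` primary matches files modified within the last 24 hours; you can convince yourself with a quick temp-dir demonstration:

```shell
# One freshly touched file and one backdated well past a day:
d=$(mktemp -d)
touch "$d/fresh"
touch -t 202001010000 "$d/stale"
find "$d" -type f -mtime -1   # lists only "fresh"
```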

Relocate directories and files to non-rotating media

Also another great suggestion on the original article:

Get a cheap USB drive (it does not need to be big) and format it as ext4 (technically, you could set up another ZFS pool there too). Then:

  • Set it to be mounted at `/var/volatile` in your fstab.
  • Move directories that contain frequently modified files there, then symlink them from their original location. For example, move `/var/log` to `/var/volatile/log`, then create a symbolic link to `/var/volatile/log` named `/var/log`.
  • Make a cron job to back up the contents of this USB drive nightly (think `rsync -a`) to a backups directory somewhere in your pool.

Once the most frequently modified files live in `/var/volatile`, your disks will be idle unless you are actually using your file server. Now it’s time to take advantage of that idleness.
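The move-and-symlink step can be rehearsed safely with a couple of temp directories standing in for the real paths (no root or USB drive needed; `$vol` plays `/var/volatile`, `$var` plays `/var`):

```shell
# Stand-ins for /var/volatile (the USB mount) and /var:
vol=$(mktemp -d)
var=$(mktemp -d)
mkdir -p "$var/log"
echo "hello" > "$var/log/messages"

# Move the directory onto the "USB drive", then symlink it back:
mv "$var/log" "$vol/log"
ln -s "$vol/log" "$var/log"

cat "$var/log/messages"   # still readable through the symlink
```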