Upgrading my SSH and SSHd settings

Thanks to the great articles by stribika on GitHub and by Aaron Toponce, these are the updated settings I am using now:

Protocol 2
Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128,arcfour 
KexAlgorithms diffie-hellman-group-exchange-sha256
MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160


Host *
    Ciphers aes256-ctr,aes192-ctr,aes128-ctr,arcfour256,arcfour128,arcfour
    KexAlgorithms diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1
    MACs hmac-sha2-512,hmac-sha2-256,hmac-ripemd160
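
Before deploying a list like this, it is worth checking which algorithms your local OpenSSH build actually supports, since older builds may lack some entries and an unsupported name will make the config fail to parse. OpenSSH (6.3 and newer) can print its compiled-in lists:

```shell
# Print the ciphers, key-exchange algorithms and MACs this OpenSSH build supports.
ssh -Q cipher
ssh -Q kex
ssh -Q mac
```

Any algorithm you put in `Ciphers`, `KexAlgorithms` or `MACs` must appear in the corresponding list.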

How do I name my servers?

I’ve been asked this question a couple of times already! The process is straightforward now, but it wasn’t always!

At first I named my servers according to their purpose: “HOMESERVER”, “UTORRENT”, etc. This turned out not to work very well once I started to fiddle with virtualization. I needed to give them proper names that were not tied to the main software they were running, so I could eventually switch, as I did when I decided to go with FreeNAS instead of Microsoft Windows Home Server.

I took the solution from watching the news about the US hurricane season: the storms are given names in alphabetical order. My wife suggested using stars, since several analogies could be made: constellations could hold servers that share a common purpose, the size of the star could reflect the given server’s processing power, there are several galaxies or groups of “stars”, and the list goes on and on.

Right now I have the following names being actively used:

  • Andromeda – this is my low-power, always on, Proxmox Virtualization box.
  • Betelgeuse – this is my uTorrent-based downloader.
  • Capella – my FreeNAS 4x2TB ZFS Raid-Z file server.
  • Deneb – my cloud backup virtualized box: every hour it takes a snapshot of the websites I manage and also downloads and archives all Gmail messages locally; these backups are all sent to Capella for long-term storage when it comes online.
  • Elnath – my git code server virtualized box.
  • Smokeping – this is a virtualized network test machine that constantly pings several addresses to keep track of when and where network failures occur, and yes, it needs a new name.
  • Puppetmaster – this is the puppet master virtualized machine I use to coordinate puppet settings, I also need a new name for it.

So there you have it, and before you ask, Andromeda is a galaxy but is also a constellation!

“Warning: /var/lib/mlocate/daily.lock present”

I am still fixing small issues as they appear in my home setup. Right now I have a file server running FreeNAS named capella.home and a virtualization box running Proxmox named andromeda.home.

I configured andromeda to map an NFS share from capella as a repository for images and templates, and for backups to be saved as well. Every day andromeda performs full backups of all my VMs, but to conserve power and preserve the hardware I turn capella off whenever I’m travelling, and every single time I would get multiple warning e-mails with the message:

Warning: /var/lib/mlocate/daily.lock present, not running updatedb.
run-parts: /etc/cron.daily/mlocate exited with return code 1

This message was being sent both from andromeda and multiple VMs hosted inside it.

At first I thought that this was somehow due to auto-upgrade issues; I even scheduled my VM host to auto-reboot every couple of days to see if that would avoid it (a terribly bad practice, I know), with no success.
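
The warning itself comes from a guard in Debian’s /etc/cron.daily/mlocate script: if the lock file is still present, the previous updatedb run never finished (in my case, it was hanging on the dead NFS mount). The pattern is roughly this (a sketch using /tmp paths so it can run standalone; the real script uses /var/lib/mlocate/daily.lock):

```shell
#!/bin/sh
# Sketch of the lock-guard pattern used by /etc/cron.daily/mlocate.
# /tmp paths are placeholders; the real script guards /var/lib/mlocate/daily.lock.
LOCK=/tmp/mlocate-demo.lock

if [ -e "$LOCK" ]; then
    echo "Warning: $LOCK present, not running updatedb."
    exit 1
fi

touch "$LOCK"
echo "updatedb would run here"   # the real script invokes updatedb at this point
rm -f "$LOCK"
```

So a stale lock file, or an updatedb that never returns, keeps producing the warning on every daily cron run.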

It turns out this was happening because the NFS server was offline and mlocate was trying to index it, so I adapted my puppet base recipe to include the following:

# locate, mlocate and updatedb

# Limit where updatedb scans
file {'/etc/updatedb.conf':
  ensure  => 'present',
  owner   => 'root',
  group   => 'root',
  source  => 'puppet:///etc/base/updatedb.conf',
}

and borrowed the contents of updatedb.conf:

PRUNENAMES=".git .bzr .hg .svn"
PRUNEPATHS="/tmp /var/spool /media"
PRUNEFS="NFS nfs nfs4 rpc_pipefs afs binfmt_misc proc smbfs autofs iso9660 ncpfs coda devpts ftpfs devfs mfs shfs sysfs cifs lustre_lite tmpfs usbfs udf fuse.glusterfs fuse.sshfs ecryptfs fusesmb devtmpfs"

And that is it!

Setting Debian time zone with puppet

To manually change the time zone on a Debian install you would naturally use the interactive command dpkg-reconfigure tzdata.
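
For scripting, there is a non-interactive equivalent (a sketch; assumes Debian and root privileges): write the zone name to /etc/timezone, point /etc/localtime at the zoneinfo file, and re-run the package configuration. An unprivileged way to sanity-check a zone name first is to ask date for its UTC offset:

```shell
# Non-interactive equivalent of `dpkg-reconfigure tzdata` (run as root on Debian):
#   echo "America/Sao_Paulo" > /etc/timezone
#   ln -sf /usr/share/zoneinfo/America/Sao_Paulo /etc/localtime
#   dpkg-reconfigure -f noninteractive tzdata
#
# Unprivileged sanity check: print the UTC offset date computes under that zone.
# Note an unknown zone name silently falls back to UTC (+0000), so compare the
# output against the offset you expect.
TZ=America/Sao_Paulo date +%z
```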

In order to change all your puppet managed machines at once I use this simple recipe:

# Timezone -> America/Sao_Paulo

package {'tzdata':
  ensure  => 'present',
}

file {'/etc/localtime':
  require => Package['tzdata'],
  source  => 'file:///usr/share/zoneinfo/America/Sao_Paulo',
  notify  => Exec['reboot'],  # Exec['reboot'] is defined elsewhere in my manifests
}

file {'/etc/timezone':
  require => Package['tzdata'],
  content => "America/Sao_Paulo\n",
  notify  => Exec['reboot'],
}

That’s it!

Creating a customized Proxmox VE OpenVZ template


Every time I want to test some code or software I usually do it in a virtualized environment to keep it isolated from my main system, and every time I set the machine up from scratch. I use VirtualBox when I am on the go, but at home I have several single-purpose VMs running on Proxmox VE, a powerful open source virtualization platform based on KVM and OpenVZ. Here is how to simplify the setup process by creating a custom Debian-based OpenVZ template:

  1. Create a regular OpenVZ Container having debian-6.0-standard_6.0-6_i386 as base.
  2. With the VM up and running, log in and setup networking. In my case I am using DHCP, so I added the following lines to /etc/network/interfaces:

    auto eth0 
    iface eth0 inet dhcp

    and restarted the network stack with /etc/init.d/networking restart.

  3. Update the system to install the latest patches:

    apt-get update && apt-get upgrade
  4. Make sure sudo and openssh-server are installed:

    apt-get install sudo openssh-server
  5. Create the default admin user, add it to the sudoers list and set up your SSH key:

    adduser USERNAME
    usermod -a -G sudo USERNAME 
    mkdir /home/USERNAME/.ssh 
    echo "YOURSSHKEYHERE" > /home/USERNAME/.ssh/authorized_keys
    chown -R USERNAME:USERNAME /home/USERNAME/.ssh
  6. Add PuppetLabs as a repository and install puppet:

    echo -e "deb http://apt.puppetlabs.com/ squeeze main\ndeb-src http://apt.puppetlabs.com/ squeeze main" >> /etc/apt/sources.list.d/puppet.list 
    apt-key adv --keyserver keyserver.ubuntu.com --recv 4BD6EC30 
    apt-get update 
    apt-get install puppet
  7. Cleanup!

    apt-get --purge clean
    rm -f /etc/hostname 
    cat /dev/null > /etc/resolv.conf

    Let’s remove the current host ssh keys and create a script to auto-generate them on the next boot.

    rm -f /etc/ssh/ssh_host_*
    vi /etc/init.d/ssh_gen_host_keys

    Paste the script, a modified version of the one shown on HowToForge:

    #!/bin/sh
    ### BEGIN INIT INFO
    # Provides:          ssh_gen_host_keys
    # Required-Start:    $remote_fs $syslog
    # Required-Stop:     $remote_fs $syslog
    # Default-Start:     2 3 4 5
    # Default-Stop:
    # Short-Description: Generates new ssh host keys on first boot
    # Description:       Generates new ssh host keys on first boot
    ### END INIT INFO
    ssh-keygen -f /etc/ssh/ssh_host_rsa_key -t rsa -N ""
    ssh-keygen -f /etc/ssh/ssh_host_dsa_key -t dsa -N ""
    /etc/init.d/ssh restart
    insserv -r /etc/init.d/ssh_gen_host_keys
    rm -f $0

    After editing the file, make it executable and install it:

    chmod a+x /etc/init.d/ssh_gen_host_keys
    insserv /etc/init.d/ssh_gen_host_keys
  8. Done setting up the VM, but don’t turn it off yet! Take note of your VM ID (CTID), ssh into the Proxmox host and run:

    vzctl set CTID --ipdel all --save

    You might want to tweak /etc/network/interfaces now. Before continuing, it is a good idea to create a /tmp/excludes file with the following:


    Stop the VM and change directory to the VM root:

    vzctl stop CTID
    cd /var/lib/vz/private/CTID

    Then, tar the directory:

    tar --numeric-owner -czvf /var/lib/vz/template/cache/debian-6.0-YOURCUSTOMTEMPLATE_6.0-6_i386.tar.gz -X /tmp/excludes .
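
    In case the flags are unfamiliar: -X points tar at a file of exclusion patterns, one per line, and --numeric-owner stores raw UID/GID numbers instead of user names (important because the container’s users don’t exist on the Proxmox host). A self-contained illustration with throwaway paths (nothing here touches a real container):

```shell
# Demonstrate tar's -X (exclude-file) and --numeric-owner flags
# using throwaway paths under /tmp.
mkdir -p /tmp/ctdemo/keep /tmp/ctdemo/skip
touch /tmp/ctdemo/keep/a /tmp/ctdemo/skip/b
printf './skip\n' > /tmp/ctdemo-excludes

cd /tmp/ctdemo
tar --numeric-owner -czf /tmp/ctdemo.tar.gz -X /tmp/ctdemo-excludes .

# ./keep/a is archived; ./skip and everything under it is left out.
tar -tzf /tmp/ctdemo.tar.gz
```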

After that, it will be available as a template for you to create new OpenVZ containers from. Please note that the template name should match one of the conf files in /etc/vz/dists (on your Proxmox box); otherwise you will have to write your own.

This was heavily based on the OpenVZ Wiki, How to create a CentOS template and on Proxmox Forums.

That’s it!