I’ve been building a trial version of our production environment, hosted on AWS. I’m using Debian as it’s what I’m most familiar with, and I’ve chosen Puma to serve the Rails end of the site.
Puma comes with an init.d script and a control process called pumactl. These were non-trivial to get working, particularly when also using RVM, so I thought I’d document what I did in case someone else wants to do the same. The script uses great terminology based around the puma theme, so we have a jungle of pumas which may or may not come out to play. In essence, you can have multiple Puma instances on your server, each serving a different application.
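To give a flavour of the jungle setup: the init.d script reads a list of applications from /etc/puma.conf, one per line. The field layout below is as I understand it from Puma’s jungle tooling, and the paths and user are illustrative only, not my real layout:

```
# /etc/puma.conf — one app per line, comma-separated:
#   app_dir,user,config_path,log_path
# (paths and user here are examples, not from this post)
/home/apps/defects,deploy,/home/apps/defects/config/puma.rb,/home/apps/defects/log/puma.log
```

With RVM in the mix, the trick is making sure whatever the init script invokes runs under the right Ruby – more on that below.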
I’ve been working on creating a sample infrastructure on Amazon. I’m choosing to use the Debian AMI, mostly because all my servers for a number of years have been Debian, and I’m familiar with it. I wanted to install the logcheck package so I could get notifications, and in turn I wanted those messages to be mailed to me at my home mail address so I could review them.
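The logcheck side of this is straightforward once mail delivery works. On Debian, logcheck runs from cron and mails its reports to the address named in /etc/logcheck/logcheck.conf (SENDMAILTO, which defaults to the local "logcheck" user), so the reports can be redirected through /etc/aliases. A sketch – the destination address is a placeholder:

```
# Install logcheck:
#   apt-get install logcheck

# /etc/aliases — route logcheck's reports (and root's mail generally)
# to an external address; me@example.com is a placeholder
logcheck: root
root: me@example.com

# Rebuild the alias database afterwards:
#   newaliases
```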
Mail on AWS is sent using Amazon SES. There are instructions in the Amazon documentation for configuring Exim, and I found other instructions around the net, but in general they resulted in all local mail also being delivered via SES. This is a problem for me as I haven’t configured any mail receipt for this domain, so lots of mail went into the aether – and it’s generally inefficient to send local mail out via SES anyway. Finally, some of the configurations appeared to push usernames and passwords into the main Exim config file, which I didn’t like much.
I managed to navigate my way to a more standard exim4 configuration that works; this post documents how to achieve it.
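The shape of the configuration is Debian’s standard “smarthost” setup: local mail stays local, remote mail goes out via the SES SMTP endpoint, and the credentials live in exim4’s passwd.client file rather than the main config. A sketch – the SES region, hostname and credentials below are placeholders:

```
# /etc/exim4/update-exim4.conf.conf (key settings — edit via
# `dpkg-reconfigure exim4-config`, or edit and run `update-exim4.conf`)
dc_eximconfig_configtype='smarthost'
dc_smarthost='email-smtp.us-east-1.amazonaws.com::587'
dc_localdelivery='mail_spool'

# /etc/exim4/passwd.client — SES SMTP credentials, kept out of the main
# config (the file is root:Debian-exim, mode 640). Format is
# host:login:password; values here are placeholders.
*.amazonaws.com:AKIAEXAMPLE:your-ses-smtp-password
```

The smarthost config type is what stops local mail leaking out through SES, and passwd.client is what keeps the credentials out of the config file proper.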
I have three web sites that I use internally – they’re part of the development environment. One is a Rails app that we use for tracking defects, the second is a MediaWiki that we use for documentation, and the third is gitweb. We decided to do some travel, but still wanted to access that development environment over the internet.
The first step was to harden the server that hosts these – which I documented a bit here. I also needed to set security on all three: the Rails app already had a user logon; MediaWiki needed changes, following the instructions at the MediaWiki site, to make it accessible only with a userid; and gitweb I put behind simple Apache authentication with a username and password (we only have two users at the moment, so not too onerous).
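For the gitweb piece, the Apache basic-auth setup is short. The file locations and usernames below are illustrative, not my actual paths:

```
# Create the password file (htpasswd comes from apache2-utils);
# -c creates the file, so only use it for the first user:
#   htpasswd -c /etc/apache2/gitweb.htpasswd alice
#   htpasswd /etc/apache2/gitweb.htpasswd bob

# In the gitweb vhost or conf file:
<Location /gitweb>
    AuthType Basic
    AuthName "gitweb"
    AuthUserFile /etc/apache2/gitweb.htpasswd
    Require valid-user
</Location>
```

Basic auth sends credentials essentially in the clear, so this only makes sense over HTTPS.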
The next step was to proxy all of these from a single web server, so that they appear to be one web site (I only have one domain name). This was surprisingly hard, so I thought I’d write up some instructions for what I did. I’m running Debian Wheezy, so these instructions are relevant for Debian and probably Ubuntu.
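The core of the single-front-end arrangement is mod_proxy. A sketch of the vhost – the backend addresses and URL prefixes are made up for illustration, and real apps often also need their own config changed to live under a sub-path:

```
# Enable the proxy modules, then restart:
#   a2enmod proxy proxy_http
#   service apache2 restart

<VirtualHost *:443>
    ServerName www.example.com

    ProxyRequests Off        # reverse proxy only — never an open forward proxy
    ProxyPreserveHost On

    ProxyPass        /defects http://10.0.0.10:3000/defects
    ProxyPassReverse /defects http://10.0.0.10:3000/defects

    ProxyPass        /wiki    http://10.0.0.11/wiki
    ProxyPassReverse /wiki    http://10.0.0.11/wiki

    ProxyPass        /gitweb  http://10.0.0.12/gitweb
    ProxyPassReverse /gitweb  http://10.0.0.12/gitweb
</VirtualHost>
```

ProxyPassReverse matters here: it rewrites the Location headers in redirects coming back from the backends, so redirects don’t leak internal addresses.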
So, I’ve been making a couple of servers publicly accessible so as to provide some services outside the firewall. The main things I’m aiming to have available are git and MediaWiki, which in turn means making the SSH and Apache services available.
I have a virtual server that is my main gateway, and a Smoothwall that sits at the perimeter. I’ve been through a bunch of different sites and looked at lots of documentation; this post just points to a few of them in case someone else is following the same path.
I’ve been using rdiff-backup to take an incremental backup, every 15 minutes, of the directory I store code in. The logic is that I occasionally decide to do some refactoring that I probably shouldn’t have, and if I’m not disciplined about using git then I don’t have a save point. I suspect I don’t really need this, but it’s been running.
Recently it started giving an error along the lines of:

Exception '[Errno 22] Invalid argument: '/home/backups/development-apps/apps/tmp/rdiff-backup.tmp.479'' raised of class '<type 'exceptions.OSError'>'
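For context, the setup is just a cron job; the source and destination paths below are illustrative of the arrangement rather than exact. A leftover rdiff-backup.tmp.* file often indicates an interrupted session, and one thing worth trying in that situation is rdiff-backup’s own repair option, which rolls the destination back to the last consistent state:

```
# crontab entry — incremental backup every 15 minutes
*/15 * * * * rdiff-backup /home/apps /home/backups/development-apps

# If a session was interrupted, roll the destination back to a
# consistent state before the next run:
#   rdiff-backup --check-destination-dir /home/backups/development-apps
```

That said, an EINVAL from the filesystem can have other causes, so this is a first thing to try rather than a guaranteed fix.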
I’ve chosen to set up my development environment with two servers – a central MySQL server, and an app server on which Rails runs. This reflects my eventual intent – the production system will have the MySQL database separate from the app server – and I figure it’s easier to work that way from the start. I spend a lot of time ssh’ing between these servers, and a bunch of tools such as scp and git run over top of ssh.
My aim in this post is to configure things for a bit more security – setting up so that you log on with public keys rather than by providing a password. This makes logging on easier and quicker, and it’s actually more secure than using a password.
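The flow is: generate a key pair on the machine you log on from, then append the public key to ~/.ssh/authorized_keys on each server you log on to (ssh-copy-id does that last step for you). The sketch below demonstrates the mechanics locally in a throwaway directory – the paths are demo paths, not real configuration, and the empty passphrase is for illustration only (use a real passphrase plus ssh-agent in practice):

```shell
#!/bin/sh
set -e

# Work in a throwaway directory so nothing real is touched.
demo_dir=$(mktemp -d)

# Generate a key pair; -N '' means no passphrase (demo only).
ssh-keygen -t rsa -b 4096 -f "$demo_dir/id_rsa" -N '' -q

# On the real servers you would instead run:
#   ssh-copy-id user@dbserver
# which appends the public key to ~/.ssh/authorized_keys remotely.
# Simulated locally:
cat "$demo_dir/id_rsa.pub" >> "$demo_dir/authorized_keys"
chmod 600 "$demo_dir/authorized_keys"

# Show the new key's fingerprint.
ssh-keygen -l -f "$demo_dir/id_rsa.pub"
```

The permissions matter: sshd refuses keys if authorized_keys or ~/.ssh is group- or world-writable.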
As I’ve noted earlier, code is available in kernel 3.9 that provides native RAID5 and RAID6 on btrfs. I have some spare time this weekend, so I’m going to compile a new kernel on a virtual machine and have a play. This post contains instructions for anyone who wants to do the same.
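As an outline, the Debian-friendly route is to build the kernel into .deb packages with the in-tree deb-pkg target. This is a sketch of the steps, not a script to run verbatim – versions and package names are assumptions, and the build takes a long while:

```
# Build prerequisites:
#   apt-get install build-essential libncurses5-dev bc

# Fetch and unpack the 3.9 source:
#   wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.9.tar.xz
#   tar xf linux-3.9.tar.xz && cd linux-3.9

# Start from the running kernel's config, then enable btrfs:
#   cp /boot/config-$(uname -r) .config
#   make menuconfig          # ensure BTRFS_FS is enabled

# Build installable packages and install the image:
#   make -j"$(nproc)" deb-pkg
#   dpkg -i ../linux-image-3.9.0*.deb
```

Building on a virtual machine, as I’m doing, means a bad kernel costs you nothing but a snapshot rollback.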