VLAN DMZ on Debian Virtual Server with Qemu and Libvirt

I have a set of virtual servers running on a single home physical server. I like to be able to remote into my home servers when I’m away from home so that I can get to the home automation if I need to. So one of my virtual servers is internet facing – I have a port on my router mapped through to ssh on that server.

I thought I had configured that server to accept public-key ssh logins only, but apparently I hadn’t, and that server was breached. Which is embarrassing. So I decided to do a solid pass through the security of my environment.

My desire is to run a DMZ network, with only a jump-box machine in the DMZ visible to the internet. This machine will be locked down tightly, and only permit ssh off that machine to a single machine in the inner network zone. No root login will be permitted on any ssh. That means that a penetration of the environment would require cracking the public key logon on the jump-box, then cracking the username/password of the single machine that is connectable from that jump-box.
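On the jump-box itself, the sshd side of that lockdown can be sketched as a fragment of /etc/ssh/sshd_config (a sketch only – the username is a placeholder, and the firewall rules restricting outbound ssh to the single inner machine aren’t shown here):

```
# /etc/ssh/sshd_config (fragment) – key-only login, no root
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
AllowUsers jumpuser
```

Restart sshd after changing this, and test the key-only login from a second session before closing your working one.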

I’ll walk through the configuration I did there, but a key in all this is that it means my host server (the physical machine that hosts the virtuals) is actually connected to both the inner network and the DMZ network. But I don’t want that machine answering any connections on the DMZ network – that would effectively put the physical host into the DMZ. The hardest thing in this setup was having virtual servers that are on the DMZ network without having the host itself on that network.

So, configuration that I did. I am using a single physical network interface on the host, and I want from that:

  • The physical network interface itself. I’d like this to not get an IP address, as I don’t want it to be routable
  • A bridge interface built from the physical interface, with a DHCP address on the inner network (VLAN 1 / default). This bridge is made available to all virtual machines on the inner network
  • A virtual network interface built from the underlying physical interface, tagged to VLAN 3 (my DMZ). All traffic that transits this interface will be on VLAN 3 without any configuration needed on the individual machines. I don’t want this interface to be routable – I don’t want the host itself on the DMZ network
  • A bridge interface built from this virtual interface, allowing virtuals to be on the DMZ (VLAN 3). Again not routable from the host, as I don’t want the host on the DMZ network
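For reference, the same topology can be built by hand with iproute2, which is handy for testing before committing anything to the interfaces file (a sketch – needs root, and assumes 802.1Q VLAN support in the kernel):

```
ip link add link enp3s0 name enp3s0.3 type vlan id 3
ip link add name br0 type bridge
ip link add name br0-3 type bridge
ip link set enp3s0 master br0
ip link set enp3s0.3 master br0-3
ip link set enp3s0 up
ip link set enp3s0.3 up
ip link set br0 up
ip link set br0-3 up
```

Anything created this way disappears on reboot, which is exactly what you want while experimenting.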

To achieve this, I have configuration in both /etc/network/interfaces and in /etc/dhcpcd.conf. Both of these are involved in network routing and DHCP addresses, and I found that if I just create bridge configuration in the interfaces file then dhcpcd will still grab that bridge and allocate an IP address to it, making it a routable interface.

Note that I did play around with using the nogateway directive in dhcpcd.conf, and with using post-up hooks to delete the routes after they were created. Neither of those gave the result I wanted.

Here is my configuration in /etc/network/interfaces:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

auto enp3s0
iface enp3s0 inet manual

# vlan network interface - the .3 automatically tells it what to tag (apparently) - 3 is DMZ
auto enp3s0.3
iface enp3s0.3 inet manual

# The primary network interface
auto br0
iface br0 inet dhcp
  bridge_ports enp3s0
  bridge_stp off
  bridge_maxwait 0
  bridge_waitport 0
  bridge_fd 0
  metric 10
  pre-up /sbin/ifconfig enp3s0 mtu 9000

# the vlan bridge - DMZ
auto br0-3
iface br0-3 inet manual
  bridge_ports enp3s0.3
  bridge_stp off
  bridge_maxwait 0
  bridge_waitport 0
  bridge_fd 0
  metric 1000
  pre-up /sbin/ifconfig enp3s0.3 mtu 9000

This defines my primary interface enp3s0, but sets it to manual addressing – in other words, please don’t allocate an IP address to it. Note that if you only do this, dhcpcd will still allocate an address, so we need more configuration in dhcpcd.conf.

Then it defines a bridge br0 from that interface. This is the primary network connection that this server uses, and also the bridge used for any virtual that is on the inner network. It gets its IP address from DHCP; my DHCP server allocates a fixed address based on the MAC.

Next, I define a virtual network interface on my DMZ network VLAN 3, enp3s0.3. It appears that the .3 on the end of the interface name is how you tell it to tag to VLAN 3 – there isn’t a separate tag directive. Again, I don’t want an IP address for this interface, as I don’t want the host answering on the DMZ network.

Finally, I define a virtual bridge br0-3 on the virtual network interface, giving me a bridge that I can make available to my virtual servers that I want on the DMZ network.
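Once the interfaces are up, you can confirm the tagging and the bridge membership with standard iproute2 commands (the comments describe what I’d expect to see on this setup):

```
ip -d link show enp3s0.3   # detail view should include: vlan protocol 802.1Q id 3
bridge link show           # should list enp3s0 under br0, and enp3s0.3 under br0-3
```

If the vlan id doesn’t show up, the 8021q module probably isn’t loaded.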

Now, I need to do configuration in /etc/dhcpcd.conf to tell it which of these devices I want to have IP addresses. I am also blocking ipv6, because I haven’t worked out yet what firewalling I’d need on this.

interface br0
  noipv6

interface enp3s0
  noipv4
  noipv6

interface enp3s0.3
  noipv4
  noipv6

interface br0-3
  noipv4
  noipv6

So we have interface br0, which we are happy to have an ipv4 address on (the only addressable interface), but no ipv6 address. Our other three interfaces get neither ipv4 nor ipv6, meaning they’re not routable at all.

We can check the routing that we get by using ip route:

root@server:/home/paul# ip route
default via 192.168.1.1 dev br0 proto dhcp src 192.168.1.4 metric 10 
default via 192.168.1.1 dev br0 metric 10 
192.168.1.0/24 dev br0 proto dhcp scope link src 192.168.1.4 metric 10 
root@server:/home/paul# 

This is telling us that the only routes available are via the gateway on the inner network (192.168.1.1), and via br0 with its IP address. Prior to this configuration you would have seen something more like this:

paul@server:~/ansible$ ip route
default via 192.168.1.1 dev br0 proto dhcp src 192.168.1.4 metric 10 
default via 192.168.1.1 dev enp3s0 proto dhcp src 192.168.1.148 metric 202 
default via 192.168.3.1 dev enp3s0.3 proto dhcp src 192.168.3.148 metric 204 
default via 192.168.3.1 dev br0-3 proto dhcp src 192.168.3.201 metric 206 
192.168.1.0/24 dev br0 proto dhcp scope link src 192.168.1.4 metric 10 
192.168.1.0/24 dev enp3s0 proto dhcp scope link src 192.168.1.148 metric 202 
192.168.3.0/24 dev enp3s0.3 proto dhcp scope link src 192.168.3.148 metric 204 
192.168.3.0/24 dev br0-3 proto dhcp scope link src 192.168.3.201 metric 206 

This is what I had before I started the dhcpcd.conf changes, and it meant the host server was answering on all 4 interfaces, and it was very hard to work out what was going on / where network traffic was going.
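A quick scripted version of the same check – pull the device names out of the ip route output and make sure only br0 appears (a sketch using awk):

```shell
# List every device that currently holds a route; after the dhcpcd.conf
# changes this should print only br0.
ip route | awk '{for (i = 1; i <= NF; i++) if ($i == "dev") print $(i+1)}' | sort -u
```

Handy to drop into a monitoring script so you notice if an interface ever picks up an address again.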

Compiling and testing RAID6 on btrfs

As I’ve noted earlier, there is code available and in kernel 3.9 that provides for native RAID5 and 6 on btrfs. I have some spare time this weekend, I’m going to compile a new kernel on a virtual machine, and have a play. This post contains instructions for anyone who might want to do the same.

Continue reading

Virtualisation and disk caching

In this post I’m talking a bit about virtualisation and the disk caching options, how I interpret what they’re doing, and why I’m choosing the settings that I am.

This discussion is really only applicable in a world where your storage is direct attached to your host, usually as SATA disk.  If you’re using network attached storage (NAS) or a full-fat SAN environment, then you’d typically attach the storage directly to the virtual machine, and therefore the host is unlikely to be providing any caching for you.

Continue reading

Yet more btrfs and RAID6

I’m thinking of downloading the new kernel and trialling it on a virtual this weekend, at which point I can maybe post some experiences / howto information.

In the meantime, what I see that’s new is an update on the FAQ page.  There is support for:

  • RAID-5 (many devices, one parity)
  • RAID-6 (many devices, two parity)
  • The algorithm uses as many devices as are available (see note, below)
  • No scrub support for fixing checksum errors
  • No support in btrfs-progs for forcing parity rebuild
  • No support for discard
  • No support yet for n-way mirroring
  • No support for a fixed-width stripe (see note, below)

It looks like the fixed-width stripe is the key one.  The implication is that it uses all the devices available, so if you happened to have 12 disks, it’d build a 12 disk RAID.  This impacts seek times – every disk needs to seek to the right place simultaneously to get that data.  Whereas if you had 12 disks and btrfs decided to build stripes using 6 disks, then only 6 disks need to seek simultaneously, and on average you can support two different IO requests in parallel.

Secondly, the balancing code doesn’t yet deal with different sized disks.  It sounds like if it didn’t always use as many devices as were available then it would deal OK with mixed size disks, no doubt within limits.

All good progress.  I think to use it I’ll have to download and build a new kernel, and download and build a new version of the btrfs progs.  We’ll see this weekend maybe.


libvirtd using a lot of memory, virt-manager blank

So, when I run top on my machines, I’m starting to see libvirtd using a large amount of memory. I’ve been working through what can be done about that, and found a couple of interesting points.

Firstly, libvirtd is the management daemon only.  You can restart it without impacting your running virtuals, and therefore part of the answer is to just restart it.  There are a number of reports on the web of memory leaks in libvirtd; many appear to have been fixed in various versions, so I assume there is a leak in the version I’m running, 0.9.12.

Secondly, when I restart it, it kills off my virt-manager sessions.  After restarting libvirtd and starting up a new virt-manager, I was finding that my virt-manager was giving a totally blank page on one of my servers.  I eventually tracked this to the fact that there was a hung virt-manager running in the background, and new sessions were just connecting to it.  Killing that session and then restarting virt-manager fixed my issue.
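The restart-and-cleanup sequence boils down to something like this (the service name varies – it’s libvirt-bin on Debian Wheezy, libvirtd on many other distros – and the pid is whatever pgrep reports for the hung session):

```
service libvirt-bin restart   # running guests are not affected
pgrep -l virt-manager         # look for a hung virt-manager in the background
kill <pid>                    # <pid> taken from the pgrep output
virt-manager                  # start a fresh session
```

Only reach for kill -9 if the plain kill doesn’t clear the hung session.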

Continue reading

Part Two: Install virtualisation and a base virtual server

This is the second part of the tutorial on installing a mail server, refer the overview, or hit the tutorials menu at the top, and look at the mail server tutorial category.

In this section, I explain how to install the virtualisation software on an existing Debian (or, possibly, Ubuntu) server, and create a new virtual machine. I’m assuming that you want to use LVM as the backing store; if you don’t, you can back the virtual with a file instead, and I note where you’d do things differently. You can also use drbd as a backing store to allow you to run the same mail server across two different servers without data loss, described here. I’ll mention the point where you should look at that post if you want to do this.

You should check that you’ve installed the right OS version for your base operating system – whilst you can run virtualisation on a 32-bit Linux kernel, it’s generally better to run on 64-bit. A 64-bit host can support 32-bit guests, a 32-bit host cannot support 64-bit guests. There are also memory limitations on 32-bit that get annoying (although unlikely to matter for a mail server).

I also assume that you have a gui environment (an x server, and KDE, Gnome or xfce installed). You can do all this from the command line, but you’ll need to derive the commands for that yourself. Note that you can install the virt-manager package on another machine to manage your server, so you just need the gui environment somewhere, not necessarily on the server itself.
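Running virt-manager from another machine just needs ssh access to the server’s libvirt; for example (user and hostname are placeholders for your own):

```
virt-manager -c qemu+ssh://paul@server/system
```

This tunnels the management connection over ssh, so nothing extra needs to be opened on the server.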

Continue reading

Part One: Install a secure Debian Wheezy imap mail server into a virtual using Exim, Dovecot, Fetchmail

So, I’ve seen a number of people visiting to look at this post, which was, to be honest, not much of an instruction list.  Given that there is interest (at least relative to the lack of interest in the rest of my blog!!), I’m making a much more detailed tutorial that takes you through step by step how to install a virtual machine on an existing server, install Debian Wheezy into that virtual, then install Exim, Dovecot and Fetchmail to create a secure server that you can use for personal IMAP e-mail.

This post provides an overview of the components, what each does, and why I chose them.

Continue reading

Virtual machine backup size

As noted in an earlier post https://technpol.wordpress.com/2013/02/18/kvm-drbd-failover-and-backups/, I’ve created a process to backup my virtual machines.

The trick is that much of the disk space I allocated for my virtual machines is empty – they’re running about 25% full.  So ideally my backup would be 25% or less of the total machine size, particularly after I zip the extracted file.

As noted in that previous post, I’m just backing up the image itself directly.  If the bits of the file system that aren’t used were all filled with zeroes, it would zip really well.  In reality, the bits of a file system that aren’t used are filled with whatever was there before – essentially random stuff.  That compresses much less well.
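You can see the effect directly by compressing a zero-filled file against a random-filled one of the same size (a small sketch, working in /tmp):

```shell
# 8 MB of zeroes vs 8 MB of random data
dd if=/dev/zero    of=/tmp/zeros.img  bs=1M count=8 2>/dev/null
dd if=/dev/urandom of=/tmp/random.img bs=1M count=8 2>/dev/null
gzip -kf /tmp/zeros.img /tmp/random.img
ls -l /tmp/zeros.img.gz /tmp/random.img.gz
# the zero image compresses to a few KB; the random one barely shrinks
```

The backup image behaves the same way: zeroed free space costs almost nothing after compression.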

So, my aim is to zero fill the spare space.  This comes in two flavours:

1. The swap space, which is within the drbd device

2. The ext4 file system.
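The usual approach for each, run inside the guest, looks like this (a sketch – /dev/vda5 is a placeholder for whatever device your guest’s swap actually lives on, and the first dd deliberately runs until the filesystem is full before failing):

```
# ext4: write zeroes until the filesystem fills, then delete the file
dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile; sync

# swap: zero the device, then rebuild the swap signature
swapoff /dev/vda5
dd if=/dev/zero of=/dev/vda5 bs=1M
mkswap /dev/vda5
swapon /dev/vda5
```

Do the swap steps only with the correct device name – mkswap on the wrong device will destroy data.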

Continue reading

KVM, DRBD, failover and backups

I’ve created a number of virtual machines as I’ve reinstalled my servers.  My old setup had lots of services all running on one main server, and had become a little fragile.  Upgrading any one aspect without breaking others was tricky, and I had poor documentation of what I’d done.  So in the new install I created a set of virtuals, both to ease the migration, and to give me a more stable setup.

The basic process here was that I:

  1. Upgraded my second server (test server) to a 64-bit kernel, which involves a full reinstall.  Got that running adequately
  2. Installed a couple of virtuals – these are development environments (a services node with mysql, git and a mediawiki; an apps node with ruby on rails) for something else I’m doing – I’ve been writing a few things in ruby over the last couple of years, most notably a watering system that I built and want to upgrade
  3. Stabilised that, and made sure it was all running reasonably stably
  4. Moved the core services off my main server – the media backend as one server, a mail server, a web server.  This leaves basically NFS and Samba on the main server
  5. Once that’s stable, then reinstall the main server, which now has much less stuff running on it to go wrong
  6. Then get the virtuals mobile across the two servers

This post deals with that last step – getting the virtuals mobile so I can move them from one server to the other, and a method to back them up without taking them offline.

The technology I chose to use was DRBD, which is a distributed block device.  Basically this is a mirror across two servers, which runs active/passive – so one node is active, the other gets replication with a slight lag.  There are a bunch of different instructions on the web on how to do this, so I guess this is another one, again particular to my setup.
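For context, a minimal DRBD resource definition looks something like this (a sketch only – the hostnames, LVM volume and addresses are placeholders, not my actual config):

```
resource r0 {
  protocol C;
  device    /dev/drbd0;
  meta-disk internal;
  on server1 {
    disk    /dev/vg0/vm-disk;
    address 192.168.1.4:7789;
  }
  on server2 {
    disk    /dev/vg0/vm-disk;
    address 192.168.1.5:7789;
  }
}
```

The virtual machine is then pointed at /dev/drbd0 rather than the LVM volume directly, so either server can become the active node.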

Continue reading