Categories
Linux Server Administration

Dedicated IP addresses and virtual machines

In today’s world, more and more things are running virtualized. Increasingly popular are those little things called “containers”. I feel like these are slowly replacing the “old” fully fledged virtual machines (VMs) in many areas. Yet they still exist and I still use them quite frequently.

The following talks mostly about my own typical server setup, which is Debian + VirtualBox. However, principles may apply to different setup types (non-Debian, containers) too.

When running a VM on a server, I often need to assign it a dedicated IP address. How I do this depends a little on the host and the VM, but for my Debian + VirtualBox setups I relied in the past on a very old guide from Hetzner (partially still available here, German only). The guide pretty much suggested this config:

auto virbr1
iface virbr1 inet static
   address (Host IP)
   netmask 255.255.255.255
   bridge_ports none
   bridge_stp off
   bridge_fd 0
   pre-up brctl addbr virbr1
   up ip route add (Additional IPv4)/32 dev virbr1
   down ip route del (Additional IPv4)/32 dev virbr1

(This shows IPv4 only – IPv6 is very similar: inet6 instead of inet, the netmasks replaced by IPv6-compatible syntax, and ip -6 instead of ip)

This is something that you would put into /etc/network/interfaces and then tell VirtualBox to use that interface as a bridge. Then you could configure the guest as you would configure a host by putting the additional IP as static IP and setting the host IP as gateway.

What this technically does is create a new interface using brctl (a command from the bridge-utils package), which is then configured as some type of “fake bridge”, because we don’t actually assign it an interface to bridge to. Instead we tell the kernel that we want packets addressed to our additional IP to be forwarded into this virtual interface, where they get picked up by our VM [this obviously requires forwarding to be enabled in the kernel, e.g. net.ipv4.ip_forward=1 for IPv4 and net.ipv6.conf.all.forwarding=1 for IPv6].
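To make those forwarding switches survive a reboot, a sysctl drop-in file works; the filename below is my own arbitrary choice:

```
# /etc/sysctl.d/99-vm-forwarding.conf (filename is arbitrary)
# Forward packets between interfaces, needed for the routed setup below
net.ipv4.ip_forward = 1
net.ipv6.conf.all.forwarding = 1
```

Apply it without rebooting by running sysctl --system as root.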

This used to work nicely for quite a few years – I believe I’ve been using this setup since either Debian jessie or stretch – somewhere around that. However, on upgrading to Debian bullseye, it broke – the VMs would no longer receive any packets.

I’m still not sure what broke it – probably the new 5.10 kernel or a change in bridge-utils – but I found a solution, hence this blog post. Instead of creating a “fake bridge”, just use a tuntap virtual interface. My new workflow is like this:

Have a bash script run on boot that pretty much does this:

#!/bin/bash
ip tuntap add mode tap virbr1
ip addr add <Host IP> dev virbr1
ip link set virbr1 up
ip [-6] route add <Dedicated IP>/<Netmask> dev virbr1

(I’ve retained the “virbr1” interface name from the example above for consistency)

You can probably convert the above bash script into a syntax compatible with /etc/network/interfaces, but I decided not to bother with that – nowadays there’s often additional network management software installed which just interferes with the old file.
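One way to run such a script on boot is a small systemd oneshot unit; the unit name and the script path below are my own choices, not taken from any particular documentation:

```
# /etc/systemd/system/vm-tap.service (name and script path are examples)
[Unit]
Description=Set up tap interface for VM routing
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/sbin/vm-tap-up.sh

[Install]
WantedBy=multi-user.target
```

Enable it once with systemctl enable vm-tap.service and it will run on every boot.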

The approach is functionally still the same: It’s a routed configuration that forwards packets from the incoming physical interface to the virtual tuntap interface, where they get picked up by the VM – and vice versa for outbound packets. The use of the tuntap interface just avoids the bridged interface, which doesn’t work anymore anyway.

This approach seems to be suggested by the new Hetzner documentation, although they lack examples on how to set up such a tap interface – hence my example above.

For completeness, I will also briefly show how to configure a VM to use this virtual interface:

First of all, make sure IPv4/IPv6 packet forwarding is on – it’s not going to work otherwise. Second, configure VirtualBox to use the virtual interface as a “bridged adapter”, like this:

Screenshot from phpVirtualBox

If you don’t have a GUI for VirtualBox, you will need to figure out the VBoxManage command to do the same thing – good luck with that.
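For reference, here is a sketch of what that VBoxManage invocation could look like – the VM name and the NIC number are placeholders for your own setup:

```shell
# Attach the VM's first NIC to the tap interface in bridged mode
# ("MyVM" and --nic1 are placeholders; adjust to your VM)
VBoxManage modifyvm "MyVM" --nic1 bridged --bridgeadapter1 virbr1
```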

Then, configure your guest like this (example for /etc/network/interfaces)

auto enp0s8
iface enp0s8 inet[6] static
  address <Dedicated IP>
  netmask <Netmask>
  gateway <Host IP>

(The name of the interface – enp0s8 – depends on how your guest OS names the bridged adapter from VirtualBox – check ip a on the guest)

And that’s it. That’s the very short tutorial on how to assign your VMs dedicated IP addresses (v4 or v6, or both).

Categories
Linux Server Administration

Dovecot and Argon2 doesn’t work? This may be why.

Sorry, title is too long. Again. Nevertheless, ignore this and continue on…

I run my own mailserver. I love doing this, don’t ask me why. Anyway, I was recently migrating a few password hashes to Argon2. I confirmed manually that everything was working, I checked that Dovecot was able to generate and verify Argon2 hashes, that my backend storage was doing everything correctly and so on.
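For anyone wanting to reproduce that manual check: doveadm can generate and verify hashes on the command line (the scheme name and test password here are just examples):

```shell
# Generate an Argon2id hash for a test password:
doveadm pw -s ARGON2ID -p 'test1234'
# Verify a stored hash against a password (paste the full hash from above):
doveadm pw -t '{ARGON2ID}$argon2id$...' -p 'test1234'
```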

Then I changed a bunch of passwords to migrate them over to Argon2 (I was previously using bcrypt, but I started to like Argon2 more because of its resistance to many optimization attacks). Just after I had those new Argon2 hashes in the database, I could no longer log in using these users. I think it worked like once and then never again.

Well, damn. I spent hours researching what might be wrong. Dovecot was simply spitting out its usual “dovecot: auth-worker(pid): Password mismatch” message. Nothing I could get any information from. To summarize what I found on the ’net: nothing of use.

So, why am I writing this post then? Well, because I finally figured out what’s wrong. The Dovecot documentation states this:

ARGON2 can require quite a hefty amount of virtual memory, so we recommend that you set service auth { vsz_limit = 2G } at least, or more.

https://doc.dovecot.org/configuration_manual/authentication/password_schemes/

Well, I obviously already did that – I do read the documentation from time to time, at least when I’m trying to solve a critical problem. But you know what’s strange? The docs also state that the actual login isn’t handled by the auth service; instead it’s usually done by a service called auth-worker (at least if you’re using a database like I do) – that’s also the thing producing the “Password mismatch” log messages.

To make a long story short, what happened was that Dovecot killed the auth-worker process when it hit its memory limit while trying to hash the Argon2 password. This simply triggered a generic “Password mismatch” message instead of something useful like “out of memory”, so yeah…

Lesson Learned: If you’re using Argon2, increase the memory limit of auth-worker, not the auth service like the docs tell you to.

This was the solution. I simply added a few config lines to Dovecot:

service auth-worker {
   # Needed for argon2. THIS IS THE IMPORTANT SETTING!
   vsz_limit = 0 # Means unlimited; other values like 2G or more are also valid
   # Other custom settings for your auth-workers here...
}

And login was working again. I never found anyone mentioning that you need to set this value for auth-worker instead of auth, which is why I wrote this little post.

Categories
Transport Layer Security (TLS)

Undocumented openssl.cnf options and PrioritizeChaCha

Blarg, another long title. Again. Sorry.

So, this is something that I actually discovered a while (months) ago, so my memories are already a bit less fresh, but I think I still remember the important things.

I discovered this because I had a common TLS cipher suite config problem: I needed a server-side cipher order (i.e. the server has its own preferences), because I had some legacy weaker ciphers enabled and we obviously don’t want clients with an incorrect cipher order to connect with a potentially weak suite when there are better suites available. But I also wanted to use ChaCha (more on that later).

If you don’t understand what I’m talking about, below is a summary on how TLS cipher suite negotiation works. If you know that stuff already, skip this chapter.

TLS cipher suite negotiation


TLS negotiates a cipher suite on each connection. There are many available – some are really secure, others are “okay-ish”, others are really bad. The really bad ones are usually disabled either at server or client-side (or both). But the “okay-ish” suites are generally enabled both at server and client side, even though client and server potentially also support the strong “good” ones.
What happens is that the client gives the server a list of cipher suites, which should be sorted in order of preference, and the server chooses one of them, depending on what is supported by both.

The server has generally two ways of choosing:

  • Server preference. The server maintains its own list of “preferred” cipher suites and chooses the best one on its list, depending on what the client supports.
  • Client preference. The server just chooses the first suite of the client’s list that both parties support.

The thing with client preference is that there are (or were) some clients that send incorrectly ordered cipher suite lists – they have insecure ciphers at the top and more secure ones at the bottom. If the server lets the client choose, a weak cipher suite will be negotiated, even though both parties may support something stronger.

On the other hand, if the server maintains a correctly sorted list, one can guarantee that with server preference, the server will choose the most secure option depending on what the client supports. That’s the reason why server preference is the most common setting in real-world TLS.

Just use server cipher order and be done with it!

Yeah, that’s what many people do. And it was fine for some time. The thing is, TLS 1.2 at some point introduced a new cipher / cipher suite:

TLS_CHACHA20_POLY1305_SHA256

That is an entirely new encryption algorithm (ChaCha with 20 rounds) and Poly1305 authentication. While those may not be exactly new, they were newly introduced into TLS.

What’s so special about ChaCha?

I won’t go into details here, I’m sure there are already posts elsewhere that cover this. Basically, ChaCha is (probably) secure and fast. This is interesting for machines without AES hardware acceleration (no AES-NI). This applies for example to most ARM-based systems, like smartphones. Other examples may be IoT or embedded devices. Those don’t have accelerated AES, and ChaCha is noticeably faster on them, especially when comparing against AES-256. Again, I won’t post big benchmarks here, those are elsewhere.

In summary, we would really want to use ChaCha on such devices.

Then just set ChaCha as preferred server cipher and be done with it!

But… I still like AES! Because that’s the thing: while ChaCha may be the new cool thing, many devices do have AES-NI (hardware-accelerated AES) and thus can do AES faster than ChaCha. Plus, AES has been around for quite a while and so far we consider it secure. It’s also FIPS-certified (and similar), for people who are into that stuff.

So we still want to use AES! We just don’t want to use it with devices that have no AES-NI.

How can you get the best of both worlds?

What we need is something like this: If the client says “I don’t have AES-NI”, use ChaCha. If the client doesn’t speak ChaCha, or if it has AES-NI, use AES.

But how to determine if a client doesn’t have AES-NI? Fortunately, the standards made this not that hard: Modern clients that speak ChaCha will put ChaCha on the top of their cipher list, if they don’t have AES-NI.

Older clients, or clients with AES-NI will have something else on top. So the easiest way to use ChaCha with clients is to let the clients choose: If they have ChaCha on the top, use that. If not, use whatever else they have at the top. But that puts us back to the original problem above! What if an old, broken client puts legacy stuff on the top? There was a reason why we used server cipher preference.

Okay, so we want both client AND server cipher preference. That’s sadly something that OpenSSL can’t do. But OpenSSL developers saw the issue and offered us a solution:

OpenSSL has a flag for this!

Yep, that’s right. OpenSSL has a flag, called “PrioritizeChaCha”, that does exactly what I described above: it will choose ChaCha if the client says “that’s my most preferred cipher suite”, but will still honor the server cipher order in all other cases.
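My mental model of that rule, as a little bash sketch (this is my own simplification of the selection logic, not OpenSSL’s actual code; the cipher names are shortened placeholders):

```shell
#!/bin/bash
# Server-preference selection with the PrioritizeChaCha exception:
# if the client's FIRST choice is a ChaCha suite and the server
# supports it, pick it; otherwise fall back to plain server preference.
select_cipher() {
  local server_list="$1" client_list="$2"
  local client_top=${client_list%% *}   # client's most preferred suite
  if [[ "$client_top" == *CHACHA20* ]]; then
    for s in $server_list; do
      [[ "$s" == "$client_top" ]] && { echo "$client_top"; return; }
    done
  fi
  # Plain server preference: first server cipher the client also offers.
  for s in $server_list; do
    for c in $client_list; do
      [[ "$s" == "$c" ]] && { echo "$s"; return; }
    done
  done
  echo "none"
}

# Server prefers AES, but a ChaCha-first client still gets ChaCha:
server="AES256-GCM CHACHA20-POLY1305"
select_cipher "$server" "CHACHA20-POLY1305 AES256-GCM"  # -> CHACHA20-POLY1305
select_cipher "$server" "AES256-GCM CHACHA20-POLY1305"  # -> AES256-GCM
```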

However, that is generally a compile-time flag, meaning that you need to compile OpenSSL with this option if you want it. This isn’t really what I consider ideal. Many people use pre-compiled packages – after all, most distros ship programs this way. Especially when talking about security-relevant stuff like OpenSSL, I personally like relying on the Debian security team to update important packages. I just don’t favor the idea of compiling OpenSSL myself – it’s a lot more work for me, with not much benefit.

There’s another way: Setting this without re-compiling OpenSSL!

Yup, and that’s what this blog post was supposed to be about. The entire text wall above was just introduction for this.

OpenSSL has a config file. If you didn’t know that, I don’t blame you: I didn’t either, until last year or so. Since OpenSSL is mostly used as a library, there isn’t much to configure. On Debian-like systems, the file is /etc/ssl/openssl.cnf. 99% of that file is related to Certificate Authority settings. I think that is because you can make your own CA with OpenSSL and that file is for persistently setting parameters needed for a real CA.

But you can set more than just CA-specific parameters! The file can do more. But here’s the thing:

It’s fucking undocumented (except for the CA stuff)!

At least that’s what I think. If you find detailed documentation for that file that explains all the flag options it has, please link it here! In that case you’re better than me at researching, because I found hardly ANY info about that file.

I had to read the source code of OpenSSL to get a bit of understanding on what you can do with this file. If you have to read the source code to understand something, the documentation is shit.

I will now try to give some guidance for that file. Ignore all the CA stuff – I’m not going to explain that. I’m interested in what this file can do to modify the TLS behavior.

Basically, the file is structured in sections, which are formatted like this: [section_name] followed by a list of settings valid for that section. A section ends when the next section starts.

In order to configure TLS options, we need to look at which sections are responsible for that. The starting point is the setting openssl_conf, which is part of some kind of “root section” that has no specific name. The default value of openssl_conf is default_conf. In order to define what default_conf should be, you can define a section called default_conf that sets everything you want.

Since we’re interested in TLS (SSL) specific settings, we set the ssl_conf parameter in our custom default_conf to the name of a section, let’s say ssl_sect (short for ssl_section). In the ssl_sect we then declare a setting system_default which holds some TLS options, unless the application overrides them explicitly. We set system_default to system_default_sect, again that is short for “system default section”.

In this section, we finally get to set our TLS/SSL parameters. You can for example define custom TLS min/max versions, default cipher suites and most importantly flags like PrioritizeChaCha.

[default_conf]
ssl_conf = ssl_sect

[ssl_sect]
system_default = system_default_sect

[system_default_sect]
MinProtocol = TLSv1.2
CipherString = DEFAULT@SECLEVEL=2
Options = PrioritizeChaCha,NoRenegotiation

The example above sets default parameters for each application that doesn’t specify anything else on its own:

  • Minimum TLS version is set to TLS 1.2
  • Some insecure cipher suites are disabled (to be honest, I’m not exactly sure what SECLEVEL=2 covers – if I recall correctly, it roughly means a minimum of 112 bits of security)
  • Two flags are set: PrioritizeChaCha and NoRenegotiation

The important things here are the flags: PrioritizeChaCha is explained in the beginning of this post. This config would enable the flag by default for all applications using OpenSSL. The other flag, NoRenegotiation, disables TLS renegotiation. That’s mostly “for fun”. Renegotiation was broken in the past, has little use and was even removed in TLS 1.3, so there’s not much reason to have it on in TLS 1.2 either.
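One way to check the behavior from the outside is to connect twice with different client cipher orders and see what the server picks (replace example.com with your own server; the cipher names are standard OpenSSL TLS 1.2 names):

```shell
# Client that puts ChaCha first (simulating a device without AES-NI):
openssl s_client -connect example.com:443 -tls1_2 \
  -cipher 'ECDHE-RSA-CHACHA20-POLY1305:ECDHE-RSA-AES128-GCM-SHA256' \
  </dev/null 2>/dev/null | grep Cipher
# Client that puts AES first (simulating AES-NI):
openssl s_client -connect example.com:443 -tls1_2 \
  -cipher 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-CHACHA20-POLY1305' \
  </dev/null 2>/dev/null | grep Cipher
```

With PrioritizeChaCha set, the first connection should negotiate ChaCha and the second should follow the server’s own preference.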

There are more than just these flags. However, as explained, I couldn’t find documentation for them. I found the available flags by browsing the source code. From the code, the following flags are available in the config file:

  • SessionTicket
  • EmptyFragments
  • Bugs
  • Compression
  • ServerPreference
  • NoResumptionOnRenegotiation
  • DHSingle
  • ECDHSingle
  • UnsafeLegacyRenegotiation
  • EncryptThenMac
  • NoRenegotiation
  • AllowNoDHEKEX
  • PrioritizeChaCha
  • MiddleboxCompat
  • AntiReplay
  • ExtendedMasterSecret
  • CANames

I won’t explain all these flags in detail here. If you have a specific question about one of these, feel free to ask. You can also look up the source code yourself and see if that answers your question.

Since I feel like this post is already way longer than intended, I will stop here. If you have any further questions about some of the topics covered here, feel free to use the comment section below. Or email me directly – my email is here somewhere.