Free multi-domain SSL certificates from WoSign and HAProxy OCSP stapling

Since everyone can now get free two-year multi-domain certificates from WoSign, I grabbed one for one of my web sites. However, WoSign’s OCSP server is located in China, which may increase latency while the web browser verifies the certificate’s revocation status, depending on your and your server’s location. In my case, pinging it from Europe:

PING ocsp6.wosign.com (111.206.66.61) 56(84) bytes of data.
64 bytes from 111.206.66.61: icmp_seq=1 ttl=53 time=428 ms
64 bytes from 111.206.66.61: icmp_seq=2 ttl=53 time=347 ms
64 bytes from 111.206.66.61: icmp_seq=3 ttl=53 time=312 ms
64 bytes from 111.206.66.61: icmp_seq=4 ttl=53 time=328 ms
64 bytes from 111.206.66.61: icmp_seq=5 ttl=53 time=313 ms

OCSP stapling comes in handy to reduce the latency of the revocation status check, again depending on your clients’ and your server’s locations.

Here’s the all-in-one shell script in /etc/cron.daily I’m using…

  1. to create the domain’s OCSP file for HAProxy
  2. to inject the latest OCSP data into a running HAProxy instance using its stats socket
#!/bin/sh
# Fetch a fresh OCSP response for the certificate and push it into the running HAProxy.
ROOT_CERT_FILE=/etc/ssl/private/wosign-root-bundle.crt
SERVER_CERT_FILE=/etc/haproxy/certs.d/domain.crt
HAPROXY_SOCKET=/var/run/haproxy.socket

# Extract the OCSP responder URL and host from the certificate
OCSP_URL=$(/usr/bin/openssl x509 -noout -ocsp_uri -in "$SERVER_CERT_FILE")
OCSP_HOST=$(echo "$OCSP_URL" | cut -d"/" -f3)
OCSP_FILE=${SERVER_CERT_FILE}.ocsp

# Query the responder and store the DER-encoded response next to the certificate
/usr/bin/openssl ocsp -noverify -issuer "$ROOT_CERT_FILE" -cert "$SERVER_CERT_FILE" -url "$OCSP_URL" -respout "$OCSP_FILE" -header Host "$OCSP_HOST"

# Inject the base64-encoded response into the running HAProxy instance via its stats socket
echo "set ssl ocsp-response $(/usr/bin/base64 -w 0 "$OCSP_FILE")" | socat stdio "$HAPROXY_SOCKET"

To check if OCSP stapling works:

openssl s_client -connect mydomain.xyz:443 -tls1 -tlsextdebug -status

or for SNI-only configurations:

openssl s_client -connect mydomain.xyz:443 -servername mydomain.xyz -tls1 -tlsextdebug -status

If it works, there should be an OCSP section in the response like this:

OCSP response:
======================================
OCSP Response Data:
    OCSP Response Status: successful (0x0)
    Response Type: Basic OCSP Response
    Version: 1 (0x0)
    Responder Id: C = CN, O = WoSign CA Limited, CN = WoSign Free SSL OCSP Responder(G2)
    Produced At: Mar  8 14:01:14 2015 GMT
    .
    .
    .

A few notes:

  1. HAProxy’s stats socket needs to be enabled
  2. wosign-root-bundle.crt was taken from the Apache bundle in the certificate .zip file I received from WoSign
  3. /etc/haproxy/certs.d/domain.crt contains the private key and the certificate bundle from the “for Other Server” directory, however you could remove the last certificate since it’s the root CA cert.
  4. Requires HAProxy >= 1.5
  5. If socat is missing: apt-get install socat in Debian/Ubuntu
  6. Always aim for an A or A+ grade on the SSL Server Test

Ubuntu release upgrade says ‘no new release found’ on IPv6-only server

I’m running an experimental KVM guest with IPv6-only connectivity. Since it still had Ubuntu Server 13.10 installed, I tried to run do-release-upgrade on it to upgrade it to the latest Ubuntu Server release, which at the time of this writing is 14.10. However, the do-release-upgrade command kept saying that no new release could be found:

root@ipv6lab:~# do-release-upgrade
Checking for a new Ubuntu release
No new release found

I checked the /etc/update-manager/release-upgrades configuration file, but it already contained the Prompt=normal line. After some digging I found out that do-release-upgrade tries to connect to http://changelogs.ubuntu.com, but there is no AAAA DNS record for this host. Essentially, this means that an IPv6-only Ubuntu server can’t be upgraded to a newer release because it can’t reach the release info site over IPv6.

root@ipv6lab:~# dig +short changelogs.ubuntu.com A
91.189.95.36
root@ipv6lab:~# dig +short changelogs.ubuntu.com AAAA
root@ipv6lab:~#

Interestingly, the Ubuntu APT repository update site is accessible over IPv6, which is why something like apt-get update runs fine on IPv6-only Ubuntu servers.

I solved the problem by creating an IPv6-to-IPv4 HTTP proxy using HAProxy on an IPv4/IPv6 dual-stack server. The proxy listens on an IPv6 address and “tunnels” all requests to changelogs.ubuntu.com using the IPv4 address of the changelogs server. This way I was able to upgrade the IPv6-only Ubuntu server to a newer release. Continue reading
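To illustrate the idea, here is a minimal sketch of such a proxy in haproxy.cfg. The documentation address 2001:db8::1 stands in for the dual-stack server’s real IPv6 address; on the IPv6-only guest, changelogs.ubuntu.com then has to resolve to that address, e.g. via an /etc/hosts entry:

frontend f_changelogs_v6
  bind [2001:db8::1]:80
  mode http
  default_backend b_changelogs_v4

backend b_changelogs_v4
  mode http
  server changelogs 91.189.95.36:80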

Apache2 2.4+ not logging remote IP address using mod_remoteip

Since there’s no support for mod_rpaf in Apache2 2.4+, it’s recommended to use mod_remoteip instead if Apache2 is running behind a proxy like HAProxy.

mod_remoteip can be enabled using

a2enmod remoteip

It can be fine-tuned in /etc/apache2/conf-available/remoteip.conf; you have to create the file manually if it doesn’t exist.

RemoteIPHeader X-Forwarded-For
RemoteIPTrustedProxy 127.0.0.1

And don’t forget to

a2enconf remoteip
service apache2 restart

While the remote IP address is injected into PHP just fine, Apache2 continues to log 127.0.0.1 as the remote IP address in access.log.

A modification is required in apache2.conf:

#LogFormat "%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined

%h has to be replaced with %a: %h logs the remote host of the underlying TCP connection (the proxy), while %a logs the client IP address of the request, which is the value mod_remoteip rewrites. With this change, Apache2 will log the real remote IP address from now on.

If you’re still seeing 127.0.0.1 then maybe your proxy doesn’t forward the remote IP address to Apache2. For HAProxy, use the forwardfor option:

  option        forwardfor
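For context, a minimal sketch of where that option lives in haproxy.cfg; the backend name and the Apache address are placeholders, and forwardfor only has an effect in HTTP mode:

backend b_apache
  mode http
  option forwardfor
  server apache1 127.0.0.1:8080 check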

Netflix DNS-unblocking without SNI for your Xbox 360, PS3, WDTV, Samsung TV

My poor man’s DNS-unblocking configuration using just a single public IP address has one serious limitation: it will not run Netflix or Hulu Plus with non-SNI players like the PS3, Xbox 360, Samsung TVs, Sony Blu-ray players and possibly quite a few other devices. A commenter (kudos go out to Alex) suggested using Netfilter’s DNAT port forwarding mechanism to overcome this limitation. With DNAT, packets can be matched on their destination ip:port and forwarded to a remote ip:port.

So, here’s a modified version of the poor man’s DNS-unblocking approach. You will need some sort of Linux server at home to do this; I’m using a Raspberry Pi mini computer which is up 24/7 on my LAN. And of course you will need a remote Linux server with an IP address registered in the U.S.; you can get a low-end virtual private server for as low as $5/year. Unfortunately, it’s almost impossible to come up with a step-by-step tutorial because every LAN setup is different, so you need some Linux and networking skills to get this baby up and running.

And here’s how this approach works: a DNS forwarder like Dnsmasq on your local Linux server will intercept the domain names relevant for DNS unblocking. All other queries will be forwarded to the DNS resolver/forwarder of your choice (usually your router). The intercepted domain names will be resolved to IP addresses which are routed to your Linux server within your LAN. Depending on the resolved IP addresses and ports, iptables DNAT rules will forward the request to a HAProxy proxy on your remote server. Each domain name can have its own internal IP address and thus its own listening port on your remote server’s HAProxy. And since every domain name can have its own HAProxy TCP proxy on your remote server, there’s no need for SNI! Continue reading
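As a rough sketch only (the fake internal address, the remote address and the port are placeholders, not my actual setup), the DNAT part on the home Linux server could look like this:

# enable packet forwarding on the home Linux server
echo 1 > /proc/sys/net/ipv4/ip_forward
# rewrite traffic that Dnsmasq pointed at the fake internal IP to the remote HAProxy listener
iptables -t nat -A PREROUTING -d 10.10.10.1 -p tcp --dport 443 -j DNAT --to-destination 203.0.113.10:27001
# masquerade so the replies come back through this box
iptables -t nat -A POSTROUTING -d 203.0.113.10 -p tcp --dport 27001 -j MASQUERADE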

DNS unblocking using Dnsmasq and HAProxy

As I mentioned in my previous post, the open source DNS forwarder Dnsmasq is ideal for the DNS part of DNS unblocking. I’m running Dnsmasq on a $30 Raspberry Pi credit-card-sized mini computer which is up 24/7 anyway since it also handles all VoIP phone calls at home. I point my Mac, Apple TV and iPad to the RPi as the primary DNS server.
On the server side, I’ve set up a HAProxy instance using just a single IP address as a proof of concept. This poor man’s approach works beautifully with SNI-capable devices like my Mac and iOS devices. I think newer Android devices are SNI-compatible as well but I haven’t tested it. Windows 7 and up should be OK too. Older devices like the PlayStation 3 or Xbox 360 are most likely not SNI-compatible and won’t work with my highly cost-efficient single IP address approach. Unfortunately, even some of the newest multimedia players don’t support SNI.

The HAProxy server is running on a low-end virtual private server in the U.S. As a starting point, feel free to use my proof-of-concept server as shown in the Dnsmasq configuration below. In the web browser, you should be able to watch Netflix, Hulu/Hulu Plus, free episodes/TV shows on MTV, Disney XD, Syfy, NBC, ABC, Vevo, Crackle, PBS and CWTV. Netflix works on iPad and Apple TV too. Hulu Plus could work on iOS as well. Continue reading
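An interception entry in dnsmasq.conf generally looks like the line below; the domain and the address are illustrative placeholders rather than my actual proof-of-concept configuration:

# answer queries for this domain (and its subdomains) with the remote proxy's address
address=/netflix.com/203.0.113.10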

Tunlr-style DNS unblocking for Pandora, Netflix, Hulu et al

Since Tunlr closed down unexpectedly this week, I decided to publish my ideas and findings on the subject of DNS unblocking. I had used Tunlr for some time when I decided to develop my own, private DNS unblocking solution last year.

Why VPNs are no good for streaming

DNS unblocking refers to a technique used to circumvent geo-fenced Internet services without the use of a VPN. When we use a VPN to access geo-fenced websites, usually all our Internet traffic gets routed through a remote VPN server. With DNS unblocking, only selected traffic gets routed through a remote proxy server, ideally just the minimum traffic required to trick geo-fenced services like Pandora, Netflix or Hulu into “thinking” our current geolocation is within the United States (or whatever country is required to pass the geo-fence). One advantage is that DNS unblocking works on any device that allows custom DNS settings, while a VPN only works on a computer or in the router. But the big advantage over a VPN is that DNS unblocking allows the full and intended use of Content Delivery Networks (CDNs).

Continue reading

PhpMyAdmin behind HAProxy

Here’s a quick one. If HAProxy is used to SSL-offload the PhpMyAdmin web application, the following line has to be added to PhpMyAdmin’s config.inc.php:

$cfg['PmaAbsoluteUri'] = 'https://www.mydomain.net/phpmyadmin';
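For context, a minimal sketch of the offloading side in haproxy.cfg; the certificate path, the names and the backend address are assumptions:

frontend f_pma_ssl
  bind 0.0.0.0:443 ssl crt /etc/haproxy/certs.d/www.mydomain.net.pem
  mode http
  default_backend b_pma

backend b_pma
  mode http
  server pma 127.0.0.1:80 check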

How to compile HAProxy for Debian/Ubuntu

If you want to use the latest and greatest HAProxy feature, you’ll probably end up having to build it yourself. If you’re adventurous enough to run a potentially unstable development version on your server, here’s how to compile the binary.

Get the latest dev version from here: haproxy.1wt.eu/#down

Set up the build environment:

apt-get install build-essential zlib1g-dev libpcre3-dev libssl-dev

Build the binary (assuming your Linux kernel version is >= 2.6.28):

make TARGET=linux2628 CPU=native USE_PCRE=1 USE_OPENSSL=1 USE_ZLIB=1

This will include OpenSSL and zlib compression support.
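To double-check that OpenSSL and zlib actually made it into the build, print the binary’s build options from the source directory:

./haproxy -vv

The output should list the OpenSSL version and zlib among the build options.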

HAProxy and real IP addresses in Apache2 using the RPAF module

If you’re using a reverse proxy and want to see the clients’ real IP addresses instead of the proxy’s localhost address in Apache2’s log file (or in any Apache-based web application which reports the client’s IP address), you might want to have a look at the RPAF module.
The RPAF (Reverse Proxy Add Forward) module enables Apache2 to report the client’s real IP address behind a reverse proxy (like HAProxy). The module essentially replaces the proxy’s IP address with the address from the X-Forwarded-For HTTP header set by the proxy. Continue reading
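For reference, a typical rpaf.conf looks roughly like the sketch below, assuming the classic mod_rpaf 0.6 directive names and an HAProxy instance on localhost:

<IfModule mod_rpaf.c>
    RPAFenable       On
    RPAFsethostname  On
    RPAFproxy_ips    127.0.0.1
    RPAFheader       X-Forwarded-For
</IfModule>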

HAProxy and SNI-based SSL offloading with intermediate CA

In a world of diminishing IPv4 space and slow IPv6 adoption, SNI-based SSL is getting more and more important. Using the TLS extension SNI, only hardware limits the number of virtual SSL hosts we can put on a single IP address. Most modern web browsers and web servers support SNI nowadays. Since September 2012, HAProxy has supported native SSL as well, which means SSL offloading can now be implemented with a simple HAProxy configuration:

frontend f_web_ssl
  bind 0.0.0.0:443 ssl crt /etc/haproxy/default.pem crt /etc/haproxy/certs.d ciphers ECDHE-RSA-AES256-SHA:RC4-SHA:RC4:HIGH:!MD5:!aNULL:!EDH:!AESGCM

This line instructs HAProxy to look for server certificate files (since this is one-way SSL only) in /etc/haproxy/certs.d and match them against the SNI name passed by the client. If no match is found or no SNI handshake took place, the default.pem certificate is presented to the client. The cipher list is included to pass the BEAST attack test.
Continue reading
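Each file in /etc/haproxy/certs.d is a single PEM that bundles the server certificate, any intermediate CA certificates and the private key. A sketch of assembling such a file, with placeholder file names:

cat www.mydomain.net.crt intermediate-ca.crt www.mydomain.net.key > /etc/haproxy/certs.d/www.mydomain.net.pem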

Prevent SSL redirect loop using WordPress and HAProxy

This is a first post in a series on how to use HAProxy in front of WordPress. I’m using HAProxy to offload SSL connections to a WordPress site. The site itself runs on an internal IP address on port 80 while HAProxy listens on incoming connections on *:80 and *:443. Connections to *:443 will be presented the correct certificate using HAProxy’s SNI-based certificate matching algorithm. I’ll write more about that SNI-based configuration in a future post. In this post I’m going to focus on the SSL redirect loop which is happening if you use Continue reading