How to install CoreOS on an OVH Kimsufi low-end dedicated server

Wouldn’t it be cool to build a bare-metal high availability cluster using CoreOS and a handful of DDoS-protected, €5/month Kimsufi servers from OVH? Here’s how to install CoreOS on a Kimsufi server.

At the time of this writing, OVH does not provide a CoreOS installation template for the Kimsufi servers. Since there is no virtual KVM console available for the entry-level servers, I tried to use OVH’s iPXE API. This approach would have worked if it weren’t for the CoreOS installer, which tries to load binaries from the installation script after overwriting the very partition it runs from – which always results in a segfault. Also, the API is only available for the older Kimsufi 2G servers on OVH’s v6 control panel, not for the current Kimsufi servers, for which OVH doesn’t provide an API at this time. Fortunately, OVH provides a “rescue mode” which lets us boot from a USB stick that is permanently plugged in on all Kimsufi servers.

Step 1 – Boot the server in rescue mode

The first step is to log in to the Kimsufi control panel and boot the server in rescue mode. You will receive a temporary password by email once the rescue mode boot has completed.


Step 2 – Download the CoreOS installer

In your server’s rescue mode shell:

cd /tmp
# download the installer script (the path published in the CoreOS documentation)
wget https://raw.githubusercontent.com/coreos/init/master/bin/coreos-install
chmod +x coreos-install

Step 3 – Create the CoreOS cloud-config file

nano cloud-config.yaml

Sample cloud-config is provided below. You will have to provide your own password hash and/or ssh key. The hash can be created using mkpasswd:

$ mkpasswd --method=SHA-512 --rounds=4096
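If mkpasswd isn’t available on your machine, openssl can produce an equivalent SHA-512 crypt hash (this is a sketch assuming OpenSSL 1.1.1 or newer; note that openssl offers no option for a custom rounds count). The salt and password below are placeholders – pick your own:

```shell
# produce a SHA-512 crypt hash for the cloud-config "passwd" field;
# "examplesalt" and the password are placeholders, substitute your own
openssl passwd -6 -salt examplesalt 'correct horse battery staple'
```

The output starts with `$6$examplesalt$` and can be pasted into the cloud-config as-is.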

You will also have to provide a unique discovery token for your new CoreOS cluster. You can get your own token from https://discovery.etcd.io/new.

If you plan to use your server’s public IP address in CoreOS shell scripts, you have to replace it as well.
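In case you need to look it up, the rescue shell can print the machine’s primary IPv4 address. The one-liner below is a small sketch assuming the iproute2 `ip` tool is available in the rescue image:

```shell
# print the first global-scope IPv4 address reported by `ip`
ip -4 addr show scope global \
  | awk '/inet /{sub(/\/.*/, "", $2); print $2; exit}'
```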

#cloud-config

hostname: coreos-1

users:
  - name: jan
    passwd: $6$rounds=4096$EqA.upwNw9abOwLz$M1kJkcibayX3Uuax8lZkaVsVfj4D150.7VV.ijf1YmoSCvNLZrtanuYuSleO6LuXkeFDMKr3gKz.hsCIobDvH0
    groups:
      - sudo
      - docker

coreos:
  etcd:
    discovery: https://discovery.etcd.io/<your-token>
    addr: $public_ipv4:4001
    peer-addr: $public_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start

write_files:
  - path: /etc/environment
    permissions: 0644
    content: |
      COREOS_PUBLIC_IPV4=<your-public-ip>

ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDEmGH8XzonmbJiF1W70mYIHE/ja8PRkysPpDKvlV6rM+fmQxvVjyxF9N/MxWmYI7XFUvwbBjosXyBrET9cUqr/GlB9J9XeNxrFg/9oDugVZg3aPDA7ekKYzXC156wMarPXdZTAHJt4lBaVmmDcxnInsfOk5WFjn91gYw6TXIZW0LMaxEbLf0gYWbXPC0unRn1S15OV6dtzaLciO1WhPWeoG0f/23UEpVI7Z5GU4D6QOx1vr6V/NoHgx93PAZkMofB7IM3Uxo24Gqo0pQP8lSVTxlnU27OqaG78hI4gSbys621DD81EM6uasuOU8USl1EIdfQaJDUFJvs6AMLWFdazj core

Tip: Do not use tabs in YAML; indent with spaces only. You can validate your cloud-config file using any YAML validator.
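A quick way to catch stray tabs before running the installer (a small sketch using standard grep):

```shell
# YAML forbids tabs for indentation; list any offending lines with line numbers
if grep -n "$(printf '\t')" cloud-config.yaml; then
  echo "tabs found -- replace them with spaces"
else
  echo "no tabs in cloud-config.yaml"
fi
```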

Step 4 – Run the CoreOS installer

I’m using the beta release channel in the sample below. You could also specify the alpha or stable channel.

./coreos-install -d /dev/sda -c ./cloud-config.yaml -C beta
Downloading the signature for
2014-09-27 14:44:47 URL: [543/543] -> "/tmp/coreos-install.UbWzYE5v8T/coreos_production_image.bin.bz2.sig" [1]
Downloading, writing and verifying coreos_production_image.bin.bz2...

2014-09-24 14:46:17 URL: [181295789/181295789] -> "-" [1]
gpg: Signature made Fri Sep 26 08:00:47 2014 CEST using RSA key ID E5676EFC
gpg: key 93D2DCB4 marked as ultimately trusted
gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "CoreOS Buildbot (Offical Builds) <>"
Installing cloud-config...
Success! CoreOS beta current is installed on /dev/sda

Step 5 – Reboot

Back in OVH’s control panel, turn off rescue mode boot by selecting hard disk boot and restart the server. After a couple of minutes you should be able to ssh into your new CoreOS server either by providing the username/password combination from the cloud-config file or just by using the specified ssh key and the CoreOS default user “core”.

That’s it! Where to go from here? There’s a lot of documentation on how to set up etcd/fleet on your new CoreOS server. A very handy command after setting up a new CoreOS server is “toolbox”.

7 replies on “How to install CoreOS on an OVH Kimsufi low-end dedicated server”

  1. On OVH Cloud you should install on `/dev/sdb` (check it using `fdisk -l` — CoreOS must be not installed on your limited current boot disk).

  2. Would it be possible to make a template and install a Hackintosh on a Kimsufi server, or is there some technical impediment? Thanks for answering.

  3. marmotte, yes, that should be okay.
    However, I don’t see any connection between not being able to boot to rescue mode and this tutorial.


    1. I also experienced that after running ./coreos-install -d /dev/sda I was not able to enter rescue mode on Kimsufi later. In fact, re-installing the original OS did not help either, and Kimsufi informed me that their technician had to fix the machine manually. The reason I needed rescue mode again was an issue with my cloud-config, so I could not log in to my system.

      In any case, after kimsufi fixed the issue, I reinstalled an image and then successfully followed your guide.

  4. Hello Jan,

    I’ve followed your tutorial but something went wrong on my kimsufi server … I cannot restart in rescue mode anymore … The kimsufi techs will repair it :(

    So could you please tell me more about these lines in the YAML.
    – ssh-rsa AAAA….j core

    I assume that “core” is the default username of CoreOS? So I just have to change the key?

    Is the RSA key the public key I have generated on my PC?
    Is this the right way:
    ssh-keygen -t rsa -b 2048 -f ~/.ssh/id_rsa
    Then use the key in ~/.ssh/

    Thanks !

  5. Thanks for this little write up. I’m wondering if I should take a look at setting something similar up myself.

    Once you have decided on your 3 kimsufi nodes is it possible to add additional kimsufi nodes to the CoreOS cluster without starting over? If so can you add nodes that are hosted elsewhere?

    If I needed access to GPU hardware for an app presumably I’d need to set up a dedicated CoreOS cluster where all nodes had GPU hardware available and exposed to the Docker containers?

    Thanks again for the great write up!

    1. Rob, you should be able to add as many CoreOS nodes to your cluster as you like. They are discovered via their unique ID in the central registry as soon as you add them. However, I’ve been struggling to bind fleet to both localhost and a public IPv4, so the cluster never fully worked on a public network in my case. Since traffic between the nodes is not secured/encrypted (AFAIK), I’m not sure this is an intended scenario for CoreOS at this time (probably not).
      GPU: CoreOS itself doesn’t have support for GPUs, so you may have to compile some things on your own, which may in turn break the CoreOS auto-update feature. But hey, I was just playing around with CoreOS; I’m certainly no expert in this area.


Comments are closed.