The Hyperpessimist

The grandest failure.

Automatic Docker Deployment With Codeship

We have a private GitHub repo that gets tested by Codeship on every commit. It would be nice if the code were also automatically deployed to our staging system whenever the tests succeed. Turns out it is rather easy.

Codeship allows you to specify code to be run if the tests succeed. So I went with this code:

set -ue
# Gather the revision that was just tested and POST it to our webhook endpoint.
REV=$(git rev-parse --verify HEAD)
# Note the quoted keys; the payload has to be valid JSON for the endpoint to parse it,
# e.g. {"rev": "abc123...", "key": "secret"}
DATA=$(printf '{"rev": "%s", "key": "secret"}' "$REV")
curl --connect-timeout 60 -H "Content-Type: application/json" \
  --data "$DATA" \
  http://ourhost.example.com:8080/

It gathers the git revision that was successfully tested, constructs a JSON payload (in a crude way) containing it and a secret key, and calls a “webhook” endpoint on our staging server. The secret key prevents random people from POSTing their own revisions to our server. Which wouldn’t be terrible, but would still be annoying.

As I blogged before, we run CoreOS, so the base system hardly features any runtimes, which rules out scripting a solution in a language like Python.

So I did what every sane person would do and wrote the endpoint in Go… No, just kidding, I used OCaml. First I tried Ocsigen, which worked, but compiling it into a self-contained executable was a pain in the ass, so I ended up rewriting it with CoHTTP. I won’t post the code here because it is highly specific to our setup. What it does is:

  • Takes the POSTed data
  • Parses it as JSON
  • Checks the key
  • Extracts the revision
  • Clones our repository into a new temporary folder and checks out the revision
  • Builds the image with docker
  • Tears down the previous staging image
  • Puts up the new image
  • Posts to Slack that it is done :-)
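The steps after validating the request can be sketched as a shell script. This is only an illustration of the sequence, not our actual OCaml code; the repository URL, image name, container name, and Slack webhook URL are all placeholders:

```shell
#!/bin/bash
# Sketch of the deployment steps. All names and URLs below are placeholders.
set -ue

deploy_rev() {
  rev="$1"
  workdir=$(mktemp -d)

  # Clone the repository into a fresh temporary folder
  # and check out the tested revision
  git clone git@github.com:example/repo.git "$workdir"
  git -C "$workdir" checkout "$rev"

  # Build the new image with docker
  docker build -t example-staging "$workdir"

  # Tear down the previous staging container, put up the new one
  docker rm -f staging || true
  docker run -d --name staging example-staging

  # Post to Slack that it is done
  curl -X POST --data "{\"text\": \"Deployed $rev to staging\"}" \
    https://hooks.slack.com/services/...

  rm -rf "$workdir"
}

# Usage (called by the endpoint with the revision from the payload):
# deploy_rev "$REV"
```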

I compiled the binary on my own amd64 laptop, then pushed it to the server and made a systemd unit file to run it automatically:

[Unit]
Description=Continuous Deployment Webhook Endpoint
After=docker.service
Requires=docker.service

[Service]
User=user
WorkingDirectory=/home/user
ExecStart=/home/user/deploy.native
ProtectHome=read-only
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Put this in /etc/systemd/system/deploy.service, run systemctl enable deploy followed by systemctl start deploy, and you’re done.

I was thoroughly impressed with how easy it was to set up with systemd: stdout and stderr automatically get redirected to the journal, so I don’t have to figure out how to log to syslog. I also don’t have to do the stupid double-fork daemon dance; it is just a regular program. I could even tell systemd to restart my program automatically if it crashes, though I currently haven’t, since my OCaml code seems to be rather stable. Some additional niceties: I can give my program a private /tmp so it doesn’t pollute the system /tmp, and make my home directory read-only, so that if someone took over my program they couldn’t do much damage. I could make my home directory inaccessible, but then I’d have to put the binary somewhere other than the home directory.
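For reference, enabling the automatic restart would only take a couple of extra lines in the [Service] section (a sketch; we haven’t actually enabled this):

```ini
[Service]
# Restart the program if it exits with a non-zero status or crashes,
# waiting 5 seconds between attempts
Restart=on-failure
RestartSec=5
```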

So there, using OCaml in production on a bleeding-edge CoreOS/Docker setup to get continuous deployment :-)