The Hyperpessimist

The grandest failure.

Implementing C3 Linearization

Maybe you have wondered how, with multiple inheritance, the computer figures out which methods to call. There are a number of different Method Resolution Order (MRO) algorithms, each with different advantages and disadvantages. These algorithms take a class hierarchy and figure out a linear order in which to try the classes.

There are a number of important properties these algorithms should have:

  • Inheritance: If \(C\) inherits from \(A\) and \(B\), it should come in the linearization before both \(A\) and \(B\). That sounds trivial, but there are algorithms that fail this property.
  • Multiplicity: If \(C\) inherits from \(A\) and \(B\) in this order, \(A\) should come in the linearization before \(B\).
  • Monotonicity: If you linearize a class \(C\) and the linearization places \(A\) before \(B\) in the linearized order, every linearization of the parents of \(C\) should preserve the relative order of \(A\) before \(B\). The idea behind this is that the linearization of the parents can never change when adding new child classes to the inheritance hierarchy. This is essential, because otherwise if you extended your code with new classes, the old code might change behaviour because it now calls different methods in a different parent class.

There are a number of different algorithms, such as:

  • Leftmost Preorder Depth First Search (LPDFS), which searches, well, depth first. If there is more than one parent, it takes the leftmost one first before moving on to the next one, etc. Unfortunately, this one breaks inheritance, since examples can be constructed where grandparent classes are tried before parent classes.
  • LPDFS with Duplicate Cancellation: it fixes the issue by first allowing duplicates to be found and then removing all but the last occurrence. Unfortunately, this one fails monotonicity.
  • Reverse Postorder Rightmost DFS: traverses the inheritance graph depth first, this time taking the rightmost parent first, and reverses the resulting order. This is equivalent to topological sorting, but still fails multiplicity.
  • Refined Postorder Rightmost DFS: run the regular RPRDFS and check if there are conflicts between the edges, e.g. multiplicity violations. If that is the case, add an explicit edge and rerun. Boy, this algorithm is ugly, and monotonicity is still not guaranteed.

So in 1996 a new algorithm was published that does it differently: C3, named after the three consistency properties it preserves. It is an algorithm that will generate a monotonic linearization! C3 has been taking the world by storm: originally published for Dylan, it was adopted for Python 2.3 (Python 2.2 used LPDFS with duplicate cancellation, earlier versions just LPDFS), and later also by Perl 6 and Perl 5.

It can be described by these formulæ, where \(L\) is the linearization function and \(C(B_1, B_2)\) means \(C\) inherits from \(B_1\) and \(B_2\) in that order. Bear with me, I’ll explain what these mean shortly.

\[ L[C(B_1…B_n)] = C \cdot \bigsqcup (L[B_1], …, L[B_n], B_1 … B_n) \]

\[ L[Object] = Object \]

This is the linearization itself. It is very simple: it just takes the first element, \(C\), out and delegates the rest to the “merging” function, \(\bigsqcup\):

\[ \underset{i}{\bigsqcup} (L_i) = \begin{cases} c \cdot \left( \bigsqcup_i (L_i \setminus c) \right) & \text{if } \exists \text{ minimal } k\ \forall j:\ c = head(L_k) \wedge c \notin tail(L_j) \\ \text{fail} & \text{otherwise} \end{cases} \]

What it does is check whether there is a \(c\) that is a \(head\) of one of the lists but does not occur in any tail of the lists. If there is one (there might be multiple, in which case it takes the first), it determines this to be the next “step” and adds it to the linearization list. It then removes the \(c\) that was found from the lists and recurses to find the next step. If it doesn’t find one, it fails. This is important to know, because a linearization preserving all the properties mentioned above is not always possible. In this case, C3 opts for correctness and admits that there is no possible result.
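
To make the merge concrete, here is a small made-up example (not from the original paper): take a class \(C(A, B)\) where both \(A\) and \(B\) inherit only from \(Object\). Then \(L[A] = A \cdot Object\), \(L[B] = B \cdot Object\) and

\[ L[C] = C \cdot \bigsqcup (A \cdot Object,\ B \cdot Object,\ A \cdot B) \]

\(A\) is a head and occurs in no tail, so it is taken first. \(Object\) is then the head of the first list but still sits in the tail of \(B \cdot Object\), so it is skipped in favour of \(B\). After \(B\) only \(Object\) is left, giving \(L[C] = C \cdot A \cdot B \cdot Object\).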

How about we implement it? Sure, to understand it better, we can try. We could use a language that supports multiple inheritance, the concept that brought us this linearization problem to start with, like OCaml. Let’s use OCaml, but not use inheritance (or objects) at all, since there isn’t even any need for it.

First we need a type:

type 'a hierarchy = Class of ('a * 'a hierarchy list)

Then let’s implement the \(L\) function:

let rec c3_exn = function
  | Class (_, []) as res -> [res]
  | Class (_, parents) as res -> res :: (merge @@ (List.map c3_exn parents) @ [parents])

So far so good: we named \(L\) c3_exn (because it implements C3 and might throw exceptions) and \(\bigsqcup\) merge. Note that since c3_exn references merge, in the actual source file merge and its helpers have to be defined before c3_exn, as OCaml resolves names top to bottom. Let’s continue with merge. This is the formula with the bracket, so let’s do the bracket here as well:

exception No_linearization

let rec merge (l : 'a hierarchy list list) =
  match head_not_in_tails l with
  | Some c -> (match remove c l with
    | [] -> [c]
    | n -> c :: merge n)
  | None -> raise No_linearization

head_not_in_tails gives us the first element (remember min in the formula) of the heads of the lists that does not occur in any tail of any of the lists. It might return None, in which case we have to admit that there is no linearization. But if there is one, we remove this element from all the lists (\(L_i \setminus c\)) and continue on. At this point I deviate from the formula (I believe the formula is wrong at that point) and check whether there is even something left, to terminate the recursion. If there is, we recurse, and if there isn’t, we have just found the end of the linearization list.

What is still missing is remove, which removes the element from all the lists:

let remove to_remove l =
  let rem to_remove = List.filter (fun e -> e != to_remove) in
  rem [] @@ List.map (rem to_remove) l

It also removes empty lists, which is what allows the [] case in merge to detect that nothing is left and terminate the recursion. Now we need to determine how to find the first \(c\) that is a \(head(L_k)\) but is not in any \(tail(L_j)\):

let head = function
  | [] -> []
  | x -> [List.hd x]

let tail = function
  | [] -> []
  | x -> [List.tl x]

let concat_map f l = List.concat @@ List.map f l

let head_not_in_tails (l : 'a hierarchy list list) =
  let heads = concat_map head l in
  let tails = concat_map tail l in
  let find_a_head_that_is_not_in_tails acc v = match acc with
    | Some x -> Some x
    | None -> if List.exists (List.mem v) tails then None else Some v
  in
  List.fold_left find_a_head_that_is_not_in_tails None heads

First we implement some helpers, because it is customary in OCaml to reinvent standard libraries all the time. head returns the head of a list wrapped in a list, and tail the tail, likewise wrapped. If a list is empty, [] is returned. This is useful, because if we use concat_map on those, we get all heads and all tails even if some of the lists are empty. So heads and tails will always be some list, though maybe an empty one. But that is not a problem. Then we fold_left over the heads with a function that looks for the first head that is not in any tail. If it doesn’t find one, it just returns None.
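
To see why this works even with empty lists, here is a quick, made-up toplevel session (using plain int lists, but hierarchy lists behave the same):

# head [1; 2; 3];;
- : int list = [1]
# head ([] : int list);;
- : int list = []
# concat_map head [[1; 2]; []; [3]];;
- : int list = [1; 3]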

Lo and behold, our implementation is complete. If we test it with the example from Wikipedia, we get exactly the right order:

let () =
  let rec show_hierarchy = function
    | Class (n, _) -> n
  and show_hierarchy_list lat =
    "[" ^ String.concat ", " (List.map show_hierarchy lat) ^ "]"
  and o = Class ("O", [])
  and a = Class ("A", [o])
  and b = Class ("B", [o])
  and c = Class ("C", [o])
  and d = Class ("D", [o])
  and e = Class ("E", [o])
  and k1 = Class ("K1", [a; b; c])
  and k2 = Class ("K2", [d; b; e])
  and k3 = Class ("K3", [d; a])
  and z = Class ("Z", [k1; k2; k3])
  in
  print_endline @@ show_hierarchy_list @@ c3_exn z

We get [Z, K1, K2, K3, D, A, B, C, E, O], which is correct.

If you are more interested in this topic, I can recommend the slides of the programming languages lecture that inspired the whole implementation. They also include another set of example classes which demonstrates a linearization failing. Naturally, our code does the right thing.
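
For illustration, here is a made-up conflicting hierarchy (not the one from the slides) on which our c3_exn raises No_linearization, because \(A\) and \(B\) inherit \(X\) and \(Y\) in opposite orders:

let o = Class ("O", [])
let x = Class ("X", [o])
let y = Class ("Y", [o])
let a = Class ("A", [x; y])
let b = Class ("B", [y; x])
let conflict = Class ("C", [a; b])

(* c3_exn conflict raises No_linearization: once A and B are taken,
   no remaining head avoids every tail, since A demands X before Y
   while B demands Y before X. *)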

If you feel like playing with the code, you can check out c3.ml. You can also check one or another implementation in Python.

Cycling, 10.000 Km Later

After starting cycling in January 2012 when I got my road bike, I finally managed to hit 10.000 km cycled this year. It took me pretty much 3 years, in which I learned quite a bit. Let’s see how these 10.000 km changed me.

Year 1

Since everybody in Japan was cycling everywhere, I eventually decided to get a bike too. When spending money, I decided I could just as well get a decent road bike that would be worth taking back home. So I got one. I had zero experience with bikes, so I didn’t have much of a clue what to get, what to look for, etc. I went to a small bike shop near university, led by an old guy speaking some English who offered discounts for students. Actually, that guy was kinda cool. Yet the bike was probably not the best fitting one, as I am a rather small person, and I made a number of adjustments for a better fit over the years.

My first rides with the bike were ridiculously short, but the posture on the bike is completely different and, frankly, the hills around my house in Japan were quite steep, so I got tired easily.

My equipment at that time consisted of the bike, platform pedals, normal shoes (used only for cycling and eventually worn down within the year), an air pump, an LED flashlight mounted as a front light, a saddle bag and a back light mounted on it. I lost the latter pretty quickly, it just fell off somewhere. I ended up replacing it with my much beloved Fibre flare (I immediately fell in love when I saw someone with one, it is such a great idea).

Clothing-wise I had some cycling-like shirts which were not real cycling shirts (no back pockets) and cheap cycling shorts bought from Amazon, plus long gloves for winter and a collection of short gloves which had to be replaced often.

Year 2

I was back in Germany and while the roads were much better, the winter was much longer. January is a bad month for cycling in Germany. When the weather finally became more amenable to cycling, I managed to have an accident and break my finger, so I was out for two more months. Bummer.

Equipment-wise I upgraded first to platform/SPD hybrid pedals, then to double-sided SPD pedals. Together with the proper shoes, this has been a very welcome change. I also changed my tyres from the crappy stock ones to proper Continental GP 4000S, which I’d recommend without hesitation.

This year’s kilometres were mostly spent on day trips with a friend of mine to most nearby towns, roughly 100 km each. So you start in the morning, arrive for lunch, eat and return. Pretty good, though I’ve been rather disappointed with the roads on the way. The remaining kilometres were done mostly in two months where I tried the Strava 1000 km challenges and succeeded. That felt like a tremendous achievement, but it also cost a lot of time and energy. The odds were in my favour that year, because it was warm and didn’t rain for about two months.

During this year, I also upgraded from the stock wheels to the Campagnolo Zonda wheelset, which is probably 3 or 4 times more expensive than what came with the bike. The wheels are about 500 g lighter and they look really, really pretty. In the end it was a compromise: I wanted wheels that are durable, light and as inexpensive as possible. The more expensive Campagnolo wheels are less durable due to aluminium spokes, so I shelled out the 300€ for the Zondas. Haven’t regretted it yet.

At the end of the year, I managed to outdo my previous distance record by a tiny amount. Considering I started 5 months late, that is a pretty decent result.

Year 3

This year I got a head start, since winter was kinda short, so I could start cycling in earnest pretty early. Also, I got a home trainer, so I wasn’t as much out of the loop as the year before.

The longer trips were replaced by one long trip with a friend, from Munich to Vienna via Passau, Linz and Melk. We had bad luck because the first day, which was also the longest, was incredibly rainy, so when we arrived in Passau after some 10 hours in the rain, we were really exhausted. The remaining days were better, but still had their share of rain. Still, cycling along the Danube is quite nice (if somewhat boring) and definitely an achievement that we can remember for a long time.

I also moved to the south of Munich, which allowed me to explore the south more. Pretty nice places there, arguably better than the north. Actually, not really arguable, I can get to Lake Starnberg in about an hour.

Equipment-wise I hardly changed anything. Don’t fix what isn’t broken, right? I upgraded my rear derailleur from Tiagra to 105, hoping for better shifting performance (and because I suspected the old one was bent), and replaced the cabling. The changes are minimal, but on the upside, it wasn’t expensive either.

It is this year that I managed to finish my first 10.000 km, so big hooray here. Actually, I’m at 10.500 now, but that looks like the end of the line for this year, as it is too cold for me to enjoy it.

Conclusions

What did I learn? I learned that proper clothing makes quite some difference. I learned that once a bike is a proper road bike (not one of the cheap road bike lookalikes that get sold for 300€ new), updating equipment is mostly cosmetic and improves morale more than performance. Another lesson is that cycling alone, while definitely a good thing, is not really enough, since my legs are trained pretty all right but my arms, well.

There is also the lesson that I need pure water when cycling; all these sweet isotonic drinks and whatnot do not work for me at all. To prevent muscle lock-up I sometimes bring a smaller bottle of water with dissolved magnesium with me.

The only thing that worries me is that it is slowly getting boring. Fortunately I have a winter to think about it and maybe find a way to make it more interesting; I believe I need some more of a challenge.

Automatic Docker Deployment With Codeship

We have a private GitHub repo which gets tested by Codeship on every commit. But it would be nice if the code got automatically deployed to our staging system when the tests succeed. Turns out it is rather easy.

Codeship allows you to specify code to be run if the tests succeed. So I went with this code:

set -ue
REV=$(git rev-parse --verify HEAD)
DATA=$(printf '{"rev": "%s", "key": "secret"}' "$REV")
curl --connect-timeout 60 -H "Content-Type: application/json" \
  --data "$DATA" \
  http://ourhost.example.com:8080/

It gathers the git revision that was successfully tested, constructs a JSON document (in a crude way) with it and a secret key, and calls a “webhook” endpoint on our staging server. The secret key is there to prevent random people from POSTing their own revisions to our server. Which wouldn’t be terrible, but still annoying.

As I blogged before, we run CoreOS so the base system hardly features any runtimes to use languages like Python for scripting a solution.

So I did what every sane person would do and wrote the endpoint in Go… No, just kidding, I used OCaml. First I used Ocsigen, which worked, but compiling that to a self-contained executable was a pain in the ass, so I ended up rewriting it with CoHTTP. I won’t post the code here because it is highly specific to our setup (a rough sketch of such an endpoint follows the list below). What it does is:

  • Takes the POSTed data
  • Parses it as JSON
  • Checks the key
  • Extracts the revision
  • Clones our repository into a new temporary folder and checks out the revision
  • Builds the image with docker
  • Tears down the previous staging image
  • Puts up the new image
  • Posts to Slack that it is done :-)
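
Just to give an idea of the shape of such a thing, here is a rough, simplified sketch of a webhook endpoint using CoHTTP and Yojson (assuming a reasonably recent cohttp-lwt-unix). It is not our actual code; the names (handle_revision, redeploy.sh, the hard-coded secret) are made up, and a real version needs error handling and should run the deployment without blocking the Lwt event loop (e.g. via Lwt_process):

open Lwt.Infix

let secret = "secret"

(* Hypothetical deployment step: the real code clones the repo at the
   given revision, rebuilds the Docker image and swaps the staging
   container. Here it just shells out to a made-up script. *)
let handle_revision rev =
  ignore (Sys.command (Printf.sprintf "./redeploy.sh %s" (Filename.quote rev)))

let callback _conn _req body =
  Cohttp_lwt.Body.to_string body >>= fun body_str ->
  let json = Yojson.Safe.from_string body_str in
  let field name = Yojson.Safe.Util.(member name json |> to_string) in
  if field "key" = secret then begin
    handle_revision (field "rev");
    Cohttp_lwt_unix.Server.respond_string ~status:`OK ~body:"deploying\n" ()
  end else
    Cohttp_lwt_unix.Server.respond_string ~status:`Forbidden ~body:"nope\n" ()

let () =
  Cohttp_lwt_unix.Server.make ~callback ()
  |> Cohttp_lwt_unix.Server.create ~mode:(`TCP (`Port 8080))
  |> Lwt_main.run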

I compiled the binary on my own amd64 laptop, then pushed it to the server and made a systemd unit file to run it automatically:

[Unit]
Description=Continuous Deployment Webhook Endpoint
After=docker.service
Requires=docker.service

[Service]
User=user
WorkingDirectory=/home/user
ExecStart=/home/user/deploy.native
ProtectHome=read-only
PrivateTmp=true

[Install]
WantedBy=multi-user.target

Put this in /etc/systemd/system/deploy.service and run systemctl enable deploy followed by systemctl start deploy and you’re done.

I was thoroughly impressed with how easy it was to set up with systemd, because stdout and stderr automatically get redirected to the journal, so I don’t have to mess with how to log to syslog. Also, I don’t have to do this stupid double-fork daemon dance, it is just a regular program. I can even have systemd restart my program automatically if it crashes, though I currently don’t, since my OCaml code seems to be rather stable. Some additional niceties are that I can give my program a private /tmp so it doesn’t pollute the system /tmp, and set my home directory to read-only, so that if someone took over my program they couldn’t do much damage. I could make my home directory inaccessible, but then I’d have to put the binary somewhere other than the home directory.

So there, using OCaml in production on a bleeding edge CoreOS/Docker setup to get continuous deployment :-)

Introducing Slacko

I’ve been using Slack quite a bit these days and overall I quite like the idea. It is basically a more accessible IRC, so you can put even non-techies in front of it and tell them to use it. Slack provides a number of integrations with 3rd-party services. We use GitHub, Codeship, Trello and Twitter, but I wanted to do my own integration, with our home-brew deployment solution.

So I did what every hacker would do and created a new interface to Slack from scratch; since it is implemented in OCaml, I called it Slacko. In case you’re wondering, the shape in the Slacko logo is a Fira Sans Heavy Italic uppercase O in the color of the Slack font (wow, that sounds like “Chai Caramel Latte Semi-Soy”, sorry for this).

tl;dr: Get Slacko 0.9.0, test it and report bugs! I plan to collect input and publish a version 1.0.0 soon!

So why did I write it?

The Community-built Slack integrations page features a number of integrations, but zero for OCaml, and the Haskell integrations are very rudimentary (though hilarious). It has to run on CoreOS without installing additional stuff, so basically everything which comes with an interpreter is out. Even then, some other integrations, like the Clojure ones, are in a kinda sorry state. I’m also not too impressed with the Go integrations (or Go itself, but that’s another matter), and I won’t touch C or C++ for this because I’d rather waste my time doing something sensible. So I hacked it up using OCaml. Because I can.

Let’s talk about the API: Slacko currently mirrors the Slack REST API 1:1, so everything is stringly typed. For version 1.0 I plan to leave it like that; future versions will have more explicit types built on top of this foundation, but I believe it is already useful as is and can be used productively. Every call returns a JSON data type that can be traversed to get the information. Future versions will most likely parse this information, but I mostly just use it to post to Slack and don’t need to actually read the information returned from Slack, except for error codes, which are already mapped onto error types.

What I really liked in OCaml was the way I can chain functions with the new @@ and |> operators. I liked $ from Haskell, so initially I thought I’d be using @@ a lot, but it turns out |> was more useful for me. It allowed me to write code that is kinda similar to shell pipes or to code in concatenative languages like Factor. Here’s an example:

let im_history token ?latest ?oldest ?count channel =
  let uri = endpoint "im.history"
    |> definitely_add "token" token
    |> definitely_add "channel" channel
    |> optionally_add "latest" latest
    |> optionally_add "oldest" oldest
    |> optionally_add "count" count
  in query uri

So this is a function with 5 arguments, 3 of them optional. I’ve defined helper functions definitely_add and optionally_add which take a URI, add data if there is data to add (i.e. someone supplied the count argument) and return a URI. All these calls are neatly chained using |>, much like the shell pipes mentioned above. Check it out yourself; it currently is rather thin, partly because I was too lazy, partly because it really is quite straightforward.
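
The two helpers are not shown in this post; as a rough sketch (assuming the Uri library’s add_query_param', the real Slacko code may look different), they could be written like this:

(* Unconditionally add a query parameter to the URI. *)
let definitely_add key value uri =
  Uri.add_query_param' uri (key, value)

(* Add the parameter only if the optional argument was supplied. *)
let optionally_add key value uri =
  match value with
  | Some v -> definitely_add key v uri
  | None -> uri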

I filed a pull request to the OPAM repository which got merged quickly, so you can just install Slacko with a simple

$ opam install slacko

Hope you’ll enjoy it; I’m looking forward to your input.

OCaml Library Development in the REPL

Maybe, like me, you develop a lot by exploration, in the REPL/Toplevel/UTop. Lately I’ve been writing a library, but was very annoyed by the fact that testing the functions I wrote in the REPL worked so badly. With the help of #ocaml I found a workaround that is kinda passable. Passing it on, in case you or future me needs this knowledge later.

I’ll explain the basic setup: my code is managed by OASIS and ocamlbuild puts the compiled code into _build/src, so all cmo, cma files etc. are there. I have dependencies on the Uri, Yojson and Cohttp.lwt libraries.

First you have to actually build your code, so it is available in _build/src; I do this via ocaml setup.ml -build. Depends on your build system, of course.

  1. Start utop
  2. #require "uri";; to load the Uri library; the other steps are similar. If you miss a require, OCaml will let you know by saying “Error: Reference to undefined global Foobar”
  3. #require "yojson";;
  4. #require "cohttp.lwt";;
  5. #directory "_build/src";; This step is important! I left it out and could load my module, but not use any function from it.
  6. #load "mymodule.cma";; This loads your code. Now you can use it in the REPL. Or it throws some error, in which case you might need to #require some more packages and retry. (A way to avoid retyping all of this is sketched below.)
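
If you do this often, you can put the same directives into a little init file and load it on startup. A minimal sketch (assuming utop and the mymodule name from above; adjust to your own packages):

(* dev_init.ml — load with: utop -init dev_init.ml *)
#require "uri";;
#require "yojson";;
#require "cohttp.lwt";;
#directory "_build/src";;
#load "mymodule.cma";;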

Yes, not a very pretty way, but it works for development. My biggest hope for future OCaml development is not more features but first and foremost an easier way to deal with the compiler; for now (4.01) we’re stuck with this.

My Thoughts on OCaml From the Very Beginning

After reading the pretty great reviews of “OCaml from the Very Beginning” I was interested, so I asked my university library to get it (my need for paper books is rather limited, and someone else might profit from the copy).

A few weeks later, I had the copy in my hands, went to a park and skimmed through it, and truth be told, I was kinda disappointed. The premise is that it is from the very beginning, which quite literally means “very beginning”. It is very basic and never goes much into depth: the first chapter is about literals, there is one about lists, some basic pattern matching, etc., but apart from a short explanation with some examples, it didn’t teach me anything new.

What surprised me was that the book is rather thin, and of that, roughly the latter half is dedicated to answers to the questions posed in the chapters.

I believe I understand who the target audience is: computer science students at the beginning of their studies. At TUM we have a lecture colloquially known as “Info2”, which teaches students basic concepts of functional programming, typically using a statically typed functional language. When I took this course, this book would’ve been quite useful, but these days I’d recommend Real World OCaml a lot more.

Dockerizing Meteor

In case you were wondering what the deal with CoreOS was: I wasn’t planning on leaving the stuff running as is, the idea is to actually run some containers. After wasting some 3 or 4 hours testing, here’s how to get your Meteor app running in Docker.

The setup

First off, you need a database. Meteor can use any database as long as it is MongoDB. Not my first choice, but after all, this is someone else’s baby that I’m setting up.

Setting up MongoDB is pretty easy, due to the fact that a pre-made official image is available which works pretty alright. Just do a docker run -d --name db mongo and wait a few minutes, until the image is pulled and set up.

Now, we want to run Meteor. I don’t have much experience with Meteor, but the idea is that you compile everything to plain JS, so you can just run it via Node.js. So, I took some inspiration from the rocker-docker Dockerfile. I put this stuff as a Dockerfile into my Git repo:

FROM node
MAINTAINER Marek Kubica <marek@xivilization.net>

RUN npm cache clean -f && npm install -g n && n 0.10.30
RUN curl https://install.meteor.com/ | sh
RUN npm install --silent -g forever meteorite

ADD . ./meteorsrc
WORKDIR /meteorsrc

RUN mrt install && meteor bundle --directory /var/www/app
WORKDIR /var/www/app

RUN npm install phantomjs

ENV PORT 8080
ENV ROOT_URL http://127.0.0.1
ENV MONGO_URL mongodb://db:27017/test

EXPOSE 8080

RUN touch .foreverignore
CMD forever -w ./main.js

This starts off with the official Nodejs image, installs a newer Nodejs (since Meteor needs at least 0.10.28 or so, which is newer than the node that is packaged), and installs Meteor and some helper packages. Then I copy all the Git repo contents to /meteorsrc in the image. Not the cleanest location, but I just wanted something quick that works. Then I run mrt install, which presumably installs some Meteor stuff. The important part is meteor bundle, which creates a compiled copy of all Meteor resources in /var/www/app that can be served by Nodejs.

Then it is mostly environment variables and stuff. What is important is the MONGO_URL; as you’ll notice, it points not to localhost but to db. Docker isolates containers from each other, so the communication between containers is kind of like between different hosts.

To create the image, run docker build -t yourusername/reponame:tag . which will start the build. This might take a while. Afterwards I pushed it to the Docker registry with docker push yourusername/reponame:tag, so I can get it to the server in some more or less convenient way.

On the server, I did docker run -d -p 80:8080 --name web --link db:db yourusername/reponame:tag. Let me explain that a bit. What I do here is run the container which I just built and name it web. The interesting part is 80:8080, which means that whatever the server gets on port 80 will be forwarded to the Meteor app running in the container on port 8080. Next, I am telling Docker to expose the db container (running MongoDB) to the new container using the db alias. So the web container can access db via the hostname db.

Observations

The idea with the containers is actually pretty neat; I like that setting up a MongoDB container requires zero setup and I can set up as many of them as I like without having to care about isolation. What I liked less is that the Nodejs container is outdated enough that I need to install a newer Node anyway, so what is even the point? I hope this will get fixed in the future. What I also didn’t like very much is that creating the image takes so long. Every step in the Dockerfile is another layer in the image, and creating those takes way too long.

Next steps

I hope the node image will finally be updated to 0.10.29 or 0.10.30, which would save me from a lengthy Node setup. I will also try to reduce the build times by decreasing the number of commands in the Dockerfile (PORT is one obvious candidate to be deleted). What I am also less than happy about is the fact that I am building the code on my laptop, then uploading the image to the Docker registry (~250 MB, which takes a couple of minutes) and then downloading it again on my server. Not sure what the best way to go about this will be; maybe setting up a registry on the server, or maybe even building the image directly on the server.

Also, the Docker user guide is pretty damn good. Need to read more.

Installing CoreOS

So, I wanted to set up a server. In the olden times, I used a Virtuozzo VPS, which I ended up being so annoyed with that I switched to a dedicated server running first Xen, then KVM and multiple VMs. Now, this setup was always quite the pain, so I decided to try again with a VPS. Upside: backups aren’t my problem anymore. Also, I don’t want to build elaborate configurations that I have to administer; this time I want to keep it simple, so that a reinstall takes maybe 2 hours instead of weeks.

This time I went for netcup, because they provide virtual servers running on KVM. Why KVM? Because I want to use Docker and it will probably run a lot better if I use a recent kernel rather than some Virtuozzo/OpenVZ shared kernel. When I first got that server, it was running all kinds of stuff (MySQL, Postfix, ProFTPd, srsly?), so this had to go.

Now remember my premise: I wanted to keep the administrative overhead down (aka zero), so I was thinking about an OS that was specifically designed to get out of the way. A few weeks ago I found CoreOS, so I gave it a go.

I logged in as root, ran the coreos-install script as described in the CoreOS docs and, well, it didn’t work, because the system was running. Bummer, no on-the-fly replacement of the current system. So I got myself the ISO and uploaded it to netcup’s FTP server to use as my own install media. Reboot, and it boots into CoreOS.

You can sudo su into it (netcup provides a VNC console for it) and start coreos-install -d /dev/vda -C stable. It runs and installs, but at the end complains about some GPT bullshit. You need to run parted /dev/vda and call print all, at which point it will ask you whether you want to fix the partition (course you do).

Next step is rebooting, which is a challenge in and of itself, because netcup does not support detaching custom ISOs, so you always boot into the CoreOS installer. After fiddling for like 20 minutes, I figured out you can attach an “official” ISO instead and detach that ISO, and then you get to boot into your OS. I should probably report a bug to netcup.

Then your glorious new CoreOS boots up and you have no way to log in, since you don’t know the root password or anything. There are no users set up. Oops. Back to the installer.

This time, you actually read to the end and find out that you need to add a cloud-config file, which will get executed on first boot. I wrote mine kinda like this:

#cloud-config

ssh_authorized_keys:
  - ssh-rsa AAAAB...

users:
  - name: marek
    groups:
      - sudo
      - docker
    coreos-ssh-import-github: Leonidas-from-XIV

Pretty self-explanatory; the ability to import SSH keys from GitHub is kinda super neat. So, next up: coreos-install -d /dev/vda -C stable -c cloud-config-file and off you go. Remember to fix up the GPT again with parted and do the stupid ISO detach dance that netcup forces you into, but there you go.

Afterwards, CoreOS boots fine, resizes itself to fill the whole partition and allows you to log in. Will probably post more adventures in CoreOS and Dockerland soon, so stay tuned (or don’t, I don’t own you).

Go 1.3 for Raspberry Pi

Update: I updated the builds; binaries for Go 1.3.3 are available.

If you’ve been following this blog, you know I happen to backport Go to Raspbian Wheezy. So, starting today I offer you Go 1.3 on the Raspberry Pi. Have fun.

The instructions are as usual and if you happen to already use my backports, a simple update is enough:

wget https://xivilization.net/~marek/raspbian/xivilization-raspbian.gpg.key -O - | sudo apt-key add -
sudo wget https://xivilization.net/~marek/raspbian/xivilization-raspbian.list -O /etc/apt/sources.list.d/xivilization-raspbian.list

sudo aptitude update
sudo aptitude install golang

Go 1.2 for Raspberry Pi

After providing Go for Raspbian for quite some time, I recently decided to update it to Go 1.2.1.

If you’re new, here’s how to get Go on your Raspberry Pi:

wget https://xivilization.net/~marek/raspbian/xivilization-raspbian.gpg.key -O - | sudo apt-key add -
sudo wget https://xivilization.net/~marek/raspbian/xivilization-raspbian.list -O /etc/apt/sources.list.d/xivilization-raspbian.list

sudo aptitude update
sudo aptitude install golang

No compiling, no strange tarballs. If you have already added the repository, it will get updated automatically. To my knowledge, this is the easiest and cleanest way to get Go on Raspbian.

Bonus feature: it includes the cross compilers for OS X (darwin), Windows, FreeBSD and NetBSD as well as the i386, amd64 and arm CPU variants. So you could build Windows executables on your Raspberry Pi. Heh.