The Hyperpessimist

The grandest failure.

Current State of Raspberry Pi Kernels

If you look at the current mainline kernel (3.4.4 at this moment) you'll realize that there is no support for the Raspberry Pi, so you cannot just get the kernel from kernel.org, compile it and dump it on the RPi. So how does it run Linux then?

The answer is that currently the Raspberry Pi runs an old, forked version of Linux, 3.1.9, which can be found on the foundation's GitHub page. It turns out that for some reason Broadcom uses this kernel internally, and therefore that's what the people from Broadcom (mostly popcornmix) support. Now this is rather unfortunate: first, because the latest 3.1 kernel is 3.1.10, so the current "official" kernel lags behind, and second, because the 3.1 series has reached its end of life, which means there won't be any more updates, not even security updates. In fact, there is an exploit for 3.1.9 which means that local users can get root privileges without a password.
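
If you are curious which kernel your own Pi is actually running, checking is trivial. Here is a tiny sketch of mine (nothing official from the foundation), using only the Python standard library:

# Report the running kernel release and warn if it is still the EOL'd 3.1 series
# (e.g. the stock 3.1.9 foundation kernel).
import platform

release = platform.release()               # something like "3.1.9+" on Raspbian
series = ".".join(release.split(".")[:2])  # e.g. "3.1"

if series == "3.1":
    print("Kernel {0}: the 3.1 series is end-of-life, no more fixes.".format(release))
else:
    print("Kernel {0}".format(release))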

This is quite unfortunate, especially since the following 3.2 series is a long-term-support series and will be used in the upcoming Debian 7.0, "Wheezy". Therefore Chris Boot ported the changes to the newer 3.2 series, which will be supported for a couple of years; that alone is a considerable improvement. This tree includes most drivers. In the near future the plan is to migrate to this kernel, which should solve the most pressing problems. But it is still bound to 3.2, not mainline, so no new features will make it into this kernel unless someone backports them.

Now, why is having RPi support in mainline important? Having a device supported in upstream Linux means that on every new kernel release you can just update your kernel and the device will keep on running. Also, distributions like Fedora only officially support the mainline kernel, so the Fedora version for the Raspberry Pi is more or less a fork of Fedora as well.

I talked to one of the maintainers of the arm-soc tree, which maintains Linux support for ARM devices. He told me that the current state of the foundation's kernel is nowhere close to being merged into the mainline kernel, as it does not support DeviceTree and in general does not use the kernel infrastructure.

Fret not, help is on the way! Due to the effort of Simon Arlott and others, there exists another kernel fork which is meant for inclusion into the mainline kernel. This effort is coordinated via the linux-rpi-kernel list as well as the #raspberrypi-dev IRC channel on freenode. The main things blocking inclusion into the mainline kernel are proper USB drivers and the sound driver (the vchiq code seems to be too difficult to port directly).

If you happen to have a Raspberry Pi and would like to help the developers, drop by in the channel and help with testing. I am sure they'll appreciate it.

Review of Some Free, Open Source Fonts

I like obsessing about things. One of the things I absolutely love to obsess over is fonts. Ever since I installed this blog, I was constantly ticked off by how badly PT Sans rendered on Linux. I ended up replacing it with Cantarell, a very beautiful sans-serif font. The criticism section on Wikipedia is based on a single, not very convincing source, and, as I write my blog posts in Latin script, it doesn't seem to apply to me. But why stop there?

Due to Google Web Fonts, and the ability to inject anything via JavaScript, I started changing fonts and comparing them on different systems. So I tried Chrome on Mac OS X and Firefox on Arch Linux.
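
To make the comparison less tedious than fiddling with JavaScript in the console every time, a throwaway script that writes a static test page does the job just as well. This is only a sketch of mine (the family list is just an example), using the classic Google Web Fonts CSS API:

# Generate font-test.html: the same sample paragraph rendered in several
# Google Web Fonts families, for eyeballing side by side in different browsers.
FAMILIES = ["PT Serif", "Droid Serif", "Gentium Basic", "Vollkorn", "Averia Serif Libre"]
SAMPLE = "The quick brown fox jumps over the lazy dog, 0123456789."

parts = ["<!doctype html>", '<meta charset="utf-8">']
for family in FAMILIES:
    css_url = "http://fonts.googleapis.com/css?family=" + family.replace(" ", "+")
    parts.append('<link rel="stylesheet" href="{0}">'.format(css_url))
    parts.append(
        "<p style=\"font-family: '{0}', serif; font-size: 16px;\">{0}: {1}</p>".format(
            family, SAMPLE
        )
    )

with open("font-test.html", "w") as f:
    f.write("\n".join(parts))

print("Open font-test.html in each browser you want to compare.")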

More or less default

The first image was a shot of the status quo, with PT Serif and Cantarell on Arch Linux with Firefox (default GNOME3 font rendering). Generally I like the look and with the default sidebar font replaced by Cantarell it looks decent. PT Sans used to look very pixelated at that size. Click the image to see the whole screen.

As an interesting first observation, fonts on the Mac seem to render generally bolder and, especially on straight lines, a bit blurry. I also got a comment that it looks narrow.

Droid Family

The Droid fonts were made by Google for good screen legibility and it shows. Honestly, this is a pretty fine font and I use the monospace variant as my programming font (and DejaVu Monospace as my console font) because it just works. Not surprising, given that Droid is, like DejaVu, derived from the excellent Bitstream Vera font family. My only complaint is that it is boring.

One of my testers liked this one the most. Not sure whether it was because he's a Google fanboy or because the font is honestly good; I wasn't able to do proper double-blind testing.

Gentium

The first time I saw Gentium, I fell in love, because I think the glyphs look wonderful. Unfortunately, it is not really a screen font, as the Linux shot shows. When I started using it in my documents, it turned out it is not a document font either, because unlike the excellent Linux Libertine it just does not work. It looks way too fidgety on paper and just awful on screen.

On the Mac it looks quite OK, although a bit hard to read at that size (16px, the Octopress default). It was one of my testers' favorites, although due to the abysmal rendering on Linux I couldn't live with it.

Vollkorn & Yanone Kaffeesatz

I usually hate the "Vollkorn" font. It completely falls apart at smaller sizes, so I originally included it just to make fun of it. Every time I see it, it looks awful: on the site where I first saw it, on the creator's site. The glyphs are fuzzy and not uniformly thick. Unless you increase the size, that is, then it actually starts looking quite decent, to the extent that I ended up liking it quite a bit. At this size it looks quite similar to PT Serif, but better. There are a number of tiny details that make it nice; I especially like the "old style" numbers. The underline is quite far from the text it underlines, maybe a bit too far, but overall still bearable.

The font on the sidebar is "Yanone Kaffeesatz", a free font that got quite famous. After testing it, I consider it useless for body text. It is a decorative font, maybe useful for page headers but not really suited for the sidebar, for example.

Averia

This one is basically the odd one out. Averia is a font that is the average of other fonts and, being that, it looks "interesting". I tried using it on the blog the first time I saw it, until I realized that it doesn't work either. One of my testers stated in a blind test that it looks like a fun font (similar to Sans Serif), which is actually quite true. While Averia Serif looks somewhat too bold and smudgy, Averia Sans is just broken on the Mac, and even more so on Linux.

Conclusion

Designing fonts is hard, and it shows when you look at how differently fonts render on different operating systems and at different sizes. At least nowadays we have a good selection of high-quality system fonts like Droid or Cantarell (and the Ubuntu font is fine as well). For the web it is still kind of hard to find a font that "works". I hear things look different on Windows, but I did not have a system available to test.

Personally, I’ll go with the combination of Vollkorn and Cantarell now.

Why Raspberry Pi Competitors Miss the Point

Every second day I see a post on some blog or news site about a Raspberry Pi competitor that is more powerful, cheaper or more readily available (snicker). Just today it was the Gooseberry, the Mele A1000 and the CX-01.

Well, most of the time these claims are true, more or less. Until you realize that the cheaper price gets eaten up by shipping, customs and so on. Until you realize that it is just some kind of hacked hardware from another device. For example, I have a Seagate DockStar, a 25€ device that can be hacked to run regular Debian. Same thing with the Mele A1000, which has interesting specs (Allwinner A10 and Mali 400), but you need to buy a device that is meant for some other task and might be discontinued at any time. The Gooseberry is basically a stripped-down tablet, meant for running Android and nothing else. The CX-01 is also meant as an Android device, not a general-purpose computer. With it, you get all the typical Android updating pains.

Now let me tell you why I think the RPi is exciting: it is an actual computer! Many people compare it with the BBC Micro, of which I don't know much, but for me it has the appeal of a platform like the C64. A device to empower users, unlike the locked-down consumer stuff from HTC, Sony, Apple, you name it. What the RPi offers is a device that you can buy in 2012 (well, to some extent), in 2013 and in 2017 just as well, and it will still serve its purpose. I compare it to my Nexus One, a beautiful top-of-the-line 2010 device that begins to show its age now that it hardly gets updates, the storage is too small and it cannot be used as a general-purpose computer.

Of course the RPi has its fair share of problems. The curious retro composite port, the outdated and slow CPU, the proprietary-to-the-max GPU. I already hate all this, but what the foundation has achieved is quite interesting: a big, excited community of users and hackers, a low price point, and a decent, useful device, especially thanks to the USB ports which vastly extend its possibilities. I can be reasonably sure that the RPi will stay around, so it is worth tinkering with it, and other people can reproduce what I build. Hell, at this price point I might consider getting lots of them and using them as computation units in every room of my house, something that Beagleboard/BeagleBone, Pandaboard etc. haven't been able to achieve.

What I hope for is that a modding community forms, like in the case of Arduino, with different compatible boards. Anyway, for now I welcome our new Raspberry Pi overlords.

My Way to Cycling

Those who follow me on Google+ know that I picked up road cycling some time ago. Those who know me personally are probably surprised that a lazy slob like me, with an expressed distaste for sport, started doing sport. Let me explain and maybe motivate you to do something similar.

1. Pre-history

Like all people, I learned to cycle when I was a kid (technically that is not true, but for 99% of the people you meet it is indeed the case; I digress). I used those kids' training wheels, which turn your bicycle into a quadricycle, or rather a tricycle, because you always end up balancing on three wheels. This is possibly the worst way to "learn", because at the end of the day you still cannot ride a bicycle. And switching from a fake bicycle to a proper one is about as hard as learning it properly from the beginning.

After that, I had a kids' bicycle with 3 gears. 3 gears! What an awesome thing. I cannot remember the gears making much of a difference at all, so I was just switching gears for fun, without really understanding when to shift. Considering it was a kids' bicycle, the speeds were not worth shifting for anyway.

The second bicycle I had was a dream in aluminium. A real adult bike with front suspension! Unfortunately it got stolen (out of a locked cellar, imagine that; it took my dad quite some time to figure out how the thieves opened the door. Hint: they didn't, they took off the hinges). I got a new one, a shiny silver metal MTB from a supermarket, with rear suspension. My friends told me how cool it was, even girls told me how nice it was. And it weighed about a ton. The gears were shitty, the suspension worth nothing, so I abused it to the point where I broke the spokes out of the front wheel. That wheel was never, ever true again. The bike is still around, rusting and heavy in the garden hut. Nobody tried to steal that one.

In parallel, my dad found an abandoned old 1994-ish Scott Memphis steel cross bike that he fixed up. That was the first time I was on 28-inch wheels and a 13 kg bike. I was hooked.

2. The Good

Fast forward some years and I am a computer nerd. Like in the books: with glasses, staying inside, disliking sports, you name it. At some point I started to study, guess what, and by more or less pure luck I ended up in Japan for an exchange year.

And unlike in Germany, where everybody has a cheap fake mountain bike, in Japan everybody has an old-lady bike, except for those who have road bicycles. That's when I thought I might as well get one too.

After lots and lots and lots of review-reading, visiting bike shops and talking to myself I ended up with a 2012 Giant Defy 2. Now that was a long introduction.

Anyhow, I got this bike and it is awesome. It is light. After all these years of heavy bikes, being able to hold the bike in one hand is an incredible change. The shifting mechanics work like clockwork (except when they don't, see below), and riding the bike on a street is pure joy because it is basically effortless.

This lack of effort makes it possible to cover long distances without feeling too tired. Before, I would have balked at going 70 km by bike; now 70 km seems like a nice way to spend a Saturday evening.

The cool thing about owning a road bike is the elitism, heh. I just love passing other road cyclists and nodding to each other as a form of mutual greeting. You pay 1000€ to get into this club, though.

3. The Bad

Riding in Japan is quite bad. Well, not riding-in-the-jungle bad, but apparently nobody who builds roads here has ever tried to ride a bicycle on them. The bicycle lanes on the sidewalks are a total joke. Unlike the sidewalks, they are not straight but often curved, the surface is of bad quality and full of manhole covers that stick out. Also, the curb ramps are not flat but usually have bumps where I fear that my wheels might get damaged.

Taking a bike on a train is another problem, because it has to be in a bag. There is no sensible reason for this other than to annoy cyclists. These bags cost 30 to 50€, are glorified garbage bags, and it takes 15 minutes to disassemble the bike before boarding.

While Japan is quite a safe country, safe enough that out of sheer laziness I leave my lights on the bike all the time, I wouldn't leave my bike outside overnight. That means that inside my room the bike constantly gets in the way, even though, by normal dorm standards, my room is rather large.

The problem of theft will be even more of an issue when I return to Germany. I am not looking forward to leaving my bicycle in front of a train station.

4. The Ugly

Do not – ever! – try to imitate what a cyclist on a mountain bike is doing. A road bike is not a mountain bike, and while it is not as fragile as you might think, it is nowhere near as rugged as an MTB. In the first months I managed to break off my rear derailer (I intentionally use the English spelling). I also have quite a number of scratches from falls, because going downhill on a wet road is not so smart. I had a rather large number of punctures, and I lost one tail light (because it wasn't affixed properly) and one wheel light (no idea, maybe lost, maybe stolen). Also, sometimes the gears don't shift perfectly, because the cables stretch and have to be re-tensioned. Overall, I put quite a bit of money into the bike after buying it, in replacement parts, equipment and clothes.

5. Conclusion

Well yeah, road cycling is not for everyone. It costs more money than you might think, especially in equipment, and has some requirements like "proper" roads. But I think if you're even remotely interested, you should at least give it a try. In the future I'll look into finding groups of people to cycle with and will post my experiences.

LinuxCon Japan 2012 Review

I've been to LinuxCon Japan 2012, held from the 6th to the 8th of June in Yokohama. Fortunately, my lab (Takada lab) pays for one trip to a conference in Japan, so the trip was affordable for a poor exchange student like me. I decided to go by Shinkansen, the high-speed train that moves my physical body from Nagoya to Yokohama in 80 minutes instead of four hours by local train. Fun fact: GPS confirmed we were doing about 260 km/h.

Day 1

After arriving in Yokohama and being overwhelmed by the sheer number of people in the trains and the utter confusion of the Tokyo-area subway system, I found the venue of LinuxCon at the Pacifico Yokohama easily enough. The place easily fit all attendees, and after the queue at registration it was never necessary to queue for anything. That's a welcome change from the Chaos Communication Congress, where you don't get by without queues, squeezing, and sitting through two sessions to keep your seat for the third. Also contrary to that, free WiFi was available, fast enough and stable. Yay for that. Generally yay for the organization: except for the keynotes on the first day, everything was on time and the deviation from the published schedule was minimal.

I started to review everything in detail, but after realizing that it would be way too much noise, I decided to shorten it to the relevant bits without too much blurb.

The keynotes were unremarkable except for the technical talk by Greg KH, who gave examples of things not to do when sending patches to the Linux kernel, interestingly drawn from the over 400 patches he had received in the last two weeks. Pretty impressive.

Of the following talks, I really liked the one by Chris Mason, who does an awesome job with btrfs and had much patience answering lots of my questions afterwards. There were generally quite a lot of talks about ARM and also on the development of Linux and how Japanese developers can take part.

Day 2

The second day had maybe the best ARM talks of the conference. One was on the state of Ubuntu ARM and Canonical's plans for ARM servers. They also talked about Ubuntu on Android, but I don't really believe it is going to take off. The second talk was on the ARM sub-architecture status. I am quite excited about the changes in the ARM tree, especially how they try to unify all the code with a common base and Device Tree (of which I hadn't heard before, but which is quite exciting). I'd love to write an entry for a board and then just boot it, without compiling a new kernel, without writing a new port for the board.

Day 3

The third day had a talk on Fedora ARM (haha, see the pattern? Ubuntu and now Fedora) and how they will proceed. It seemed to me like a more down-to-earth approach. There was also a nice talk about booting Linux from a single zImage. Currently distributions have dozens of zImages for ARM because there is no real platform standard and the kernels are exclusive to boards (or board families at most). A rather entertaining talk was given by Satoru Ueda about how Sony uses Linux in consumer electronics products. I liked his style quite a bit.

Actually, I need to say more. Except for the keynotes, which had interpreters, the talks were in English, and while most were fine, I can see that especially Japanese attendees have problems understanding fast and colloquial American English. Satoru Ueda was unique in this regard because he had Japanese translations of some of the more "tricky" terms on his slides. Oh well, the English terms he picked were tricky and uncommon words as well, but anyway, I think the talk was quite good in general.

From the keynotes I also really liked James Bottomley's keynote about social vs. technical engineering in the kernel; it was a delightful session, presented in Cambridge English with the perfect combination of facts and wit. Oh, and they had free food and beer at the closing reception, yay!

Overall

Generally I think the point of LinuxCon was not really the talks. Some were good, some were bad, but they mostly serve as an entry point for conversations. I met some really cool people that I previously only knew from "teh internets", if at all, had some interesting discussions and learned a lot. Also, going to conferences like these is a great way to find jobs. Many people had connections and knew of ways to get around the HR departments in their companies to hire skilled people right away (which kinda shows what a bad job HR departments do). It is also interesting how working from home has become quite a viable way of working on Linux.

Generally: it is always a pleasure to talk to people with similar interests and a similar mindset, so I look forward to my next FOSS conference soonish.

I Don’t Believe in Intellectual Property

All copyright proponents start with "We all agree that we need to protect intellectual property" and then proceed to explain how they need to create draconian laws and draconian protection technology. It goes without saying that both suck.

But at that point, they have already lost me. I don't really believe in a thing called intellectual property. To me this notion is about as useless as the notion of owning land was to the Native Americans. But really, think about it. It basically says that your thoughts are some kind of sellable product. That's absurd. Lawyers have come up with the term "immaterial goods", which is a contradiction in itself.

We need to stop trying to fit our physical-world terms into a world where they don't belong. And we really should consider the damage copyright has done. In fact, it is crazy to complain about the split between the rich and the poor and then create stricter copyright laws that mainly protect the rich copyright managers.

Recently I read a pro-copyright pamphlet signed by some German book authors, complaining about piracy, no, actually about "stealing", which would be funny if it weren't so sad how misguided it is. Piracy is not theft, and with that article they made sure that I won't ever buy anything which bears their names. I don't want to support people who not only view things differently but also lie to the public and try to influence public opinion, maybe out of malice but most likely out of ignorance. Not to mention that book piracy is not nearly as common as, for example, music piracy, because the most popular e-book platform is the Kindle and Amazon makes it much easier to buy books than to pirate them.

This is interesting, because I also create copyrightable works: I write software. You can find most of it on my GitHub account. And if I had a couple of wishes, I as a programmer would wish for less copyright, not more. Also for getting rid of patents, but that's stuff for another post. The last thing I would want is for my copyright protection to be stronger. Currently my copyrights expire some 70 years after my death, and given the estimate that I will evade being flattened by a bus for the next 30 years, you can freely incorporate my previously AGPLv3+ licensed code into your proprietary applications in 2112. See, this is ridiculous.

The aforementioned book authors were fearing for their jobs and proclaiming that we can't keep up our cultural level without professional culture creators. That's maybe the most ridiculous claim. They ask us to help them survive. I don't see a reason for this; it is like supporting the ice-transport industry just because refrigerators might replace it and people would no longer need delivered ice. And like ice transports, we don't need copyright. Sorry, that's life. If you can't survive as an author, then don't become an author. I like cycling, but I can't get people to pay me for doing it. I don't write newspaper articles about how unfair this is; it is the market at work.

Besides this, the notion that paid professionals are necessary for creating cultural goods is totally wrong. The most prominent example is the Free/Open Source community. While there are many people paid to develop the Linux kernel, in other projects there are many people working on lots and lots of smaller, low-profile things in their free time. You might say, OK, but that's software, it does not apply to other things. I beg to differ. See the Nine Inch Nails album Ghosts I-IV, which was released under a liberal license. Actually, from the statistics I saw, most musicians earn their money from concerts and not from the actual sold "product". Also see the works of paniq, which are at least as good as many mass-media commercial releases. If you look at DeviantArt you can find lots of beautiful art from hobby artists. And crowd-funding sites like Kickstarter show new ways to create great things and still get paid.

But let's look at it from the other side; let's see what copyright destroys. Take a look at how many videos are blocked on YouTube because they use some copyrighted music (it gets even funnier: I saw talks that were blocked because the presenter used copyrighted music), take a look at all that crappy DVD encryption bullshit, take a look at how much effort DRM is. It is basically impossible to buy legitimate media on Linux, because the software needed for the DRM is not ported to Linux, and because it is DRM, it cannot really be implemented freely. See how copyright spawned an entire industry of lawyers to sue people and other companies.

What if there was a world where copyright didn't exist… oh wait, there is. See how industrial Japan came into existence. That's old history, so take a more recent example: China. Guess which country's industry is growing like crazy. Have you seen the Chinese copies of Twitter? Nobody gives a damn about what copyrights Twitter might have on that. Is there something wrong with that? In my opinion, not at all; they are free to do that and the world is none the worse for it. Actually, on the contrary, the copied Twitter style is nice enough and it saved Chinese internet users from having to deal with an ugly, blinky 90s-style site. I can see Chinese companies suing Western companies for copyright infringement in a few years.

We need to stop this madness. We need to get rid of copyright. Chances of that actually happening: 0.

Conservatism in the Linux World

Every time I read about systemd in the news, I facepalm in advance. Not because I don't like systemd, mind you, but because I already expect to see the uninformed or idiotic stuff written in the comments. It divides into complaints that systemd does things differently (well, duh!), personal attacks against Lennart Poettering, and complaints about PulseAudio, one of Lennart's previous projects.

I for one, welcome our unified boot system overlords. Let me start by listing a number of boot systems that current Linux distributions use:

  • System V init, used by many systems because “that’s how it’s always been done”
  • initng, used by Pingwinek, Enlisy, Berry Linux and Bee. Honestly, I have never heard of a single one of these
  • OpenRC, used by Gentoo. Together with their older Gentoo init scripts. What the heck is “friendly upstream” in the linked Wikipedia article even supposed to mean?!
  • eINIT
  • Upstart, written and used by Ubuntu and sometimes used by other systems
  • The “BSD” boot scripts in Arch Linux
  • At this point I stopped caring about the rest

They are all basically "init" plus asynchronous process starting. It boggles my mind why everybody has to reinvent the wheel badly, but whatever. The interesting thing is that they rarely look at the booting problem from a broader perspective: not only make the computer do things in parallel, but also, lo and behold, start only the stuff that is necessary.

Now this is a point that many Linux users cannot accept. Novel approaches are discouraged, whereas reinventing the same thing is great. Please tell me the actual differences between GNOME 2 (MATE), XFCE and LXDE. Why do we need so many identical desktops?

I also used to be conservative, afraid that the new "Firefox" would be terrible compared to my beloved Mozilla Suite, nowadays called SeaMonkey. Fast forward a couple of years and I would be insane to consider SeaMonkey an alternative to Firefox. I also tried liking Unity and GNOME 3, unlike many others who looked at screenshots and dismissed them right away. Likewise, I wanted my smartphone to have a hardware keyboard, but having tried touchscreen keyboards, I realized that the smaller size of the device is much more important.

This is where systemd comes in. systemd takes a broader approach and finally utilizes the Linux kernel with all the extra functionality it provides, to create a solution that is not just a rip-off of some old eighties Unix but actually reflects how computers are used nowadays. As such it has some interesting ideas. You can read about them in Lennart's blog. To me they all sound really reasonable, like the guy behind it knows what he's talking about. Therefore I don't understand why people protest for protest's sake. This is all too obvious in this video, where the main point was "OMG THE SYSTEM WORKS DIFFERENTLY THAN IT USED TO!!!11!".
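
To make the contrast concrete, here is roughly what a service description looks like in systemd's world: a small declarative unit file instead of a hand-written init shell script. This is a hypothetical sketch of mine (the daemon name and path are made up), not taken from any real distribution:

# /etc/systemd/system/mydaemon.service -- hypothetical example unit
[Unit]
Description=Example daemon (made-up service, for illustration only)
After=network.target

[Service]
ExecStart=/usr/bin/mydaemon --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target

Compare that with the average init script and its hand-rolled start/stop/status case statement and PID-file juggling, and you can see why a declarative format that the init system can actually reason about (dependencies, restarts, on-demand starting) appeals to me.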

Now, one point is that systemd is bad because it is authored by Lennart Poettering, the guy who wrote PulseAudio, which famously didn't work in Ubuntu. Well, fast forward to 2012 and PulseAudio is the only audio system on Linux that matters. Who misses the mess of applications written for OSS that blocked other applications from playing? Or the sound-server wars between EsounD and aRts, where you had to tell applications which one to use and hope they even supported it? Thanks, my SoundBlaster is IO 220 and IRQ 7. Now I can plug in additional sound cards via USB at will, control each application's volume separately and, best of all, it just works. When GNOME 3 on my Arch Linux pulled in PulseAudio, it just worked, no tweaks needed at all.

And I am by far not a systemd fanboy. I am using systemd on Arch, where it is nonstandard, and I had my fair share of problems when systemd wouldn't start NetworkManager (by the way, also an evil technology, because it doesn't use distribution-specific config files, you know). But generally it's been running quite well. In Fedora 16, where it is used by default, it works beautifully. It boots fast and supports the one-to-rule-them-all bootsplash solution, Plymouth. Bootsplash is another topic: some people really need to see the console output of the Linux kernel, because it has always been like this. Yeah, right.

And the next shock will be Wayland, which brings graphical interfaces on Linux into the 21st century. X11 has served us well, especially since Xorg took over from XFree86, but the point is that they are patching around an ancient protocol that was invented 30 years ago. It has certainly stood the test of time better than FTP, for example, but that doesn't mean it needs to stay around forever.

Ok, so as a final point to you, the reader: please try to be open about things. Don't dismiss them because they are different. Consider that things like mice, windows and laptops used to be different too, but now most people consider them to be great.

Computing in the 2010s

There are a lot of posts predicting the future of computing, and you might wonder why the heck I need to write about it too. Well, maybe I can chip in the opinion of a disillusioned ultra-nerd instead of the consumer view. Just so you know my context: I have been using PCs for 16 years, GNU/Linux since 2001, and I have been online since the 28.8 to 56k dial-up age. I believe I have seen a number of fads and trends.

1. Desktop computers will not completely disappear

I haven't owned a desktop computer since my last one broke down, somewhere in 2007, because there's no "mainstream" use for them anymore. Laptops have eaten all the market share. You can use your USB input devices with the laptop, as well as your external screen. Storage on laptops is plentiful, and even if you need more, there is a multitude of NAS devices and external discs. Upgradability stopped being an issue – I dumped my last Core 2 Duo laptop because I was bored, not because it was too slow. A nerd friend of mine argued that he needs a desktop PC, but every point he brought up could be satisfied with a laptop. It is just inflexible thinking.

Of my non-nerd friends, hardly anyone has a desktop computer now. What for? I shudder when I think that I could be using a huge, noisy box that is not portable. So while I don't think desktops will completely disappear, they are becoming as common as mainframes.

2. A tablet is more than most customers will ever need

If you have older parents, you sometimes wonder which computer to give them so they can stay connected to the rest of the world. If you gave them a Windows PC, or they got one themselves, you're probably familiar with the regular visits to fix up and disinfect their installation. If you went with GNU/Linux, you probably had to debug their problems with the operating system: they updated Ubuntu and now it doesn't even boot anymore. If you got them a Mac, you just spent a lot of money on a machine where they won't ever use anything beyond the browser.

Looking at how “normal” people use their computers, they have exactly two priorities:

  1. Let me use my browser to surf the net
  2. Don’t bother me with anything else

Lately a special type of device has popped up that does exactly that: the tablet. While I was critical of netbooks (ugly, small screen, same sucky software), tablets actually seem to be an interesting type of device. To be honest, the first time I saw the iPad I was sceptical, until I realized that this type of device is not meant for me but for people like my parents. And my non-nerd friends. And ultimately, I'll argue in a moment, for me.

Oh, Chromebooks tried to settle into the same niche, as a simplified netbook design that had potential, but as they weren't pushed aggressively and were too expensive, they are just a historical sidenote. Dear manufacturers: no one will buy a device that is basically unknown and more expensive than a full-size laptop. If you really thought this would work out, you're sillier than expected.

Heck, since I got my smartphone, I stopped carrying my huge 15-inch laptop to university. I don't ever use this phone for calls, not even for mobile internet. It is great as an always-with-me device: tiny enough that I always have it with me, it rarely breaks down, and it has enough battery life that I never need to turn it off. I realized that except for programming, that device is plenty for mobile use.

3. The move away from individual devices

While "the cloud" is a grotesquely overused term, it does serve as a great place for storage. Like many people I was hesitant to use it, but once you get over handing your data to someone else to handle, the upsides are quite an easy sell. I have my contacts "in the cloud", that is, in my Android address book, which is synchronized with my desktop mail program and automatically backed up by highly paid Google guys, not by me, who has another job to do. Amount I pay for this: 0. I started to synchronize important data between my devices with Dropbox, also completely for free. The comfort of no longer shuffling bytes around on USB sticks or via e-mail between my devices is about as big as the switch from super-small-capacity floppies to huge CD-Rs, and then from read-only CD-Rs to random-access USB sticks.

To quote Sun's slogan, "the network is the computer", but in a totally different way than Sun expected it to be. This vision was finally realized by Google. See GMail, see Maps, see Android. There's hardly anything that I cannot do with a browser on any device. GDrive, which was announced today, continues this trend.

An important point in de-emphasizing the individual device is that the irreplaceable user data is not bound to the specific device. If I break my phone, that's sad, but the amount of data loss is not a problem, since there's hardly any data unique to the device. There are a couple of apps and settings that I lose, and that is about it. Actually, this doesn't only apply to phones. A friend of mine uses a MacBook Air in the same way: personal data is backed up on the internet, and if the device breaks down he could just get a new one and continue where he left off.

4. What about programmers?

Now, many people might argue: OK, that's for consumers who don't create content. What about content producers? I suppose for those the desktop, but mainly the laptop, will stay. Though for programmers the situation is different now: they can do their work quite fine on tablets. First, there are the browser IDEs. These are OK if you are programming in the language du jour, and they are constantly improving. But what if you need more flexibility?

Well, I've been developing on a virtual server via screen/tmux and a text editor since about 2007, when I realized that I am too lazy to set up my development environment on every new machine, which usually starts with installing my favorite operating system. Having all that in my own personal cloud makes development possible with basically only an SSH client, which is available on Linux, Windows and Mac OS X as well as iOS and Android, covering more platforms than I am ever likely to use. Incidentally, this blog post is being written in exactly this way.

So, what about typing? Many people complain that typing on touchscreens sucks. Well, on my phone it certainly is difficult, because it has a crappy digitizer and a small screen. But I still write long e-mails with it sometimes, and surf with it, even while I have a laptop around. And you know what? Bluetooth keyboards exist. While they might not be as good as IBM Model M style clicky keyboards, they are still a very viable solution, and some are actually better than the cheapo wired keyboards that some of my acquaintances use.

If you really need a dedicated system with a "traditional" OS, there is now another possibility: the Raspberry Pi. Thinking about it, this might be one of the most interesting devices for developers ever made. True, it is just 700 MHz, but for many things that is already enough. Unless you are compiling huge codebases, the RPi could be a game changer. Think of programming books that don't ship a CD but a full RPi with a pre-made environment on an SD card. Plus, it runs a full Linux, so you can do multitasking and split-screen, a field where current tablets lag behind.

What’s So Bad About Dan Brown?

After trying to implement this nifty script in Python 3.2, which is by the way impossible because the Python 3.x ports of both PIL and PythonMagick refuse to work, I said dammit, I might as well write the blog post I was planning to write anyway.

Ok, so I read The Lost Symbol by Dan Brown. You might remember him, the guy who wrote an innocent book about the church and got lots and lots of publicity. I read all his books years ago and, for the most part, enjoyed them. So I picked up The Lost Symbol and was quite disappointed, to the point of rethinking my previous judgements.

Here's why: it is stereotypical and repetitive. I read the book in maybe a week, because the style admittedly flows quite well and hooks the reader. But it has lots of chapters, and many of these chapters recapitulate in their first paragraphs what happened in the previous chapter. Some readers may remember that the Harry Potter books do the same at the beginning of each book, but there it serves a purpose, because they were released years apart. The chapters in The Lost Symbol are meant to be read more or less in one go.

So, why stereotypical? Well, because the "hackers" are portrayed in the stereotypical way any cheap movie would show them. I thought we had moved past that. Also, Langdon has no development and, apart from knowing his symbolism and being every woman's darling, no personality. While I do not expect him to be a super-sophisticated character, halfway through the book I just stopped caring about him, whether he gets caught or anything.

And then there are the missed chances. There could have been some interesting plot turns, especially about the origin of the antagonist, or, well, by simply getting rid of Langdon in an interesting way. But Dan Brown chose not to, maybe because it would make it harder to write the next book where Langdon goes to yet another city and solves other ancient mysteries with the help of another beautiful woman.

Oh, and don't get me started on the "science" part. It was so much bullshitting, not explained in any credible way, that it was just silly. Also, the mysteries in this book seemed too simple to me. I am by no means a person with any sort of special knowledge of ancient stuff, but many of the riddles, especially at the beginning, were obvious even to me.

All in all, not a bad book, since I did read the 600-odd pages in a week, but I felt some kind of guilt at being absorbed in such a cheesy story.

A Python Programmer on Octopress

I used to have an Ikiwiki installation at this place. I liked it, because of the Git integration, but honestly it looked kinda ugly and stale. I never posted to it. But now that Marc has created a blog using Tinkerer, I was encouraged to re-evaluate.

In the end, I decided against Tinkerer, because I prefer writing prose in Markdown. So I decided to give Octopress a try. After all, we're in the same league, right, Ruby guys?

Let me start with what I like, before I commence rambling how terrible everything is:

  • Statically generating blog posts is completely fine with me; after all, my very first blog, some aeons ago, was statically generated using Firedrop2. Considering the number of security issues in WordPress and my general dislike for anything PHP, that sounds like a sweet deal.
  • I can write the posts in my favorite editor on the server, from anywhere. Yeah, the cloud! Cloud! Whee!
  • I can use version control on my blog posts, shiny.
  • It looks nice out of the box. Also kind of generic, but I'll take that over the even-more-generic Blogger look.
  • It is more popular than Tinkerer and has more plugins. Well, Tinkerer is quite young, we’ll see.

Ok, now the sucky part is, as far as I can see, the setup:

  • What I did was to compile Ruby 1.9.3-p125 and use rbenv, hoping that it would be something like virtualenv in Python. Turns out it is. Kinda.
  • Octopress forces the local version to 1.9.2-psomething, so I needed to delete the file. Why is it even checked in?
  • Every time I run rake, it complains that my rake is too new, 0.9.2.2 instead of 0.9.2. Seriously? So I run bundle exec rake generate and everything is fine. Why oh why?
  • No pip, no PyPI equivalent? I am disappoint, Ruby.
  • Need to figure out how to version my posts properly.

Oh, did I mention that I love the solarized colorscheme?

print("Hello World")

So that’s that.