Meeting C++ 2016

This is my first time at Meeting C++ in Berlin. I came here with my boss Andi. To get more out of it, we split up during the talks and shared what we learned afterwards.
I will complete this post later, and add links to the presentations and videos as they become available.

I attended the following talks:

Opening Keynote by Bjarne Stroustrup

He talked about the evolution and future direction of C++, explaining the guiding principles and philosophy of the language. He also explained how the standards committee works, and that even he himself is sometimes outvoted. He could tell that, and even name the people holding other opinions, without any bitterness. Very professional and focused!
The main point that stuck out was: “zero overhead abstractions”

C++ Core Guidelines: Migrating your Code Base by Peter Sommerlad

Unfortunately Peter Sommerlad was sick and couldn’t come. So Bjarne Stroustrup agreed ten minutes before his own keynote to jump in and give the talk without any preparation. He claimed never to have given a talk on this topic before. He had some slides with the name of his employer on them, and he jumped around in those slides. Other than this barely noticeable detail, you couldn’t tell that the talk was unprepared. He talked about how to use the [GSL](https://github.com/Microsoft/GSL) in new code. But the main focus was on how to gradually improve old legacy code by introducing the types the GSL provides. In the future there should even be tools to perform the task automatically.

Reduce: From functional programming to C++17 fold expressions by Nikos Athanasiou

He started out by showing how a fold can be performed at runtime with std::accumulate(). Then he gave some theory and showed the syntax of other languages such as Haskell, Python and Scala. C++17 fold expressions don’t just add syntactic sugar, but open up a load of new possibilities. With constexpr functions, the folds can be evaluated at compile time. As a consequence they can operate not only on values, but even on types. The speaker shared with us how he broke his personal error message record: during his experiments he got an error with a quarter of a million lines!
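
To make this concrete, here is a minimal sketch of my own (not code from the talk): a C++17 fold expression summing an argument pack, and the same mechanism folding over types at compile time.

#include <iostream>
#include <type_traits>

// Sum an argument pack with a unary fold over operator+.
template <typename... Ts>
constexpr auto sum(Ts... xs)
{
    return (xs + ...);
}

// The same mechanism can fold over types: are all of them integral?
template <typename... Ts>
constexpr bool all_integral = (std::is_integral<Ts>::value && ...);

int main()
{
    constexpr auto s = sum(1, 2, 3, 4); // evaluated at compile time
    static_assert(s == 10, "folded at compile time");
    static_assert(all_integral<int, long, short>, "folded over types");
    std::cout << s << '\n';
}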

Implementing a web game in C++14 by Kris Jusiak

In this talk we witnessed how a relatively simple game can be implemented with the help of libraries for ranges, dependency injection and state machines. The code was all pure C++14 and was then compiled to asm.js and/or WebAssembly using Emscripten. The result was a static website that runs the game very efficiently in the browser. In the talk we were walked through the different parts of the implementation. In contrast to a naive imperative approach, and after the initial learning curve, this can be maintained and extended a lot more easily.
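
To give a flavour of the declarative style, here is a minimal sketch of my own for the state machine part. It assumes a [Boost].SML-style library; the exact libraries used in the talk are not named here, so take it as an illustration only.

#include <boost/sml.hpp>
namespace sml = boost::sml;

// Events (hypothetical names for illustration)
struct start_pressed {};
struct player_died {};

// The whole game flow as one declarative transition table,
// instead of scattered if/else logic.
struct game_logic {
    auto operator()() const {
        using namespace sml;
        return make_transition_table(
            *"menu"_s      + event<start_pressed> = "playing"_s,
             "playing"_s   + event<player_died>   = "game_over"_s,
             "game_over"_s + event<start_pressed> = "playing"_s);
    }
};

int main() {
    sml::sm<game_logic> game;
    game.process_event(start_pressed{}); // menu -> playing
    game.process_event(player_died{});   // playing -> game_over
}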

Learn Robotics with C++ in 1 hour by Jackie Kay

We didn’t actually learn how to program robots. First, she walked us through some history of robotics. By highlighting some of the major challenges, she explained different solutions, and how they evolved over time. Because robots run in a real-time environment and have lots of data to process, performance is crucial. In the past the problems were solved more analytically, while nowadays the focus is on deep learning with neural networks. She put a strong emphasis on libraries that are being used in robotics. To my surprise, I knew and had used most of them, even the ones she introduced as lesser known, such as dlib.

Nerd Party

In the evening there was free beer in the big underground hall. There was no music, so that people could talk. Not really how you would usually imagine a party. We had a look at the different sponsor booths, and watched some product demos. After a while we went up to the sky lounge on the 14th floor, with a marvelous view over the city.

SYCL building blocks for C++ libraries by Gordon Brown

Even though I experimented with heterogeneous parallel computing a few years ago, I was not really aware of what is in the works with SYCL. My earlier experiments were with OpenCL and CUDA. They were cool, but left a lot to be desired. I never looked into OpenAMP despite the improved syntax. In contrast, SYCL seems to do it right on all fronts. I hope this brings GPGPU within reach, so that I could use it in my day-to-day work sometimes. In the talk, he showed the general architecture and how the pipelines work. Rather than defining execution barriers and scheduling the work yourself, you define work groups and their dependencies. SYCL then figures out how to best arrange and schedule the different tasks onto the different cores. Finally he talked about higher level libraries where SYCL is being integrated: the standard parallel algorithms, TensorFlow and computer vision.
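
As an illustration of the programming model, here is a minimal vector addition of my own (assuming the SYCL 1.2 buffer/accessor API, not code from the talk). The runtime derives the task graph from the declared accessors; there are no manual barriers.

#include <CL/sycl.hpp>
#include <vector>

int main()
{
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024, 0.0f);
    {
        cl::sycl::queue q; // selects a default device, e.g. a GPU

        // Buffers take over synchronization of the host data.
        cl::sycl::buffer<float, 1> bufA(a.data(), cl::sycl::range<1>(a.size()));
        cl::sycl::buffer<float, 1> bufB(b.data(), cl::sycl::range<1>(b.size()));
        cl::sycl::buffer<float, 1> bufC(c.data(), cl::sycl::range<1>(c.size()));

        q.submit([&](cl::sycl::handler& cgh) {
            // The declared access modes tell the runtime what this task
            // reads and writes; dependencies are derived from that.
            auto ra = bufA.get_access<cl::sycl::access::mode::read>(cgh);
            auto rb = bufB.get_access<cl::sycl::access::mode::read>(cgh);
            auto wc = bufC.get_access<cl::sycl::access::mode::write>(cgh);

            cgh.parallel_for<class vec_add>(
                cl::sycl::range<1>(a.size()),
                [=](cl::sycl::id<1> i) { wc[i] = ra[i] + rb[i]; });
        });
    } // leaving the scope waits for the work and copies results back to 'c'
}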

Clang Static Analysis by Gabor Horvath

During this talk we learned how static analyzers find the potential problems in the code that they warn the developers about, starting with simple semantic searches, and moving on to path tracing with and without branch merging. The bottom line was that there is no single tool that beats them all, but that the more tools you use, the better. Because they all work differently, each one can find different problems.

Computer Architecture, C++, and High Performance by Matt P. Dziubinski

This talk made me realize how long ago it was that I learned about hardware architectures in school. Back in the day we mainly studied the simple theoretical model of how an ALU works. The talk made clear how you can boost performance by distributing the work to the different parallel ALUs that exist within every CPU core. In his example he boosted the performance by a factor of two simply by manually and partially unrolling a summation loop. Another important point to take home is the performance gap between the CPU and memory access. Even for caches, it is widening with every new hardware generation. Traditional algorithm analysis considers floating point operations as the expensive part, but meanwhile you can execute hundreds of FLOPs in the time it takes to resolve a single cache miss. On the one hand he showed some techniques to better utilize the available hardware. On the other hand he demonstrated tools to measure different aspects, such as the usage of the parallel components within the core, or cache misses. With such diverse hardware, performance is really difficult to predict, thus measuring is key.
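
Here is my own reconstruction of the unrolling trick (not the speaker’s exact code): two independent accumulators break the dependency chain through the single accumulator, so the parallel execution units can both do useful work each cycle.

#include <cstddef>
#include <vector>

// Naive summation: every addition depends on the previous one, so the
// loop forms one long dependency chain through 'acc'.
double sum_naive(const std::vector<double>& v)
{
    double acc = 0.0;
    for (double x : v)
        acc += x;
    return acc;
}

// Partially unrolled with two independent accumulators: the CPU can
// issue both additions in parallel on separate ALUs.
double sum_unrolled(const std::vector<double>& v)
{
    double acc0 = 0.0, acc1 = 0.0;
    std::size_t i = 0;
    for (; i + 1 < v.size(); i += 2) {
        acc0 += v[i];
        acc1 += v[i + 1];
    }
    for (; i < v.size(); ++i) // remainder element, if any
        acc0 += v[i];
    return acc0 + acc1;
}

Note that with floating point this changes the summation order, which is why the compiler will not do this transformation for you without flags like -ffast-math.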

Lightning talks

The short talks were of varying quality, but mostly funny. As with a good portion of the talks, there were technical difficulties connecting the notebooks to the projectors.

Closing keynote by Louis Dionne

C++ metaprogramming: evolution and future directions
Neither of us knew what to expect from this talk, but it proved to be one of the best of the conference. He started out by showing some template metaprogramming with boost::mpl, transitioned to boost::fusion, and landed at his hana library. The syntax of C++ TMP is generally considered insane. But with his hana library, types are treated like values. This makes the compile time code really readable and only distinguishable from runtime code at a second glance. True to the main C++ paradigm of zero overhead abstraction, he showcased an implementation of an event dispatcher that looks like runtime code with a map, but actually resolves at compile time to direct function calls. Really cool stuff: leveraging knowledge that is available at compile time, and using it at compile time. He even claimed that in contrast to some other TMP techniques, compile times should not suffer so much with hana.
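
Here is a minimal sketch in the spirit of what he showed (my own example, not his actual code): a hana::map keyed by compile-time strings, where the lookup resolves at compile time to a direct call.

#include <boost/hana.hpp>
#include <iostream>

namespace hana = boost::hana;

int main()
{
    // A compile-time map from event names to handlers. Lookups happen
    // entirely at compile time and collapse to direct function calls.
    auto events = hana::make_map(
        hana::make_pair(BOOST_HANA_STRING("click"), [] { std::cout << "click\n"; }),
        hana::make_pair(BOOST_HANA_STRING("close"), [] { std::cout << "close\n"; }));

    events[BOOST_HANA_STRING("click")](); // no runtime map lookup involved
}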

Conclusions

C++ is fancy again!
I have been programming professionally for about 17 years. In all this time C++ has been my primary language. Not only that, it has also always been my preferred language. But there were times when it seemed to be stagnating. Other languages had fancy new features and claimed to catch up with C++ performance. But experience showed that none ever managed to run as fast as C++ or to produce such a small footprint. The fancy features proved either not as useful as they first appeared, or they are being added to C++. In retrospect it seems to have been the right choice to resist the urge to add a garbage collector. It’s better to produce no garbage in the first place. RAII turns out to be the better idiom, as it can be applied to all sorts of resources, not only memory. The pace with which the language improves is only accelerating.
Yes, there is old ugly code that is using dangerous features. That is how the language evolved, and we can’t get rid of it. But with tools like the GSL and static analyzers we can still improve the security of legacy code bases.
Exciting times!

Electrum 2.7 with better multisig hardware wallet support and Ledger Nano S

Electrum has been my favorite Bitcoin wallet software for a very long time. The reason I had a look at it initially was that there was a Debian package. Only when Trezor hardware wallet support was added but not yet released did I download the sources. It is written in Python. I work with Python regularly, but it is not my primary language. For frequently updating and testing experimental software, however, it is pretty cool. That’s how I started to report bugs in the unreleased development branch, and sometimes even to commit the patches myself.
But the reason I’m writing this post is that the new 2.7 release contains two features that are important to me.

Ledger Nano S

One is that the Ledger devices now also support multisig with Electrum. I took this as the trigger to order a Nano S. It works totally differently from the HW1 in that it has a display. Thus you can set it up without an air-gapped computer. With only the two buttons, you can navigate through the whole setup process. As a bonus, it is also, to my knowledge, the first hardware device to store Ethereum tokens, not counting experiments such as quorum. So I finally moved my presale ETH.

Multisig with hardware wallets

I wrote about multisig with hardware wallets before. But Thomas took it a huge step further. Now it’s not only super secure, but also super user friendly. The hardware wallets are now directly connected to the multisig wallet. No more saving unsigned transactions to files and loading them in the other wallet. You can still do that if you have the signing devices distributed geographically. Given a solid backup and redundancy strategy, you can now also have a 3-of-3 multisig hardware wallet. So your bitcoins would still be secure if your computer was hacked and two of the three major Bitcoin hardware wallets had a problem, which is very, very unlikely.

The only thing still missing is the Debian package for the 2.7 version.

My new notebook

Last week I finally received my new notebook. It was a long journey, but it was worth it. If you didn’t follow my blog, you can read about it here, here and here.

Delivery

It was delivered in two pieces. The first box contained the notebook, and was delivered normally. The second box contained the docking station and an additional power supply. For the second box I had to send a copy of the invoice to the tax office. I expected Dell to place the required documents inside the boxes. But since it was a domestic delivery for Dell, they didn’t. And I forgot to tell my friend who re-shipped them to check. So when the second box was delivered, I had to pay the import taxes for the whole order in one go. That wouldn’t be a problem in itself, but an announcement would have been nice. Because I don’t usually walk around with so much money, I had to ask the whole team to borrow some cash. Yeah, cash was the only option.

First impression

As expected, the first impression was great. And I had high expectations, because I already owned a previous model. The borderless screen is a blast. The large bezel of some other devices is such a useless waste of space. The docking station also works flawlessly. Somehow I had the impression that they had different models for America and Europe, but other than the power cord, I couldn’t find anything that didn’t fit. A single USB-C cable connects the dock and the notebook. This is enough for charging the notebook, connecting external monitors, USB 3 devices and sound. Funnily enough, the Bluetooth LE mouse has shorter wakeup times when the docking station is plugged in.
I don’t insist on Linux being pre-installed to save the time of installing it myself. It is to make sure the drivers stay available in the future as well. I want to make sure that the OEMs are aware of the people who want to have sane operating systems on their devices. It is essentially the same reason I insisted on paying with Bitcoin. It is my money that I spend, and thus I want my purchase to show up in the appropriate columns of the statistics. If people don’t care, some corrupt middle managers just make certain options harder to get and then claim nobody wanted them.
The only item that is not according to my wish list is the keyboard layout. I wanted a Swiss layout, but made the compromise to get a US keyboard because the other factors were more important to me. The plan was to get a Swiss keyboard and retrofit it myself. But when I look at the device now, I figure this wouldn’t be easy, as it would require a European palm rest. Thus I abandoned that plan. I have had devices with US keyboards before. It’s no big deal, I just prefer the Swiss layout.

Installation

Every time I set up a new device, I follow the guides for installing with smartcard-backed full disk encryption and smartcard-backed SSH. I had wanted to automate this process for a while, so I used the opportunity to write the scripts this time. Since I wanted the procedure to work reproducibly, I started over every time I missed something. In the end I installed the OS at least five times. The next script, for installing all the software including those from personal package archives, is a classic. I probably created it almost a decade ago, and have refined it ever since. I once tried to do something similar for Windows at work, but in the end I abandoned it.

Problems

No system is perfect, and notebooks especially are known to not always have perfect driver support for Linux kernels. The Sputnik team certainly does a great job with routing all their tweaks upstream. So far, I only found two minor problems: WiFi and the touchscreen didn’t work after resuming. Since I use full disk encryption, I suspend only occasionally. The boot times are really OK anyway. This is my first notebook with a touch screen. I force myself to use it sometimes, but on such a small high-res screen my fingers are just too big. So, it’s nice to have, but hardly essential.
It is also my first device with a 4k screen. Ubuntu does great with the scaling and settings. The only applications I have found so far that don’t fully support high-res are Electrum, Bitsquare and OpenBazaar. Oh, and it would be nice if applications used the DPI scaling of the screen they are currently displayed on.
Last but not least, the battery life didn’t impress me the only two times I ran on battery so far. It hardly lasts for a full movie. But I will try terminating all my background tasks next time.

Update December 20th 2016

Here is a nice video describing the device:
https://www.youtube.com/watch?v=kvsgTJbIWNo

Decentralized websites and more

“Cool idea, but to be of any use, it would need more functionality and more content” was my impression when I first looked into ZeroNet. Back then, static web pages were all there was, and there was no UI support for any management tasks. The next time I checked, probably more than half a year later, it had a blog engine, subscriptions on the welcome page, mail, chat, forums, wikis, boards and more. Blogs were what hooked me this time. The interesting feature was that you could subscribe, and have the news listed on the hello page. So I started to write new blog posts both on WordPress and on ZeroNet. True, WordPress has lots more functionality than the ZeroNet blog engine. Some things are nice gimmicks, but none of it is really essential. ZeroBlog is really all you need.
Some people started to leave Twister for ZeroNet, but I couldn’t quite understand why. For me, it filled another niche. They are both very nice in their own way.

How it works

To create a site, you can execute a Python command on the command line, or simply clone an existing zite. In both cases, a private key is generated that you later need to sign the content. Signing is really easy, but you had better take good care of your private keys. Make sure not to share them, but do make backups for yourself. From the private key, a public key is derived, and from that a Bitcoin address. The Bitcoin address serves as the unique identifier for your zite. If this identifier looks too complicated, you can register a shorter name on the Namecoin blockchain, and link it to the Bitcoin address of your zite. Once you sign and publish your zite, you can give the address to your friends, or publish it where other people can pick it up. Whenever another ZeroNet user requests your address, he sends the query into the mesh, and whoever is closest serves the files anonymously. The visiting user then becomes a seeder who also serves your content. No central server is required: you can switch off all your computers, and your zite stays online for as long as there is at least one other user seeding it.

Proxies

To visit ZeroNet sites, or simply zites as they are called, you should run the ZeroNet client. The software is written in Python with few dependencies, so it is really easy to run. You can either run it locally, or on a personal server. Then just visit the entry page with the browser and navigate from there. If you want to visit a zite without installing any software, there are also public proxies. There are many reasons why running the software is better than using these proxies, but I won’t go into the details now, and I won’t list the proxies here.

ZeroMe

Then came merger zites. I had read about the concept before the release, and was really curious. Some things are not as easy to accomplish with a decentralized anonymous system as with a centralized architecture. But when I had my first play with ZeroMe, my reaction was: “Wow, this is what I have been waiting for”. I don’t use most social media because of the centralized architecture, and because they own all the data of the users and can do with it whatever they please. There have been decentralized social platforms before, but they were usually a hassle to install and maintain, or not so great from a usability standpoint. With ZeroMe you choose a hub to store your data, an identity provider, and a presentation. So you have three orthogonal aspects to your experience.

Data Hub

You can subscribe to as many hubs as you wish, but store your data on only one of them per identity. They can be organized by region, language or interests. The more you subscribe to, the more data will be stored on your hard drive, and the more bandwidth will be consumed. You can also run your own hub, and use it only with your friends.

Identity

Identities have existed for a while. You needed an identity to write a blog, to comment on other people’s blogs, to write and receive ZeroMail, and to write to boards, chats, talks and wikis. Again, different identity providers have different requirements. For ZeroId you have to register your handle on the Namecoin blockchain. For Zeroverse you had to send a Bitmessage. For KaffieId no external proof is required. You can maintain as many identities as you like. Some can be more credible, others totally anonymous.

Presentation

The official frontend is Me.ZeroNetwork.bit. But as it is all open source, the first forks and clones have started to appear. There is the darker-themed Dark ZeroMe, and there is ZeroMe Plus, which adds some nice features.

Worst customer experience ever

The best notebook ever

I blogged about my attempts to buy a decent notebook here before. But let’s recap quickly. In the fall of 2013 I bought a Dell XPS 13 Developer Edition. When Dell announced shortly thereafter that they now accept Bitcoin, I had the feeling I had missed out on that opportunity. Nevertheless, it was the best computer I ever had. As it came with Ubuntu pre-installed, there was no hassle with drivers. Everything just worked, it was lightning fast and gorgeous. But in February 2015 it was stolen.

Paying with BitCoin

I wanted to buy the same notebook again, but this time I wanted to pay with Bitcoin. The option was not available for the Swiss market, but they expanded it to Canada and the UK. I really didn’t want to find out that it would become possible in Switzerland just after I ordered. Thus I decided to hold my breath. The waiting became very long, as my ancient intermediary notebook was having thermal issues.

Purism

The selection of ultrabooks with Linux pre-installed that can be bought with Bitcoin is not so large. If it also has to have a backlit Swiss keyboard, it gets really difficult. But somehow I learned about Purism. Their Librem notebooks looked very good. As with most startups, the people were really approachable and helpful. I was ready to order their best machine, but they kept having delays. Delivery was always two months out. When it was pushed way back again, I decided I didn’t want to wait any longer, and re-targeted the Dell.

UK

After a lot more than a year of waiting, and asking Dell to make the leap forward, I was ready to give up the Swiss keyboard. I was ready to order from the UK instead. I was ready to retrofit a Swiss keyboard myself, and pay double taxes. I found a service that would forward the parcel. But although Bitcoin was listed as a payment option on the UK Dell website, the option was not available on the checkout screen. I reported this to Dell customer support and kept trying on a regular basis over the course of a month. Finally I gave up on the UK store.

US

The US store had a model with a 1TB SSD that was even better than the models offered in the European stores. So I went for that. The mail forwarding services in the US either couldn’t process my card to cover their fees, or didn’t provide a phone number. But a domestic phone number was required for the order form at the Dell store. So I asked around whether I could have my order delivered to somebody in the US who would forward it to me. A former co-worker who now lives in California agreed. I went ahead and placed the order to his address. Because I was really in need of the device, I chose the faster, more expensive delivery method. Shortly after I paid, I received an email stating that the formal order confirmation should follow within two days at the latest.

Black hole

That was the first, and so far the last, communication I received from Dell. After a week I started to wonder why I hadn’t received the formal confirmation, and I found out that the order didn’t appear on the order status page. So I tried to contact Dell order support. In order to initiate a support session, one has to enter the order number. And because the order was not properly in the system, I couldn’t contact them. I tried different means to contact them almost on a daily basis. This week I could finally chat with a support representative. He couldn’t find my order in the system either, and gave me an email address. So I wrote to what appears to be the main email address for customer support in the US. An automated response came immediately, stating that a human would respond within 24 hours. Nobody ever did, of course. I reached out to Coinbase to ask about my transaction. They responded very quickly, stating that on their side everything went through normally, and that Dell indeed received the money. Somebody on a forum suggested that the order might have been canceled because of some obscure export regulations. But why a company would cancel an order on such a basis without ever notifying the customer is beyond me.
It has been almost a month now that I have been desperately trying to find out when I will receive the notebook that I really need. Dell didn’t even bother to tell me anything. How is that different from the worst scams and frauds out there on the internet? To me it was a lot of money that I sent. I thought of Dell as trustworthy. No more…

Update September 8th 2016

Barton sent me a mail today stating that they found the problem and made sure it doesn’t happen again. The notebook should be delivered early next week. Looking forward…

Update September 22nd 2016

The box with the precious new power machine was delivered to me today.
Hooray! Finally! Yay! So excited!
Now I know what I will do tonight… Setting it all up.

Game modding with pen and paper

I have lots of good memories from youth camps. Some involve playing Donkey Kong and Mario Brothers while sitting in trees. Another classic video game was Asteroids. When I recently read an article in a German magazine about building an Asteroids clone with an Arduino and an OLED, lots of old memories resurfaced. The source code was provided, and the build was simple. As the control input is only used digitally, I didn’t use an analog joystick. When I gave it to the kids to play, they didn’t share the enthusiasm that I had back then. But that’s probably because they grew up with lots more tiny computers than we had. So I wanted to involve them some more, and give them a sense of how this thing works. I don’t know how well they understood when I explained the concept of a pixel to them.
So I grabbed pen and paper, read the source code and drew the pixel art. Next, I told them they could modify the images to their liking, as long as they preserved the mechanics of the game. It was essentially the spaceship with one frame, the asteroid with three frames and the explosion with four frames. Seven-year-old Levin understood immediately, and painted his versions. For five-year-old Noah it might be a bit early, but he also participated enthusiastically.
All I had to do was transform their paintings back into source code and load it onto the ATmega chip. Now they were hooked on the game a lot more than before.

Running hostile software in a container

Remember Skype, the once popular phone software? I used it a lot when we were traveling in South America and international calls were insanely expensive. But I stopped using it when it was acquired by Microsoft and they switched from a P2P model to centralized servers. From what I could observe, it gradually worsened from there, and I really thought I wouldn’t have to use it ever again. That was until somebody decided that we had to use Skype for Business instead of XMPP at work. There is a plethora of better alternatives. The one I use the most these days is Tox.

I use the Windows workstation only for things that I can’t do on Linux. There is not much that falls into this category, besides Visual Studio compiling projects that involve MFC. There is Skype for Linux, but there is no official Skype for Business for Linux. So for a moment it looked like the Windows machine would get a second task. But running an obfuscated malicious binary blob from Microsoft with known backdoors, online all the time, on an operating system that cannot be secured, makes me uneasy. So I looked for a way to run it securely on Linux. The first thing I found was an open source implementation of the reverse engineered proprietary protocol as a plugin for Pidgin. That sounded good, but unfortunately it didn’t work. The second option was a closed source clone from tel.red. They provide their own apt repository with regular updates. That’s quite good actually, if you don’t care about closed source software, and the security of your device and data in general.

I learned about Docker a while back, but had only used it marginally so far. This was the first real use I had for it, so I started learning more about it. Copying and adapting a Dockerfile is a lot easier than the articles I had read so far made me believe. I found a couple of sites about packing Skype into a Docker container, but none for Skype for Business. So I took one of the former and adapted it. To use my container, just follow these easy steps:

git clone https://github.com/ulrichard/docker-skype-business
cd docker-skype-business
sudo docker build -t skype .
sudo docker run -d -p 55555:22 --name skype_container skype
ssh-copy-id -p 55555 docker@localhost
ssh -X -p 55555 docker@localhost sky

The password for ssh-copy-id is “docker”.

Then log into sky with your credentials. You can do this every time, or you can store a configured copy of the container as follows:

sudo docker commit skype_container skype_business

The next time, you just run it with:

sudo docker run -d -p 55555:22 skype_business
ssh -X -p 55555 docker@localhost sky

I left some PulseAudio stuff from the original container, at least in the README file. I don’t intend to use it for anything but receiving chat messages. But if you want to, feel free to experiment and report back.

KeepKey premium Bitcoin hardware wallet

I’m always interested when a new hardware wallet is announced, so naturally also in the KeepKey. In contrast to most competitors, they didn’t take pre-orders. Instead, they began to accept orders only when the product was finished and they were ready to ship. When they announced that the devices were finished and could be ordered, I was disappointed to find out that the price was a lot higher than I had anticipated. It costs more than twice as much as a Trezor. Since it also looks very shiny, I jokingly called it the iKeepKey.

Fast forward a few months: I packaged a new version of the Trezor Python library for Debian. Since I knew that Electrum also has a plugin for the KeepKey, I figured I could just as well package the KeepKey library to make the usage with Electrum a bit more convenient for the owners of these devices on Debian and its derivatives. The only thing I could verify without a device was that the option for the KeepKey appeared when creating a new wallet with hardware support in Electrum. Before I commit the package to Debian proper, I wanted to be sure everything worked. So I sent an email to KeepKey, asking if they could test my experimental package. Within hours I had an answer offering to send me a device free of charge. I couldn’t have hoped for so much generosity, but of course I happily agreed.

Today the parcel was delivered. The device is as shiny and good-looking as it appears in the photos. It has a big, nicely readable screen that shows effects and animations. To host the bigger screen, it naturally has to be significantly bigger than a Trezor. The premium appearance doesn’t stop at the device itself: the woven cable and the leather sleeve for storing the seed restoration card are also very slick. I don’t know how much of the internals, but at least for the protocol, the Trezor was used as a starting point. This is surely a very good choice.

There are other hardware wallets that descend from the Trezor. But there is a big and important difference: the KeepKey seems to be the only one so far that is trustworthy. The Chinese clones such as bwallet or ewallet look good at first. But some people, and even SatoshiLabs themselves, were quick to point out that they didn’t properly sign their firmware and did not release their source code, effectively stealing the previous work and putting users at risk. In contrast to this, KeepKey really plays by the rules for the benefit of their users.

The card that comes with the KeepKey is about how to use it with a Chrome browser plugin. I almost always prefer native applications over web apps. I try not to use Chromium after a recent breach of trust, and it is not in the Trisquel repositories anyway. So I want to operate it fully from within Electrum. The last time I initialized a Trezor, I’m pretty sure I had to use the Firefox plugin. But in the meantime I noticed that the initialization part was added to the Electrum plugin. So to initialize the KeepKey in Electrum I executed the following steps:

  • File -> New/Restore
  • Provide a name for the new wallet
  • Select “Create a new wallet” and “Hardware Wallet”
  • Select “initialize a new or wiped device” and “KeepKey wallet”
  • Select your preferred use of PIN and password
  • The KeepKey shows some entropy information
  • Enter your new PIN twice, using the same method as known from the Trezor
  • Choose the number of words for your restore seed
  • Write down the words for the seed (very important: store them securely)
  • And voilà… your KeepKey Electrum wallet is ready to use

Spending, and everything else I tested so far, worked flawlessly. The operations work effectively the same way as with the Trezor. But where appropriate, it makes use of the bigger screen to show more information at once. So I guess I can start preparing my package for Debian.

Here are some pictures to compare the size with other Bitcoin hardware wallets:

(Photos: HardwareWallets1, HardwareWallets2)

Let's Encrypt

I never bought a commercial grade SSL certificate for my private website, but I have used free ones before, usually from StartSSL. While it worked, the process was cumbersome. And then, when I wanted to renew, my browser showed a warning that their own certificate was out of order.

When the Let’s Encrypt initiative (supported by Mozilla and the Electronic Frontier Foundation) announced its goal of making website encryption more easily available, we all cheered. Last week I finally received an email stating that my domain was white-listed in the beta program. So I took some time and followed their process. It was not always self-explanatory, but the ncurses program offered some help. Within a couple of minutes, I had a certificate ready to use. The only thing I did not like was that if the process had transmitted my private key to the server, there would have been no way of noticing other than actually reading the code. I don’t think it did, but I prefer to be certain about these things.

To have my website protected, all I had to do was add the file locations that the utility program provided to the Apache site configuration.

Now the bigger work was moving everything to my new server and adapting all the URLs. Moving the blog was already more work than I expected. It was not a simple export and import. First I had to get the WordPress importer plugin working. The media files are not included in the exported file, and have to be moved manually. Some older blog posts still referenced the old gallery, which I had wanted to replace with Piwigo for a while. So in addition to moving the Piwigo gallery, I also had to move lots of photos from the old gallery, and adjust the references in the blog.

Some web apps are not moved yet and will follow. Finally I plan to redirect all http addresses to https.

On the nice side, I could use the new certificate to secure my new email server. I can’t remember when the first time was, but in the past I attempted to set up my own email server about once every two years. Setting up a web server is much simpler. But with the mail servers there was always some problem left that made me not confident enough to really use it. This time, however, I found a good tutorial that actually worked. It’s geared towards a Raspberry Pi running Raspbian, but worked just fine on my NUC running Ubuntu.

Verifying downloads

Last week I stumbled across a post from last year, where somebody described how it was impossible to download an important infrastructure program securely on Windows. My first reaction was of course to pity the poor souls that are still stuck with Windows. How easy it is to just type apt-get install and have all the downloading, validation and installation conveniently handled for you.

Today I was going to set up my new server. First I downloaded the current ISO file from ubuntu.com. Before installing it onto a USB stick, I thought about this blog post again. Actually, I should validate this ISO! I knew that before, but I usually didn’t bother. So I gave it a try. I had to search a bit for it on the download page. The easiest way is to manually pick a mirror; then you will find the hash sum files in the index page. Some websites along the way were encrypted, others were not. The downloads themselves were not. But that didn’t matter, since the hashes were GPG signed. I don’t have to do this all too often, so I just followed the community howto. My downloaded ISO file was valid, so I moved on to installing it.

The hardware is actually from computer-discount.ch. For quite some time I had been searching for ways to buy computer equipment with Bitcoin. The American big-name tech companies that accept Bitcoin either do it only outside of central Europe, or don’t deliver here. So I was quite excited to find this company from Ticino. The experience so far is very good.