Flying Ad-hoc Network

The first time I heard about FANET was at a gathering of some paragliding friends last year. They mentioned that they can display each other’s positions on their flight computers. While that sounds cool, I don’t often get to fly cross country any more, so this feature was not of particular interest to me. Then some months ago I read an article about the Skytraxx 3.0 in a paragliding magazine. It was mainly focused on the built-in database of aerial obstacles, namely dangerous cables. But it also mentioned that weather stations can broadcast wind information over FANET, which the flight computer then displays in real time. Now that was more interesting to me. The part I like most about the FANET technology is that it is an open LoRa mesh network. I watched a video where the developer explained that it is even possible to transmit landing procedures based on wind direction, to be displayed on the flight computer. Furthermore, pilots can send messages to each other and change their mode from “flying” to “retrieve car” or “need a ride”. All of this together was too much to ignore.
While FANET was developed by Skytraxx, it is an open protocol, and other companies have started including support for it in their devices. The Skytraxx devices that come with FANET also include FLARM. FLARM started as a collision-avoidance system for sailplanes, but in the meantime most light aircraft are equipped with it. Devices for paragliders only transmit FLARM signals. Due to their slow speed, paragliders are unlikely to crash into one another. But by transmitting their position, faster aircraft can be warned early enough about their presence. Similar to FlightRadar for big airplanes, there is GliderNet based on FLARM and SkyNet based on FANET. These sites are fed by ground stations that decode the signals broadcast by the aircraft. All you have to do in order to appear on these sites is register with the Open Glider Network. If you additionally register with LiveTrack24 and link your OGN registration (the FLARM id), then your flights are automatically archived. What I like most about this is that I can give the URL to my loved ones. If I’m not home in time, they can check whether I am still airborne and where my last recorded position was. So in the improbable case of an accident, they could send search and rescue in the right direction.

When a product is better than the description

When I was a kid I liked wristwatches from Casio. I had one with a calculator, one with an address database, one with an infrared remote control and one with an altimeter. But for the last 25 years I haven’t worn one. I don’t like to wrap anything around my wrist. And since I carry a phone, I have a way to find out what time it is.
When friends and neighbors started wearing fitness trackers, I thought I didn’t need one. When I went running, I did it for my personal fitness, not to compare myself to somebody else. And I can care about my fitness without a device telling me to walk some more before going to bed. When my wife wanted to gift me a step counter for my birthday a couple of years ago, my response was: thanks, but no thanks. I have no use for a step counter.
Sometimes I brought my phone when I went running, just to try recording the GPS track. Some co-workers upload all their activity to Strava, and claim “if it’s not on Strava, it didn’t happen”. Not so for me.
Since I started carrying my ultralight paraglider for run and fly, I have taken the phone with me more often. In the backpack it bothers me less than in the shirt. The main reason for carrying the phone was to be able to call for help in an emergency. And since I brought the phone with me anyway, I could just as well run the tracker app on it. But unfortunately it didn’t work very reliably. When the screen was off, it stopped tracking, and when the screen was on, it often registered fingers that weren’t there. So it often happened that it stopped tracking after a while, or deleted the track entirely. Sometimes I had a ton of apps open after running and didn’t know what else had happened to my phone. But still, with the few tracks that recorded at least the uphill running part, I could see my progress on that segment. That turned out to be more interesting than I anticipated.
So when my wife recently wanted ideas for my birthday, I told her “a cheap wristwatch with a GPS tracker that works without a crappy lock-in smartphone app”. My absolute nightmare is to have a closed-source device that tracks my every move, where I have no control over the data it collects. Worst of all, it would become useless when the manufacturer decided to stop maintaining the app. I don’t want devices with planned obsolescence. Of course I had to do the research myself. On the product page they only mention their iOS and Android apps, which are of no use to me. I noticed a while ago that there are some packages in the Debian repo for Garmin Forerunner devices. Further research brought me to quite complicated methods to get the data off these watches. But then I found a page stating that when you plug the watch into a computer with its USB cable, it mounts as a filesystem and you can just copy the activity files. If it is really that easy, then I don’t understand all the fuss. Everything seemed to indicate that all Forerunner watches come with a USB cable for charging the device that also acts as a data cable. It is beyond me why they don’t mention that explicitly on the product page. So, for my purposes a relatively cheap Forerunner 30 or 35 should be just fine.
And so I got one for my birthday from my wife. It even has a heart rate sensor that I don’t really need. And indeed, just by plugging it in with the USB cable, I can grab the FIT files and either upload them directly to Strava or convert them to a more common format using gpsbabel.
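Since the watch shows up as plain USB storage, the whole “sync” boils down to a few lines of script. Here is a minimal sketch, assuming the watch mounts at /media/user/GARMIN and that gpsbabel’s FIT reader is called garmin_fit (both assumptions on my part, not something I checked for every model):

```python
#!/usr/bin/env python3
"""Minimal sketch: copy the newest FIT file from the watch and convert it to GPX."""
import shutil
import subprocess
from pathlib import Path

# Assumed mount point of the watch and output directory; adjust to your setup.
WATCH_ACTIVITY_DIR = Path("/media/user/GARMIN/GARMIN/ACTIVITY")
OUT_DIR = Path.home() / "activities"

def fetch_and_convert() -> Path:
    OUT_DIR.mkdir(exist_ok=True)
    # Pick the most recently modified activity file on the watch.
    newest = max(WATCH_ACTIVITY_DIR.glob("*.FIT"), key=lambda p: p.stat().st_mtime)
    fit_copy = OUT_DIR / newest.name
    shutil.copy2(newest, fit_copy)
    gpx_out = fit_copy.with_suffix(".gpx")
    # gpsbabel does the actual format conversion (FIT in, GPX out).
    subprocess.run(
        ["gpsbabel", "-i", "garmin_fit", "-f", str(fit_copy),
         "-o", "gpx", "-F", str(gpx_out)],
        check=True,
    )
    return gpx_out

if __name__ == "__main__":
    print(f"wrote {fetch_and_convert()}")
```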

Bitcoin Advanced Course by 21lectures

Last week I attended a Bitcoin Advanced Course hosted by 21lectures. Lucas, who is also the president of the Bitcoin Association Switzerland, initially wanted Jimmy Song to teach his Bitcoin courses in Switzerland as well. But when that didn’t work out, he decided to build the classes himself, with the help of great teachers and developers from the local community.
To guarantee fruitful interaction, the groups are kept small. But when I arrived, the group was even smaller than I expected. What surprised me even more was that a good portion of the students came to Zurich from other countries especially for this course.
The biggest part of the course was taught by James Chiang, who is preparing a bigger course that he will host online. His part consisted of theory and practical exercises.
Setting up the environment for the exercises proved to be almost as challenging as the hardcore crypto theory.
For me, the most interesting part was the last day, which was about the Lightning Network. As it is still a new technology under heavy development, there is not a lot of learning material around. All the more valuable was the first-hand information we received from Christian Decker.
An important part of the whole experience were the lunches. Most of the time the teachers joined us, so that we could ask additional questions and have interesting discussions.
If you are interested in Bitcoin and programming, I can definitely recommend this course.

A somewhat interesting aspect was also how to get to Zurich. Downtown parking during office hours is really expensive, and there can be traffic jams. The venue was very close to the main train station, so it would appear reasonable to get there by train. But a return ticket for one day costs CHF 56. Lots of Swiss people have a half-fare card for public transport. The terms were changed a couple of years ago; I made the mistake of reading the new terms and discovered that they are really not acceptable to me. So I drove there by car, which cost CHF 4.15 for the electricity and CHF 36 for the parking. Still a lot, but also a lot cheaper than by train.

CppOnSea

I have been meaning to write about CppOnSea for a while. The event is already a month in the past, so I had better write down my impressions while I can still remember anything. My comments will probably be shorter than they would have been had I written them down earlier.
Last year I learned from a podcast about a new C++ conference in Great Britain. It made a good first impression. As the details trickled in over the course of the ensuing months, I started to think it would be worth visiting.
When I asked around in the office who would join, I got only one positive answer. Reaching the venue by plane would not only have been impractical; I also didn’t really want to pollute the atmosphere. So I proposed to drive there with my electric car.
I checked the weather in advance, since what I wanted least was driving through a snowstorm for a whole day. Exactly the night before we left, we had a good portion of fresh snow. As it lay even on the highway, we made rather slow progress in the first two hours. The rest of the trip was uneventful, with the exception of having to drive over a small pass because a tunnel in Alsace was closed. We took the tunnel below the Channel. It is different from riding through the Swiss mountains on the back of a train, but not too much different. We arrived late in the evening at a nice old hotel on the cliff right next to the hall where the conference was going to take place. The breakfast was a lot better than what I remembered from previous stays in the UK.
A baroque event hall built right into the cliff served as the venue for the conference. During the breaks we had a nice view of the sea, and sometimes we had the impression we could see France on the other side.

Opening Keynote: Oh The Humanity

The opening keynote was funny and entertaining. That is all I remember.

Postmodern immutable data structures

The speaker presented his library for immutable data structures, which enable a more functional style. It sure has something to it, but I don’t see a use case in anything I am currently involved in.

What I Talk about When I Talk about Cross Platform Development

The speaker covered a much broader scope than what I had considered so far. It is interesting to know, but I don’t think I will use any of it in the foreseeable future. It did, however, prompt me to think about using emscripten again.

Better Tools in Your Clang Toolbox: Extending clang-tidy With Your Custom Checks

I have known and sporadically used clang on Linux for some years. But even though it is a great compiler, I didn’t use it too much because you would have to compile everything yourself, rather than using dependencies from the apt repository. I also knew that clang ships with Visual Studio, but only for cross compiling to ARM. What was new to me is that you can also compile (but not link) regular desktop applications on Windows, with some work even MFC applications. This in turn allows the use of clang-tidy, which a good portion of this talk was about. What was also new to me is that the MSVC compiler switch /permissive- causes Visual Studio to use a completely new compiler front end that is no longer built with YACC, but is much more standards compliant. This better compiler introduces breaking changes to old code, which is why we haven’t used the flag so far. But I think it would be good to introduce it slowly, module by module. This way we could sanitize the codebase, and maybe later start using the clang tools.

Deconstructing Privilege

This one was in the main hall, and for all attendees. It had nothing to do with C++ or with programming per se. It was more about social interactions with minorities. I still don’t know why there was such an emphasis on this topic. But it seems to be a phenomenon at lots of IT conferences lately.

The Hitchhiker’s Guide to Faster Builds

Building the CAD application I am working on can take up to an hour if I build only locally. Over the years we have optimized the precompiled headers from time to time, but the linker also takes a lot of time. So this talk was especially interesting for me.
The speaker ran through an extensive list of approaches to reduce build times. Lots of it was not applicable for us, or too esoteric. But one main takeaway was that I should look into unity builds; he mentioned cotire to help with that. When we switched to CMake a couple of years ago, I tried to use cotire to simplify the handling of precompiled headers, but couldn’t really get it to work. Maybe it is time to revisit it. The basic idea of a unity build is sketched below.
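To illustrate what a unity build actually does (my own toy sketch, not something from the talk and not how cotire implements it): instead of compiling every .cpp separately, you generate one source file that #includes them all, so the expensive shared headers are parsed once per unity chunk instead of once per translation unit.

```python
#!/usr/bin/env python3
"""Toy illustration of a unity build: merge all translation units into one
generated source file. Paths are assumptions; real tools like cotire also
split the result into several chunks and handle precompiled headers."""
from pathlib import Path

SRC_DIR = Path("src")              # assumed location of the .cpp files
UNITY_FILE = Path("unity_all.cpp") # generated file handed to the compiler

def write_unity_source() -> None:
    sources = sorted(SRC_DIR.rglob("*.cpp"))
    lines = [f'#include "{cpp.as_posix()}"' for cpp in sources]
    UNITY_FILE.write_text("\n".join(lines) + "\n")
    print(f"{UNITY_FILE}: merged {len(sources)} translation units")

if __name__ == "__main__":
    write_unity_source()
```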

Diffuse your way out of a paper bag

This one was entertaining, but I didn’t learn much from it, except for the British humor.

A linear algebra library for C++23

In a way it is surprising that C++ still has no standardized linear algebra library. Because of this, many independent libraries exist, and many companies have written their own implementations. This could lead to the conclusion that the proposal comes too late. But I was delighted to learn that the proposed library mixes well with existing libraries and data structures. So we will see how much of it we end up using when it is finally released.

Sailing from 4 to 7 Cs: just keep swimming

This one was about tooling. Nothing that I think will be applicable for us.

Keynote: What Everyone Should Know About How Amazing Compilers Are

This one was informative and entertaining. He had many good examples of how amazingly good modern compilers are at optimizing our code and working around bugs in certain CPUs. This video is worth watching even if you don’t work with C++.

Why I deactivated Tesla app access

The official Tesla app is unfortunately not available for Ubuntu Phone. And there is no indication that it will be available on my next phone, the Librem 5 from Purism. On the bright side, from the computer I can control my car using the VisibleTesla desktop app running inside a Docker container. But the best part about remotely controlling the car is that the API is publicly documented. Bindings are available for most scripting languages. That allows me to control the car from my Ubuntu phone at the command line. It also allows me to run a cron job to pre-heat the car before I drive to and from work. It allows me to precisely track how much electricity I charge, and where. It also allowed us to open the doors directly from an Ethereum smart contract at Hack4Climate. And it allowed me to implement a cool live tracking for our summer holiday road trip. The possibilities are endless.

All my scripts authenticate using a token that is said to expire after 90 days. I set up my scripts so that I can enter my password to get a new token, and from then on the new token is used. Usually I enter the password on a maximally secured system, and then copy the file containing the access token to the other systems. That is because I saw in the API documentation that remotely starting the car requires the password explicitly. So if a hacker gained root access to my server or my phone, he could open the doors, but not drive away with my car.
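To make that workflow concrete, here is a rough sketch of how such a script can look. It is based on the community API documentation, not on anything official from Tesla; the endpoint names, the client id placeholders and the token file location are my assumptions and may well have changed since:

```python
#!/usr/bin/env python3
"""Rough sketch of the token handling and the pre-heat cron job described
above. Endpoints follow the community-documented owner API from memory and
are assumptions; the client id/secret placeholders must be filled in."""
import getpass
import json
from pathlib import Path

import requests

API = "https://owner-api.teslamotors.com"
TOKEN_FILE = Path.home() / ".tesla_token.json"  # the file copied to phone/server
CLIENT_ID = "..."      # publicly known owner-API client id (placeholder)
CLIENT_SECRET = "..."  # placeholder

def request_new_token(email: str) -> None:
    # Only run on the "maximally secured" machine; the password is never stored.
    payload = {
        "grant_type": "password",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "email": email,
        "password": getpass.getpass("Tesla account password: "),
    }
    resp = requests.post(f"{API}/oauth/token", json=payload, timeout=30)
    resp.raise_for_status()
    TOKEN_FILE.write_text(json.dumps(resp.json()))

def auth_header() -> dict:
    token = json.loads(TOKEN_FILE.read_text())
    return {"Authorization": f"Bearer {token['access_token']}"}

def preheat_first_vehicle() -> None:
    # What the cron job does before I leave for work.
    vehicles = requests.get(f"{API}/api/1/vehicles", headers=auth_header(), timeout=30)
    vehicles.raise_for_status()
    vid = vehicles.json()["response"][0]["id"]
    requests.post(
        f"{API}/api/1/vehicles/{vid}/command/auto_conditioning_start",
        headers=auth_header(), timeout=30,
    ).raise_for_status()

if __name__ == "__main__":
    preheat_first_vehicle()
```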

When I first discovered that the Tesla account is secured only with a password, I was bewildered. I mean, this account is essentially a virtual key to my car. Everything that secures something with a value above a few hundred bucks has used two-factor authentication for many years. Having been in the Bitcoin space for some time, cyber security is very important to me. I refuse to use software-based 2FA; instead I insist on hardware solutions. I have used a USB dongle with a secure element to manage my GPG keys for a long time. I use FIDO U2F wherever I can. Most of my crypto currency holdings are secured by multiple hardware wallets. I switched my bank because the former one used text messages as the second factor. And now I find out that the most expensive thing I have bought in my entire life is secured with only one factor. Wow! That was shocker No 1! So I picked a very long and hard to guess password. I didn’t store it anywhere. I am very cautious about which devices I even type it on. But still I was uneasy about it all along.

Last week some of my scripts started reporting errors. As expected, an access token had expired. But I failed to get a new one by entering the password. So I tried logging in on the Tesla website. What I got to see was a message that my account was blocked due to too many invalid login attempts. There was a button to reset the password. The result of that reset request was an eMail in my inbox with a link to a web form where I could enter a new password. Hey, but wait a second. That eMail was NOT encrypted! Even if the link is only valid for a few minutes, everybody who sees it could take over my Tesla account, and steal my car. Seriously? That was shocker No 2!!! If a hacker gained access to my eMail account, he could even delete the mail, and I would have no idea what was going on.

I have regarded unencrypted eMail as an insecure means of communication for many years. And I thought that was common sense. For increased security, I run my own mail server. But my ISP added all the dynamic IP addresses to a spam list, and wants me to pay for an expensive business account in order to have eMail work well. Hence I use an externally hosted eMail address most of the time, including for my Tesla account. So I wanted to quickly verify the security of that mail account, and while I was at it, change the password to a more secure one. But the first surprise came in the form of the customer login to the management system. It was HTTP only. There was no way to enter the password without running the risk of it being eavesdropped on. Seriously? That was shocker No 3!!!

Sure, it’s easy to blame my eMail provider, or me for selecting it. In reality it used to be hosted with another company that was later acquired. That just highlights the fact that it is outside of your control. Email is not secure and should not be used to transmit sensitive information unless it’s encrypted – period! I read about hacked eMail accounts and account takeovers every week. Lots of websites require some security questions in order to unlock an account. That’s better than nothing if there is not a lot at stake. But if an account controls anything of value, solid two-factor authentication is a must. Even if the mail account offers FIDO U2F, I wouldn’t trust it with my car. For example, Gmail offers U2F. But guess what happens when you log in with a browser that has no support for it. Yes, right: convenience gets priority over security.

Account recovery exploitation is a known problem. Let me quote a paragraph from an article by Yubico: 5 Surprisingly Easy Ways Your Online Account Credentials Can Be Stolen

Due to the large scale of users for many services and the general desire to keep support costs low everywhere, account recovery flows can be much weaker than the primary authentication channel. For example, it’s common for companies deploying strong two-factor authentication (2FA) solutions as their primary method to leave SMS as a backup. Alternatively, companies may simply allow help desk personnel to reset credentials or set temporary bypass codes with just a phone call and little to no identity verification requirements.
Services implementing 2FA need to strengthen both the primary and the recovery login flow so that users aren’t compromised by the weaker path.

Unfortunately, both the primary and the recovery login flow of the Tesla account are incredibly weak. As much as I love the cool and convenient features of remotely controlling my car, I disabled app access in the settings screen of the car. I would very much like to re-enable it. But only once I can trust its security again.

I have read many times how important security is for Tesla, and how fast they respond to fix vulnerabilities. But then I found numerous reports of people complaining about the very same problems from FOUR years ago: 1 2 3. Sure, security means different things to different people. I’m grateful to the engineers who make sure I don’t get killed in the car. But I also don’t want my car to get stolen or broken into so easily. When discussing this topic on a forum, one guy stated he doesn’t want to carry a secure hardware device the size of a key, and that he doesn’t care if his car is stolen. He has insurance. I have insurance too, but I still don’t want to go through that experience.

Now, if you read this far and have a Twitter account, may I ask you to visit https://www.dongleauth.info/#iot, and click the button next to Tesla?

The mother of all hackathons

I just returned from #hack4climate. Even though it was just my third hackathon, I can state with certainty that this one was unlike any other. None of the 100 hackers from 33 countries had experienced anything remotely comparable before.

The topic of the event was developing solutions for how blockchain technology can help fight climate change.

First let me explore how the event differed from other hackathons. The hacking session was 24 hours, but the whole event lasted four full days. There were pre-workshops around the world. 100 participants were selected and invited to Bonn. Travel expenses were covered. We stayed in balcony suites on a five-star hotel ship adjacent to the UN climate conference. The food was appropriate for a five-star ship, complete with wine at every dinner. The days before and after the hacking session were filled with interesting talks, a guided city tour, interesting discussions and lots of networking. There were so many interesting people and so much to talk about. On the last day they wanted to take a photo of us on the boat in front of the UN building. Drones were forbidden in the security zone, so the photographer rented a crane to get the perfect shot.

I knew nobody at the event in advance. But I knew that out of the sub-topics, I was most interested in “sustainable transportation”. At the team building session, I headed straight to the guy with the most interesting pitch that contained something about cars. Our team formed soon after, and I had a good feeling from the start. Two members were from Singapore and already knew each other. Two were from India, one living in San Francisco and the other in China. And one was also from Switzerland, but we didn’t know each other before.

When the hack session started at noon on Tuesday, we shaped our rough ideas into a project that we could realize in the short amount of time. Then everybody stated what he would like to do. It all seemed to fit together wonderfully. I wanted to implement the smart contract. I didn’t have much experience in that area, and was grateful that the others could help me and answer my questions. Rather than drawing large diagrams, we collaborated on the interfaces, and then worked towards these. We didn’t hit major roadblocks or problems; everything seemed to fall into place. Most of us agreed that we are not productive after 2 AM and that it is better to get some hours of sleep. In the morning we went out to shoot a video of our product in action. The guys from SBB (which was a sponsor of the event) were around us most of the time. They helped where they could, and were generally very interested and engaged. We had many great discussions with them.

Our project was about end-to-end transportation. In the mobile app, you select a destination, and it identifies legs using different means of transportation. We focused on car sharing, but other options include trains, bikes or buses. Our smart contract abstracts a car that can be rented over the Ethereum blockchain. The owner of the car registers it by creating an instance of the smart contract. A person who wants to rent it can do so by sending ether. The required amount is determined by the price per km the owner wants, times the number of km the renter wants to drive. If the renter doesn’t use up the credit, the rest is reimbursed at the end of the trip. But if he drives too far, the car’s performance is degraded by the smart contract. The car was represented by a Raspberry Pi running an Ethereum node and our backend running on Node.js. Initially, opening the car was indicated by an LED attached to the RPi. But to make it more realistic, the RPi then called the Tesla API to open a real car. At the end of the trip the RPi collected information about the car, such as odometer, battery level and firmware version, stored it on IPFS and registered the IPFS address with the smart contract to form a tamper-proof audit trail. Last but not least, one of our team members used data from moving cars and turned it into an appealing 3D animation that highlights the hot spots in a city.
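For illustration, here is a small Python model of the contract’s accounting logic as described above. It is not the Solidity we actually deployed, just a sketch of the deposit, refund and degradation rules:

```python
"""Toy model of the car rental contract's accounting rules (illustration only,
not the deployed Solidity). Amounts are plain integers standing in for wei."""
from dataclasses import dataclass

@dataclass
class CarRental:
    price_per_km: int          # set by the owner when registering the car
    booked_km: int = 0
    deposit: int = 0
    degraded: bool = False     # performance-degradation flag the car would read

    def rent(self, payment: int, planned_km: int) -> None:
        required = self.price_per_km * planned_km
        if payment < required:
            raise ValueError(f"need {required}, got {payment}")
        self.booked_km = planned_km
        self.deposit = payment

    def end_trip(self, driven_km: int) -> int:
        """Return the refund for unused kilometres; flag overruns."""
        if driven_km > self.booked_km:
            self.degraded = True   # drove further than paid for
            return 0
        refund = (self.booked_km - driven_km) * self.price_per_km
        self.deposit -= refund
        return refund

# Example: book 100 km at 10 units per km, drive only 80, get 200 back.
car = CarRental(price_per_km=10)
car.rent(payment=1000, planned_km=100)
assert car.end_trip(driven_km=80) == 200
```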

We were thrilled all along, even more so after all the positive reactions to our presentation. And hooray, we made it into the finals! That meant we could present our project at the COP, the fair for NGOs that is attached to the UN climate conference. The team that won the hackathon did so deservedly. Their project was about incentivizing land owners not to cut down their trees. They used blockchain and game theory for the monetary part. In addition, they trained a neural network to predict the areas most endangered by deforestation, which need special attention.

A first official video appeared here, and I’m sure others will follow on the official website.

Update Dec 16 2017

The official after movie of #hack4climate was released:
https://youtu.be/UOANny6i0QM

Quality

Quality is important

During my apprenticeship at Victorinox we were indoctrinated with “quality above all else”. The teacher told us that in our private lives, too, we would be better off buying quality products, even if they are considerably more expensive than throwaway products. Part of it has stuck with me ever since. But I have relaxed the rule somewhat. I usually assess how much I am going to use a product before I buy it. If it is a tool I expect to need only once a year, I buy a cheap one. The good ones are for people who need them in their everyday work.
But shoes are something entirely different. When I switched from the machine industry to software development early in my professional career, life became better. Shortly after the salary hike, I visited a shoe store. When the sales lady talked me into buying a pair of fine English Goodyear-welted shoes, I reminded myself that quality pays off in the long run. I have had these very same shoes for 19 years. I used them a lot and they are still very comfortable. Over the years, they needed some minor repairs. From above they still look perfectly fine, but by now they are worn down enough that a bigger repair would be required. The cost would be about half of what a new pair costs. I hesitated a while. Nineteen years is already quite remarkable, but how cool would it be to have 40-year-old shoes that are still in regular use? On the other hand, by buying almost the same model from the same manufacturer at the same store, I am rewarding them for their exceptional work.

Economics are even more important

Producing quality products is unfortunately no guarantee for success in our society. A schoolmate once told me that his father worked as an electrician in New York. He was fired because the stuff he did never broke.
Quality control is well understood in the mechanical manufacturing industry. We worked with precisions of 0.01 mm on a daily basis. It is also understood on a broader scale: companies rigorously test their products before going into mass production.
But it is an entirely different beast in software. There are many facets to it, ranging from not crashing, delivering expected results in all edge cases, and being robust against malicious attacks, to execution time, energy efficiency, and maintainability of the source code. As an engineer, it saddens me to no end to witness that companies with the lowest quality software do so well economically. What could be the reasons for that? What is so different about software compared to hardware? The most obvious difference is that software can be patched so easily. Found a critical bug? Just push an update that fixes it. I used to say that software is newer and not as well understood as other disciplines. But after 17 years in the field, I no longer think of this as a major contributing factor. After all, the first, mechanical industrial revolution also happened not a lot more than 100 years ago. I think the customers or end users are responsible to a large degree. As long as companies can sell sloppy products with aggressive marketing, why should they invest in quality? I hope that the digital natives, who are growing up with ever more technology, have a better understanding. But it might also be that they accept the current state as the reality we live in.

Resetting the Logitech K810 bluetooth keyboard

The Logitech K810 has been my favorite keyboard for many years. I have one at home and one in the office. It allows easy switching between three different devices. It has the same size and layout as most notebooks, is stylish, and is a blast to type on.
But one day about a year ago something strange happened. When I wanted to type a ‘\’ I got nothing. When I wanted to type a ‘<’, I would get a ‘§’. When I wanted to type a ‘>’, I would get a ‘°’. And vice versa. So, effectively two keys were swapped. This happened only on the Windows workstation in the office. When I switched the keyboard to the Linux notebook or to the phone, all keys were correct. It also never happened at home.
At first I thought this was a joke by my co-workers. So I tried everything on the Windows machine: un-pairing and re-pairing, scanning the registry for uncommon key mappings. Nothing helped until I found a page that described how to perform a factory reset on the keyboard. The problem was solved, but only for a year. Last week, when I came to work after the weekend, the very same keys were swapped again. Finding the page with the reset instructions was more difficult than I remembered. It was not freely available, but only accessible after logging in on the Logitech support page. That is why I want to preserve it here, in case it happens again to me or anybody else:

  • un-pair and remove the keyboard from the Bluetooth settings on your PC
  • reset your keyboard: with the keyboard on and unpaired from any device, press the following key sequence:
  • “Escape”, “o”, “Escape”, “o”, “Escape”, “b”
  • if the reset is accepted, the lights on top of your K810 will blink for a second
  • reconnect your K810 to your PC and test it on other devices if possible

Meeting C++ 2016

This was my first time at Meeting C++ in Berlin. I came here with my boss Andi. To get more out of it, we split up during the talks and afterwards shared what we had learned.
I will complete this post later, and add links to the presentations and videos as they become available.

I attended the following talks:

Opening Keynote by Bjarne Stroustrup

He talked about the evolution and future direction of C++, explaining the guiding principles and philosophy of the language. He also explained how the standards committee works, and that even he himself is sometimes outvoted. He could say that, and even name the people with differing opinions, without any bitterness. Very professional and focused!
The main point that stuck out was: “zero overhead abstractions”

C++ Core Guidelines: Migrating your Code Base by Peter Sommerlad

Unfortunately Peter Sommerlad was sick and couldn’t come. So Bjarne Stroustrup agreed, ten minutes before his own keynote, to jump in and give the talk without any preparation. He claimed never to have given a talk about this topic before. He had some slides with the name of his employer on them, and he jumped around in those slides. Other than this barely noticeable detail, you couldn’t tell that the talk was not prepared. He talked about how to use the [GSL](https://github.com/Microsoft/GSL) in new code. But the main focus was on how to gradually improve old legacy code by introducing the types the GSL provides. In the future there should even be tools to perform the task automatically.

Reduce: From functional programming to C++17 fold expressions by Nikos Athanasiou

He started out by showing how a fold can be performed at runtime with std::accumulate(). Then he gave some theory and showed the syntax of other languages such as Haskell, Python and Scala. C++17 fold expressions don’t just add syntactic sugar, but open up a load of new possibilities. With constexpr functions, the folds can be evaluated at compile time. As a consequence they can not only operate on values, but even on types. The speaker shared with us how he broke his personal error message record: during his experiments he got an error with a quarter of a million lines!

Implementing a web game in C++14 by Kris Jusiak

In this talk we witnessed how a relatively simple game can be implemented with the help of libraries for ranges, dependency injection and state machines. The code was all in pure C++14 and was then compiled to asm.js and/or WebAssembly using emscripten. The result was a static website that runs the game very efficiently in the browser. In the talk we were walked through the different parts of the implementation. In contrast to a naive imperative approach, after the initial learning curve this can be maintained and extended a lot more easily.

Learn Robotics with C++ in 1 hour by Jackie Kay

We didn’t actually learn how to program robots. First, she walked us through some history of robotics. By highlighting some of the major challenges, she explained different solutions, and how they evolved over time. Because robots run in a real-time environment and have lots of data to process, performance is crucial. In the past the problems were solved more analytically, while nowadays the focus is on deep learning with neural networks. She put a strong emphasis on libraries that are being used in robotics. To my surprise, I knew and had used most of them, even the ones she introduced as lesser known, such as dlib.

Nerd Party

In the evening there was free beer in the big underground hall. There was no music, so that people could talk. Not really how you would usually imagine a party. We had a look at the different sponsor booths, and watched some product demos. After a while we went up to the sky lounge on the 14th floor, with a marvelous view over the city.

SYCL building blocks for C++ libraries by Gordon Brown

Even though I experimented with heterogeneous parallel computing a few years ago, I was not really aware of what is in the works with SYCL. My earlier experiments were with OpenCL and CUDA. They were cool, but left a lot to be desired. I never looked into OpenAMP despite the improved syntax. In contrast, SYCL seems to do it right on all fronts. I hope this brings GPGPU within reach, so that I can use it in my day-to-day work sometimes. In the talk, he showed the general architecture and how the pipelines work. Rather than defining execution barriers yourself and scheduling the work, you define work groups and their dependencies. SYCL then figures out how to best arrange and schedule the different tasks onto the different cores. Finally he talked about higher-level libraries into which SYCL is being integrated: std parallel algorithms, TensorFlow and computer vision.

Clang Static Analysis by Gabor Horvath

During this talk we learned how static analyzers find the potential problems in the code to warn the developers about, starting with simple semantic searches and moving on to path tracing with and without branch merging. The bottom line was that there is no one tool to beat them all, but that the more tools you use, the better. Because they all work differently, each one can find different problems.

Computer Architecture, C++, and High Performance by Matt P. Dziubinski

This talk made me realize how long ago it was that I learned about hardware architectures in school. Back in the day we thought mainly about the simple theoretical model of how an ALU works. The talk made clear how you can boost performance by distributing the work to the different parallel ALUs that exist within every CPU core. In his example he boosted the performance by a factor of two simply by manually, partially unrolling a summation loop. Another important point to take home is the performance gap between the CPU and memory access. Even for caches, it is widening with every new hardware generation. Traditional algorithm analysis considers floating point operations the expensive part, but meanwhile you can execute hundreds of FLOPs in the time it takes to resolve a single cache miss. On one hand he showed some techniques to better utilize the available hardware. On the other hand he demonstrated tools to measure different aspects, such as usage of the parallel components within the core, or cache misses. With such diverse hardware it is really difficult to predict performance, so measuring is key.

Lightning talks

The short talks were of varying quality, but mostly funny. As with a good portion of the talks, there were technical difficulties with connecting the notebooks to the projectors.

Closing keynote by Louis Dionne

C++ metaprogramming: evolution and future directions
Neither of us knew what to expect from this talk. But it proved to be one of the best of the conference. He started out by showing some template metaprogramming with boost::mpl, transitioned to boost::fusion, and landed at his hana library. The syntax for C++ TMP is generally considered insane. But with his hana library, types are treated like values. This makes the compile-time code really readable and only distinguishable from runtime code at a second glance. True to the main C++ paradigm of zero-overhead abstraction, he showcased an implementation of an event dispatcher that looks like runtime code with a map, but actually resolves at compile time to direct function calls. Really cool stuff: leveraging knowledge that is available at compile time and using it at compile time. He even claimed that, in contrast to some other TMP techniques, compile times should not suffer so much with hana.

Conclusions

C++ is fancy again!
I have been programming professionally for about 17 years. In all this time C++ has been my primary language. Not only that, it has also always been my preferred language. But there were times when it seemed to be stagnating. Other languages had fancy new features and claimed to catch up with C++ performance. But experience showed that none ever managed to run as fast as C++ or produce such a small footprint. The fancy features proved either not as useful as they first appeared, or they are being added to C++. In retrospect it seems to have been the right choice to resist the urge to add a garbage collector. It’s better to produce no garbage in the first place. RAII turns out to be the better idiom, as it can be applied to all sorts of resources, not only memory. The pace with which the language improves is only accelerating.
Yes, there is old ugly code that is using dangerous features. That is how the language evolved, and we can’t get rid of it. But with tools like the GSL and static analyzers we can still improve the security of legacy code bases.
Exciting times!

Electrum 2.7 with better multisig hardware wallet support and Ledger Nano S

Electrum has been my favorite Bitcoin wallet software for a very long time. The reason I had a look at it initially was that there was a Debian package. Only when Trezor hardware wallet support was added but not yet released did I download the sources. It is written in Python. I work with Python regularly, but it is not my primary language. Still, for frequently updating and testing experimental software, it is pretty cool. That’s how I started to report bugs in the unreleased development branch, and sometimes even to commit the patches myself.
But the reason I’m writing this post is that the new 2.7 release contains two features that are important to me.

Ledger Nano S

One is that the Ledger devices now also support multisig with Electrum. I took this as the trigger to order a Nano S. It works totally differently from the HW1 in that it has a display, so you can set it up without an air-gapped computer. With only the two buttons, you can navigate through the whole setup process. As a bonus, it is also, to my knowledge, the first hardware device that can store Ethereum tokens, not counting experiments such as quorum. So I finally moved my presale ETH.

Multisig with hardware wallets

I wrote about multisig with hardware wallets before. But Thomas took it a huge step further. Now it’s not only super secure, but also super user friendly. The hardware wallets are now directly connected to the multisig wallet. No more saving unsigned transactions to files and loading them in the other wallet. You can still do that if you have the signing devices distributed geographically. Given a solid backup and redundancy strategy, you can now also have a 3-of-3 multisig hardware wallet. So your bitcoins would still be secure even if your computer was hacked and two of the three major Bitcoin hardware wallets had a problem, which is very, very unlikely.

The only thing still missing is the Debian package for the 2.7 version.