Working from home during the Corona shutdown

Like many people these days I currently work from home. Due to the corona pandemic, everything that is not essential is closed in Switzerland. Since I work in software development, it is possible to work at home. Most if not all people in my team do so. What is great about the new situation is that I can now eat lunch with my family. We also try to go for a short walk to the lake or the forest after lunch. But there are a couple of factors that make working at home challenging:

  • The factor that I anticipated to be the worst is distraction. Normally, when I try to work on something at home that needs concentration, it takes on average five minutes before somebody comes and wants something from me. And then again after another five minutes, and again and again. This is why I can usually only work at home when everybody else is asleep. I was all the more astonished that they now let me do my work, now that it’s for my employer and not one of my hobby projects. My noise cancelling headphones make a very important contribution; without them this would not be possible.
  • The office at home is the size of a broom closet. It is 1.4 by 2.2 meters with no window or direct daylight. Thus it is important to go out to the patio from time to time to get some fresh air and some rays of sunlight.
  • I have a very comfortable chair in my home office, and a nice solid table. But I grew used to the height-adjustable table I have at work, which I can raise to a standing position whenever I want. Even if I wanted to buy such a table, it wouldn’t fit in my small office at home. So I have to take care to move my body enough that I don’t develop back pain, especially now that I can’t go swimming in the communal pool. I just hope the lock-down won’t get so bad that I can’t go running any more.
  • My screens are roughly 20 years old, and the low resolution makes it a bit challenging to work effectively. I had wanted to order a new screen for years, but always postponed the purchase. Once I knew that I would be working from home, I figured it was time to go ahead. Even though I ordered it very early, delivery took more than a week, as the online shops and delivery companies are totally overwhelmed at the moment. The new screen is a blast. It is even bigger than I imagined.
  • And then there is the big elephant in the room. Let me begin with a quote I recently read on the website of the Session messenger: “Friends don’t let friends use compromised messengers”. This statement really resonated with me. On the opposite end of the spectrum, there is a communication software that is closed source, has a proprietary protocol, centralized infrastructure, no end-to-end encryption, and constant access to the internet, the microphone, the webcam, the keyboard and the screen. On top of that, it also has the capability to take over control of the computer. Back Orifice pales in comparison with these capabilities. That it is tedious to use and only fully works about half of the time is the lesser of its evils. It was developed by a company with a long track record of deception and abuse. This software is called Microsoft Teams, and it was recently declared the primary means of communication in our company. In the past, I flat out refused to use it. But in the current state of emergency, I felt that I could not complicate things. Apparently, there was not much opposition against inviting the panopticon into our homes. When a co-worker told me that it could be used from within the browser, I was slightly relieved. As long as it is contained in the browser sandbox, the amount of harm it can do is somewhat limited. With the browser you have some control over what access you grant it. Unfortunately, only the chat feature worked in the browser, but no audio or video calls. So my team lead asked me to install the desktop client. Installing malware directly on the machine was a no-go for me. So I installed it quarantined inside an empty virtual machine. This now works for audio conferences. But I feel uneasy, uncomfortable, even stressed, whenever it is running. My stress level when Teams is running is comparable to sitting in a dentist’s chair. That is not healthy over time.
Thus I often block the VM’s access to the microphone and the network, but that brings only slight improvement. So, when somebody writes on Slack, I enable Teams. But I can’t have it running with full access all the time, I just can’t. I am reachable through Slack, email, phone, text messages, Tox, Session, even Telegram. They all have an open source client that I can trust. I just need a quick note to start Teams on request. Isn’t it ironic that amid all the concern about physical hygiene, nobody seems to think about digital hygiene?

I often think about why I care more about digital security and privacy than the average person. So many people carelessly ignore the security of their devices; it is completely reckless. How people voluntarily put something like an Amazon Alexa in their home is beyond me. I don’t think I have more to hide than other people. For a long time I have cared about FLOSS. It is not only that I dislike artificial barriers, vendor lock-ins and planned obsolescence. It is also the trust gained through being able to inspect the software. But the biggest impact came when I started to be involved with Bitcoin. This is when I really learned about the value of information, and how to protect it. There were times when I had more wealth sitting on my computer than in my bank account. Who wouldn’t think about how to protect it from thieves? With Bitcoin, you are responsible for the private keys. If you fail to protect them, your wealth is gone. There is no bank you can beg to reverse the transaction. But on the plus side, if you protect your data well, nobody can steal it from you. If your bank goes bust, your Bitcoins are still safe. Many people don’t want that responsibility, and prefer somebody else to handle it for them. I can see the same behavior with cloud computing. Bitcoin people are very passionate about OpSec. I am talking about the original cypherpunk people here, not the “get rich quick” crowd that showed up later on. There is a mantra in the Bitcoin world: “don’t trust, verify”. Everything that can’t be verified, such as closed source software, has to be considered compromised.

Ok, enough of this tangent. This post is about working from home. On the first day, my wife calculated that I should now be able to finish at least an hour earlier, because there is no more commute. Sounds reasonable, right? My usual day now looks like this: I get up at the same time as usual and take a shower. I dress the same way and groom the beard the same way as I would if I went outside. Instead of having breakfast alone and driving to work, I start working. When everybody is ready, we have breakfast together. After that I work again until lunch is ready. When I have to go to the toilet, I also grab a fresh tea and go outside for a minute to get some fresh air and some sun. The lunch break is longer than at the office. The kids eat very slowly, and we have a rule at home that we all wait at the table for everybody to finish. Then we usually go for a walk to the forest and/or the lake. We are very fortunate that both are only about two hundred meters away. Because the lunch break is longer, I often work as long into the afternoon as I usually would at the office. Sometimes I even work until the time I would otherwise arrive at home.

We also currently spend the weekends mostly at home. So I took the chance to tidy up and clean my small office at home.

Interesting reading about privacy in the current state of emergency:

Resetting the Logitech K810 bluetooth keyboard

The Logitech K810 has been my favorite keyboard for many years. I have one at home and one in the office. It lets me easily switch between three different devices. It has the same size and layout as most notebooks, is stylish, and is a blast to type on.
But one day about a year ago something strange happened. When I wanted to type a ‘\’ I got nothing. When I wanted to type a ‘<‘, I would get a ‘§’. When I wanted to type a ‘>’, I would get a ‘°’. And vice versa. So, effectively two keys were swapped. This happened only on the Windows workstation in the office. When I switched the keyboard to the Linux notebook or to the phone, all keys were correct. It also never happened at home.
At first I thought this was a prank by my co-workers. So I tried everything on the Windows machine: un-pairing and re-pairing, scanning the registry for uncommon key mappings. Nothing helped until I found a page that described how to perform a factory reset on the keyboard. The problem was solved, but only for a year. Last week, when I came to work after the weekend, the very same keys were swapped again. Finding the page with the reset instructions was more difficult than I remembered. It was not freely available, but only accessible after logging in to the Logitech support page. That is why I want to preserve it here, in case it happens again to me or anybody else:

  • un-pair and remove the keyboard from the bluetooth settings on your pc
  • reset your keyboard: with keyboard on and unpaired from any device, press the following key sequence:
  • “Escape”, “o”, “Escape”, “o”, “Escape”, “b”
  • if the reset is accepted the lights on top of your K810 will blink for a second
  • reconnect your K810 to your pc and test it on other devices if possible.

Running hostile software in a container

Remember Skype, the once popular phone software? I used it a lot when we were traveling in South America, and international calls were insanely expensive. But I stopped using it when it was acquired by Microsoft, and they switched from a P2P model to centralized servers. From what I could observe, it gradually worsened from there, and I really thought I wouldn’t have to use it ever again. That was until somebody decided that we had to use Skype for Business instead of XMPP at work. There is a plethora of better alternatives. The one I use the most these days is Tox.

I use the Windows workstation only for things that I can’t do on Linux. There is not much that falls into this category, besides VisualStudio compiling projects that involve MFC. There is Skype for Linux, but there is no official Skype for Business for Linux. So for a moment it looked like the Windows machine would get a second task. But running an obfuscated malicious binary blob from Microsoft with known backdoors, that is online all the time, on an operating system that cannot be secured, makes me uneasy. So I looked for a way to run it securely on Linux. The first thing I found was an open source implementation of the reverse engineered proprietary protocol, as a plugin for Pidgin. That sounded good, but unfortunately it didn’t work. The second option was a closed source clone from tel.red. They provide their own apt repository with regular updates. That’s quite good actually, if you don’t care about closed source software, or the security of your device and data in general.

I learned about docker a while back, but had only used it marginally so far. This was the first real use I had for it, so I started learning more about it. Copying and adapting a Dockerfile is a lot easier than the articles I had read made me believe. I found a couple of sites about packaging Skype into a docker container, but none for Skype for Business. So I took one of the former and adapted it. To use my container, just follow these easy steps:

git clone https://github.com/ulrichard/docker-skype-business
cd docker-skype-business
sudo docker build -t skype .
sudo docker run -d -p 55555:22 --name skype_container skype
ssh-copy-id -p 55555 docker@localhost
ssh -X -p 55555 docker@localhost sky

The password for ssh-copy-id is “docker”.

Then log into sky with your credentials. You can do this every time, or you can store a configured copy of the container as follows:

docker commit skype_container skype_business

The next time, you just run it with:

sudo docker run -d -p 55555:22 skype_business
ssh -X -p 55555 docker@localhost sky

I left some pulseaudio stuff from the original container at least in the README file. I don’t intend to use it for anything but receiving chat messages. But if you want to, feel free to experiment and report back.

What could go wrong when ordering pizza?

For some months now, it has been possible to order pizza for Bitcoin in our area. I had wanted to give it a try since it was announced. But only last Thursday did I propose to my coworkers to order pizza, and that I would pay with Bitcoin. It was meant as a demonstration of how cool the virtual currency is, and that it is actually useful in the real world. I was going to take pictures and blog about it. After all, a pizza deal was the first real use and the most famous Bitcoin transaction in history.

So I placed the order with lieferservice.ch for pizzas from Angolo, where we used to go for lunch. The website was really cool; we could order extra ingredients on top of the regular pizza. Payment was a breeze, as always with Bitcoin. It was 11:25 when I placed the order, and I picked 12:30 for the delivery. The email confirmation from lieferservice.ch followed immediately. But as we all grew more and more hungry, I tried to call Angolo at 12:45 to ask where our food was. Nobody answered the phone. I tried again, and again, and again. Nothing, not even an answering machine. After 13:00 we decided we would drive to Angolo with the confirmation email, and eat our pizza in the restaurant. When we arrived, it was closed for holidays.

This is clearly not how this is supposed to work. The guy from lieferservice apologized, and told me their contractors are meant to tell them when they change their opening hours. He couldn’t refund me in Bitcoin, and asked for my IBAN instead. One of my colleagues was so pissed off, he said he wouldn’t go to Angolo ever again.

Presentations with code that actually works

I don’t do presentations that often these days. And if I do, more often than not they contain some form of source code. With most things you write, you refine them over and over. This is especially true with stuff that you present. Applied to code snippets, that can mean you test them initially, but once they are in the presentation, it is a burden to copy them back and forth to verify every change, and then start over again with the formatting. So you often end up changing the code snippets in the presentation without verifying that the code is still valid. Sometimes you find these errors during proofreading, but even famous presenters have caught compile errors during their presentations. That’s how it works when you use traditional PowerPoint-style products. As I expressed earlier, the Office suite and its opaque file formats don’t belong among my favourite tools.

Thus, after I recently learned LaTeX, I wondered if presentations could be done with it. Sure enough, Texmaker offers a good set of templates for just that.

Next I wanted to see if I could link in code from external files, and sure enough, there is the listings package for LaTeX. That enables me to keep the code in files that I can actually compile.
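For illustration, a minimal beamer slide that pulls its code from an external file could look like this (the file name main.cpp is just a placeholder, not taken from my actual project):

```latex
% The snippet comes straight from a compilable source file,
% so the slide always shows code that was actually tested.
\documentclass{beamer}
\usepackage{listings}
\lstset{language=C++, basicstyle=\ttfamily\small}

\begin{document}
\begin{frame}[fragile]{Initializer lists}
  % main.cpp is compiled separately as part of the build
  \lstinputlisting{main.cpp}
\end{frame}
\end{document}
```

Whenever the external file changes, the next pdf build picks up the new code automatically, with no copy and paste involved.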

But wouldn’t it be cool if I could compile the code snippets for verification and generate a pdf file from the tex source all in one go? Sure enough, there is the UseLATEX package for CMake.

Now wouldn’t it be even cooler if I could edit and generate everything from within the same console window, without having to exit the editor, start the editor from a specific directory, or type complicated commands? Sure enough, I found out how to write project specific .vimrc files. With everything prepared, I just have to type :make in vim to trigger the process and get a new pdf file with all code snippets verified.
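As a sketch, such a project specific .vimrc can be as simple as this (the build directory name is an assumption; adapt it to your layout):

```vim
" Project local .vimrc, picked up when 'set exrc secure' is in the global vimrc.
" Point :make at the cmake-generated build directory (name assumed to be 'build').
set makeprg=make\ -C\ build
```

With that in place, :make from anywhere in the project rebuilds the code snippets and the pdf in one step.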

A small project to demonstrate the technique is at: https://github.com/ulrichard/experiments/tree/master/initializerlists

And you can find the resulting pdf file at: https://github.com/ulrichard/experiments/releases/download/initlist_0.1/initializerlists.pdf

vim meets VisualStudio

There are two camps of neckbeards: those who use emacs, and those who use vi or vim. I can’t tell which is better, and most of the arguments seem to be rhetorical. Until about three years ago, I perceived both as insufferable. I was however curious to learn one of them. The question was which one to pick. During my uncertainty, vim was praised more on Hacker News. So I gave it a try. At first it was awkward to work with, but after a while I managed to get along. People often tell you how blazingly fast you edit with vim. But for a long time, I was not nearly as efficient as with other editors. At the moment I’m reading the book “Practical Vim”, which has a ton of good tips. It seems the flood of shortcuts is never ending. In a way, memorizing more commands and shortcuts is like having more keys. That reminds me of an article I once read. It compared working with a GUI versus on the console to listening to the radio versus playing the piano. I can’t find the article right now, but it had similar reasoning as this one.

So I’m constantly improving my vim skills. In the meantime I’m about on par with how efficient I am with the style of editors that I have been using for two decades. To improve further, I figured I would need to practice more. So the natural progression was to use it on the job. For work we use VisualStudio, and unless I could easily compile and debug out of vim, switching back and forth would be counterproductive. So I was thrilled to find out that there is a plugin that brings vim style editing to VisualStudio. I only just started using it, but it certainly looks promising.

The crapware platform

I have complained many times that there is no standard package manager on Windows, and that installing and especially upgrading software on that platform is an unholy mess. On my office computer there are probably close to ten different mechanisms present to keep different software packages up to date. Some lurk in the system tray, and most of them constantly waste resources. The update mechanism of our software is a little better than most in that respect. It doesn’t waste resources while it’s not in use, but it’s still a separate proprietary solution. And the worst part is that most of the software on typical Windows systems doesn’t get updated at all.

I have looked many times for a solution as simple, elegant and powerful as apt-get. The best I found so far was Npackd. It’s still a decade behind the Debian system, but better than anything else I found. The repository has grown significantly in the years I have used it. But even though Npackd implements dependency management, the packages rarely make use of it. It’s just not the way Windows packages are made. Rather than managing their dependencies, they keep inventing new versions of DLL hell.

I don’t know why upgrades in Npackd frequently fail. It’s usually the uninstall of the old version that fails, and thus the update stops. What I usually did in the past was install the new version in parallel. I think there is not much Npackd can do about Windows Installer packages failing to uninstall. Having crafted Windows Installer packages myself, I know how brittle and error prone this technology can be.

Today I upgraded some packages that Npackd flagged as upgradeable. You select the ones you want to bring up to date, and click update. It’s not like “sudo apt-get upgrade” and done, but it still makes Windows a lot more bearable. And for a long time the quality of the packages was good, at least by Windows standards. It started out with mostly open source projects and a few big name packages. The crapware that is so stereotypical for the Microsoft platform had to stay out.

That impression changed today. One of the packages that I upgraded was IZArc, a compression package with nice Windows Explorer integration. Already during the upgrade process I had a strange feeling when I saw the ads in the installer window. And when it was done, I was certain something fishy had happened. Some windows popped up wanting to install browser toolbars, change the default search engine and scan the computer for possible improvements. Holy shit, I thought, is this some scareware? I would expect this from some random shareware downloaded from a shady page, but not from Npackd.

And that’s my main point. When you install software on your computer, you trust the issuer not to hijack your system. And if you install software through a software repository, you trust the repository even more. On Windows, you’re pretty much dependent on the many individuals and companies involved in the creation of all the packages you install. There is a Microsoft certification process, but I don’t know what it checks and entails. There is also the possibility to sign your packages with a key signed by Microsoft. But that merely protects from tampering between the issuer and you. With open source software however, you can examine the source code yourself, and rely on the fact that other people have checked it as well. Then most distributions have build hosts that compile and sign the binary packages. To be included in the repository, a maintainer has to take responsibility for the package and upload a signed source package. The source package can be verified by everyone. So, the only thing you have to trust is the build host. But even that you could verify by building the package yourself and comparing the result. So the whole thing is fully transparent. Hence, if one individual decided he wanted to earn some bucks from advertising and bundling crapware, he wouldn’t get very far. As a nice add-on, apt (or synaptic for that matter) can tell you exactly what files get installed to what location for every package in the system.

Just as a side note, crapware is the unwanted software that is pre-installed when you buy a new computer, or that is sneaked onto your computer when you install Oracle’s Java. When I bought my netbook, I booted Windows exactly once to see how much crapware they bundled, before wiping the disk and installing Ubuntu. Needless to say, no such problems exist on the Linux side.

So I checked “Programme und Funktionen” (Programs and Features) in the system settings. That’s one of the configuration items that changes its name and appearance with every version of Windows. I found about seven unwanted packages with today’s installation date. I removed them immediately, and I can only hope that they didn’t install additional malware.

Adding a display to rfid time tracking

More than a year ago, I blogged here about using RFID to track presence times in the BORM ERP system. I have used the system a lot since then. But the BlinkM was really limited as the only immediate feedback channel. To use it with multiple users, a display was needed. The usual Arduino compatible displays seemed a bit overpriced, and the Nokia phone that I disassembled didn’t have the same display as the one I used for the spectrum analyzer. But these displays are available for a bargain from China. The only problem was that the Bifferboard didn’t have enough GPIO pins available to drive the “SPI plus extras” interface. But i2c was already configured for the BlinkM.

So, the most obvious solution was to use an AtMega8 as an intermediary. I defined a simple protocol and implemented it over i2c and uart on the AVR. I also wrote a small python class to interface with it from the client side. As I buffer only one complete command, I had to add some delays in the python script to make sure the AVR can complete a command before the next one arrives. Apart from that, it all worked well when testing on an Alix or a RaspberryPi. But i2c communication refused to work entirely when testing with the Bifferboard. Not even i2cdetect could locate the device. That was bad, since I wanted to use it with the Bifferboard; the other two were only for testing during development. I checked with the oscilloscope, and found out that the i2c clock on the Bifferboard runs at only 33kHz while the other two run at the standard 100kHz. So I tried to adjust the i2c clock settings on the AVR, as well as different options with the external oscillators and clock settings, but I was still out of luck. Then I replaced the AtMega8 with an AtMega168 and it immediately worked. Next, I tried another AtMega8 and this one also worked with the Bifferboard. I switched back and forth and re-flashed them with the exact same settings. Still, one of them worked with all tested linux devices, while the other one refused to work with the Bifferboard. So I concluded that one of these cheap AVRs from China must be flaky, and I just used the other one. Seems like that’s what you get for one sixth of the price you pay for these chips in Switzerland.
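To give an idea, the client-side protocol wrapper could be sketched roughly like this (the command bytes, i2c address and delay are illustrative guesses, not the values from my actual code on github):

```python
import time

class DisplayLink:
    """Sketch of the client-side i2c protocol wrapper.
    Command bytes, address and timing are illustrative guesses."""

    CMD_TEXT = 0x01  # write a line of text to the display
    CMD_RGB = 0x02   # set the RGB LED colour

    def __init__(self, bus, address=0x2a, delay=0.05):
        # bus: anything with write_i2c_block_data(addr, cmd, data),
        # e.g. smbus.SMBus(0) on the Bifferboard
        self.bus = bus
        self.address = address
        self.delay = delay

    def _send(self, cmd, payload):
        self.bus.write_i2c_block_data(self.address, cmd, list(payload))
        # the AVR buffers only one command at a time, so give it
        # time to finish before the next command arrives
        time.sleep(self.delay)

    def show_text(self, line, text):
        self._send(self.CMD_TEXT, [line] + [ord(c) for c in text[:16]])

    def set_rgb(self, r, g, b):
        self._send(self.CMD_RGB, [r, g, b])
```

The delay after each write is the crude but effective answer to the single-command buffer on the AVR.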

Apart from the display, I also added an RGB LED that behaves like the BlinkM did before, and on top of that a small piezo buzzer. But since I could hardly hear its sound when driven with 3.3V, I didn’t bother re-soldering it when it fell off.

Now, my co-workers also started logging their times with RFID.

The code is still on github.

cmake with MSVC

I have used cmake for a couple of years in my hobby projects, and I love it. It is a cross platform meta build system. As with Qt, people tend to first think that “cross platform” is the main feature. But as with Qt, it’s actually one great feature amongst many others. It brings so many advantages that I can’t even list them all here. Since last week, we also use it for PointLine at work. While the process is straightforward on Linux, there are some things worth mentioning when using it on Windows.

Finding External libraries

CMake has lots of finder scripts for commonly used libraries, and they work great in most cases. But we want to have multiple versions of the same libraries side by side, and depending on the version of PointLine we develop for, use the appropriate versions of the libraries. To be precise, not just the libraries, but also the headers and debug symbols need to be present in different versions. And we want to be able to debug different versions of our product, using different versions of the libraries, simultaneously on the same machine.

Optimizing compile time of a large C++ project

The codebase of our PointLine CAD is certainly quite large. sloccount calculated roughly 770’000 lines of C++ code. I know this is not a very good metric to describe a project, but it gives an idea. Over time, the compile time steadily increased. Of course we also added a lot of new stuff to the product. We also used advanced techniques to reduce the risk of bugs, which have to be paid for with compile time. But still, the increase was disproportionate. We mitigated it by using IncrediBuild. Just like distcc, it distributes the compilation load across machines on the LAN. If I’m lucky, I get about 20 cores compiling for me.

About once a year, one of us does some compile time optimization and tunes the precompiled headers. I did so about three years ago, and this week it was my turn again. Reading what I could find about precompiled headers on the internet and applying it, I got only a small speedup of roughly 10%. So I cleaned up the physical structure of the codebase. Here are some of the things I did: