Resetting the Logitech K810 Bluetooth keyboard

The Logitech K810 has been my favorite keyboard for many years. I have one at home and one in the office. It lets me switch easily between three different devices. It has the same size and layout as most notebook keyboards, is stylish, and is a blast to type on.
But one day about a year ago something strange happened. When I wanted to type a ‘\’ I got nothing. When I wanted to type a ‘<’, I got a ‘§’. When I wanted to type a ‘>’, I got a ‘°’. And vice versa. So, effectively, two keys were swapped. This happened only on the Windows workstation in the office. When I switched the keyboard to the Linux notebook or to the phone, all keys were correct. It also never happened at home.
At first I thought this was a joke by my co-workers. So I tried everything on the Windows machine: un-pairing and re-pairing, scanning the registry for unusual key mappings. Nothing helped until I found a page that described how to perform a factory reset on the keyboard. That solved the problem, but only for a year. Last week, when I came to work after the weekend, the very same keys were swapped again. Finding the page with the reset instructions was more difficult than I remembered: it is not freely available, but only accessible after logging in to the Logitech support page. That is why I want to preserve it here, in case it happens again to me or anybody else:

  • Un-pair and remove the keyboard from the Bluetooth settings on your PC.
  • Reset your keyboard: with the keyboard on and unpaired from any device, press the following key sequence:
  • “Escape”, “o”, “Escape”, “o”, “Escape”, “b”
  • If the reset is accepted, the lights on top of your K810 will blink for a second.
  • Reconnect your K810 to your PC and test it on other devices if possible.

Running hostile software in a container

Remember Skype, the once popular phone software? I used it a lot when we were traveling in South America and international calls were insanely expensive. But I stopped using it when it was acquired by Microsoft and they switched from a P2P model to centralized servers. From what I could observe, it gradually worsened from there, and I really thought I would never have to use it again. That was until somebody decided that we had to use Skype for Business instead of XMPP at work. There is a plethora of better alternatives; the one I use the most these days is Tox.

I use the Windows workstation only for things that I can’t do on Linux. Not much falls into this category, besides compiling VisualStudio projects that involve MFC. There is Skype for Linux, but there is no official Skype for Business for Linux. So for a moment it looked like the Windows machine would get a second task. But running an obfuscated, malicious binary blob from Microsoft with known backdoors, online all the time, on an operating system that cannot be secured, makes me uneasy. So I looked for a way to run it securely on Linux. The first thing I found was an open source implementation of the reverse engineered proprietary protocol as a plugin for Pidgin. That sounded good, but unfortunately it didn’t work. The second option was a closed source clone from tel.red. They provide their own apt repository with regular updates. That’s quite good actually, if you don’t care about closed source software, and about the security of your device and data in general.

I had learned about Docker a while back, but had only used it marginally so far. This was the first real use I had for it, so I started learning more about it. Copying and adapting a Dockerfile is a lot easier than the articles I had read so far made me believe. I found a couple of sites about packing Skype into a Docker container, but none for Skype for Business. So I took one of the former and adapted it. To use my container, just follow these easy steps:

git clone https://github.com/ulrichard/docker-skype-business
cd docker-skype-business
sudo docker build -t skype .
sudo docker run -d -p 55555:22 --name skype_container skype
ssh-copy-id -p 55555 docker@localhost
ssh -X -p 55555 docker@localhost sky

The password for ssh-copy-id is “docker”.

Then log into sky with your credentials. You can do this every time, or you can store a configured copy of the container as follows:

sudo docker commit skype_container skype_business

The next time, you just run it with:

sudo docker run -d -p 55555:22 skype_business
ssh -X -p 55555 docker@localhost sky

I left some of the pulseaudio setup from the original container in the README file, at least. I don’t intend to use it for anything but receiving chat messages, but if you want to, feel free to experiment and report back.

What could go wrong when ordering pizza?

For some months now it has been possible to order pizza with Bitcoin in our area. I had wanted to give it a try ever since it was announced. But it was only last Thursday that I proposed to my co-workers that we order pizza, and that I would pay with Bitcoin. It was meant as a demonstration of how cool the virtual currency is, and that it is actually useful in the real world. I was going to take pictures and blog about it. After all, a pizza deal was the first real use of Bitcoin and the most famous Bitcoin transaction in history.

So I placed the order with lieferservice.ch for pizzas from Angolo, where we used to go for lunch. The website was really cool: we could order extra ingredients on top of the regular pizzas. Payment was a breeze, as always with Bitcoin. It was 11:25 when I placed the order, and I picked 12:30 for the delivery. The email confirmation from lieferservice.ch followed immediately. But as we all grew more and more hungry, I tried to call Angolo at 12:45 to ask where our food was. Nobody answered the phone. I tried again, and again, and again. Nothing, not even an answering machine. After 13:00 we decided we would drive to Angolo with the confirmation email and eat our pizza in the restaurant. When we arrived, it was closed for holidays.

This is clearly not how this is supposed to work. The guy from lieferservice.ch apologized, and told me their contractors are meant to tell them when they change their opening hours. He couldn’t refund me in Bitcoin, and asked for my IBAN instead. One of my colleagues was so pissed off that he said he would never go to Angolo again.

Presentations with code that actually works

I don’t do presentations that often these days, and if I do, more often than not they contain some form of source code. Most things you write, you refine over and over. This is especially true of material you present. Applied to code snippets, that means you may test them initially, but once they are in the presentation it is a burden to copy them back and forth to verify every change, and then to start over again with the formatting. So you often end up changing the code snippets in the presentation without verifying that the code is still valid. Sometimes you find these errors during proofreading, but even famous presenters have caught compile errors during the presentation itself. That’s how it works with traditional PowerPoint-style products. As I have expressed earlier, the Office suite with its opaque file formats does not belong among my favourite tools.

So after recently learning LaTeX, I wondered whether presentations could be done with it as well. Sure enough, Texmaker offers a good set of templates for just that.

Next I wanted to see if I could link in code from external files, and sure enough, there is the listings package for LaTeX. That enables me to keep the code in files that I can actually compile.
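
For example, such an external snippet file compiles on its own and gets pulled into the slides verbatim with \lstinputlisting. The following is only a hypothetical sketch along the lines of the initializer-list demo linked below; the actual files in that project may differ:

// snippet_initializer_list.cpp -- lives next to the .tex source, compiles on its own,
// and is included in the slides with \lstinputlisting{snippet_initializer_list.cpp}
#include <initializer_list>
#include <iostream>
#include <vector>

int sum(std::initializer_list<int> values)
{
    int total = 0;
    for (int v : values)            // range-based for over the initializer list
        total += v;
    return total;
}

int main()
{
    const std::vector<int> primes{2, 3, 5, 7, 11};   // list-initialization of a container
    std::cout << sum({1, 2, 3}) << " " << primes.size() << "\n";
    return 0;
}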

But wouldn’t it be cool if I could compile the code snippets for verification and generate a PDF from the TeX source in one go? Sure enough, there is the UseLATEX module for cmake.

Now wouldn’t it be even cooler if I could edit and build everything from within the same console window, without having to exit the editor, start the editor from a specific directory, or type complicated commands? Sure enough, I found out how to write project-specific .vimrc files. With everything prepared, I just type :make in vim to trigger the whole process and get a new PDF with all code snippets verified.

A small project to demonstrate the technique is at: https://github.com/ulrichard/experiments/tree/master/initializerlists

And you can find the resulting pdf file at: https://github.com/ulrichard/experiments/releases/download/initlist_0.1/initializerlists.pdf

vim meets VisualStudio

There are two camps of neckbeards: those who use emacs, and those who use vi or vim. I can’t tell which is better, and most of the arguments seem to be rhetorical. Until about three years ago, I perceived both as insufferable. I was, however, curious to learn one of them. The question was which one to pick. While I was undecided, vim was praised more on Hacker News, so I gave it a try. At first it was awkward to work with, but after a while I managed to get along. People often tell you how blazingly fast editing with vim is, but for a long time I was not nearly as efficient as with other editors. At the moment I’m reading the book “Practical Vim”, which has a ton of good tips. It seems the flood of shortcuts is never-ending. In a way, memorizing more commands and shortcuts is like having more keys. That reminds me of an article I once read that compared working with a GUI versus the console to listening to the radio versus playing the piano. I can’t find that article right now, but it had similar reasoning to this one.

So I’m constantly improving my vim skills. In the meantime, I’m about on par with how efficient I am in the style of editors I have been using for two decades. To improve further, I thought I would need more practice, so the natural progression was to use it on the job. For work we use VisualStudio, and unless I could easily compile and debug out of vim, switching back and forth would be counterproductive. So I was thrilled to find out that there is a plugin that brings vim-style editing to VisualStudio. I have only just started using it, but it certainly looks promising.

The crapware platform

I have complained many times that there is no standard package manager on Windows, and that installing and especially upgrading software on that platform is an unholy mess. On my office computer there are probably close to ten different mechanisms for keeping different software packages up to date. Some lurk in the system tray, and most of them constantly waste resources. The update mechanism of our own software is a little better than most in that respect: it doesn’t waste resources while it’s not in use, but it’s still a separate proprietary solution. And the worst part is that most of the software on a typical Windows system doesn’t get updated at all.

I have looked many times for a solution as simple, elegant and powerful as apt-get. The best I have found so far is Npackd. It’s still a decade short of the Debian system, but better than anything else I found. The repository has grown significantly in the years I have used it. But even though Npackd implements dependency management, the packages rarely make use of it. It’s just not the way Windows packages are made. Rather than managing their dependencies, they keep inventing new versions of DLL hell.

I don’t know why upgrades in Npackd frequently fail. Usually the uninstall of the old version fails, and thus the update stops. What I usually did in the past was install the new version in parallel. I think there is not much Npackd could do about Windows Installer packages failing to uninstall. Having crafted Windows Installer packages myself, I know how brittle and error-prone this technology can be.

Today I upgraded some packages that Npackd flagged as upgradeable. You select the ones you want to bring up to date, and click update. It’s not quite “sudo apt-get upgrade” and done, but it still makes Windows a lot more bearable. And for a long time the quality of the packages was good, at least by Windows standards. It started out with mostly open source projects and a few big-name packages. The crapware that is so stereotypical for the Microsoft platform stayed out.

That impression changed today. One of the packages that I upgraded was IZArc, a compression package with nice Windows Explorer integration. Already during the upgrade process I had a strange feeling when I saw the ads in the installer window. And when it was done, I was certain something fishy had happened. Windows popped up wanting to install browser toolbars, change the default search engine and scan the computer for possible improvements. Holy shit, I thought, is this some scareware? I would expect this from some random shareware downloaded from a shady page, but not from Npackd.

And that’s my main point. When you install software on your computer, you trust the issuer not to hijack your system. And if you install software through a software repository, you trust the repository even more. On Windows, you’re pretty much dependent on the many individuals and companies involved in creating all the packages you install. There is a Microsoft certification process, but I don’t know what it checks and entails. There is also the possibility to sign your packages with a key signed by Microsoft, but that merely protects against tampering between the issuer and you.

With open source software, however, you can examine the source code yourself, and rely on the fact that other people have checked it as well. Most distributions then have build hosts that compile and sign the binary packages. To be included in the repository, a maintainer has to take responsibility for the package and upload a signed source package. The source package can be verified by everyone. So the only thing you have to trust is the build host. But even that you could verify by building the package yourself and comparing the result. The whole thing is fully transparent. Hence, if one individual decided he wanted to earn some bucks from advertising and bundling crapware, he wouldn’t get very far. As a nice add-on, apt (or synaptic, for that matter) can tell you exactly what files get installed to what location for every package in the system.

Just as a side note: crapware is the unwanted software that is pre-installed when you buy a new computer, or that is sneaked onto your computer when you install Oracle’s Java. When I bought my netbook, I booted Windows exactly once to see how much crapware they bundled, before wiping the disk and installing Ubuntu. Needless to say, no such problems exist on the Linux side.

So I checked “Programme und Funktionen” (Programs and Features) in the system settings. That’s one of the configuration items that changes its name and appearance with every version of Windows. I found about seven unwanted packages with today’s installation date. I removed them immediately, and I can only hope that they didn’t install additional malware.

Adding a display to RFID time tracking

More than a year ago, I blogged here about using RFID to track presence times in the BORM ERP system. I have used the system a lot since then, but the BlinkM was really limited as the only immediate feedback channel. To use it with multiple users, a display was needed. The usual Arduino-compatible displays seemed a bit overpriced, and the Nokia phone that I disassembled didn’t have the same display as the one I used for the spectrum analyzer. But these displays are available for a bargain from China. The only problem was that the Bifferboard didn’t have enough GPIO pins available to drive the “SPI plus extras” interface. But I2C was already configured for the BlinkM.

So the most obvious solution was to use an ATmega8 as an intermediary. I defined a simple protocol and implemented it over I2C and UART on the AVR. I also wrote a small Python class to interface with it from the client side. As I buffer only one complete command, I had to add some delays in the Python script to make sure the AVR can complete a command before the next one arrives. Apart from that, it all worked well when testing on an Alix or a Raspberry Pi. But I2C communication refused to work at all when testing with the Bifferboard. Not even i2cdetect could locate the device. That was bad, since I wanted to use it with the Bifferboard; the other two were only for testing during development. I checked with the oscilloscope and found out that the I2C clock on the Bifferboard runs at only 33 kHz, while the other two run at the standard 100 kHz. So I tried to adjust the I2C clock settings on the AVR, as well as different options with the external oscillators and clock settings, but I was still out of luck. Then I replaced the ATmega8 with an ATmega168 and it immediately worked. Next, I tried another ATmega8, and this one also worked with the Bifferboard. I switched back and forth and re-flashed them with the exact same settings. Still, one of them worked with all the Linux devices I tested, while the other one kept refusing to work with the Bifferboard. So I concluded that one of these cheap AVRs from China must be flaky, and I just used the other one. Seems like that’s what you get for one sixth of the price you pay for these chips in Switzerland.
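
To give an idea of the firmware side, here is a rough Arduino-style sketch of how such an I2C slave can buffer one command at a time. The slave address, command codes and handlers are made up for illustration; the real protocol is the one defined in the repository:

// Hypothetical sketch of the ATmega I2C slave: buffer one command from the
// Bifferboard and process it in the main loop (illustrative values only).
#include <Wire.h>

const uint8_t I2C_ADDRESS = 0x21;   // assumed slave address
const uint8_t MAX_CMD_LEN = 32;

volatile uint8_t cmdBuf[MAX_CMD_LEN];
volatile uint8_t cmdLen   = 0;
volatile bool    cmdReady = false;

// Runs in the TWI interrupt whenever the master writes to us.
void onReceive(int numBytes)
{
    if (cmdReady)                    // previous command not processed yet -> drop this one
        return;
    uint8_t len = 0;
    while (Wire.available() && len < MAX_CMD_LEN)
        cmdBuf[len++] = Wire.read();
    cmdLen   = len;
    cmdReady = (len > 0);
}

void setup()
{
    Wire.begin(I2C_ADDRESS);         // join the bus as a slave
    Wire.onReceive(onReceive);
    // display, RGB LED and buzzer initialisation would go here
}

void loop()
{
    if (!cmdReady)
        return;
    switch (cmdBuf[0])               // first byte selects the command
    {
        case 0x01: /* write text to the display */ break;
        case 0x02: /* set the RGB LED colour    */ break;
        case 0x03: /* beep the piezo buzzer     */ break;
    }
    cmdReady = false;                // ready for the next command from the Python client
}

Because only one command is buffered at a time, the Python client has to pause briefly between commands, which is exactly where the delays mentioned above come in.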

Apart from the display, I also added an RGB LED that behaves like the BlinkM did before, and on top of that a small piezo buzzer. But since I could hardly hear its sound when driven with 3.3 V, I didn’t bother re-soldering it when it fell off.

Now, my co-workers also started logging their times with RFID.

The code is still on GitHub.

cmake with MSVC

I have used cmake for a couple of years with my hobby projects, and I love it. It is a cross-platform meta build system. As with Qt, people tend to think at first that “cross platform” is the main feature. But as with Qt, it’s actually one great feature amongst many others. It brings so many advantages that I can’t even list them all here. Since last week, we also use it for PointLine at work. While the process is straightforward on Linux, there are some things worth mentioning when using it on Windows.

Finding External libraries

CMake has lots of finder scripts for commonly used libraries, and they work great in most cases. But we want to have multiple versions of the same libraries side by side and, depending on the version of PointLine we develop for, use the appropriate versions of the libraries. To be precise, not just the libraries, but also the headers and debug symbols need to be present in different versions. And we want to be able to debug different versions of our product using different versions of the libraries, simultaneously on the same machine.

Optimizing compile time of a large C++ project

The codebase of our PointLine CAD is certainly quite large. sloccount calculated roughly 770’000 lines of C++ code. I know this is not a very good metric to describe a project, but it gives an idea. Over time the compile time steadily increased. Of course we also added a lot of new stuff to the product, and we used advanced techniques that reduce the risk of bugs but have to be paid for with compile time. But still, the increase was disproportionate. We mitigated it by using IncrediBuild. Just like distcc, it distributes the compilation load across machines on the LAN. If I’m lucky, I get about 20 cores compiling for me.

About once a year, one of us does some compile time optimization and tunes the precompiled headers. I did so about three years ago, and this week it was my turn again. Reading what I could find about precompiled headers on the internet and applying it, I got only a small speedup, roughly 10%. So I went on to clean up the physical structure of the codebase.
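
One typical example of that kind of cleanup (a generic illustration with made-up class names, not the actual PointLine changes) is replacing includes in widely used headers with forward declarations, so that a change in one header no longer ripples through every translation unit:

// widget.h -- before, it did #include "document.h" and #include "renderer.h",
// pulling those heavy headers into every file that uses Widget.
// Forward declarations are enough here, because the interface only uses references.
class Document;
class Renderer;

class Widget
{
public:
    explicit Widget(Document& doc);
    void draw(Renderer& renderer) const;

private:
    Document& m_doc;    // a reference member needs no complete type
};

// widget.cpp includes document.h and renderer.h, so the heavy headers are
// compiled in one place instead of everywhere widget.h is included.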

OpenCL First Steps

There is increasing noise about GPGPU computing and how much faster it is than the CPU (even a parallel one). If you haven’t heard about all that, GPGPU is about using the computer’s graphics card(s) to do general purpose computations. The key to the performance lies in the parallel architecture of these devices. From what I read, an average graphics card has 64 parallel units, but they are not as versatile as the cores of a CPU, of which a typical PC these days has four. That means that if the task is well suited, the GPU can boost performance significantly, but if not, it’s nothing more than a lot of wasted work.

So I wanted to see for myself. To get started, I read the book “OpenCL Programming Guide”, which gave a good overview. But now it was time to give it a try.

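To give a flavour of what such code looks like, here is a minimal vector addition using the OpenCL 1.x C API. It is only a sketch with error handling and resource cleanup omitted, not code from the book or from my actual experiments:

// vadd.cpp -- minimal OpenCL example: add two vectors on the first GPU device found.
#include <CL/cl.h>
#include <cstdio>
#include <vector>

static const char* kSource = R"(
__kernel void vadd(__global const float* a, __global const float* b, __global float* c)
{
    size_t i = get_global_id(0);
    c[i] = a[i] + b[i];
}
)";

int main()
{
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

    // pick the first platform and the first GPU device
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);

    cl_int err = 0;
    cl_context ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    // copy the input data to the device and allocate the output buffer
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), a.data(), &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), b.data(), &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n * sizeof(float), nullptr, &err);

    // build the kernel from source at runtime
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, &err);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "vadd", &err);

    clSetKernelArg(kernel, 0, sizeof(cl_mem), &da);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &db);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &dc);

    // one work item per vector element
    size_t global = n;
    clEnqueueNDRangeKernel(queue, kernel, 1, nullptr, &global, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(queue, dc, CL_TRUE, 0, n * sizeof(float), c.data(), 0, nullptr, nullptr);

    std::printf("c[0] = %f\n", c[0]);   // expect 3.0
    return 0;
}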