Category: Software

  • let’s encrypt

    I never bought a commercial-grade SSL certificate for my private website, but I used free ones before, usually from startssl. While it worked, the process was cumbersome. And then, when I wanted to renew, my browser showed a warning that their own certificate was invalid.

    When the letsencrypt initiative (supported by mozilla and the electronic frontier foundation) announced its goal to make website encryption more easily available, we all cheered. Last week I finally received an email stating that my domain was white-listed in the beta program. So I took some time and followed their process. It was not always self-explanatory, but the ncurses program offered some help. Within a couple of minutes, I had a certificate ready to use. The only thing I did not like was that if the process had transmitted my private key to their server, there would have been no way of noticing other than actually reading the code. I don’t think it did, but I prefer to be certain about these things.

    To have my website protected, all I had to do was add the file locations that the utility program provided to the apache site configuration.
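
    For illustration, this is roughly what it looks like (a sketch; example.com stands in for the actual domain, and the /etc/letsencrypt/live paths are the ones the client reported):

    <VirtualHost *:443>
        ServerName example.com
        SSLEngine on
        SSLCertificateFile      /etc/letsencrypt/live/example.com/cert.pem
        SSLCertificateKeyFile   /etc/letsencrypt/live/example.com/privkey.pem
        SSLCertificateChainFile /etc/letsencrypt/live/example.com/chain.pem
    </VirtualHost>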

    Now the bigger work was moving everything to my new server and adapting all the URLs. Moving the blog alone was more work than I expected. It was not a simple export and import. First I had to get the wordpress importer plugin working. The media files are not included in the exported file, and have to be moved manually. Some older blog posts still referenced the old gallery, which I had wanted to replace with piwigo for a while. So in addition to moving the piwigo gallery, I also had to move lots of photos from the old gallery, and adjust the references in the blog.

    Some web apps are not moved yet and will follow. Finally, I plan to redirect all http addresses to https.
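
    The redirect itself will be a one-liner in the port 80 vhost (a sketch with a placeholder domain):

    <VirtualHost *:80>
        ServerName example.com
        Redirect permanent / https://example.com/
    </VirtualHost>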

    On the nice side, I could use the new certificate to secure my new email server as well. I can’t remember when the first time was, but in the past I attempted to set up my own email server about once every two years. Setting up a web server is much simpler. But with the mail servers there was always some problem left that kept me from being confident enough to really use it. This time, though, I found a good tutorial that actually worked. It’s geared towards a raspberrypi running raspbian, but worked just fine on my nuc running ubuntu.

  • Verifying downloads

    Last week I stumbled across a post from last year where somebody described how it was impossible to download an important infrastructure program securely on Windows. My first reaction was of course to pity the poor souls that are still stuck with Windows. How easy it is to just type apt-get install and have all the downloading, validation and installation conveniently handled for you.

    Today I was going to set up my new server. First I downloaded the current iso file from ubuntu.com. Before installing it onto a USB stick, I thought about this blog post again. Actually, I should validate this ISO! I knew that before, but I usually didn’t bother. So I gave it a try. I had to search a bit for it on the download page. The easiest way is to manually pick a mirror; then you will find the hash sum files in the index page. Some websites along the way were encrypted, others were not. The downloads themselves were not. But that didn’t matter, since the hashes were GPG signed. I don’t have to do this all too often, so I just followed the community howto. My downloaded iso file was valid, so I moved on to installing it.
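
    In case you want to do the same, the essence of that howto boils down to three commands (a sketch; the key ID is that of the Ubuntu CD image signing key, double-check it against the howto):

    $ gpg --keyserver hkp://keyserver.ubuntu.com --recv-keys 0xD94AA3F0EFE21092
    $ gpg --verify SHA256SUMS.gpg SHA256SUMS
    $ sha256sum -c SHA256SUMS 2>&1 | grep OK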

    The hardware is actually from computer-discount.ch. For quite some time I had been searching for ways to buy computer equipment with BitCoin. The American big-name tech companies that accept BitCoin either do so only outside of central Europe, or don’t deliver here. So I was quite excited to find this company from Ticino. The experience so far is very good.

  • connecting home securely

    It has been probably close to a decade that I have run a small server at home. At first it was only because I could not find a web hosting company that would serve my fcgi libwt apps at an affordable price. Then I added this blog to it. In the meantime I have added a lot of other stuff as well. One of the more important things became ssh. Not only for remote shell sessions, but also for securely copying files and tunneling. In fact I use ssh tunnels instead of a more traditional VPN.

    Discovery

    Static IP addresses are expensive in Switzerland, so I used dyndns from the start. At first the free offering, then I switched to a paid plan long before they discontinued the free offering. Just last week I received a note that they grabbed the annual fee from my (scheduled to be deactivated) credit card. Generally I strongly dislike services that automatically grab money from my accounts. They didn’t even mention that the fee had doubled. That’s one side of the story; the other is that dyndns is an American company. They could take my domain name hostage without even telling me. So there has to be a better alternative. In fact there is one. It’s good technology-wise, but not generally available to the uninitiated yet.

    DyName for namecoin

    I wrote about namecoin in a previous blog post. One of its main uses is censorship-resistant domain name registration. And the simple python script from DyName is to namecoin what dyndns and ddclient are to traditional domain names. Just prepare your registered name to include a dd entry, edit your config file, and call the script periodically from cron. That way you separate the private key where your name is registered from the hot wallet on the server. My provider used to reassign new IP addresses more frequently; now it’s about once every two months, I would guess. The transition with namecoin was very smooth the last two times. I have a script that queries namecoin for the current IP address and then connects. There are dns resolvers, browser plugins and even public dns servers that resolve namecoin domains. My experience with them was not as smooth as with namecoin core itself, but I’m sure these parts will improve as well. So, with namecoin we have high confidence that the IP address we are connecting to is correct, but it can’t protect against man-in-the-middle attacks. SSH has means to protect from that: the ssh client keeps a list of known IP addresses or host names and the corresponding key fingerprints. But after an IP address change, there is no entry for the new destination, so ssh prints an error message and refuses to connect until you accept to add a new entry to your known_hosts file.
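
    For the record, there are only two moving parts: a cron entry on the server that runs the update script, and a lookup on the client. Roughly like this (the script path and the domain are placeholders; check the DyName readme for the exact invocation):

    */10 * * * * /usr/local/bin/dyname.py      # crontab on the server, path is hypothetical

    $ namecoin-cli name_show d/mydomain        # on the client: query the current value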

    ssh known hosts

    When you search for the error with your preferred search engine, you’ll find advice to delete the offending lines in your known_hosts file. This, of course, is not what we need here. Just accepting a new entry the next time you connect would circumvent the protection against MITM that ssh provides. Since we already have the key fingerprint from the previous address, there is a more secure solution. If you have only one entry in your known_hosts file, you can skip the next few lines. Maybe you already know which fingerprint is valid, for example because the file contains a couple of lines with the same key fingerprint, because the IP address changed a couple of times before and you just accepted it.

    If you are not sure which fingerprint you need, ask the server what it provides:

    $ ssh-keyscan 85.3.164.135
    # 85.3.164.135 SSH-2.0-OpenSSH_6.7p1 Ubuntu-5ubuntu1
    85.3.164.135 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE7WE5vtqSxUnQRX5CjOzEzUAdewqHRV5MXcSCQcylcKanpnDHRE4yVlEn770MFP6EfJ61ukdNYMDnSO9eoRiZY=

    Now search for this fingerprint in your known_hosts file, and copy the whole line. On the new line, you replace the first hash with the actual new IP address in clear text. You could leave it like that, but you can also hash it with the following command:

    $ ssh-keygen -H -f ~/.ssh/known_hosts
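
    ssh-keygen -H keeps a backup of the unhashed file in known_hosts.old. To verify that the new entry is in place, you can look it up by address:

    $ ssh-keygen -F 85.3.164.135 -f ~/.ssh/known_hosts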

    After writing all this, I started wondering if it would be possible to keep the host key on an external hardware device like a NitroKey or a YubiKey. I already keep my client key for authenticating to the ssh server on one of these. That’s something to find out in a future post, maybe.

  • ubuntu phone will be great, but it is not yet

    The BQ Aquaris ubuntu phone that I had waited for so eagerly was delivered today. Full of anticipation, I unpacked it and switched it on. After playing with it for a while, the excitement turned into dissatisfaction. I hate to say it, but on a phone a solid base and a polished user experience are not enough; some basic functionality is required as well. Rough edges are much harder to work around on a phone than on a computer with a regular keyboard. Let’s face it, most people who opt for the ubuntu phone want, to some degree, to escape the freedom-hating ecosystems prevalent on the big platforms. Yet instead of welcoming users with freedom-loving functionality, the phone is loaded with Google, Facebook and Twitter apps.
    As long as you don’t expect anything from it, it’s a pleasant experience. Knowing that it’s based on debian packages gives me great comfort. The touch interface and the settings dialogs are very nice. Yet it is lacking basic PIM and email functionality.

    Phone

    Nowadays one could consider the phone functionality not the most important part of a smartphone anymore. I first had to have my SIM card cut to the smaller form factor. Text messages seem to work nicely. Phone calls work fine. MMS messages were automatically configured by the carrier to be looked up on a website. I don’t know if the phone would support them properly, but that’s a feature I rarely use anyway.

    Contacts

    When opening the contacts app, I was greeted with the question whether I wanted to sync with Google. Hell no! If I did, I would have stayed with Android. But that seems to be the only option, other than having a standalone address book and typing in everything by hand. I could not find an option to sync my CardDAV address book. Lots of people complained badly about this, so it got medium priority. There is a complicated workaround using syncevolution. That way I got my address book synced from the commandline, and an entry in crontab keeps it synced.
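
    In case somebody wants to replicate this: once syncevolution is configured, a sync is just invoking the configuration by name, so the crontab entry is nothing special (the config name here is hypothetical):

    # sync the CardDAV address book once per hour
    0 * * * * syncevolution webdav_contacts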

    Calendar

    Basically the same as contacts, except that the calendar app was not pre-installed and had to be fetched from the app store. I configured syncevolution from the commandline the same way as for the address book, including crontab. But the calendar does not synchronize properly. It pushes appointments I create on the phone, but it doesn’t fetch them from the server. I will have to do some more debugging here.

    Email

    There doesn’t seem to be a standard email client. Instead, the phone ships with a GMail app. People complained that there was no IMAP support whatsoever. At least I could find an email client in the app store called Dekko. The bad thing, however, is that instead of connecting to the email server, it just hangs for an hour. When I try it without encryption, it appears to work. I can send mails, but it won’t fetch them. Another IMAP account works well, just not the one that is most important for me. Mails from my main account were fetched exactly once. Before and after that, all I get is the following error message: “Too many invalid IMAP commands”.
    Update: It took some manual editing of the config file to finally get it working. Now I’m looking forward to support for notification about new mails, but that is less important in comparison.

    Bluetooth

    Connecting to the Jabra headphones was as simple as always, and the sound quality is good. But I didn’t manage to connect any of the four bluetooth keyboards I tried. The YubiKey doesn’t work as an external keyboard either, so at first I thought it might be a general HID problem. But when I connect a USB keyboard, that works.

    BitCoin

    The BitCoin client from the app store is not usable for real life. It doesn’t work with qr codes, and has no key backup functionality. I can work around the missing key backup by manually copying the file “/home/phablet/.local/share/org.sambull.bitcoin-app/ubc.wallet” to a safe place, but qr code reading is really a must. Even if there was a qr reader app, pasting in the bitcoin app is missing. I might have to resort to a web wallet for some time.
    There is a webapp for coinbase in the store already, so I tried this one first. I can scan the qr code from within the browser by automatically launching the camera app. The picture is then uploaded to the server for the qr code reading. This seems to be common practice, but of course it is way inferior to having an app where you can move your camera around until it successfully reads the qr code. But after I enter the amount and click “next”, I get a white screen, and the web app won’t respond any more. A coinbase support representative told me he had the same problem with mobile safari, and that using the back button helped. There is no back button in the webapp, so I tried it in the browser. “Back” landed me on another white page, and “forward” led to an error message.
    The next web wallet I tried was xapo. Since I use their debit card, it would be convenient. But their send page has no qr functionality.
    So I moved on to greenaddress. I almost succeeded, if it wasn’t for the defunct email: they sent a 2FA code to my main email address, which unfortunately doesn’t work on the phone yet.

    XBMC remote control

    I was relieved to find more than one XBMC remote in the app store, and some are even better than what I had on Android.

    News

    The rss reader and the news scope make for a pleasant appearance. They find my preferred rss feeds without needing the exact URL. For podcasts, I had to install PodBird. It works fine for audio podcasts. It also downloads video files, but won’t play them.

    Apps

    I seem to remember that they planned to be able to run android apps on the ubuntu phone, but it appears those plans were abandoned a long time ago. Hence, naturally for a new platform, the selection of available apps is very sparse. It seems I will have to live without some apps I used frequently on Android, such as SBB, 20min, MeteoSwiss… All this information and functionality is also available on the respective websites; the apps are just more convenient.

    APT

    Part of the reason why I wanted an ubuntu phone is the underlying debian package system. I maintained an ubuntu chroot system image on my android phone so that I could perform some tasks in a full-blown shell. But it was always quirky in places, and a second-class citizen all along. So I wanted the ubuntu shell to be a first-class citizen. Indeed you can start a terminal which behaves very well. The keyboard is missing tab and arrow keys though. You have access to apt, or so it seems at first. When you actually want to install something, you see error messages about some lock files. To get around that, one needs to enable developer mode in the phone settings and remount the root file system as read-write. But then came something disturbing:

    $ sudo mount -o remount,rw /
    $ sudo apt-get update
    $ sudo apt-get install git tig nmap htop pcsc-tools gpgsm gnupg-agent
    Reading package lists... Done
    Building dependency tree.
    Reading state information... Done
    Package git is not available, but is referred to by another package.
    This may mean that the package is missing, has been obsoleted, or
    is only available from another source
    
    E: Package 'git' has no installation candidate
    E: Unable to locate package tig
    E: Unable to locate package nmap
    E: Unable to locate package htop
    E: Unable to locate package pcsc-tools

    WTF is going on here! A repository that is missing crucial packages? Mixing repositories with ubuntu proper is probably not a great idea. I don’t know yet what to do about that.
    People on IRC confirmed that rather than changing the root filesystem, it’s better to have a chroot of ubuntu proper for the additional tools. This is what I had on android and hoped it would no longer be necessary on ubuntu phone.

    GPG

    gpg and gpg-agent were already installed, and udev is running as well. So after adding a udev rule and configuring the gpg-agent, I was able to use my YubiKey neo in OpenPGP mode for ssh authentication and similar tasks. This is great news, as it was one of the sore points with my old phone.
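
    For reference, these were roughly the two pieces (the udev values follow Yubico’s published rules as far as I remember; verify them against their repository before relying on this):

    # /etc/udev/rules.d/85-yubikey.rules -- give console users access to the NEO
    SUBSYSTEMS=="usb", ATTRS{idVendor}=="1050", MODE="0660", GROUP="plugdev"

    # ~/.gnupg/gpg-agent.conf -- let gpg-agent also act as the ssh agent
    enable-ssh-support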

    GPS

    The phone comes with a mapping app pre-installed. It looks decent: it finds addresses, displays maps and calculates routes, everything online as it appears. What it does not do, however, is display the current position, which is crucial if you want to use it for navigation. On the internet I found people claiming that the GPS on the Aquaris doesn’t work at all, or only very badly. There is a commandline program for analyzing GPS reception, which I plan to try.
    Update: The utility confirmed that the GPS is not able to get a fix, not even on a mountain with clear sky.

    SPOT Connect

    The SPOT Connect is a satellite messenger that I use for cross-country paragliding. In contrast to other live tracking systems, it also works in areas without GSM reception, as it transmits the current location directly to the GlobalStar satellite network. They have an app to control it for Android and iOS, but not yet for ubuntu. I told them two years ago that it would be nice to be able to start tracking on the device itself, without having to do it in the app. Now that I just lost that app support, I asked them again what options I have. But as with lots of big companies, I have the impression the support staff has a database with answers, and no means to escalate feature requests or even bug reports from customers. Then I remembered a site that I found two years ago when I got the device, where a guy reverse-engineered parts of the comms protocol. And sure enough, I got the python utility running inside my chroot environment on the phone. That allows me to send custom OK messages, but I have yet to find out how to start tracking.

  • libreboot and trisquel

    Last month I saw somebody on the fsfe mailing list talk about an OpenMoko phone. As I had one of those collecting dust in a drawer, I asked if anybody was interested. Promptly I got an offer to exchange it for a Lenovo X60 notebook with libreboot. I didn’t need another notebook, but libreboot seemed interesting enough, so I agreed. It came preinstalled with trisquel gnu/linux, and with a docking station. I’m not sure if I had heard about that distribution before. It is based on ubuntu, but includes only the free open libre stuff. The default desktop is gnome3. Since it’s a good fit with libreboot, I kept trisquel. The first impression was that it runs extremely well for such an old device. I was also amazed at how rounded and complete a fully libre distro can be these days. Gone are the days where the compromises you had to make for freedom were hard to justify. The first thing friends ask about is flash. But I don’t miss it at all; I mean, html5 has been around for a while. At first, I started to install games for the kids. They run a lot better than on my old Atom netbook. As it’s my first device with a fingerprint reader, I had a little play installing this option for logging in, fully aware that it’s not that secure.

    The only two things that are not so optimal are sound and heat. Neither the speakers nor the headphones give any sign of life, even though the operating system seems to have recognized the sound card. This is not such a big deal, as the bluetooth headphones still work perfectly. The other issue is that it heats up a lot under full load. And when the core temperature hits 100°C, it just switches off. This happened a couple of times when the BitCoin BlockChain synchronized. And it still happens about every other day.

    Then my XPS13 was stolen, and I needed something to fill the gap until I have a proper replacement. I must say it does the job well. I miss the XPS13 a lot, but at least I have something I can work on. And who knows how long it will take before I have an XPS13 again. They recently announced a new version with tiny bezels around the screen, a bigger SSD and newer processors. But the new developer edition is not available yet, and the old version is not available any more. When it becomes available, I want to pay for it with BitCoin, which is also not available yet: Dell accepts BitCoin payments in the US, Canada and the UK. I hope they will soon roll that out worldwide, or at least to the rest of Europe. And once I can order on my terms, I will still have to wait about a month for delivery.

  • Code coverage for C++

    Ever since I started writing automated tests, I have wondered how complete the coverage was. Of course you have a feeling which parts are better covered than others. For some legacy code you might prefer not to know at all. But I thought test coverage was something easy to do with a language running on a VM such as Java, but hard with C++. Some things are not as hard as you think, once you give them a try.

    The thing that triggered my interest was the coveralls badge on the readme page of vexcl. By following it through, I learned that coveralls is just for presenting the results that are generated by gcov. Some more research showed which compiler and linker flags I needed to use. In addition, I found out that lcov’s genhtml can generate nice human-readable html reports, while gcovr writes machine-readable xml reports. So the following is really all that needs to be added to your CMakeLists.txt:

    OPTION(CODE_COVERAGE       "Generate code coverage reports using gcov" OFF)
    
    IF(CODE_COVERAGE)
        SET(CMAKE_C_FLAGS          "${CMAKE_C_FLAGS} -fprofile-arcs -ftest-coverage")
        SET(CMAKE_CXX_FLAGS        "${CMAKE_CXX_FLAGS} -fprofile-arcs -ftest-coverage")
        SET(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fprofile-arcs -ftest-coverage")
    
        FILE(WRITE  ${PROJECT_BINARY_DIR}/coverage.sh "#! /bin/sh\n")
        FILE(APPEND ${PROJECT_BINARY_DIR}/coverage.sh "lcov --zerocounters --directory . --base-directory ${MyApp_MAIN_DIR}\n")
        FILE(APPEND ${PROJECT_BINARY_DIR}/coverage.sh "lcov --capture --initial --directory . --base-directory ${MyApp_MAIN_DIR} --no-external --output-file MyAppCoverage\n")
        FILE(APPEND ${PROJECT_BINARY_DIR}/coverage.sh "make test\n")
        FILE(APPEND ${PROJECT_BINARY_DIR}/coverage.sh "lcov --no-checksum --directory . --base-directory ${MyApp_MAIN_DIR} --no-external --capture --output-file MyAppCoverage.info\n")
        FILE(APPEND ${PROJECT_BINARY_DIR}/coverage.sh "lcov --remove MyAppCoverage.info '*/UnitTests/*' '*/modassert/*' -o MyAppCoverage_filtered.info\n")
        FILE(APPEND ${PROJECT_BINARY_DIR}/coverage.sh "genhtml MyAppCoverage_filtered.info\n")
        FILE(APPEND ${PROJECT_BINARY_DIR}/coverage.sh "gcovr -o coverage_summary.xml -r ${MyApp_MAIN_DIR} -e '/usr.*' -e '.*/UnitTests/.*' -e '.*/modassert/.*' -x --xml-pretty\n")
    
        ADD_CUSTOM_TARGET(CODE_COVERAGE bash ${PROJECT_BINARY_DIR}/coverage.sh
                            WORKING_DIRECTORY ${PROJECT_BINARY_DIR}
                            COMMENT "run the unit tests with code coverage and produce an index.html report"
                            SOURCES  ${PROJECT_BINARY_DIR}/coverage.sh)
        SET_TARGET_PROPERTIES(CODE_COVERAGE PROPERTIES
            FOLDER "Testing"
        )
    ENDIF(CODE_COVERAGE)
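
    Using it is then just a matter of configuring with the option switched on and building the custom target:

    $ cmake -DCODE_COVERAGE=ON ..
    $ make && make CODE_COVERAGE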

    The resulting html page is very detailed and shows you the untested lines in your source files in red.
    From the produced xml file it’s easy to extract the overall percentage, for example. You could use this figure to fail your nightly builds when it decreases.
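
    For example, a crude way to pull the overall percentage out of the cobertura style xml that gcovr writes (the line-rate attribute on the root element holds the fraction):

    $ grep -o 'line-rate="[0-9.]*"' coverage_summary.xml | head -n 1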

  • decentralized social communication

    When you think about social networks, do you even realize how centralized and compartmentalized the prevalent systems are? Neither centralization nor artificial borders are inherent traits of a network though. Imagine you could only talk to customers of the same phone company you use. Or you could exchange emails only with customers of the same service provider. Wouldn’t that be ridiculous? And yet this lack of interoperability is the reality with most social networks today.

    Blogging -> wordpress

    Blogging is about the only category here that is fairly decentralized. You can host your own blog without any problem. Even though wordpress seems to have the lion’s share, the rss and atom feeds are open standards. And indeed lots of products and platforms offer that functionality. And most importantly: you can freely choose the software that fetches all the news for you. The same system is also used for podcasts, videocasts and various other content you can subscribe to. Lately, wordpress is even used increasingly to build regular websites. It is also what powers the blog you’re currently reading.

    Microblogging -> twister

    Everybody knows twitter. People who use it say it was great before they had to start pleasing their shareholders. It was used for communicating in the North African revolutions. Sounds ironic, given its centralized nature. It’s easy to revoke free speech with centralized systems. Nobody is astonished when it happens in Turkey. Lately I read that even in the UK they think about blocking twitter when things get out of control.

    There was a more open alternative called identica, but I don’t know if it’s still used a lot. I saw twister mentioned a while ago, and thought that’s something I should have a closer look at. Only last week I installed it and started playing with it. It triggered new interest in the whole topic. It is based on the BitCoin and torrent systems, and thus completely decentralized: a blockchain is used to register users, and torrents to distribute the content. Installing is as simple as adding a ppa (personal package archive from launchpad.net) and apt-get installing it. As I don’t use twitter, I don’t know for sure, but I think the user experience should be similar, except for the ads. And while twitter provided rss feeds a long time ago but stopped due to monetization, they are no problem with twister. While they say it’s in alpha stage, I had no issues, and the experience is better than with much commercial software. One downside it currently has is that a lot of handles for big company names or celebrity names were reserved early on by who knows whom. There is no mechanism to transfer a handle other than sharing the secret key. Maybe an expiration model such as with namecoin would be appropriate here. My handle is @ulrichard, if you want to follow me.
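
    For completeness, the installation was of this kind (the ppa and package names here are placeholders; take the real ones from the twister website):

    $ sudo apt-add-repository ppa:example/twister   # hypothetical ppa name
    $ sudo apt-get update
    $ sudo apt-get install twister                  # hypothetical package name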

    Social networks -> diaspora or gnu social?

    I never really got why I should be on facebook. You could describe their business model as a man-in-the-middle attack. You chat with friends, and there is always someone nearby who listens in and takes notes. Then he sells the information he gathered. And if he so pleases, he can even block you from chatting with your friends altogether. Sounds over the top? Think about it.

    I do have a google+ account, but I have actually never used it. It was forced on me to be able to keep uploading videos to youtube. The same criticism as for facebook also applies to google+. But the worst thing is that they are not interoperable. Why do people have to be on the same platform to interact? That is a huge step backwards.

    Diaspora was touted as an alternative for a long time. I wanted to give it a try, so I routinely check the packaging status. Usually I only use software that I can apt-get install, and that is thus automatically updated and cleanly uninstalled, and where I can check what files belong to it and where they go. If something is written in a language and environment that I’m familiar with, I might compile it to give it a try. But I’m not familiar with ruby at all. Apart from that, I make very few exceptions to my apt-get rule. So, I’m still waiting for the diaspora packages.

    Then I recently learned about gnusocial. It also looks viable, but again, no deb package. So I’m waiting here as well.

    Messengers and Video calls -> Tox

    Skype used to be great before it was sold to Microsoft. We used it a lot to phone home on our South America trip in 2007. Then GoogleTalk used to be even better, until they terminated xmpp federation and subsequently even switched to a proprietary protocol.

    For text messages, xmpp is still perfect, but for voice calls it was difficult for a while. I once tried mumble, but can’t remember at the moment what I didn’t like about it. My SIP VoIP experiments didn’t lead anywhere. And all the proprietary apps like WhatsApp really don’t cut it for me.

    Only through twister I learned about tox. It’s still a mystery to me why I didn’t know about it sooner. It is easy to apt-get install from a ppa, and just works. They say it’s at an early stage and can be buggy. I had no issues so far. Nothing more to say… other than my tox id : 75A6B5F621BF142FA836E58A96023EE8F51AE0446FD85B2FBAFB378F4034E265EFF16B919A7A

    Chat -> IRC, BitMessage, TorChat

    I almost forgot to mention chat. IRC has been there forever. In my early chat experiences in the nineties I didn’t know about the technology behind it, but in retrospect I assume it was powered by IRC. I still use IRC regularly, mainly on freenode to discuss OpenSource software.

    There is BitMessage, which uses some ideas from BitCoin to run a fully anonymous stealth communication network. I like the idea and the concept, but getting a message through can sometimes take its time.

    And recently I learned about TorChat. It worked fine the one time I used it. It makes use of the tor onion router to hide the communication, but apart from that it’s not associated with the tor project.

  • wake up to a clean state

    I used to have problems when my ultrabook woke up from sleep mode. Nothing serious, but annoying. One thing was that the empathy messenger application fully occupied one CPU core, effectively transforming the power out of the battery into heat. I grew tired of manually terminating it every time. So I did some research, and put the following lines into /etc/pm/sleep.d/20_empathy_cpu_hog:

    #! /bin/bash
    case "${1}" in
        resume|thaw)
            # kill the process that hogs the CPU after resume
            killall empathy-gabble
            ;;
    esac
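
    Note that pm-utils only runs hooks that are executable:

    $ sudo chmod +x /etc/pm/sleep.d/20_empathy_cpu_hog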

    The other problem was the ssh connection that I keep open to my server. After waking up from sleep, it took a while to time out. Now I terminate it right after wakeup, so that it can be automatically re-established. To accomplish this, I wrote the following lines into /etc/pm/sleep.d/30_ssh_ulrichard:

    #! /bin/bash
    case "${1}" in
        resume|thaw)
            # find the PID of the ssh connection to the server and terminate it
            kill `ps aux | grep ssh | grep user@server.ch | grep -v grep | awk '{print $2}'`
            ;;
    esac

    I love linux, where problems are rare, every problem can be solved, and the solution is just a few lines away…

  • fido universal 2nd factor authentication

    In the time since my rant about passwords, more and more sites have adopted OAuth. I don’t like this development. Usually they offer login with facebook, sometimes with google or twitter, and rarely with linkedin. The problem with OAuth is that the site operator decides which providers are supported. With OpenID, on the other hand, I can host my own OpenID provider and secure it with whatever 2nd factor authentication I choose. It’s sad to see that OpenID lost traction, and is actually being removed in many places. One concern about OAuth is that exactly the companies that track you the most get this extra information about where you log in and when. And on top of that, you usually have to grant the site you log into the permission to tweet or post on your behalf. But what bothers me most is that you grant your id provider more power than you are probably ready to admit. Say, for example, you use google as your id provider for every site you can, because it is just so convenient. Then one day google decides, for whatever reason, to block your account. As a result you are locked out not just from all google services, but from most of the sites you care about. And it does happen that google blocks accounts for no good reason.

    Most BitCoin exchanges these days offer some sort of 2nd factor authentication. Some use YubiKeys, some use GoogleAuthenticator, and some send you text messages. They are somewhat similar, as they all use something called “one time passwords“. Only how the user gets them differs. Text messages seem like an ugly hack, and phones are known to be insecure. That’s also why I don’t like the Google Authenticator, as it is just software running on the regular processor of your smartphone. The YubiKey is clearly the best option out of these, but it also has its weakness. If you use it for different purposes, an OTP generated for one site could be reused for a different site. As it emulates a keyboard, it’s a one-way track, and it has no way of knowing where it is used. This is why the now defunct MtGox distributed dedicated YubiKeys. At least some parts they did right. But there is something in the works to solve all of this…

    Last week I received a new USB security token. It’s a PlugUp fido u2f device. It has exactly the same form factor as the HW1 BitCoin hardware wallet. And that is actually how I paid for it: not directly, but through Brawker. The device implements the new FIDO universal 2nd factor authentication standard. Finally a conglomerate of big-name companies got together to solve the password authentication problem.

    When I first read up on it, I found lots of marketing speak and an overly detailed specification, but not the kind of technical overview I was looking for. But it seemed interesting enough to give it a try. So far, there are USB devices available from only two vendors: Yubico and PlugUp. Even though I love the YubiKey NEO, the price was too high just to give it a try. The PlugUp device is much cheaper, but also less rigid. Also, there are not a lot of places where you can use it so far. But looking at all the companies that form the alliance, that is hopefully going to change. The only place I could use it was to log into my google account, and only with the Chromium browser. My browser of choice is Firefox, but it doesn’t look as if fido support is imminent there. I did like what I saw so far. You can register multiple devices per account. And you can use the same device for multiple accounts. There were no technical hiccups. It just worked.

    But I still thought I would prefer a solution based on the OpenPGP Card with EnigForm. With GPG, I can manage my identity myself, how I want it. Of course this is great for power users, but not something regular users want or can do. FIDO is targeted at regular users, and I think they found a good compromise. It appears that from the security standpoint they should be similar, in that both work in a challenge-response scheme: the server knows the public key, and lets the device sign something.

    Then I found the technical information I was looking for on this blog. Now that looks promising. The device generates a new set of keys for every site. That is perfect for authentication, i.e. making sure it’s the same user as last time. If you want to compartmentalize your identity, you don’t even have to do it by hand. But it doesn’t help with identification; GPG would be better in that regard. So while GPG would be enough to identify a user, with fido the user will still have to fill in some required information. But most importantly, with both approaches, fido and GPG/EnigForm, you don’t need a central service like with OpenID or OAuth that can track you.

    Once fido gains more traction, the new YubiKey NEO will be perfect, as it combines fido u2f with an OpenPGP applet. In the meantime, you can check which sites offer what type of 2nd factor auth at dongleauth.info

  • MultiSig with HardwareWallets

    2014 is touted as the year of multi-signature for BitCoin. It is being integrated into some wallets and services. But not quite the way I expected.

    • Electrum has an implementation that assumes multiple hierarchical deterministic wallets distributed over different machines, each knowing the others’ master public keys. -> This should work well for corporate environments or other organizations.
    • GreenAddress has a cool, but for my taste too obscure solution. I would recommend it for new users. But for myself, I want to be fully in control.
    • OpenBazaar, although not fully functional yet, will integrate arbitration with multi-sig.
    • and I hear more announcements almost on a daily basis…

    When I first read into MultiSig, I understood it as if I could combine any BitCoin addresses of my choosing to create a MultiSig address. If one of the involved addresses was in my wallet, it would automatically display the MultiSig address as well. And I could then partially sign a transaction with the GUI, and magically forward it to the other signing parties. Turns out that is not quite how it works. To combine addresses of my choosing into a MultiSig address, I have to resort to the commandline. There are a couple of good tutorials on the net on how to do that, and also on how to spend. But it’s not like executing a few simple commands. It’s quite hardcore. There are wallets where you can add them as view-only addresses, but I’m not aware of a wallet where you can partially sign a transaction in such a setting.
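
    To give a taste of the commandline fu involved, combining three public keys (shortened here) into a 2-of-3 address looks roughly like this with the reference client:

    $ bitcoin-cli createmultisig 2 '["03a1...", "02b2...", "03c3..."]'
    # returns the new P2SH address together with the redeemScript
    # that you must keep around for spending later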

    MultiSig brings us escrow services and a load of similar stuff that was not even imaginable before the rise of BitCoin. MultiSig is also good if you want to implement a setting where, in a corporate environment, at least two of your accountants need to sign each transaction. What this adds is security. You have surely seen movies where a few generals had to use their physical keys to launch missiles. That’s done to add security, so that the terrorists would have to steal the keys from more than one general before they could launch a missile. The same works for bank vaults. And the same idea is behind BitCoin MultiSig, only that it goes much further.

    MultiSig is just one facet of pay-to-script-hash (P2SH). You can implement other rules than just MultiSig. I only recently became aware of that, when GreenAddress gave me a transaction that I could use to get my funds off the MultiSig wallet in case they went out of business. The background is that if too many parties lose their keys, funds on a MultiSig address are rendered inaccessible. As a measure against that, they created and signed a transaction with their key to transfer all funds, but with a time restriction: the transaction only becomes valid after a certain configurable point in time. BitCoin has a stack-based scripting language for expressing such rules. For my taste it’s very complicated at first sight, but it’s cool what you can do with it. That’s actually where ethereum’s main focus for improvement lies. That’s all good and nice, but wasn’t it possible to program rules long before? Of course, but with BitCoin nobody can cheat, and you have to trust nobody. You cannot just change the system time on your computer, or buy a fake certificate to trick a system into using your timestamp server. BitCoin has a distributed consensus that is very hard to come by.
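
    For illustration, the rule behind an ordinary 2-of-3 MultiSig address is itself a tiny program in that stack language:

    OP_2 <pubkey1> <pubkey2> <pubkey3> OP_3 OP_CHECKMULTISIG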

    So in essence, MultiSig is about increasing security, mainly against malware that can infect your notebook and steal the files of your wallet software. There is also another cure against the same threat: HardwareWallets. I wrote about the Trezor and the HW1 on my blog before. Now how about combining the two measures? That should raise the level of security to a point equivalent to storing your gold and silver and diamonds inside a bunker in the Swiss mountains, guarded by a Russian tank driven by a rogue artificial intelligence. But I can tell you upfront: just like that rogue AI, it’s not going to be user friendly. While user friendliness and security are often opposed, this is an extreme case. After reading this, don’t be tempted to think BitCoin is difficult to use. BitCoin is wonderful and easy – for normal use.

    So let’s begin with the commandline fu. I won’t repeat every step from atweiden’s gist, but concentrate on the special parts:

    You don’t need to create any wallets. I assume the hardware wallets are initialized and ready to use. (more…)