Author: ulrichard

  • We have been using passwords for too long

    Every time I have to register on a website with a password, I grow more annoyed. Passwords were fine when you only had one, to log in to your corporate mainframe. But these days, computers are better at cracking passwords than humans are at remembering them.

    It only gets worse the more sites you maintain profiles on. You shouldn’t use the same password everywhere: if one site is hacked, your entire online identity could be compromised. And nobody can remember good strong passwords for every site they visit. Password managers are no solution either. You need to have them with you all the time, and they are protected by a master password. So if an attacker gets hold of your database and your master password, which is easily obtainable with a trojan, then good luck. He even gets a list of sites to visit.

    OpenId and OAuth are a step in the right direction. In theory, you maintain your identity with a central entity and use it as a proxy to authenticate you. You have to choose that central entity well, as it can now track your every move. Hence it would be best if you could host it yourself. But it is usually still only protected by a password. Since you now only have to remember one, it’s easier to choose a strong one. But again, if an attacker gets hold of your password, he can impersonate you.

    So, we need hardware based two factor authentication (something you have and something you know). For about one and a half years I’ve been using a CryptoStick for said two factor authentication. It works great for email, files, ssh, package signing, and full disk and disk image encryption, but so far I couldn’t figure out how to use it for web authentication. They mention a service for a SmartCard backed OpenId. That would be just what I want, but I couldn’t work out how to make it happen. (more…)

  • an ultrabook for developers

    My old netbook still runs, but it shows signs of senility. I had been thinking about a replacement for a while, but as it still worked, that was constantly postponed. When I first read about project sputnik, I thought it was great news and I wanted one. The device that followed looked very nice, but was a little over my budget. Only when the value of BitCoin rose to new heights did I order a Dell XPS13 developer edition. The Dell representative told me that they don’t YET accept BitCoin for payment, but he was well aware of what it is. Apparently the device shipped from Asia. Since I didn’t know that, I waited eagerly and checked the status every day. The status showed it as being in delivery only three days after ordering, so I didn’t understand why UPS still hadn’t received the box more than two weeks later.

    The device is really slick. I have had no issues so far, not even with the graphics driver. That is exactly why I wanted this device: it comes with ubuntu, fully supports it, and all the drivers are in the vanilla kernel. The graphics card drivers were always the culprit with my previous netbooks. They both had binary drivers when they came out, no 3D acceleration, and the situation degraded gradually. After the second OS upgrade I usually even lost 2D acceleration.

    Now that I have an ultrabook with a GPU that is apparently fully supported, I wanted to see how well the GPU performed. So I grabbed my very first OpenCL program to give it a try. I was glad to see that the intel OpenCL driver was already packaged in the ubuntu repository, and that support for the 4400 GPU was recently added. The situation is much better than when I started with OpenCL. But I soon realized that this GPU, or its driver, doesn’t support the kind of memory sharing that I used in the example. So I had to slightly rewrite the host program, no big deal. On the other hand, it supports double precision floats, which the geforce in my workstation doesn’t. After that, I found out that this tiny ultrabook outperforms my five year old workstation by a big margin on both CPU and GPU, and that while using only a fraction of the power. Then I applied the same changes to my GPU accelerated ray tracer. The ultrabook rendered the homework image in 15 minutes, so for this workload it was a bit slower than the workstation.

    In general, the experience with the XPS13DE is just great. Everything is so responsive, totally different from the Atom based netbook. The only thing I would have ordered differently, had I been given the choice, is a bigger SSD. Although I was lucky already: if I had ordered a month earlier, it would have come with a 128GB instead of the 256GB SSD.

    The setup was about as follows:

    • OS install with smart card backed full disk encryption
    • setup smart card authentication for ssh
    • checkout of my git home repo.
    • software install with my setup script that adds ppa repositories and apt-get installs everything I need
    • Checking out all source repositories (git and hg) that I usually work with and that are not already submodules of my home repo
    • integrate the plasma-desktop into unity so that I could still use the bitcoin plasmoids. But the experience with this integration was not so good, so I reverted that. I will look into writing a screenlet for gnome.
    • syncing the git repos for photos and music. They are why I would have wished for a bigger SSD.
    • syncing the BitCoin block chain

    I’m grateful that the BitCoin price surge gave me the opportunity to “vote with my wallet”. Otherwise I would maybe have ended up doing the same as last time: buying a cheaper model with a mediocre operating system that I don’t want. That would send the wrong signals and reinforce the vicious circle. At least Dell has realized that people want good hardware with good linux support. Yes, people are willing to pay a premium for good hardware support for a free and open operating system.

  • Pimp my miner

    For a while now, I have thought about mounting a simple display somewhere that shows the most important parameters of my BitCoin miner. First I started with an AtMega equipped with an Ethernet module. But parsing json without any library support quickly became too cumbersome. So I copy-pasted together a small python script and used a nokia display that I had already equipped with an i2c interface. Had I known how easy it is to use those json interfaces, I would have implemented this display project earlier. The code is at the bottom. It is specific to my setup with p2pool and CHF, but it is so easy to change that you can adapt it to whatever your setting is. The python script now runs on an Alix, on the same i2c bus as my simple home automation transmitter. All that was left to do was add a line to /etc/crontab to execute the script once a minute.
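    The interesting part of that script is tiny: pull a number out of the pool’s json response and squeeze it into a fixed-width line for the display. My implementation is Python, but the same two steps sketched in C++ look like this (the json field name and the line layout below are made-up examples, not my actual p2pool fields):

```cpp
#include <cstdio>
#include <cstdlib>
#include <string>

// Naive extraction of a numeric field from a json blob by string search;
// roughly what hand-rolled parsing on the AtMega would have amounted to.
// Good enough for a fixed API response, but brittle. A proper json library
// (json.loads() in the python script) does this correctly.
double json_number(const std::string& json, const std::string& key)
{
	const std::string needle = "\"" + key + "\":";
	const std::size_t pos = json.find(needle);
	if(pos == std::string::npos)
		return 0.0;
	return std::strtod(json.c_str() + pos + needle.size(), nullptr);
}

// Squeeze a value into a fixed-width line for a small character display.
std::string display_line(double ghs)
{
	char buf[32];
	std::snprintf(buf, sizeof(buf), "%6.1f GH/s", ghs);
	return buf;
}
```

    The real script then just writes such lines out over i2c instead of returning them.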

    As you can see on the image below, the hash rate is too high for a stock Saturn. That is because I recently added an extension module, so it now has three instead of two hashing modules. All of a sudden, KncMiner announced they had 200 extension modules for the October batch, and that future modules would be incompatible. So that was pretty much the only chance for an upgrade. My existing power supply should have room for one more module, and they were moderately priced. The demand was high enough that they sold out in three minutes. The i30 cooler that was recommended was not available at the time, so I had to use an Xtreme rev2. I had a fun time finding out how to mount it correctly. Even for the original, there was no manual or description of how to mount it. “Just look at the existing modules,” someone in the forum said.

    (more…)

  • chording bluetooth keyboard

    Wearable computing is much older than Google glass, and even head mounted displays have been around for a while. Personally, I’m looking forward to affordable devices of that type. A display seems to be a very good solution, while voice entry can be awkward. The Hak5 podcast aired an episode last year about a guy who has been walking around with a head mounted display and a computer in his backpack for a long time. While the display is certainly cool, what was most intriguing to me was the keyboard. He uses a one hand device with key press combinations that he can operate while walking around.

    I didn’t find his exact model when searching the Internet, and while there are some devices around in this category, the selection is very sparse. They are called chorded keyboards, and were first introduced in 1968 at what is often called “The Mother of All Demos”. Then I found out that there is an open standard for this sort of thing. It’s called GKOS, which stands for Global Keyboard Open Standard. They experiment in lots of different directions, but no commercial product seems to have come out of it so far. Amongst the different experiments, there is an Arduino project to build a GKOS keyboard, but I considered an Arduino with custom buttons too bulky for practical use.

    A while ago, I ordered a cheap 6-key HID device that I wanted to use to try GKOS myself. I experimented with key remapping for a while, but to no avail. I strongly suspect the device cannot handle key combinations at all.

    Last week, I somehow remembered my failed past attempts, and thought that a bluetooth device would be cool. I quickly confirmed that none of the DIY bluetooth modules I had were capable of HID, only UART. Then I found a simple to use bluetooth HID module that was apparently released just two months ago. What a coincidence!

    The first test with the GKOS Arduino code on a breadboard was successful. So I disassembled the USB device, re-soldered the buttons to an AtMega8, and added a lithium battery from a defunct tiny quadrocopter. But after I soldered everything together, only some keys would work. I was sure an AtMega8 would be able to handle this simple task with ease, but I had to use an AtMega328 to make it work. It costs a few bucks more, but far less than the time it would have taken to find out what the problem with the AtMega8 was. I haven’t inspected the code thoroughly enough yet, but maybe the AtMega8 is just missing some hardware interrupts.

    So far, I’m very slow at typing, and I have to peek at the cheat sheet for most characters, but with a bit of training that should improve. My prototype works well for two handed operation, but I think one handed operation would be the way to go, although I don’t know if GKOS is really suited for that.
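    For anyone curious what such a firmware boils down to: a chord is accumulated while buttons are held and looked up once everything is released, so the fingers don’t have to land at exactly the same instant. A minimal sketch of that decoding logic (the chord table below is an illustrative subset I made up, not the real GKOS mapping, which lives in the Arduino project’s source):

```cpp
#include <cstdint>
#include <map>

// Illustrative chord table: bit i set means button i is pressed.
// NOT the real GKOS layout, just enough entries to show the idea.
const std::map<uint8_t, char> chordTable = {
	{0x01, 'a'}, {0x02, 'b'}, {0x04, 'c'},
	{0x03, 'd'}, {0x06, 'e'}, {0x05, 'f'},
};

class ChordDecoder
{
	uint8_t accumulated = 0;	// union of all buttons seen in the current chord
public:
	// Feed the current button state; returns the decoded character once
	// all buttons are released, or 0 while the chord is still being built.
	char update(uint8_t pressed)
	{
		accumulated |= pressed;
		if(pressed == 0 && accumulated)
		{
			const auto it = chordTable.find(accumulated);
			const char c = (it != chordTable.end()) ? it->second : 0;
			accumulated = 0;
			return c;
		}
		return 0;
	}
};
```

    The firmware loop then simply polls the button pins, calls update(), and sends any non-zero result as an HID key event.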

  • BitCoin mining pools

    As stated in an earlier post, after I had been mining for a few days, the 50btc mining pool was hacked. A month later, I’m still waiting for my coins. So I tried some other pools. As advised in many places, I avoided the biggest pools, thus mitigating the risk of a 51% attack. I mined for about three days each with 50btc, slush, bitminter and eligius. Like 50btc, slush and bitminter required registration and paid for the submitted work to an account on the site. You could manually cash out, or define automatic payouts with a threshold. These pools are good for peace of mind when you start mining, or have underpowered hardware, as you get a predictable, steady flow of income. Because these pools pay for submitted work, they have to absorb the risk of bad luck periods. That’s when the pool doesn’t find as many blocks as it statistically should. Because of that, they naturally need to collect higher fees.

    The eligius pool has an entirely different strategy. As the 50btc incident showed, the pools above accumulate funds for payouts, and are thus exposed to hacks. You don’t have to register for eligius. Instead, you just provide your payout address as user name. When a block is found, it is split amongst the miners, and no funds are kept on the server. This manifested as a different entry in my bitcoin client. Rather than a usual transaction with an originating address, it showed two hammers, indicating that the coins came directly from mining. Though not vulnerable to hacks the way the other pools are, it is still attackable by DDoS. And yes, the BitCoin world is more hostile than the broader OpenSource community. That’s what money does to people.

    Then I found what I consider much more in line with the bitcoin spirit: p2pool. It is decentrally organized as a peer to peer network, just like bitcoin itself. Having no single point of failure, it is safe from both hacking and DDoS attacks. It is very clever how it works: (more…)

  • revisiting enable_if

    It was roughly 2008 when I wanted to make a template function for serialization available only to container types. Template stuff can become complicated at times, and from reading the documentation, boost::enable_if seemed to be just what I needed. I didn’t get it to work, and I blamed Microsoft Visual Studio 2005 for not being standards compliant enough. Somehow I remembered enable_if as being difficult and hard to get to work, despite being highly desirable if it did. I ended up providing explicit template overloads for all the supported container types.

    Fast forward five years: enable_if made it into the C++11 standard, and I didn’t even notice until reading “The C++ Programming Language” by Bjarne Stroustrup. In the book, the facility is presented as a concise template that is easy to use and even to implement. To understand its value, let’s start with an example. Suppose I want to implement a template function to stream the contents of containers to stdout.

    #include <iostream>
    #include <vector>
    #include <list>
    
    template<class ContainerT, class StreamT>
    StreamT& operator<<(StreamT& strm, const ContainerT& cont)
    {
    	strm << '{';
    	for(const auto& element : cont)
    		strm << element << " ";
    	strm << "} ";
    	return strm;
    }
    
    int main()
    {
    	std::vector<int> ints{8, 45, 87, 90, 99999};
    	std::list<float> floats{3.14159, 2.71828, 0.57721, 1.618033};
    	std::cout << ints << floats;
    
    	return 0;
    }

    So far so good, this does the trick, and the output is just what we expected: {8 45 87 90 99999 } {3.14159 2.71828 0.57721 1.61803 } But now suppose we also write an output stream operator for some user defined interface type. (more…)
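    To sketch where this is heading: the container operator above is greedy and can clash with other operator<< overloads, and enable_if is what restricts it to actual containers. A minimal version with a hand-rolled detection trait that only checks for begin()/end(), fixing the stream type to std::ostream for simplicity:

```cpp
#include <iostream>
#include <sstream>
#include <type_traits>
#include <utility>
#include <vector>

// Minimal detection trait: does T have begin() and end()?
// A real trait would check more (iterator types, etc.), but this
// is enough to illustrate the SFINAE mechanism.
template<class T>
struct is_container
{
	template<class U>
	static auto test(int) -> decltype(std::declval<U&>().begin(),
	                                  std::declval<U&>().end(),
	                                  std::true_type{});
	template<class>
	static std::false_type test(...);

	static const bool value = decltype(test<T>(0))::value;
};

// enable_if removes this overload from consideration for any type
// that is not a container, so it no longer grabs everything.
template<class ContainerT>
typename std::enable_if<is_container<ContainerT>::value, std::ostream&>::type
operator<<(std::ostream& strm, const ContainerT& cont)
{
	strm << '{';
	for(const auto& element : cont)
		strm << element << " ";
	strm << "} ";
	return strm;
}

// A user defined type with its own operator<<, which does not conflict
// with the constrained container overload above.
struct Vec2 { double x, y; };
std::ostream& operator<<(std::ostream& strm, const Vec2& v)
{
	return strm << '(' << v.x << ' ' << v.y << ')';
}
```

    With the unconstrained template, every second parameter is a candidate; with the enable_if version, Vec2 fails the trait, the overload silently drops out, and the hand-written operator is chosen without ambiguity.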

  • trading agents

    I always considered finance and accounting the most boring things you can do with a computer. And while you can earn big bucks working for a Swiss bank, I have always preferred topics with a more physical background.

    But BitCoin got me interested in how some aspects of the established financial systems work. Looking at the bitcoin price fluctuations, I long suspected that it should be possible to write a trading agent that exploits the volatility. It could follow some fixed pre-programmed rules, or find the rules by itself using machine learning. All the data it would need to work on is easily available.

    Last summer, a service called btcrobot launched that promised just that. They have a subscription model, and I’m sure that if it doesn’t work out, they still gain and the users lose. I didn’t really want to pay hundreds of dollars just to find out whether it works. And to be honest, the whole site smelled like a scam.

    So I completed the Coursera class “Computational Investing 1”. It was more about portfolio management and algorithmic trading of stocks, but a lot of the material can be applied to currency trading, and in particular to bitcoin, as well. In the homework we built a small trading agent and portfolio optimizer. The main metric we used was the Bollinger Bands technical indicator.
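    Bollinger bands themselves are nothing exotic: a rolling mean with an envelope of k standard deviations around it (k = 2 is the common choice). A self-contained sketch:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Bands { double lower, middle, upper; };

// Bollinger bands over the last `window` prices: rolling mean plus/minus
// k standard deviations. Population std-dev is used here; sample std-dev
// is an equally common choice.
Bands bollinger(const std::vector<double>& prices, std::size_t window, double k = 2.0)
{
	Bands b{0, 0, 0};
	if(window == 0 || prices.size() < window)
		return b;

	double sum = 0, sumsq = 0;
	for(std::size_t i = prices.size() - window; i < prices.size(); ++i)
	{
		sum   += prices[i];
		sumsq += prices[i] * prices[i];
	}
	const double mean = sum / window;
	const double var  = sumsq / window - mean * mean;
	const double sd   = std::sqrt(var > 0 ? var : 0);

	b.middle = mean;
	b.lower  = mean - k * sd;
	b.upper  = mean + k * sd;
	return b;
}
```

    A naive agent would then buy near the lower band and sell near the upper one; the hard part, as I found out, is the tuning, not the indicator.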

    So I started implementing a bitcoin trading agent that would use bollinger bands. I didn’t want to start completely from scratch, so I skimmed through github and sourceforge for a starting point. I selected funny-bot and started extending it. But soon my interest switched to other projects. Remember, finance is not my primary interest. In the last months I kept an eye on the exchange rates, trying to see how such an agent might perform. And I think it would be very difficult to tune, at least without experience in that field.

    Last week I found out once again that I suck at trading. The bitcoin price started rising like crazy. I thought if it goes up so fast, it must come down again. In a rush, I sold some of my bitcoins, planning to buy them back after the price crashed. But the price kept rising, and I would have gotten a lot more if I had sold them just two days later. Apparently I was not alone with my false prediction.

  • Mining BitCoins

    Today I read an article called “Why you should care about BitCoin” with a quote that I want to repeat here:

     “Hackers are the animals that can detect a storm coming or an earthquake. They just know, even though they don’t know why, and there are two big things hackers are excited about now and can’t articulate why – Bitcoin and 3D printing”
    – Paul Graham

    In late June I ordered a KncMiner Saturn BitCoin mining machine. I knew that it would be delivered in October. What I didn’t know was how the difficulty for mining would develop. It was rising fast, exponentially fast. Nevertheless it was tempting to see that, had I received the miner when I ordered it, the return on investment would have been reached in 9 days. In the months since I ordered the device, the network difficulty went through the roof. So at the moment it’s not even certain whether I will reach the break even point for the investment. It was a substantial investment for me. But I won’t complain; it was less than what I gained just by holding onto some BitCoins that I earned with paragliding tandem flights. The mining business appears even riskier than just investing in BitCoin. But in late spring it looked as if the BitCoin price would stabilize, which would be a good thing for BitCoin adoption. Thus mining seemed to be a good strategy to gain something. And on top of that, everybody had heard the stories of people who made a fortune mining BitCoins.

    Last Wednesday, I received an email from DHL saying that they had picked up a packet for me in Sweden and that it was on its way to me. With the tracking number I could see where it was. The first part of the voyage was quite impressive, but then they couldn’t locate my home, which shouldn’t be that hard. So I had it delivered to the office a day later:

    October 16, 2013 14:13 Vasteras - Sweden     Shipment picked up
    October 16, 2013 19:35 Vasteras - Sweden     Processed at Vasteras - Sweden
    October 16, 2013 20:57 Vasteras - Sweden     Departed from DHL facility in Vasteras - Sweden
    October 16, 2013 22:40 Copenhagen - Denmark  Transferred through Copenhagen - Denmark
    October 16, 2013 22:42 Copenhagen - Denmark  Departed from DHL facility in Copenhagen - Denmark
    October 17, 2013 00:52 Leipzig - Germany     Arrived at DHL facility in Leipzig - Germany
    October 17, 2013 01:08 Leipzig - Germany     Processed at Leipzig - Germany
    October 17, 2013 04:40 Leipzig - Germany     Departed from DHL facility in Leipzig - Germany
    October 17, 2013 06:35 Basel - Switzerland   Arrived at DHL facility in Basel - Switzerland
    October 17, 2013 06:43 Basel - Switzerland   Processed for clearance at Basel - Switzerland
    October 17, 2013 06:44 Basel - Switzerland   Clearance processing complete at Basel - Switzerland
    October 17, 2013 06:46 Basel - Switzerland   Processed at Basel - Switzerland
    October 17, 2013 07:58 Basel - Switzerland   Departed from DHL facility in Basel - Switzerland
    October 17, 2013 09:06 Basel - Switzerland   Arrived at DHL facility
    October 17, 2013 09:19 Basel - Switzerland   With delivery courier
    October 17, 2013 16:46 Basel - Switzerland   Address information needed; contact DHL
    October 18, 2013 09:36 Basel - Switzerland   With delivery courier
    October 18, 2013 15:03 Basel - Switzerland   Shipment delivered

    Usually when I look at tracking information, the steps that take minutes or hours for DHL take days or weeks for stuff that comes from China with economy shipping.

    When I opened the package, I noticed that something was shaking inside the case. When I opened the case itself, I saw that the fans had fallen off the big heat-sinks. So I re-mounted them before starting the device. The software came fully configured, so this part was plug and play. Over the first few hours it was hashing at approximately 240 GH/s. When I ordered it, 200+ GH/s was promised, but they stated that 275 GH/s would be normal after the first prototypes. For comparison: a high-end graphics card hashes at about 0.3 GH/s. So I upgraded the firmware to get some more diagnostics. One core was always at 57°C while the other was always at 44°C. So I guessed there should be a way to make the one with the lower temperature work faster.

    On the KncMiner forum, I found a firmware mod called BertMod that should offer more detailed diagnostics. It didn’t work with the newest firmware though. Fortunately I could decompose it and run the perl script in an ssh session to get the diagnostics. It showed that about 40 cores were disabled on the second chip.

    The next thing I found was the official EnableCores patch. As the name suggests, it enables all the cores, in case some were disabled erroneously. Shortly after applying it, the hashrate went up to 260GH/s, but not for long. It stabilized at 250GH/s. And now I get lots of messages like this:

    KnC: core 4-xx was disabled due to 10 HW errors in a row

    The technical support told me they are working on a new firmware to improve the performance.

    Since solo mining is too risky for my taste at the moment, I am participating in pooled mining. My device should statistically find about 1.3 blocks in its lifetime. But if I were unlucky, it could find nothing at all. When experimenting with GPU mining a while back, I used the 50btc pool. Thus it was my first choice for the Saturn as well. But after a day or two, they were hit by a DDoS attack, and later their billing server was also hacked. Mining still worked, but the situation seemed a bit risky, so I looked for alternatives. At the moment I have configured slush, bitminter, eligius and solo mining as failover alternatives. They have problems of their own, but these can be worked around:

    • slush wouldn’t send confirmation emails to my regular account. So I had to use gmail.
    • bitminter uses OpenId for logging in, which is great. But the first two OpenIds that I tried didn’t work with their site. They would have provided additional security, as they are self hosted or backed by a client certificate. So I had to use my launchpad.net OpenId, which is only secured by a password.
    • eligius is based in the US
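    As an aside on the variance mentioned above: the figure of about 1.3 blocks over the device’s lifetime comes from the standard back-of-the-envelope formula, and since block finds are roughly Poisson distributed, the chance of finding nothing at all is easy to compute:

```cpp
#include <cmath>

// Each hash is a lottery ticket that wins with probability
// 1 / (difficulty * 2^32), so:
//   expected blocks = hashrate * seconds / (difficulty * 2^32)
double expected_blocks(double hashrate, double difficulty, double seconds)
{
	const double two32 = 4294967296.0; // 2^32
	return hashrate * seconds / (difficulty * two32);
}

// Block finds are approximately Poisson distributed, so the probability
// of finding exactly zero blocks when lambda are expected is exp(-lambda).
double probability_of_none(double lambda)
{
	return std::exp(-lambda);
}
```

    With an expectation of 1.3 blocks, exp(-1.3) is roughly 0.27: better than a one-in-four chance of earning nothing at all when solo mining, which is exactly the risk a pool smooths out in exchange for its fee.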

  • VW Bus Treffen Schwarzsee

    Last Saturday we went to the vw bus gathering at the Schwarzsee. There were more than 460 VW Busses of all different types present. I had the impression that there were fewer vehicles than last time, but comparing the pictures from 2009, I’m not so sure anymore. It’s amazing what good shape some of the vintage hippie mobiles are still in.

    By accident we discovered a DVD of “The Bus” movie at the bugbus booth. It was apparently crowd-funded by a kickstarter campaign.

    Of course I went for a short flight to see the event from the top, while Mirella and the kids listened to a “Guggämusig”.

    The drive there was a good opportunity to test the SPOT Connect that I got for my birthday. [map link] Contrary to my previous understanding, it doesn’t provide internet connectivity, but it allows sending custom messages to pre-defined phone numbers and eMail addresses. Like the simpler SPOT devices, it contains a transmit-only unit for the GlobalStar satellite network. The really bad thing about it is that performing the required firmware upgrade was hard: they provide the upgrade program only for Windows and Mac. But communication afterwards seems to be better, as outlined in this blog post.

    Enough blabbing, pictures tell more than words:

  • sniffing i2c with the BusPirate

    I received my BusPirate v4 a while ago, but hadn’t really used it so far. It’s a cool analysis/debug tool for serial buses such as uart, spi, i2c and the like. For me, i2c is the most interesting. From time to time, the communication doesn’t work as it should, and so far I have worked it out by trial and error. I hope the BusPirate can help in such situations in the future. So, here is my first test run.

    The BusPirate is controlled through a textual uart interface:

    minicom -D /dev/ttyACM1 -b 115200

    When you connect to it, it performs a self test, and then you can choose the mode by entering m. In my case, that’s 4 for i2c. Next I get to choose between hardware and software mode. I don’t know the implications yet, but what I see is that hardware offers higher speeds and locks up more often. Then I get to choose the bus speed; 100KHz is the standard. With ? you can always get a list of possible commands. (0) shows a list of available macros. (1) scans all possible addresses for connected devices, just like i2cdetect would do on the computer. (2), finally, is what I was after: the i2c sniffer.

    I was actually hoping it could find out why I’m having problems reading back a simple value from an AtMega8 to a RaspberryPi. The AtMega8 is at address 0x11 and the command to read the value is 0xA1. I verified with a serial connection to the AtMega8 that it holds a proper value, but on the RaspberryPi I always get a 0. At least the command was received on the AVR, as I could verify with the UART, but writing the value back is the problem. So here is what the sniffer outputs for the attempted read:

    [[[][]][[[0x01+][0x04-[][[0x20+][[[[[][0x20-[][0x4C+][0x04-[][0x24+][0x20-][]]]

    Let’s decipher those numbers. Plus means ACK and minus means NACK. An opening square bracket means a start bit, and a closing square bracket means a stop bit. The expected sequence would be 0x22 (the address for sending to the AVR), 0xA1 (send back which value), 0x23 (the address for receiving from the AVR), 0x08 (or whatever value is stored on the AVR). But the above output doesn’t look like this at all. So, let’s try to communicate from the BusPirate to the AVR directly. Here we go: (more…)
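    The sniffer notation is simple enough to decode mechanically. A little helper that translates a capture into readable events (it only handles the tokens described above: start, stop, and bytes with ACK/NACK):

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Translate BusPirate i2c sniffer notation into readable events:
//   '['    start condition          ']'    stop condition
//   0xNN+  byte, acknowledged       0xNN-  byte, not acknowledged
std::vector<std::string> decode_sniffer(const std::string& capture)
{
	std::vector<std::string> events;
	for(std::size_t i = 0; i < capture.size(); ++i)
	{
		if(capture[i] == '[')
			events.push_back("START");
		else if(capture[i] == ']')
			events.push_back("STOP");
		else if(capture.compare(i, 2, "0x") == 0 && i + 4 < capture.size())
		{
			unsigned byte = 0;
			std::sscanf(capture.c_str() + i, "0x%2X", &byte);
			char buf[16];
			std::snprintf(buf, sizeof(buf), "0x%02X %s", byte,
			              capture[i + 4] == '+' ? "ACK" : "NACK");
			events.push_back(buf);
			i += 4; // skip "0xNN" and the +/- that follows
		}
	}
	return events;
}
```

    Feeding it “[0x22+0xA1+]” yields START, 0x22 ACK, 0xA1 ACK, STOP, which makes it much easier to see where a real capture diverges from the expected sequence.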