Category: Software

  • We have been using passwords for too long

    Every time I have to register on a website with a password, I grow more annoyed. Passwords were fine when you only had one, to log in to your corporate mainframe. But these days, computers are better at cracking passwords than humans are at remembering them.

    It only gets worse the more sites you maintain profiles on. You shouldn’t use the same password everywhere: if one site is hacked, your entire online identity could be compromised. And nobody can remember a good strong password for every site he visits. Password managers are no solution either. You need to have them with you all the time, and they are protected by a master password. So if an attacker gets hold of your database and your master password, which is easily done with a trojan, then good luck. He even gets a list of sites to visit.

    OpenID and OAuth are a step in the right direction. In theory, you maintain your identity with a central entity and use it as a proxy to authenticate you. You have to choose that central entity well, as it can now track your every move. Hence, it would be best if you could host it yourself. But the identity is usually still only protected by a password. Since you now only have to remember one, it’s easier to choose a strong one. But again, if an attacker gets hold of your password, he can impersonate you.

    So, we need hardware-based two-factor authentication (something you have and something you know). For about one and a half years I’ve been using a CryptoStick for said two-factor authentication. It works great for email, files, ssh, package signing, and full disk and disk image encryption, but so far I couldn’t figure out how to use it for web authentication. They mention a service for a SmartCard-backed OpenID. That would be just what I want, but I couldn’t figure out how to make it happen. (more…)

  • an ultrabook for developers

    My old netbook still runs, but it shows signs of senility. I had been thinking of a replacement for a while, but as it still worked, that was constantly postponed. When I first read about project sputnik, I thought: this is great news, and I want one. The device that followed looked very nice, but was a little over my budget. Only when the value of BitCoin rose to new heights did I order a Dell XPS13 developer edition. The Dell representative told me that they don’t YET accept BitCoin for payment, but he was well aware of what it is. Apparently the device shipped from Asia. Since I didn’t know that, I waited eagerly and checked the status every day. The status showed “in delivery” a mere three days after ordering, so I didn’t understand why UPS still hadn’t received the box more than two weeks after that.

    The device is really slick. I have had no issues so far, not even with the graphics driver. That is why I wanted this device: it comes with Ubuntu and fully supports it, with all the drivers in the vanilla kernel. The graphics card drivers were always the culprit with my previous netbooks. They both had binary drivers when they came out, no 3D acceleration, and the situation degraded gradually. After the second OS upgrade I usually even lost 2D acceleration.

    Now that I have an ultrabook with a GPU that is apparently fully supported, I wanted to see how well the GPU performs. So I grabbed my very first OpenCL program to give it a try. I was glad to see that the Intel OpenCL driver was already packaged in the Ubuntu repository, and that support for the HD 4400 GPU was recently added. This situation is much better than when I started with OpenCL. But I soon realized that this GPU or its driver doesn’t support the kind of memory sharing that I used in the example, so I had to slightly rewrite the host program; no big deal. On the other hand, it supports double-precision floats, which the GeForce in my workstation doesn’t.

    With that done, I found that this tiny ultrabook outperforms my five-year-old workstation by a big margin on CPU and GPU, and that using only a fraction of the power. Then I applied the same changes to my GPU accelerated ray tracer. The ultrabook rendered the homework image in 15 minutes, so on that workload it was a bit slower than the workstation.
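    To be concrete about that host-program rewrite: it was confined to buffer management. Here is a minimal sketch of the kind of change I mean, assuming the original code used zero-copy host pointers (the function and variable names are made up for illustration):

    #include <CL/cl.h>
    #include <vector>

    // Before: the buffer aliased host memory, zero-copy style:
    //   clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
    //                  bytes, host_data.data(), &err);
    // After: let the driver own the memory and copy explicitly.
    cl_mem make_device_buffer(cl_context ctx, cl_command_queue queue,
                              std::vector<float>& host_data)
    {
    	cl_int err = CL_SUCCESS;
    	const size_t bytes = host_data.size() * sizeof(float);
    	cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE, bytes, nullptr, &err);
    	// upload the input up front instead of sharing memory with the host
    	err = clEnqueueWriteBuffer(queue, buf, CL_TRUE, 0, bytes,
    	                           host_data.data(), 0, nullptr, nullptr);
    	return buf;
    }

    After the kernel has run, the results are fetched with a matching clEnqueueReadBuffer call instead of reading the shared pointer directly.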

    In general, the experience with the XPS13DE is just great. Everything is so responsive, totally different from the Atom based netbook. The only thing I would have ordered differently, given the choice, is a bigger SSD. Although I was lucky already: if I had ordered a month earlier, it would have come with a 128GB instead of the 256GB SSD.

    The setup was about as follows:

    • OS install with smart card backed full disk encryption
    • setup of smart card authentication for ssh
    • checkout of my git home repo
    • software install with my setup script that adds ppa repositories and apt-get installs everything I need
    • checkout of all source repositories (git and hg) that I usually work with and that are not already submodules of my home repo
    • integration of the plasma-desktop into unity so that I could still use the bitcoin plasmoids. But the experience with this integration was not so good, so I reverted it. I will look into writing a screenlet for gnome.
    • syncing the git repos for photos and music. They are why I would have wished for a bigger SSD.
    • syncing the BitCoin block chain

    I’m grateful that the BitCoin price surge gave me the opportunity to “vote with my wallet”. Otherwise I might have ended up doing the same as last time: buying a cheaper model with a mediocre operating system that I don’t want. That would send the wrong signals and reinforce the vicious circle. At least Dell has realized that people want good hardware with good Linux support. Yes, people are willing to pay a premium for good hardware support for a free and open operating system.

  • Pimp my miner

    For a while now, I had been thinking about mounting a simple display somewhere that shows the most important parameters of my BitCoin miner. First I started with an AtMega equipped with an Ethernet module, but parsing JSON without any library support quickly became too cumbersome. So I copy-pasted together a small Python script and used a Nokia display that I had already equipped with an i2c interface. Had I known how easy it is to use those JSON interfaces, I would have implemented this display project earlier. The code is at the bottom. It is specific to my setup with p2pool and CHF, but it is so easy to change that you can adapt it to whatever your setting is. The Python script now runs on an Alix, on the same i2c bus as my simple home automation transmitter. Now all that is left to do is to add a line to /etc/crontab to execute the script once a minute.
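    The script itself is Python and pasted at the bottom, but to illustrate how little is needed to consume such a JSON interface, here is the same idea sketched in C++ with libcurl and nlohmann/json. The URL points at p2pool’s local web interface; the “miner_hash_rates” field name is an assumption, so inspect the JSON your own node serves:

    #include <curl/curl.h>
    #include <nlohmann/json.hpp>
    #include <iostream>
    #include <string>

    // libcurl callback: append each received chunk to a std::string
    static size_t collect(char* ptr, size_t size, size_t nmemb, void* userdata)
    {
    	static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    	return size * nmemb;
    }

    int main()
    {
    	std::string body;
    	CURL* curl = curl_easy_init();
    	if(!curl)
    		return 1;
    	// p2pool serves JSON stats on its web port; adjust host and port to your node
    	curl_easy_setopt(curl, CURLOPT_URL, "http://127.0.0.1:9332/local_stats");
    	curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, collect);
    	curl_easy_setopt(curl, CURLOPT_WRITEDATA, &body);
    	const CURLcode res = curl_easy_perform(curl);
    	curl_easy_cleanup(curl);
    	if(res != CURLE_OK)
    		return 1;

    	const auto stats = nlohmann::json::parse(body);
    	for(const auto& rate : stats["miner_hash_rates"])
    		std::cout << rate << std::endl;
    	return 0;
    }

    The crontab line then looks like “* * * * * root /usr/local/bin/minerdisplay.py”, with the path being wherever you put the script.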

    As you can see in the image below, the hash rate is too high for a stock Saturn. That is because I recently added an extension module, so it now has three instead of two hashing modules. All of a sudden, KnCMiner announced they had 200 extension modules for the October batch, and that future modules would be incompatible. So that was pretty much the only chance for an upgrade. My existing power supply should have room for one more module, and they were moderately priced. The demand was high enough that they sold out in three minutes. The i30 cooler that was recommended was not available at the time, so I had to use an Xtreme rev2. I had a fun time finding out how to mount it correctly. Even for the original, there was no manual or description of how to mount it. “Just look at the existing modules”, someone said in the forum.

    (more…)

  • revisiting enable_if

    It was roughly 2008 when I wanted to make a template function for serialization available only to container types. Template stuff can become complicated at times, and from reading the documentation, boost::enable_if seemed to be just what I needed. I didn’t get it to work, and I blamed Microsoft Visual Studio 2005 for not being standards compliant enough. And somehow I remembered enable_if as being difficult and hard to get to work, despite being highly desirable if it worked. I ended up providing explicit template overloads for all the supported container types.

    Fast forward five years: enable_if made it into the C++11 standard, and I didn’t even notice until reading “The C++ Programming Language” by Bjarne Stroustrup. In the book, the facility is presented as a concise template that is easy to use and even to implement. To understand its value, let’s start with an example. Suppose I want to implement a template function to stream the contents of containers to stdout.

    #include <iostream>
    #include <vector>
    #include <list>
    
    template<class ContainerT, class StreamT>
    StreamT& operator<<(StreamT& strm, const ContainerT& cont)
    {
    	strm << '{';
    	for(const auto& element : cont)
    		strm << element << " ";
    	strm << "} ";
    	return strm;
    }
    
    int main()
    {
    	std::vector<int> ints{8, 45, 87, 90, 99999};
    	std::list<float> floats{3.14159, 2.71828, 0.57721, 1.618033};
    	std::cout << ints << floats;
    
    	return 0;
    }

    So far so good, this does the trick. And the output is just what we expected: {8 45 87 90 99999 } {3.14159 2.71828 0.57721 1.61803 } But now we also write an output stream operator for some user-defined interface type. (more…)
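    In sketch form, the conflict and the enable_if cure look something like this (the container detection trait below is a simplistic one of my own, not necessarily what the full post uses):

    #include <iostream>
    #include <type_traits>

    // a user-defined type with its own output stream operator
    struct Point { int x, y; };

    std::ostream& operator<<(std::ostream& strm, const Point& p)
    {
    	return strm << '(' << p.x << ',' << p.y << ')';
    }

    // crude trait: does T have a nested const_iterator, like the standard containers?
    template<class T>
    struct is_container
    {
    	template<class U> static char test(typename U::const_iterator*);
    	template<class U> static long test(...);
    	static const bool value = sizeof(test<T>(nullptr)) == sizeof(char);
    };

    // the greedy template from above competes with every operator<< in scope;
    // enable_if removes it from overload resolution unless T looks like a container
    template<class ContainerT, class StreamT>
    typename std::enable_if<is_container<ContainerT>::value, StreamT&>::type
    operator<<(StreamT& strm, const ContainerT& cont)
    {
    	strm << '{';
    	for(const auto& element : cont)
    		strm << element << " ";
    	strm << "} ";
    	return strm;
    }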

  • trading agents

    I always considered finance and accounting the most boring things you can do with a computer. And while you can earn big bucks working for a Swiss bank, I have always preferred topics with a more physical background.

    But BitCoin got me interested in how some aspects of the established financial systems work. Looking at the bitcoin price fluctuations, I long suspected that it should be possible to write a trading agent to exploit the volatility. It could follow some fixed pre-programmed rules, or find the rules by itself using machine learning. All the data it would need to work on is easily available.

    Last summer, btcrobot launched, a service that promised just that. They have a subscription model, and I’m sure that even if it doesn’t work out, they still gain and the users lose. I didn’t really want to pay hundreds of dollars just to find out whether it works. And to be honest, the whole site smelled like a scam.

    So I completed the Coursera class “Computational Investing 1”. It was more about portfolio management and algorithmic trading of stocks, but a lot of the material can be applied to currency trading, and in particular to bitcoin, as well. In the homework assignments we built a small trading agent and portfolio optimizer. The main metric we used was the Bollinger Bands technical indicator.
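    The indicator itself is simple to compute: a rolling mean of the price, plus an upper and a lower band at some number of standard deviations. A minimal sketch in C++ (window size and band width are the common defaults, not anything prescribed by the class):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Bands { double lower, middle, upper; };

    // Bollinger Bands over the last `window` prices: middle = rolling mean,
    // upper/lower = mean +/- k standard deviations; assumes prices.size() >= window
    Bands bollinger(const std::vector<double>& prices, std::size_t window = 20, double k = 2.0)
    {
    	const std::size_t begin = prices.size() - window;
    	double mean = 0.0;
    	for(std::size_t i = begin; i < prices.size(); ++i)
    		mean += prices[i];
    	mean /= window;

    	double var = 0.0;
    	for(std::size_t i = begin; i < prices.size(); ++i)
    		var += (prices[i] - mean) * (prices[i] - mean);
    	const double sigma = std::sqrt(var / window);

    	return Bands{mean - k * sigma, mean, mean + k * sigma};
    }

    A simple mean-reversion agent would then, for example, sell when the price crosses above the upper band and buy when it drops below the lower one.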

    So I started implementing a bitcoin trading agent that would use Bollinger Bands. I didn’t want to start completely from scratch, so I skimmed through GitHub and SourceForge for a starting point. I selected funny-bot and started extending it. But soon my interest switched to other projects. Remember, finance is not my primary interest. Over the last months I kept an eye on the exchange rates, trying to see how such an agent might perform. And I think it would be very difficult to tune, at least without experience in that field.

    Last week I found out again that I suck at trading. The bitcoin price started rising like crazy. I thought if it goes up so fast, it must come down again. In a rush, I sold some of my bitcoins, planning to buy back in after the price crashed. But the price kept rising, and I would have gotten a lot more if I had sold them just two days later. Apparently I was not alone with my false prediction.

  • sniffing i2c with the BusPirate

    I received my BusPirate v4 a while ago, but didn’t really use it so far. It’s a cool analysis and debugging tool for serial buses such as UART, SPI, I2C and the like. For me, I2C is the most interesting. From time to time the communication doesn’t work as it should, and so far I worked such problems out by trial and error. I hope the BusPirate can help in such situations in the future. So, here is my first test run.

    The BusPirate is controlled through a textual UART interface:

    minicom -D /dev/ttyACM1 -b 115200

    When you connect to it, it performs a self test, and then you can choose the mode by entering m. In my case, that’s 4 for I2C. Next I get to choose between a hardware and a software implementation. I don’t know the implications yet, but what I see is that hardware offers higher speeds and locks up more often. Then I get to choose the bus speed; 100kHz is the standard. With ? you can always get a list of possible commands. (0) shows a list of available macros. (1) scans all possible addresses for connected devices, just like i2cdetect does on the computer. (2), finally, is what I was after: the I2C sniffer.

    I was actually hoping it could find out why I’m having problems reading back a simple value from an AtMega8 to a RaspberryPi. The AtMega8 is at address 0x11, and the command to read the value is 0xA1. I verified over a serial connection to the AtMega8 that it holds a proper value, but on the RaspberryPi I always get a 0. At least the command is received on the AVR, as I could verify with the UART; writing the value back is the problem. So here is what the sniffer outputs for the attempted read:

    [[[][]][[[0x01+][0x04-[][[0x20+][[[[[][0x20-[][0x4C+][0x04-[][0x24+][0x20-][]]]

    Let’s decipher those numbers. Plus means ACK and minus means NACK. An opening square bracket means a start bit, and a closing square bracket means a stop bit. The expected sequence would be 0x22 (the address for sending to the AVR), 0xA1 (send back which value), 0x23 (the address for receiving from the AVR), 0x08 (or whatever value was stored on the AVR). But the above output doesn’t look like this at all. So, let’s try to communicate from the BusPirate to the AVR directly. Here we go: (more…)
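    Using the BusPirate’s I2C syntax from above (brackets for the start and stop bits, r to read a byte back), the direct attempt should look something like this; the exact prompt output will vary:

    I2C> [0x22 0xA1]
    I2C> [0x23 r]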

  • The crapware platform

    I have complained many times that there is no standard package manager on Windows, and that installing and especially upgrading software on that platform is an unholy mess. On my office computer there are probably close to ten different mechanisms present to keep different software packages up to date. Some lurk in the system tray, and most of them constantly waste resources. The update mechanism of our software is a little better than most in that respect: it doesn’t waste resources while not in use, but it’s still a separate proprietary solution. And the worst part is that most of the software on typical Windows systems doesn’t get updated at all.

    I have looked many times for a solution as simple, elegant and powerful as apt-get. The best I found so far is Npackd. It’s still a decade short of the Debian system, but better than anything else I found. The repository has grown significantly in the years I have used it. But even though Npackd implements dependency management, the packages rarely make use of it. It’s just not the way Windows packages are made. Rather than managing their dependencies, they keep inventing new versions of DLL hell.

    I don’t know why upgrades in Npackd frequently fail. Usually the uninstall of the old version fails, and thus the update stops. What I usually did in the past was install the new version in parallel. I think there is not much Npackd could do about Windows Installer packages failing to uninstall. Having crafted Windows Installer packages myself, I know how brittle and error prone this technology can be.

    Today I upgraded some packages that Npackd flagged as upgradeable. You select the ones you want to bring up to date, and click update. It’s not like “sudo apt-get upgrade” and done, but it still makes Windows a lot more bearable. And for a long time the quality of the packages was good, at least by Windows standards. It started out with mostly open source projects and a few big name packages. The crapware that is so stereotypical for the Microsoft platform stayed out.

    That impression changed today. One of the packages I upgraded was IZArc, a compression package with nice Windows Explorer integration. Already during the upgrade process I had a strange feeling when I saw the ads in the installer window. And when it was done, I was certain something fishy had happened. Windows popped up wanting to install browser toolbars, change the default search engine and scan the computer for possible improvements. Holy shit, I thought, is this some scareware? I would expect this from some random shareware downloaded from a shady page, but not from Npackd.

    And that’s my main point. When you install software on your computer, you trust the issuer not to hijack your system. And if you install software through a software repository, you trust the repository even more. On Windows, you’re pretty much dependent on the many individuals and companies involved in the creation of all the packages you install. There is a Microsoft certification process, but I don’t know what it checks and entails. There is also the possibility to sign your packages with a key signed by Microsoft, but that merely protects against tampering between the issuer and you.

    With open source software, however, you can examine the source code yourself, and rely on the fact that other people have checked it as well. Most distributions have build hosts that compile and sign the binary packages. To be included in the repository, a maintainer has to take responsibility for the package and upload a signed source package. The source package can be verified by everyone. So the only thing you have to trust is the build host, and even that you could verify by building the package yourself and comparing the result. The whole thing is fully transparent. Hence, if one individual decided he wanted to earn some bucks from advertising and bundling crapware, he wouldn’t get very far. As a nice add-on, apt (or synaptic for that matter) can tell you exactly which files get installed to which location, for every package in the system.

    Just as a side note: crapware is the unwanted software that is pre-installed when you buy a new computer, or that is sneaked onto your computer when you install Oracle’s Java. When I bought my netbook, I booted Windows exactly once to see how much crapware they had bundled, before wiping the disk and installing Ubuntu. Needless to say, no such problems exist on the Linux side.

    So I checked “Programme und Funktionen” (“Programs and Features”) in the system settings. That’s one of the configuration items that changes its name and appearance with every version of Windows. I found about seven unwanted packages with today’s installation date. I removed them immediately, and I can only hope that they didn’t install additional malware.

  • AtTiny Advent Wreath

    An advent wreath in late spring, you ask? Yes, the timing is a bit off, and that’s not just because the coldest spring in ages has not finished yet. While browsing for the topic of my last post, I discovered a nice little one-evening project: Geeky advent from tinkerlog.

    I had all the required parts here, so I just gave it a try. The adaptation from the AtTiny13 to an AtTiny45 was straightforward. But finding the right threshold value for the ambient light sensor was a bit trickier, especially as the ADC didn’t work at first. That was probably down to differences between the two AtTinys. But once I had configured the ADC properly for the AtTiny45, I flashed it a couple of times with different values, and turned the room light on and off, until I had a good threshold value.

    It’s interesting how the flickering is done with random values and manual PWM. Especially intriguing is how one of the LEDs is used to sense the ambient light. To save battery power during the day, the chip goes to sleep and waits for the watchdog timer to wake it up. It then senses the ambient light. If it is bright, it goes straight back to sleep; if it’s dark, it lights up the LEDs. Going through the four modes for the four weeks of advent is done by resetting, or just briefly disconnecting the power from the battery.
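    The sleep-and-check cycle is the part worth remembering. Here is a rough sketch of the idea for an AtTiny45 with avr-gcc, not tinkerlog’s actual code; the register setup is abridged, the ADC channel is an assumption, and the threshold of 100 is just a placeholder for the value you tune yourself:

    #include <avr/interrupt.h>
    #include <avr/io.h>
    #include <avr/sleep.h>

    ISR(WDT_vect) { } // empty: the interrupt only serves to wake the chip

    static uint16_t read_light(void)
    {
    	// measure the photovoltage on the sensing LED, here on channel ADC2
    	ADMUX  = (1 << MUX1);                 // select ADC2, Vcc as reference
    	ADCSRA = (1 << ADEN) | (1 << ADPS2);  // enable ADC, clock/16
    	ADCSRA |= (1 << ADSC);                // start a single conversion
    	while(ADCSRA & (1 << ADSC))
    		;                                 // busy-wait until it completes
    	return ADC;
    }

    int main(void)
    {
    	// watchdog in interrupt mode with the longest timeout (about 8s)
    	WDTCR = (1 << WDIE) | (1 << WDP3) | (1 << WDP0);
    	sei();

    	for(;;)
    	{
    		set_sleep_mode(SLEEP_MODE_PWR_DOWN);
    		sleep_mode();                     // sleep until the watchdog fires

    		if(read_light() > 100)            // bright: placeholder threshold
    			continue;                     // daylight, go straight back to sleep

    		// dark: flicker the LEDs for a while (PWM loop omitted)
    	}
    }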

    But now I look forward to the coming summer, before we can put the mini advent wreath to use…

    As my modified code is so similar to the original, it’s not really worth creating a project on GitHub for it. So I just pasted the code below.

    (more…)

  • AtMega breadboard header

    A while ago, I ordered some AtTiny breadboard headers from tinkerlog.com. Unfortunately, they didn’t have any boards for AtMegas left. The ones for the AtTinys are very handy, and I used them whenever I prototyped something with an AtTiny; in fact, I used one almost every time I flashed an AtTiny. Many times I wished I had one of these tiny boards for the AtMegas, and at some point I even forgot that they existed. Oftentimes I just included an ICSP header on the stripboard.

    Last week I decided I must have such a board for the AtMegas as well, and created one with a bit of stripboard. The wiring is not pretty, but the device works well, and is a real help when prototyping.

    Fritzing layout on GitHub

  • Jumping ship as google is getting evil

    For many years, Google stood out among the big IT enterprises as an example of respecting their users and embracing open standards. Sadly, they are drifting away from that, and it looks as if they want to become as insidious as the others. Gone are the times of “don’t be evil”.

    Just recently, I wanted to upload a video to YouTube. For some reason, it didn’t let me unless I created a Google+ profile. WTF!!! This is insane. I don’t want your creepy, time wasting social platform. I just wanted to upload a video!

    Instant Messaging

    This week I read in the news that they want to abandon XMPP in GoogleTalk. This one is even worse. After Skype was assimilated by the evil empire, I was happy to have found a better alternative. Long before that, I was unhappy with the closed, proprietary nature of Skype. And with GoogleTalk I was not forced to use some crappy piece of proprietary software to chat with my friends.

    The good thing about such events is that every time I learn a little something about the underlying technology. For instance, I just learned about the federated nature of XMPP. So far, I had only used it to communicate with people on the same server. I thought about running my own XMPP server, but instead I created a Jabber account at FSFE today. With this I’m able to chat with people on any standards-conforming XMPP server. Sadly, Google will soon no longer be one of them. My XMPP address is ulrichard@jabber.fsfe.org if you want to connect.

    PIM Synchronization

    Next up was contacts and calendar synchronization. I used to do this directly via infrared or bluetooth between phone and computer. But when I bought my Android phone, it was just sooo convenient to use the Google services. That even outweighed the unease of having my private data on their servers. So far it has always been easy to download all my PIM data from the Google website to make backups and be prepared to use it somewhere else, just in case… Thanks for that, Google, that is exemplary! But who knows what they are up to next.

    To be prepared, I looked for alternatives that offer the same ease of use while regaining control of my data. I didn’t want to set up a box with owncloud or something similar, as I already have an Ubuntu server running. It didn’t take much duckducking to find davical. The installation is easy, but I’m sure the Debian package could be made in a way that none of the manual configuration would be necessary. It used to handle only calendars, but recent versions also added contacts.

    Setting it up in Evolution was also quickly done. The only thing to figure out was that, as far as Evolution is concerned, CardDAV and WebDAV are the same thing (CardDAV being an extension of WebDAV).

    Contrary to my expectation, Android 4.0 has native support for neither CardDAV nor CalDAV. But the apps from the market work well. I use CalDAV-Sync and CardDAV-Sync, both from the same developer. They nicely synchronize the built-in address book and calendar. He promised to open-source them once he cleans up the code. I also tried Caldav Sync Free, which is already open source, but it currently only supports one-way sync.

    eMail

    I never used my Gmail for much other than mailing lists. For the most part, I use my paraeasy address over IMAP. It is hosted at a regular provider and works well enough. It would offer webmail, but who needs webmail anyway? I have made more than one attempt to set up an email server on my own machine, but so far I’m not confident enough to use it publicly. From what I know, correctly maintaining an email server, and not ending up on a blacklist, is more difficult than running a webserver, because spammers are happy to abuse it if it is configured incorrectly. I tried different tutorials with varying degrees of success; some are actually quite intimidating. Today I found iRedMail, which seems very easy to set up, but it is meant only for freshly installed servers and includes some components that live outside of the repositories. As my server has been running for some years, has been upgraded multiple times and runs a variety of services, it didn’t go so well. I will probably keep trying, but it still has no priority.

    Files

    I don’t use GoogleDrive, UbuntuOne, Dropbox, Wuala or anything similar to synchronize files. While I appreciate the ease of use, git suits me better. That’s right, git, the distributed version control system, a developer tool. It was not developed to synchronize the home folder, but it works very well for that purpose. Long before anybody talked about file synchronization (other than rsync and its alikes), I used Subversion. I admit there is some typing involved, but I have full control, full offline history, and can compare revisions, all on any device. And yes, I run Debian inside my Android phone. The master repository lives on my own server and is protected with a key that is stored on a smart card.

    Conclusion

    It’s sad to see Google services deteriorate, but there are alternatives.