ScroogeCoin

After a long pause, I just started attending a MOOC again. It’s on Coursera, from Princeton, and it’s about Bitcoin. In one of the first lectures the teacher goes through some simple hypothetical digital coin concepts. I don’t know if the lectures are publicly available individually, but as a whole they are on YouTube. Jump to minute 50 for ScroogeCoin.

Name         Property              Problem
GoofyCoin    signed receipts       double spend
ScroogeCoin  centralized blocks    corruption
Bitcoin      fully decentralized   solved
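The essence of ScroogeCoin, as I understood it from the lecture, is an append-only ledger of blocks tied together with hash pointers and signed by Scrooge. A small Python sketch of such a hash chain (the structure and names are mine, not from the course, and Scrooge’s signature is left out):

import hashlib
import json

def block_hash(block):
    """Hash the block's canonical JSON representation."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    """Append a block that points to the hash of the previous block.
    In ScroogeCoin, Scrooge would also sign the new block (omitted here)."""
    prev = block_hash(chain[-1]) if chain else None
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify_chain(chain):
    """Tampering with an earlier block breaks every later hash pointer."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger = []
append_block(ledger, [{"from": "Scrooge", "to": "Alice", "amount": 10}])
append_block(ledger, [{"from": "Alice", "to": "Bob", "amount": 4}])
print(verify_chain(ledger))                          # True
ledger[0]["transactions"][0]["amount"] = 1000
print(verify_chain(ledger))                          # False: tampering is detected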


ScroogeCoin reminds me a lot of the blockchain projects that several big financial institutions have announced over the last few months. They talk about permissioned blockchains, which sounds like exclusive access and centralized control. Bitcoin’s inclusiveness is one of its important characteristics, and I hope enough people recognise it as such.


Computational neuroscience class

This year didn’t start out so great for my online classes. I signed up for and started a bunch, but have quit all but one so far. Some were not as interesting as I thought, some didn’t contain enough new material, or what they covered was too different from what I expected; I just couldn’t motivate myself to invest the time and effort to complete them. Maybe it’s not as exciting as it was for the first few classes, or maybe these teachers are just trying out a new channel and are not as determined and enthusiastic about this new form of education. For me personally, the first MOOC that I completed, the Introduction to AI, is still the best.

Finally I found a class that I was keen enough to complete: one about computational neuroscience. I had read some books about neurology before and was familiar with the basic structure of neurons and synapses, as well as with some neurotransmitters such as GABA. But the details of ion channels and their behaviour were new to me. The calculations with spike voltages and spike-triggered averages were very interesting; they highlighted to me just how simplified the common perceptron neural network models are. The second part of the class, which was more about applying the insights from biological neuroscience to artificial intelligence and machine learning, was more familiar and partly repetition.
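As a rough illustration of the kind of calculation from the first part, a spike-triggered average can be computed in a few lines of NumPy (the stimulus, the toy neuron and the window length are all made up, not taken from the course):

import numpy as np

def spike_triggered_average(stimulus, spike_times, window):
    """Average the stimulus over the `window` samples preceding each spike."""
    snippets = [stimulus[t - window:t] for t in spike_times if t >= window]
    return np.mean(snippets, axis=0)

rng = np.random.default_rng(0)
stimulus = rng.normal(size=10_000)               # white-noise stimulus
spike_times = np.flatnonzero(stimulus > 2) + 1   # toy neuron: spikes one step after a strong input
spike_times = spike_times[spike_times <= len(stimulus)]
sta = spike_triggered_average(stimulus, spike_times, window=50)
print(sta[-1], sta[:10].mean())  # the sample just before each spike stands out, the rest averages near 0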

GPGPU programming class

It’s already a while back that I completed the Coursera class “Heterogeneous Parallel Programming“. It was mainly concerned with CUDA, Nvidia’s GPGPU framework. GPGPU is about running general-purpose computations on the graphics card. The class also briefly covered OpenCL, OpenACC, C++ AMP and MPI.

In the programming assignments, we juggled a lot of low-level details such as distributing the workload across thread blocks, something I had hardly had to care about when using OpenCL so far. After seeing CUDA and OpenCL, it was a small surprise that C++ AMP is indeed a more convenient programming model, and not just a C++ compiler for the graphics card. Let’s hope it gets ported to other platforms soon.

The most eye-opening revelation for me was that it is possible to parallelize prefix sum computation. When I was first presented with the problem, I thought it was a showcase for serial execution, but apparently it’s not. Making it parallel is a two-step process: first split the input into blocks and compute each block’s local scan using something like a tree-shaped reduction, in parallel. Once you have the running totals at the block boundaries, it becomes obvious how to parallelize the rest: every block just adds its offset.
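Here is that two-phase idea sketched in plain Python; it runs sequentially, but the per-block scans and the offset additions are exactly the parts that run in parallel on the GPU:

import numpy as np

def blocked_prefix_sum(values, block_size):
    """Inclusive prefix sum in two phases, mirroring the GPU approach."""
    blocks = [values[i:i + block_size] for i in range(0, len(values), block_size)]

    # Phase 1: scan each block independently (done in parallel on the GPU,
    # typically with a tree-shaped scan inside each thread block).
    local_scans = [np.cumsum(b) for b in blocks]

    # Scan the per-block totals to get the offset each block has to add.
    block_totals = np.array([s[-1] for s in local_scans])
    offsets = np.concatenate(([0], np.cumsum(block_totals)[:-1]))

    # Phase 2: every block adds its offset to all of its elements (again parallel).
    return np.concatenate([s + off for s, off in zip(local_scans, offsets)])

data = np.arange(1, 11)
print(blocked_prefix_sum(data, block_size=4))   # [ 1  3  6 10 15 21 28 36 45 55]
print(np.cumsum(data))                          # same result, computed serially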

accelerated ray tracer

Among all the great online classes I attended over the last year, there was one topic missing. Finally I found an offering for a computer graphics class. After all, that’s the field I’ve been working in for the last five and a half years. The class is offered at edx.org and is from Berkeley. It’s the first class I’m taking from edX, and the style is comparable to Coursera and Udacity.

The first part of the class was concerned with OpenGL, and we implemented an interactive scene viewer. Although I hadn’t worked directly with regular OpenGL before, only with WebGL, which is based on OpenGL ES, it was mostly repetition. Nonetheless it was good training for working with homogeneous coordinates and matrices in different orderings. For grading, we had to produce 12 screenshots of the same scene with different transformations. Once the viewer was implemented, I only had to change the order of some transformations to get all the images right.
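The order dependence is easy to demonstrate with homogeneous coordinates; a small NumPy example (my own illustration, not part of the assignment):

import numpy as np

def translation(tx, ty):
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(deg):
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

p = np.array([1.0, 0.0, 1.0])           # the point (1, 0) in homogeneous coordinates
T, R = translation(2, 0), rotation(90)

print(R @ T @ p)    # translate first, then rotate: ~(0, 3)
print(T @ R @ p)    # rotate first, then translate: ~(2, 1)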

The second part was concerned with ray tracing. Even though I was familiar with the basic concept, working with it was new to me, and in the class we had to build a ray tracer from scratch. The theory sounded straightforward, but somehow I was not so lucky in implementing it: in every new part I made some silly mistake. I didn’t develop it in an exemplary test-driven way, but I wrote unit tests for every key part I wanted to verify. With those in place I could usually find and correct the problems in time. For grading, we had to produce seven images.
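One of the building blocks that has to be right is the ray-sphere intersection; here it is sketched in Python with NumPy (the class framework and naming were different, this is just the math):

import numpy as np

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t to the nearest hit along the ray, or None.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t."""
    oc = origin - center
    a = np.dot(direction, direction)
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                        # the ray misses the sphere
    t = (-b - np.sqrt(disc)) / (2.0 * a)   # nearer of the two roots
    return t if t > 1e-6 else None         # ignore hits behind the ray origin

origin = np.array([0.0, 0.0, 0.0])
direction = np.array([0.0, 0.0, -1.0])
center = np.array([0.0, 0.0, -5.0])
print(intersect_sphere(origin, direction, center, radius=1.0))   # 4.0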

machine learning class

Another one of these incredibly interesting online classes came to an end. Machine learning was one of the first two classes that started last fall. As I thought taking AI and ML in parallel would be too much, I opted for AI with the intention of doing ML later. The second round of ML was announced for January, but actually started in April. Andrew Ng from Stanford, whom some people call a rock star in ML, taught the class. The videos were longer than what I was used to, and I downloaded them to my Android phone to watch on the train to work. The homework consisted of review questions and programming assignments in Octave. The last time I did something with MATLAB was more than ten years ago, and I remembered nothing of it.

The class started with gradient descent and logistic regression, and almost everything that followed was compared against and related to them.
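The essence of those first lectures fits into a few lines of NumPy; here is a rough sketch of batch gradient descent on the logistic regression cost, with a made-up toy dataset (not the course’s Octave assignment):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iterations=2000):
    """Fit logistic regression weights by batch gradient descent."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iterations):
        h = sigmoid(X @ theta)
        gradient = X.T @ (h - y) / m      # gradient of the cross-entropy cost
        theta -= alpha * gradient
    return theta

# Toy data: the label is 1 whenever the single feature is positive.
X = np.column_stack([np.ones(100), np.linspace(-3, 3, 100)])   # bias column + feature
y = (X[:, 1] > 0).astype(float)
theta = gradient_descent(X, y)
print(sigmoid(X @ theta)[[0, -1]])   # close to 0 for the most negative sample, close to 1 for the most positive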

I had some prior experience in ML, but no formal training. At TCG I learned the basics from a co-worker, and then implemented a document classification engine using an SVM. I read many books on the topic. Later I developed a prediction system for good paragliding days and locations, again using an SVM as well as an evolutionary optimization.

Full disk encryption with the crypto stick

Last week I finished the Udacity applied cryptography course. I did not do as well as in the other courses, but I nonetheless learned a lot and it was (as always) really interesting. We learned about symmetric and asymmetric encryption and hashes, as well as key exchange and management. Each week, in addition to the regular homework, we got a challenge question. For most of them I invested some time but then had to surrender; still, I managed to complete some of the challenges. The most fun for me was a side-channel attack on the Diffie-Hellman key exchange protocol: we had information on how many multiplications were required for the fast exponentiation of the RSA key on one end, and that was enough to decipher the secret message. It was a good illustration of what has to be taken into account when implementing real-world cryptographic algorithms, and it reminded me of how some smart cards were hacked by closely monitoring their power consumption.
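The leak comes from the fact that square-and-multiply needs one extra multiplication for every set bit of the secret exponent. A toy illustration (the challenge itself was set up differently, this only shows the principle):

def pow_and_count(base, exponent, modulus):
    """Left-to-right square-and-multiply, counting the multiplications."""
    result, multiplications = 1, 0
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus       # one squaring per bit
        multiplications += 1
        if bit == "1":
            result = (result * base) % modulus     # extra multiply only for 1-bits
            multiplications += 1
    return result, multiplications

# Two exponents of the same bit length can be told apart by the count alone.
_, c1 = pow_and_count(5, 0b10000001, 1009)   # few 1-bits  -> few multiplications
_, c2 = pow_and_count(5, 0b11111111, 1009)   # many 1-bits -> many multiplications
print(c1, c2)   # 10 vs 16: the count reveals the Hamming weight of the exponent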

Now it was time to put my crypto stick to use. My netbook still ran Ubuntu Maverick because of its horrible graphics card (gma500), so I waited for the release of Linux Mint 13 LTS; the 3.3 kernel line already includes a poulsbo driver.

First I prepared the crypto stick according to this tutorial. After initially generating the keys on the stick for maximum security, I let myself be convinced to generate them on the computer instead, to be able to make backups. I have not been able to regenerate the authentication key so far, and thus I can’t use it for SSH at the moment. I’m still looking for a solution to that.

Then I installed the operating system along with full disk encryption according to this tutorial. At first it didn’t work, but then I discovered that a mount command was missing in the tutorial, so the generated ramdisk was not written to the correct boot partition.

Here is how it works (as I understand it):

  • GRUB loads the kernel along with the initial ramdisk, which contains everything necessary to communicate with the card.
  • The ramdisk also contains the key file for the encrypted root partition. Upon entering the correct PIN, the smart card decrypts the key file (asymmetrically).
  • The key file in turn is used to decrypt (and encrypt) all accesses to the root partition symmetrically and on the fly (see the sketch below).
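The real chain involves the OpenPGP card, GnuPG and dm-crypt/LUKS, but the two stages can be illustrated conceptually in Python with the cryptography package (key sizes, padding and cipher mode here are my own choices for the sketch, not what the tutorial actually configures):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Setup: a random symmetric key file, stored encrypted to the card's public key.
card_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stands in for the key on the card
keyfile_plain = os.urandom(32)                                             # the symmetric key material
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)
keyfile_encrypted = card_key.public_key().encrypt(keyfile_plain, oaep)     # what sits in the ramdisk

# Stage 1 (boot): after the correct PIN, the card decrypts the key file.
symmetric_key = card_key.decrypt(keyfile_encrypted, oaep)

# Stage 2: that key decrypts (and encrypts) the root partition's data on the fly.
nonce = os.urandom(16)
sector = b"pretend this is a disk sector" + b"\x00" * 3
ciphertext = Cipher(algorithms.AES(symmetric_key), modes.CTR(nonce)).encryptor().update(sector)
plaintext = Cipher(algorithms.AES(symmetric_key), modes.CTR(nonce)).decryptor().update(ciphertext)
assert plaintext == sector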

It was new to me how to put stuff into the initial ramdisk (initramfs). Apparently the script that asks for the key and decrypts the key file, as well as the key file itself and all the other required pieces, can be added by installing a hook that is executed whenever a new ramdisk is created, for example when a new kernel is installed.

Not that I have anything stored on the hard disk that would require such a level of security, but it’s interesting to set it up and see it work in action. The crypto stick adds a fair bit of security: as it has a smart card built in, a trojan couldn’t get hold of the private key, and a 2048-bit key is much harder to crack than a password one can remember and type in every time.

Driving assistant

Recently I completed the Udacity class “Programming a Robotic Car”, where Sebastian Thrun taught us what makes self-driving cars tick. He drew from his experience of winning the DARPA Grand Challenge in 2005; now he’s leading the Google self-driving car project. It was a very interesting course. Some of the material was already covered in the ai-class, but in much more detail this time. We got homework assignments in Python that we could complete directly in the website’s integrated editor. So we implemented some of the key components in simplified form, namely Kalman filters, particle filters, motion planners with smoothing, and, last and most interesting, SLAM.
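To give an idea of the level of the assignments: the one-dimensional Kalman filter boils down to alternating a measurement update with a motion update. A sketch in the same spirit as the Python homework (the numbers and variances here are made up):

def kalman_1d(measurements, motions, measurement_var=4.0, motion_var=2.0):
    """Track a 1-D state as a Gaussian (mean mu, variance sigma2)."""
    mu, sigma2 = 0.0, 10_000.0              # start almost completely uncertain
    for z, u in zip(measurements, motions):
        # Measurement update: multiply the prior and measurement Gaussians.
        mu = (sigma2 * z + measurement_var * mu) / (sigma2 + measurement_var)
        sigma2 = 1.0 / (1.0 / sigma2 + 1.0 / measurement_var)
        # Motion update (prediction): add the motion Gaussian.
        mu += u
        sigma2 += motion_var
    return mu, sigma2

print(kalman_1d(measurements=[5.0, 6.0, 7.0, 9.0, 10.0],
                motions=[1.0, 1.0, 2.0, 1.0, 1.0]))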

A while ago an idea started forming in my head: today’s smartphones should be powerful enough to run some computer vision algorithms to help the driver identify obstacles, or warn him when he’s about to leave the lane. In fact, some premium cars already have such systems installed. First I looked in the Android Market but found nothing, so I started looking into how to integrate OpenCV on Android; I knew that part had been done before. I was not too keen to start yet another time-consuming toy project, as I’m very busy at the moment. A second, more extensive search of the Android Market revealed some apps, and I was relieved to find some that implemented just what I was thinking about. There are two that I installed on my phone and am currently testing, although I must confess that instead of increasing safety they can also be distracting.
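The lane-departure part of such an app typically starts with edge detection and a Hough transform; a rough OpenCV sketch of that pipeline (I don’t know what these apps actually use, and the parameters and input file name are placeholders):

import cv2
import numpy as np

def detect_lane_lines(frame_bgr):
    """Return rough line segments that could correspond to lane markings."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Only look at the lower half of the image, where the road usually is.
    mask = np.zeros_like(edges)
    mask[edges.shape[0] // 2:, :] = 255
    edges = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)

frame = cv2.imread("dashcam_frame.jpg")     # hypothetical input image
if frame is not None:
    print(detect_lane_lines(frame))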

Drivea

The first app I installed was Drivea. It may not be as polished as competing apps, but I like it when you have the feeling that you know how it works. On my Galaxy S it runs smoothly, without any problems other than some inaccuracies in the classifiers. It would be great if it were open source, so we could all learn from it and maybe even contribute to its evolution.

iOnRoad

A bit too shiny for my taste, but the core of it works really smoothly. The classifiers and filters are better tuned than in the competing apps I tested.

Playing with Smart-Cards

Ever since reading the book “Kryptographie und IT-Sicherheit”, where I first learned how smart cards work, I have wanted to do some smart card programming. The book describes some of the inner workings of smart cards, and that some of them have a small Java VM inside. But it turned out that getting started was not as easy as in many other fields. First of all, you carry many smart cards around (the SIM in your mobile phone, credit card, debit card, health insurance card, …), but they are usually locked down so you can’t install anything of your own. Technically it would be possible to have many applications on the same card, such as credit card, debit card, health insurance and public transport, but with very few exceptions the issuers don’t feel comfortable sharing a card with someone else. Then there seem to be many different standards, and the companies seem keen to obscure as much as they can. And you also need somewhat specialized hardware, but that’s the easier part.
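On the hardware side, the pyscard package makes it fairly easy to talk to a reader from Python; a minimal sketch that reads the ATR and sends a SELECT APDU (the AID below is a placeholder, not a real application identifier):

from smartcard.System import readers
from smartcard.util import toHexString

available = readers()                    # PC/SC readers known to the system
print("Readers:", available)

connection = available[0].createConnection()
connection.connect()
print("ATR:", toHexString(connection.getATR()))

# SELECT by AID (ISO 7816-4): CLA=00, INS=A4, P1=04, P2=00, Lc, then the AID bytes.
aid = [0xA0, 0x00, 0x00, 0x00, 0x00]     # placeholder AID, replace with a real one
select = [0x00, 0xA4, 0x04, 0x00, len(aid)] + aid
data, sw1, sw2 = connection.transmit(select)
print("Status: %02X %02X" % (sw1, sw2))  # 90 00 means success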
