pero on anything

Change Keyboard Shortcuts in Gnome-Shell (Gnome 3)

I really love Gnome 3 and its Shell, but I almost went nuts on this one. It really took me a while to finally figure it out. To make a really short blog post even shorter: use dconf-editor (from the dconf-tools package) and go to org.gnome.desktop.wm.keybindings. There you have it. Why would I want to change keyboard […]
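The same change can be scripted instead of clicking through dconf-editor, using the gsettings command against that schema; the key below (switch-to-workspace-1) is just one example of the keys it contains:

```
gsettings set org.gnome.desktop.wm.keybindings switch-to-workspace-1 "['<Super>1']"
```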

Back online

After a few sudden server deaths, this blog is back online. Which does not necessarily mean that I am back online writing new posts. That said, there is a lot in my mind that might be worth blogging about, so stay tuned.

Automated performance degradation tests with JUnit4

There are several extensions to JUnit that provide means to test performance, like JUnitPerf or p-unit. But it is hard to formulate the right assertions. What if the test runs on a beefier machine or in another environment? I just want to answer a simple question: Did performance degrade? (And if so, when?) […]
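The idea behind the question can be sketched without any JUnit extension: persist a baseline timing per machine, and fail only when the current run is significantly slower than that baseline, so a beefier machine never trips the check. The sketch below is a minimal illustration of that approach, not the post's actual implementation; the class name, file handling and 20% tolerance are made up:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal sketch of a degradation check: compare the current run time
// against a persisted per-machine baseline and flag a regression only
// when the slowdown exceeds a tolerance, so faster machines never fail.
public class PerfCheck {

    // True when current is more than `tolerance` slower than baseline,
    // e.g. degraded(130, 100, 0.2) -> 130 > 120 -> true.
    static boolean degraded(long currentMillis, long baselineMillis, double tolerance) {
        return currentMillis > baselineMillis * (1.0 + tolerance);
    }

    // Reads the stored baseline, or records the current run as the new
    // baseline when none exists yet (first run on this machine).
    static long baseline(Path file, long currentMillis) throws IOException {
        if (Files.exists(file)) {
            return Long.parseLong(Files.readString(file).trim());
        }
        Files.writeString(file, Long.toString(currentMillis));
        return currentMillis;
    }

    public static void main(String[] args) throws Exception {
        Path file = Files.createTempFile("perf-baseline", ".txt");
        Files.delete(file); // start with no baseline on record

        long firstRun = 100; // pretend measurement in ms
        long base = baseline(file, firstRun);
        System.out.println("degraded after first run: " + degraded(firstRun, base, 0.2));

        long slowRun = 130; // 30% slower than the stored baseline
        System.out.println("degraded after slow run: " + degraded(slowRun, baseline(file, slowRun), 0.2));
    }
}
```

In a real test the measured block would wrap the code under test, and the baseline file would live in the repository or a known directory rather than a temp file.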

Integrating MySQL and Hadoop – or – A different approach on using CSV files in MySQL

We use both MySQL and Hadoop a lot. If you utilize each system to its strengths, this is a powerful combination. One problem we are constantly facing is making data extracted from our Hadoop cluster available in MySQL. The problem: Look at this simple example. Let’s say we have a table customer: CREATE […]
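The title hints at MySQL's CSV storage engine: a table declared with ENGINE=CSV is backed by a plain text file in MySQL's data directory, so a CSV file produced by a Hadoop job can in principle be swapped in directly. A rough sketch of the idea, assuming a made-up table (note that CSV-engine tables require all columns to be NOT NULL):

```
-- A CSV-engine table is stored as <datadir>/<db>/customer.CSV,
-- a plain text file that can be replaced with Hadoop output.
CREATE TABLE customer (
  id   INT          NOT NULL,
  name VARCHAR(255) NOT NULL
) ENGINE=CSV;

FLUSH TABLES;  -- after swapping the underlying .CSV file
```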

MySQL Connector/J randomly hanging at com.mysql.jdbc.util.ReadAheadInputStream.fill

In the past months we struggled with large SELECT queries that just got stuck at:

com.mysql.jdbc.util.ReadAheadInputStream.fill(
com.mysql.jdbc.util.ReadAheadInputStream.readFromUnderlyingStreamIfNecessary(
- locked com.mysql.jdbc.util.ReadAheadInputStream@cb9a81c
com.mysql.jdbc.MysqlIO.readFully(
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(
com.mysql.jdbc.MysqlIO.reuseAndReadPacket(
com.mysql.jdbc.MysqlIO.checkErrorPacket(
com.mysql.jdbc.MysqlIO.sendCommand(
com.mysql.jdbc.MysqlIO.sqlQueryDirect(
com.mysql.jdbc.ConnectionImpl.execSQL(
- locked java.lang.Object@70cbccca
com.mysql.jdbc.ConnectionImpl.execSQL(
com.mysql.jdbc.StatementImpl.execute(
- locked java.lang.Object@70cbccca
com.mysql.jdbc.StatementImpl.execute(
org.apache.commons.dbcp.DelegatingStatement.execute(
org.apache.commons.dbcp.DelegatingStatement.execute(

Whenever this happened we just restarted the Tomcat server and everything was fine again […]
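The excerpt cuts off before the fix, but a common mitigation for a read blocking forever on a dead or half-open connection is to set a socket timeout in the Connector/J URL, so the blocked read eventually fails with an exception instead of hanging the thread. The host, database and value below are illustrative, not the post's actual settings:

```
jdbc:mysql://db.example.com:3306/mydb?socketTimeout=300000
```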

Improve performance on small hadoop clusters

Hadoop is designed to run on huge clusters containing several hundred machines. But some people just don’t need such a big cluster and can still reap the benefits of HDFS and MapReduce on a smaller scale. We managed to improve the performance of our 10-node test cluster by almost 100% by adjusting the heartbeat intervals. Namenode and […]
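The excerpt stops before the actual values, but as one example of such a knob, the HDFS datanode heartbeat is controlled by dfs.heartbeat.interval in hdfs-site.xml (3 seconds by default); the value below is purely illustrative, not the post's setting:

```
<!-- hdfs-site.xml: datanode heartbeat interval in seconds (default 3) -->
<property>
  <name>dfs.heartbeat.interval</name>
  <value>1</value>
</property>
```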

“Internet slow” on Ubuntu Karmic Koala (9.10)

“Internet slow” actually means “DNS slow”. After upgrading to Ubuntu 9.10 I experienced a strange and very annoying lag in DNS resolution. Running dig in a shell worked like a charm, but Firefox, Synaptic and everything else was hanging at DNS resolution. To make a long story short (you probably read a lot of forum […]

collectd + drraw.cgi – zoom into your graphs like you used to with cacti

I fell in love with collectd and drraw.cgi (a front-end for the RRD files collectd writes). This combination is great: fast, simple, and yet sufficient. But there was one thing I missed in drraw that I loved in cacti: zooming. (This is how it looks in cacti.) So I went on and hacked it into drraw.cgi using jQuery. […]

Linux: Executables on a Samba/CIFS Share

Just a quick note: don’t mount a CIFS share with the directio flag if you want to execute binaries that reside on that share. Otherwise you will get the following error: <command>: cannot execute binary file. Took me 2 hours to find out.
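For completeness, a working mount entry simply leaves that flag out of the options list; the server, share, mount point and credentials below are placeholders:

```
# /etc/fstab — note: no "directio" among the options
//fileserver/share  /mnt/share  cifs  username=user,password=secret  0  0
```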

Simulating indexes in Hadoop

You should not try to use Hadoop as a “drop-in” replacement for your current (R)DBMS. That said, it is still possible to utilize the power of cluster computing while circumventing its weaknesses when it comes to ad-hoc or real-time queries. We use Hadoop as an on-line system tightly integrated with our application and use it […]
