Read encrypted emails via webmail?

I was recently asked how to read encrypted emails securely in an untrusted environment via webmail. Imagine you’re sitting at someone else’s computer and absolutely need to check your inbox for this one encrypted email which contains a password without which you can’t continue. Or you’re in some internet cafe and just got an important encrypted email – how would you handle that?

Actually, the only thing that comes to mind here is a combination of Portable Firefox and FireGPG on a (possibly encrypted) USB stick. This, of course, bears a couple of problems:

  1. If you don’t know which OS your “target” computer runs, you need to carry this “tandem” in at least three different binary versions – for Mac OS X, Linux and Windows. While this doesn’t sound too hard (three partitions on the same stick), it’ll probably be harder to encrypt all three and still have something “plug-and-mail-ready” for the target OS.
  2. If you use a non-standard webmailer (i.e. not a public service, but your own setup, like I have with roundCube Webmail), you won’t get really good integration with FireGPG (i.e. no interface buttons, auto-decryption and other niceties) unless the webmail software plans support for FireGPG. (roundCube targeted it for “later“.)
  3. And maybe the greatest show-stopper is the question: Is it really secure in untrusted environments? After all, GnuPG needs to load your private key into RAM to decrypt your message, and if it resides there unprotected (does it?), it could at any time be read out by some hidden daemon and boom, your private key would be compromised…

How would you solve this dilemma? A VPN to a trusted PC from which you send and receive emails?

If there are no other good solutions, then I guess people will have to choose between accessibility from everywhere and email security. And I bet they won’t choose security…

Change svn:externals quickly

If you’ve worked with external repository definitions and branches before, you probably know the problem: If you create a new branch off an existing one or merge one branch into another, Subversion is not smart enough to update svn:externals definitions which point into the same repository, but rather keeps them pointing at the old (wrong) branch. (I read they fixed that with SVN 1.5 by supporting relative URLs, but still, a couple of people might not be able to upgrade, and I’d rather stay explicit with external naming anyway.)
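
To illustrate with made-up paths: an absolute externals definition like the one below keeps pointing at branches/1.0 even after you copy that branch to branches/2.0, so every new branch needs its externals rewritten by hand.

$ svn propget svn:externals .
shared      http://svn.example.com/repo/branches/1.0/shared
tools/build http://svn.example.com/repo/branches/1.0/tools/build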

Anyway, today at work I was so sick of the problem that I decided I should hack something together. Here is the result:

#!/bin/bash
# Rewrite svn:externals definitions in the current working copy,
# replacing references to <old> with references to <new>
# (both given relative to the repository root).
export LANG=C
if [ $# -ne 2 ]
then
    echo "Usage: $(basename $0) <old> <new>"
    exit 1
fi

old=$1
new=$2
# determine the repository root of the current working copy
repo_root=$(svn info | grep "Repository Root" | cut -f3 -d" ")

# sanity check: the target URL must actually exist
if [ -n "$(svn info "$repo_root/$new" 2>&1 | grep "Not a valid URL")" ]
then
    echo "$repo_root/$new is not a valid URL"
    exit 1
fi

# walk all directories which contain external definitions
# (marked with "X" in svn status output)
for ext in $(svn st | grep -e "^X" | cut -c 8- | xargs -L1 dirname | uniq)
do
    externals=$(svn propget svn:externals "$ext")
    if [[ "$externals" == *$repo_root/$old* ]]
    then
        # replace all occurrences of the old with the new branch path
        externals=${externals//$repo_root\/$old/$repo_root\/$new}
        svn propset svn:externals "$externals" "$ext"
    fi
done

Save this into a file, make it executable and you’re good to go! The script is smart enough to check whether the target URL (based on the repository’s root and the given <new> path) actually exists, and it only changes those external definitions which actually match the repository root.
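
Usage then looks roughly like this (the script name and branch paths are made up; run it from the root of the working copy and commit the property changes afterwards):

$ cd ~/src/my-working-copy
$ ./switch-externals.sh branches/1.0 branches/2.0
$ svn commit -m "point externals at branches/2.0"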

Fun!

SSL Verification with Qt and a custom CA certificate

So I wanted to make the application updater for guitone SSL-aware the other day. The server setup was an easy job: add the new domain (guitone.thomaskeller.biz) to cacert.org, create a new certificate request with the new SubjectAltName (and all the other, existing alternative names – a procedure where this script comes in handy), upload the request to CAcert, sign it there, download and install the new cert on my server, set up an SSL vhost for the domain – done!

On Qt’s side of things, using SSL is rather easy as well – the only thing you have to do is give the setHost method another parameter:

QHttp * con = new QHttp();
con->setHost("some.host.com", QHttp::ConnectionModeHttps);
con->get("/index.html");
// connect to QHttp's done() signal and read the response

This should actually work for all legit SSL setups, provided Qt (or, to be more precise, the underlying OpenSSL setup) knows about the root certificate with which your server certificate has been signed. Unfortunately, CAcert’s root certificate is not installed in most cases, so you basically have two options:

  1. Connect QHttp’s sslErrors(...) signal to its ignoreSslErrors() slot (a short sketch follows below this list). This, of course, pretty much defeats the whole purpose of an SSL connection, because the user is not warned on any SSL error, so legitimate errors (an expired or malicious certificate) are silently ignored as well. (*)
  2. Make the root certificate of CAcert known to the local setup, so the verification process can proceed properly.
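
For completeness, option 1 is just a signal/slot connection – a minimal sketch, and not something you should ship:

QHttp * con = new QHttp();
// blindly accept any certificate - convenient, but insecure
QObject::connect(con, SIGNAL(sslErrors(QList<QSslError>)),
                 con, SLOT(ignoreSslErrors()));
con->setHost("some.host.com", QHttp::ConnectionModeHttps);
con->get("/index.html");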

I decided to do the latter. This is what the code should look like now:

QHttp * con = new QHttp();
QFile certFile("path/to/root.crt");
// the certificate file must be opened explicitly,
// QSslCertificate won't do that for us
if (!certFile.open(QIODevice::ReadOnly))
    qFatal("could not open root certificate");
QSslCertificate cert(&certFile, QSsl::Pem);
// this replaces the internal QTcpSocket QHttp uses; unfortunately
// we cannot reuse that one because Qt does not provide an accessor
// for it
QSslSocket * sslSocket = new QSslSocket(this);
sslSocket->addCaCertificate(cert);
con->setSocket(sslSocket);
con->setHost("some.host.com", QHttp::ConnectionModeHttps);
con->get("/index.html");
// connect to QHttp's done() signal and read the response

Particularly interesting to note here is that the QIODevice (in my case the QFile instance) has to be opened explicitly before it is handed to QSslCertificate. I didn’t do this at first; Qt gave me neither a warning nor an error, but simply refused to verify my server certificate, because it never loaded the root certificate properly.

(*) One could, of course, check the exact SSL error via QSslError::error() – in our case this would be e.g. QSslError::UnableToGetLocalIssuerCertificate – but this is rather hacky and could certainly be abused by a man in the middle as well.

Parallels

The Bundespräsident had no objections to signing the BKA law, heise online reports today, so it can come into force right on time on January 1, 2009. Gerhart Baum and Burkhard Hirsch (both FDP) as well as others have already announced that they will file a constitutional complaint against the law – it is to be expected that the law will at least be restricted further.

The parallels to last year are unmistakable: on December 27, 2007, Horst Köhler signed the data retention law, effectively a belated “Christmas present” to our revered Minister of the Interior (heise online). And that law was already cut down by the Bundesverfassungsgericht in the course of the year now ending, among other things as a result of the constitutional complaint filed by the Arbeitskreis Vorratsdatenspeicherung.

Which makes you wonder what might be due around this time next year – the electronic passport? The electronic health card? And how many more times does a ministry have to be reprimanded by the highest guardians of the constitution before the civil servants and ministers employed there understand that they need to draft their bills more carefully, so that they aren’t torn apart by privacy advocates and struck down by the judges in Karlsruhe yet again?

Will the trinity for passing security laws from now on always consist of Bundestag, Bundesrat and Bundesverfassungsgericht?

Questions upon questions – the only thing that is certain is that even in 2009, despite the surveillance mania and the data protection failures and embarrassments of the state and of large communication companies, it won’t get any easier to bring this topic closer to people. The demonstration in Berlin in October once again brought more people onto the streets than ever before, but in sheer numbers we are still far away from a real popular movement.

I wish all of us continued success, endless patience and strength, and a never-ending confidence that the efforts behind us, and those still ahead of us, will not have been in vain in the end.

Happy new year.

Server-side email filtering (update)

If you have more than one computer on which you regularly check your email (e.g. at home, at work and while you’re on the go) and you get a reasonable amount of (non-spam) emails every day, you probably know the problem: client-side email filters just don’t cut it.

Being a novice with all the mail software stuff, my initial simple idea was “hey, let’s look for a web-based procmailrc frontend” – but nothing I found really did the job. So I looked a bit further and stumbled across the Sieve mail filtering language (RFC). Here is an example Sieve file (taken from libsieve-php, a PHP Sieve library):

require ["fileinto"];

if header :is "Sender" "owner-ietf-mta-filters@imc.org"
{
    fileinto "filter"; # move to "filter" mailbox
}
elsif address :DOMAIN :is ["From", "To"] "example.com"
{
    keep; # keep in "In" mailbox
}
elsif anyof (NOT address :all :contains
        ["To", "Cc", "Bcc"] "me@example.com",
    header :matches "subject"
        ["*make*money*fast*", "*university*dipl*mas*"])
{
    fileinto "spam"; # move to "spam" mailbox
}
else
{
    fileinto "personal";
}

To make this work you need to set up an LDA (Local Delivery Agent) for your MTA (Mail Transfer Agent, such as Exim) which puts incoming emails into the local users’ mailboxes. This LDA reads in the Sieve script (which e.g. resides in the user’s home directory) and evaluates its expressions to figure out where the email should go.

Now, while such a script is already easier to read and understand than a cryptic procmail recipe, it’s far from perfect:

* Non-technical people will still have a hard time writing these rules
* Users need direct (i.e. FTP / shell) access to the mail server to edit the script, which is especially problematic if your email users are virtual

The solution: ManageSieve

ManageSieve is a protocol specification which is relatively new and still pretty much in flux, but it is gaining support quickly. It especially targets problem number two, i.e. it allows the management of Sieve scripts without giving a user shell access to the machine. ManageSieve clients authenticate with the IMAP login credentials and run their commands against a dedicated server port (usually 2000).
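
As a quick sanity check you can see whether anything is listening on that port at all (hostname made up here); a working server greets you with its capability list right away:

$ telnet mail.example.com 2000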

KMail has supported the ManageSieve protocol since KDE 3.5.9, and there is a Thunderbird plugin in the works. Being on a Mac, KMail was not an option, and while Thunderbird is my main email client, it is not the client of my girlfriend (who uses Apple’s Mail). Even if I had been able to persuade her to use Thunderbird again, it would have been a no-go area for her anyway: the Thunderbird plugin has no nice end-user interface as of now, but merely comes across as a managed script editor (though a “real” UI is planned).

So I was very happy to find out that somebody at least wrote a ManageSieve plugin for my webmail client of choice, roundcube.

The setup: Dovecot’s ManageSieve server + Dovecot’s Sieve plugin for deliver + Roundcube’s managesieve patch

Since the ManageSieve standard is not yet finalized, implementations tend to differ. The roundcube managesieve plugin was built around and only tested with Dovecot, a popular POP3/IMAP server, so my initial setup (the Exim/Courier IMAP tandem) didn’t fit. I quickly read Dovecot’s docs with respect to Exim integration and decided to give the Courier replacement a try, since it seemed well supported. This was supposed to be the easiest part – `sudo apt-get remove courier-imapd && sudo apt-get install dovecot-imap` – until I noticed that the Dovecot version shipped in Hardy (1.0.10) did not include the needed Sieve patches, so I had to compile and patch everything myself (again, since the ManageSieve specification is not yet finished, it’s not part of the main Dovecot distribution either). Luckily, the exact workflow – downloading and patching Dovecot, downloading managesieve, installing and configuring everything – is documented here.
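
For orientation, the relevant bits of dovecot.conf end up looking roughly like this – a sketch from memory of a 1.1-series setup with the managesieve patch; setting names vary between Dovecot versions (the patch adds a few more options for the script storage directory), so follow the linked documentation for the details:

protocols = imap imaps managesieve

protocol managesieve {
  # the dedicated ManageSieve port mentioned above
  listen = *:2000
}

protocol lda {
  # let deliver evaluate the user's sieve script
  mail_plugins = cmusieve
}

plugin {
  # the "active" script deliver executes for each incoming mail
  sieve = ~/.dovecot.sieve
}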

The final missing piece was roundcube. Downloading and applying the patch was a no-brainer and worked out as expected – after patching, a new “Filters” menu popped up in roundcube’s settings view.

Fine – until my first test showed that the rules weren’t applied. So I checked back on my server; everything seemed to be in place:

$ ls -lh .dovecot.sieve* sieve
lrwxrwxrwx 1 me me   21 2008-11-16 12:09 .dovecot.sieve -> sieve/roundcube.sieve
-rw------- 1 me me  560 2008-11-16 12:16 .dovecot.sievec

sieve:
total 16K
-rwx------ 1 me me  459 2008-11-16 12:09 roundcube.sieve
drwx------ 2 me me 4,0K 2008-11-16 12:09 tmp

(The ManageSieve specification allows activating and deactivating multiple existing Sieve scripts; Dovecot’s implementation does this by symlinking from .dovecot.sieve to the correct script in sieve/<scriptname>. The .dovecot.sievec file is the compiled, i.e. syntax-checked, version of the script – another implementation detail.)

And yes, my rule edits from within roundcube found their way into the file:

$ cat sieve/roundcube.sieve
require ["fileinto"];
# rule:[spam]
if anyof (header :contains "Subject" "*****SPAM*****")
{
fileinto "Trash";
}
[…]

Looking into the logfile of `deliver` (Dovecot’s LDA) shed some light into the darkness:

deliver(me): 2008-11-16 02:20:28 Info: msgid=: save failed to Trash: Unknown namespace
deliver(me): 2008-11-16 02:20:28 Info: sieve runtime error: Fileinto: Generic Error
deliver(me): 2008-11-16 02:20:28 Error: sieve_execute_bytecode(/home/me/.dovecot.sievec) failed

Since version 1.1, Dovecot’s `deliver` respects the IMAP `prefix` setting in dovecot.conf, which I had to set during my courier-imap → dovecot transition. This setting “virtually” prepends a string prefix like “INBOX.” (or something else) to all mailbox names. (The actual use case is to have distinct “public” and “private” IMAP folders via namespaces, but I don’t use that.)
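
For reference, the corresponding dovecot.conf snippet looks roughly like this in my setup (a sketch; separator and prefix depend on your previous Courier layout):

namespace private {
  separator = .
  prefix = INBOX.
  inbox = yes
}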

A simple example: if your Maildir folder structure looks like this on disk

cur
new
tmp
.Foo
.Foo.Bar

then this means that your mail client actually gets reported this structure on IMAP’s LIST command:

INBOX
INBOX.Foo
INBOX.Foo.Bar

This was also what was reported to roundcube, but roundcube’s IMAP code strips the INBOX. prefix for some reason, thus reporting the wrong mailbox path to the ManageSieve plugin – “Trash” instead of “INBOX.Trash”.

After diving a bit through roundcube’s PHP code I could fix the issue with the rather ugly use of a meant-to-be-private function of roundcube’s IMAP API (a patch is available here for the interested), but woohoo, now finally everything works as expected!

And again, a weekend is gone. The outcome? I can filter emails server-side and – I wrote this blog post. I feel it’s high time to do some more substantial things again…

Update: If you managed to set up server-side filtering and wonder why your favourite mail reader Thunderbird does not show you new emails in various IMAP target folders even though you’ve subscribed to them, make sure you’ve set the preference “mail.check_all_imap_folders_for_new” to “true” (source).
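
You can flip it in Thunderbird’s Config Editor, or – sketching the prefs.js route – add this line to your profile’s prefs.js while Thunderbird is closed:

user_pref("mail.check_all_imap_folders_for_new", true);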

Qt Creator

Wow, I absolutely did not see this coming – the Trolls^WNokians finally offer a lean and nice cross-platform IDE for Qt which incorporates all the other Qt tools and a gdb frontend! Formerly dubbed “Project Greenhouse”, the baby just got a new name and a fancy logo: Qt Creator. There is a pre-release version available for download, licensed under a special preview license. The final product should be dual-licensed though, like the rest of the Qt tools.

Oh what a happy day for Qt users (ever wanted to look at the value of a QString in gdb…?) and what a sad one for all the other free Qt IDEs out there, like edyuk or QDevelop. Especially edyuk looked very promising, since it provided a lot of features and a good user interface.

Global AJAX responders in Prototype

I encountered a small but ugly problem in our Symfony-driven project today: unauthenticated AJAX requests – which may happen e.g. when the session has timed out on the server, but the user hasn’t reloaded the page in the meantime – are also forwarded to the globally defined login module / action. This of course leaves the HTML page, which is assembled from individual HTML components, in a total mess. Ouch!

So yes, rendering the complete login mask HTML as a partial to the client is stupid, but also relatively easy to fix:

public function executeLogin($request)
{
    if ($request->isXmlHttpRequest())
    {
        // renderJSON is a custom function which json_encode's
        // the argument and sets an X-JSON header on response
        return $this->renderJSON(array("error" =>
                                    "Your session expired"));
    }
    ...
}
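
The renderJSON helper mentioned in the comment is our own little convenience method, not part of symfony itself – a sketch of how such a helper could look in the module’s base action class:

// hypothetical implementation of the custom renderJSON() helper
protected function renderJSON($data)
{
    // encode the payload into the X-JSON header which prototype
    // evaluates and passes as second argument to the callbacks
    $this->getResponse()->setHttpHeader('X-JSON', json_encode($data));
    // don't render any template or layout for this request
    return sfView::NONE;
}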

This was of course only half of the fix. I still had to handle this (and other) special JSON responses on the browser’s side:

new Ajax.Request("mymodule/myaction", {
     onSuccess: function(response, json) {
         if (json.error)
         {
             // display the error
             alert(json.error);
             return;
         }
         // the actual callback functionality
     }
});

Uh, anyone screaming “spaghetti code”? Yeah, you’re right. I quickly headed for a more general implementation, also because we can’t add this to a couple of symfony-specific prototype helpers anyway, like update_element_function, whose Javascript code gets generated by Symfony dynamically. So how can this be generalized?

Ajax.Responders to the rescue

Actually, prototype already contains some kind of “global hook-in” functionality for all Ajax requests triggered by the library: Ajax.Responders.

While this seemed to support all common callbacks (among them onCreate, onSuccess, onFailure and onComplete), some testing showed that e.g. the global onComplete callback was always called after the specific AJAX request’s onComplete callback, so this was pretty useless for me. After all, I also wanted to prevent the specific callback from being executed at all when I encountered an error…

After diving through prototype’s code for some hours I found a solution. Particularly helpful here is that prototype signals every created Ajax request to the onCreate handler and hands it the freshly created request object as an argument. Time to overwrite prototype’s responder code! Here it is:

Ajax.Responders.register({
    onCreate: function(request) {
        // wrap the request's own respondToReadyState so we get a
        // chance to inspect every response before the specific
        // callbacks are invoked
        var oldRespondToReadyState = request.respondToReadyState;
        request.respondToReadyState = function(readyState) {
            var state = Ajax.Request.Events[readyState];
            var response = new Ajax.Response(this);
            var json = response.headerJSON;

            // global error handling: if the X-JSON header carries
            // an error, display it and swallow the response
            if (state == 'Complete' && json && json.error)
            {
                alert(json.error);
                return;
            }
            // otherwise proceed with prototype's normal handling
            oldRespondToReadyState.call(response.request,
                                        readyState);
        }
    }
});

Another particularly useful piece of knowledge I picked up today while making this work is how Function.prototype.call and Function.prototype.apply work (both have been available since Javascript 1.3).
Basically, they allow executing a function in the scope of the object given as the first parameter (there is a nice introduction available here).

If you’ve ever wanted to “send an event to some object to make its listener fire”, because the listener’s code depends on the this reference pointing to the object the event was fired upon, you now have a viable alternative:

Event.observe(myObj, 'click', myHandler);
// is call-wise equivalent to
myHandler.call(myObj);

No need to create custom mouse events and throw them around any longer… 😉

Videos of the October demo online

Today the AK Vorrat published video recordings of the speeches given at the “Freiheit statt Angst” (“Freedom not Fear”) demonstration of October 11, 2008. For all those who could not be in Berlin (like yours truly), at least a small consolation.

An excerpt from a speech by Monty Cantsin, member of the Hedonistische Internationale:

We are back here after a good year. And this time we are even more people. Together we have built a huge movement. We are better organised than ever before. And we are a hell of a lot of people.

So we have the biggest civil rights, data protection and freedom movement in many years. And actually everything could now turn for the better….

But it doesn’t. The radical clear-cutting in the forest of freedom continues unabated. The Grundgesetz is being dismantled, demolished – indeed it is being flattened with a bulldozer, as if there were no tomorrow.

(Source)

Unfortunately, it once again largely passed the mass media by that a demonstration for civil liberties and against surveillance took place in Berlin with well over 50,000 people.

Stringing up golden parachutes for greedy financial institutions and securing the savings of the little man – which, after all, have to stay in existence at least until the Christmas shopping frenzy – those were apparently the more important topics that had to be reported on.

Windows binary available and Outlook

I’ve just uploaded a Windows binary of guitone 0.9 – sorry that it took a little longer this time. I’ve been quite busy during the past days, and having no Windows machine at home doesn’t help much either 😉
Of course, if there are other people willing to package guitone on Windows, drop me a note. It’s actually not much work – a detailed explanation and an InnoSetup installer script are already in place.

On a related note, I’m working on a couple of new features for guitone. The next version will be able to create new monotone databases and also create new projects from existing ones (basically a frontend for `mtn setup`). Furthermore, I decided I should finally implement some workspace commands, so at least the equivalents of `mtn add` and `mtn drop` should be possible, and probably `mtn revert` and `mtn rename` as well.

The monotone additions for netsync automation still haven’t made it into trunk, mainly because I was not in the mood to finally fix the anticipated lua testing for stdio traffic (I really should not push this task further away, because the branch where the automate netsync stuff resides diverges more and more over time…). And of course, as long as this is not in monotone’s trunk, it makes no sense to implement it in guitone either – so yeah, if you are particularly waiting for this feature, give me a kick in the butt so I finally get around to it.

You can’t keep me here

I wanna shake
I wanna wind out
I wanna leave
This mind and shout
I’ve lived
All this life
Like an ocean
In disguise

I don’t live for
Ever
You can’t keep
Me here

I wanna race
With the sundown
I want a last breath
Forgive
Every being
The bad feelings
It’s just me

I won’t wait
For answers
You can’t keep
Me here

I wanna rise
And say goodnight
Wanna take
A look on the other side
I’ve lived
All those lives
It’s been wonder
Full at night

I will live for
Ever
You can’t keep
Me here

(Source)