On Friday, The Guardian newspaper accused Facebook’s WhatsApp messaging app of having a “backdoor” security vulnerability, on the basis of a security issue revealed to it by researcher Tobias Boelter of the University of California, Berkeley.
The newspaper has since backed away from the emotive word, but the fire had been lit. Was this a fair accusation to throw at WhatsApp?
The report described how the app generates a new key pair for “offline” users, for example when a user loses or changes a phone or phone number and then (after a period of time) reinstalls the app anew.
In the respected Signal app, whose underlying encryption protocol was adopted by WhatsApp in 2016, messages sent to anyone in this situation are deleted and the sender is informed that something has changed. The message can then be re-encrypted and resent after verification that the recipient is still the same person.
In WhatsApp, by contrast, the sending app is simply asked to re-encrypt and resend the message; the sender is told about this only after the fact, and only if key-change alerting is turned on.
The issue is that WhatsApp’s servers could, hypothetically, force the resend of a message using a new key under its control without the sender being able to stop it: a man-in-the-middle (MitM) compromise of sorts.
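The contrast between the two policies can be sketched in a few lines of pseudocode-style Python. This is an illustrative model only, not actual Signal or WhatsApp client code; the function names, return strings and the `PendingMessage` type are assumptions made for the example.

```python
# Hypothetical sketch (not actual Signal/WhatsApp code) contrasting the two
# policies for handling a recipient key change on an undelivered message.
from dataclasses import dataclass

@dataclass
class PendingMessage:
    plaintext: str
    recipient_key: str  # fingerprint of the key the message was encrypted to

def handle_key_change_signal_style(msg: PendingMessage, new_key: str) -> str:
    """Signal's policy: block delivery and make the sender act."""
    if new_key != msg.recipient_key:
        # The undelivered message is dropped; the sender must verify the
        # new key before choosing to re-encrypt and resend.
        return "blocked: sender notified, manual verification required"
    return "delivered"

def handle_key_change_whatsapp_style(msg: PendingMessage, new_key: str,
                                     alerts_on: bool = False) -> str:
    """WhatsApp's policy: transparently re-encrypt and resend."""
    if new_key != msg.recipient_key:
        msg.recipient_key = new_key  # re-encrypt to the new key automatically
        # The sender learns of the change only afterwards, and only if
        # key-change alerts have been switched on.
        return "resent (sender alerted after the fact)" if alerts_on else "resent silently"
    return "delivered"
```

The hypothetical MitM scenario above corresponds to a server presenting a `new_key` it controls: under the first policy the message stalls until a human checks; under the second it goes out again automatically.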
The first objection to this is that hiding a malicious key reset indefinitely would be difficult on WhatsApp, given the software’s “verify security code” feature, which lets both sides confirm they are using the same keys and that no MitM is taking place.
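The principle behind such a check is that both parties derive the same short code from the two public keys in the conversation and compare it out of band (in person, or over another channel). A minimal sketch follows; the hash choice, code length and digit encoding here are assumptions for illustration, not WhatsApp’s actual algorithm.

```python
# Illustrative sketch of a "verify security code" check: both parties
# derive the same short code from the two public keys and compare it
# out of band. Any swapped-in attacker key changes the code.
import hashlib

def security_code(pubkey_a: bytes, pubkey_b: bytes) -> str:
    # Sort the keys so both sides compute the same code regardless of
    # which end they sit at.
    material = b"".join(sorted([pubkey_a, pubkey_b]))
    digest = hashlib.sha256(material).digest()
    # Render the first few bytes as groups of digits, like an in-app code.
    return " ".join(f"{b:03d}" for b in digest[:6])

alice_view = security_code(b"alice-pubkey", b"bob-pubkey")
bob_view = security_code(b"bob-pubkey", b"alice-pubkey")
assert alice_view == bob_view  # matching codes: no MitM on this session

mitm_view = security_code(b"alice-pubkey", b"attacker-pubkey")
assert mitm_view != alice_view  # a substituted key is visible in the code
```

Because the code depends on both public keys, a server silently substituting its own key would change the displayed code on at least one side, which is why a key reset is hard to hide from anyone who bothers to verify.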
This also looks more like a design trade-off than a backdoor. As a mass-market product, WhatsApp was designed to make itself as transparent as possible and not to bother users with possibly confusing alerts about key pair changes.
The developer who co-authored the Signal protocol used by WhatsApp, Open Whisper Systems’ Moxie Marlinspike, said the backdoor claim was a misnomer: “Under no circumstances is it reasonable to call this a ‘backdoor,’ as key changes are immediately detected by the sender and can be verified.”
“It is great that the Guardian thinks privacy is something their readers should be concerned about. However, running a story like this without taking the time to carefully evaluate claims of a ‘backdoor’ will ultimately only hurt their readers.”
For something to be a true “backdoor”, it must simultaneously satisfy two criteria beyond simply compromising security or privacy. First, it must have been put there deliberately, for either benign or villainous reasons. Second, it must be undocumented, which is to say only the people who put it there know about it.
The minute a backdoor (or well-intentioned trapdoor) becomes public knowledge, it stops being one and becomes just another security flaw that needs to be fixed if that product wants to hang on to its users.
On that basis, it is inaccurate to describe the WhatsApp issue as a “backdoor” when it is really a known design compromise, albeit one that people should be aware of.
Trapdoors put in products for convenience have popped up fairly regularly, an infamous example being that discovered in Borland’s InterBase in 2001 that allowed anyone entering the user name “politically” with the password “correct” to take control of versions 4.0, 5.0 and 6.0 running on any platform.
In contrast, secret backdoors put there specifically to spy on users have been vanishingly rare, in part because it’s incredibly difficult to prove that something that might be a backdoor wasn’t just slapdash programming.
Rumours circulated that go-to encryption software TrueCrypt had one after its apparent developers put out an ambiguous security alert, even though a subsequent audit found nothing untoward. This brings us to another type of backdoor: one that probably doesn’t exist but that enough people believe does.
Encryption used to be extremely sensitive to theoretical weaknesses, and rightly so. In recent times, it’s become almost as vulnerable to rumour and bad headlines.
On Friday, WhatsApp found itself in the latter camp. But for all the fuss, public discussion of the company’s design decisions could still work in its favour if the user base starts to understand the product rather than simply using it on blind trust.