TLS secures the connection between your browser and the websites you visit (and many other Internet connections that involve neither a browser nor a web server). TLS should provide confidentiality (so nobody can steal your passwords or see which webpages you are visiting), integrity (so nobody can modify the transactions you send to your bank) and authenticity. When properly used, TLS provides the first two guarantees, but it is increasingly becoming apparent that it fails to provide the third: authenticity. The use of certificates (and the poor understanding of what authenticity on the web really means) is to blame.
The problem with certificates

Certificates tell your browser which public key belongs to a domain (like paypal.com or eff.org). With that information, your browser is supposed to be able to verify that it is indeed setting up a connection with that particular domain, and you are supposed to be certain that you are communicating with the website you intended to visit. You are sure because the green padlock in front of the URL in your browser tells you TLS is securing the connection.
But this reasoning is wrong.
Many so-called certification authorities (CAs) can issue certificates for a domain. They are supposed to do so faithfully, and not to issue certificates binding keys to entities that do not own the claimed domain. Unfortunately, not all certification authorities are honest: some get hacked (see the DigiNotar case), and some are controlled by an ‘evil’ government (both China and the Netherlands own CAs that are trusted by all major browsers). As all browsers trust all certification authorities equally, even sites with a secure connection can be spoofed: you think you are visiting PayPal, but instead you are visiting a very convincing copy hosted by a criminal gang.
Also, people don’t really understand URLs. Sometimes the difference between a genuine domain (paypal.com) and a fake one (paypa1.com) is hard to tell. Moreover, many people will not necessarily notice a problem with a domain like paypal.com.singin.cc, especially if it is the start of a long, complex URL. Such domains can also obtain a (valid!) certificate, especially now that Let’s Encrypt makes getting certificates easy and free. (As a result, Let’s Encrypt has been blamed for making phishing easier. This is not fair, as I will explain in this post.)
We see that TLS has been quite successful in preventing a man-in-the-middle attack from being staged once a connection has been set up. However, there are still many opportunities for an attacker to stage a man-in-the-middle attack while the connection is being set up.
Previously proposed solutions

This is not a new problem, and several countermeasures have of course been proposed.
One of them is certificate transparency. This approach starts from the observation that a party generating a fake certificate for a domain typically does not want to be detected doing so. Hence it will try to limit circulation of the fake certificate to targeted individuals or regions; the rest of the world still sees the true, original certificate for the same domain. If these different views of what the certificate for a domain is could somehow be brought together, an attack would easily be detected. This is what certificate transparency aims to achieve: all issued certificates are recorded in several public logs, spread all over the Internet. Certificates are only valid, and accepted by a browser, if they appear in such a log. Independent parties monitor these logs to detect inconsistencies, like different certificates issued for the same domain that contain different public keys. This makes it hard for rogue issuers to create fake certificates for domains, as the real issuer will certainly make sure the certificates it issues are added to the logs as well.
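To make the log check concrete: here is a minimal Python sketch of the Merkle-tree inclusion proof verification that the certificate transparency RFCs (6962/9162) specify, which is how a browser or monitor can check that a certificate really is in a log. The function names are mine; the algorithm follows the RFCs.

```python
import hashlib

def node_hash(left: bytes, right: bytes) -> bytes:
    # Interior nodes are hashed with a 0x01 prefix (leaves use 0x00),
    # which prevents confusing a leaf with an interior node.
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(leaf_hash: bytes, leaf_index: int, tree_size: int,
                     proof: list[bytes], root_hash: bytes) -> bool:
    """Check that leaf_hash sits at leaf_index in the log whose signed
    tree head covers tree_size entries and has the given root_hash."""
    if leaf_index >= tree_size:
        return False
    fn, sn, r = leaf_index, tree_size - 1, leaf_hash
    for p in proof:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)
            if fn % 2 == 0:
                # Our subtree is the rightmost, possibly partial one:
                # skip levels until we sit to the right of a sibling.
                while fn % 2 == 0 and fn != 0:
                    fn >>= 1
                    sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root_hash
```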
The other solution is public key pinning (also known as certificate pinning), and it works like this. Each website includes the hash of its public key (or the public key of the certificate authority it asked to issue its certificate) in an HTTP header served with each web page. Browsers are expected to store these hashes the first time they visit a new website. Whenever a browser visits the website again, it must check that the public key in the certificate (or the public key of the certificate issuer) it receives this time matches the hash stored for this website on the first visit. This ensures that any change to the certificate (and thus any attempt at a man-in-the-middle attack) will be detected.
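For concreteness, a short Python sketch (using the pyca/cryptography package) of how such a pin is computed: the base64-encoded SHA-256 hash of the certificate's SubjectPublicKeyInfo, which is the value that appears in the Public-Key-Pins response header.

```python
import base64
import hashlib
from cryptography import x509
from cryptography.hazmat.primitives import serialization

def spki_pin(pem_path: str) -> str:
    """Compute the pin-sha256 value for a PEM-encoded certificate."""
    with open(pem_path, "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return base64.b64encode(hashlib.sha256(spki).digest()).decode()

# A site would then serve a header along the lines of:
#   Public-Key-Pins: pin-sha256="<spki_pin value>"; max-age=5184000
```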
But the problem really is more fundamental

Both solutions described above keep the basic idea of a public key infrastructure intact: they still trust certificate authorities to issue certificates that bind public keys to domain names. This by itself is a problem.
Recall what I said in the introduction: many people have a poor understanding of what authenticity on the web really means. One of my favourite papers on this topic (‘The nature of a usable PKI’, by Carl Ellison) talks about this problem extensively. One problem Ellison identifies is that names (e.g. domain names) may be globally unique, yet this does not mean that they are globally meaningful. For example, even for a global, well-known brand like Apple, there exist other well-known companies, like Apple Records, with a very similar name that could easily be confused. If you had heard about Apple Records (and never heard of Apple computers), would it be wrong for you to think that apple.com was its website? Similarly, is there any particular reason to believe that amazon.de belongs to the same company Amazon as the company running amazon.com? For these (and other) reasons, Ellison argues that the current method of assigning permissions to names (using access control lists) and assigning names to keys (using identity certificates) is wrong. He proposes to assign permissions directly to keys (using authorization certificates) to avoid the fraught indirection through names.
His observations apply equally to the problem of authenticating websites. The core of the issue is that you should not rely on a name (and a certificate) to obtain the public key that belongs to it. Or rather, one should not interpret an identity certificate as something that binds a key to a name, but as something that binds a name to a key: given a key, you can ‘reliably’ obtain the name of the entity controlling it (but not the other way around).
In fact, when authenticating websites, we really don’t care about the name of the website. All we care about is whether it is authentic: whether the website we are visiting now is the same website we visited before (and that we enjoyed, laughed at, or were surprised by…). This is what authenticity on the web is all about. Being able to tell that we are visiting the same website as before allows us to establish a trust relationship with that website over time. With every positive interaction, our trust increases (and with every bad experience, our trust drops). We don’t need the website’s name for that.
The real solution: get rid of certificates

Once we realise the true nature of authenticity on the web, the solution becomes trivial. Instead of relying on certificates, browsers store the public keys of each website we visit (yes, this is very similar to public key pinning, but with significant differences). It works like this.
Every time you visit a website, it sends your browser its public key. If this is the first time you are visiting the website, a warning message appears: your browser provides some useful information about the site (see below) and asks you whether you want to continue and really visit it. If you agree, the public key is stored and associated with this particular site (or rather, its domain name). The next time you visit the site no question is asked, and the public key the browser stored is used to set up a secure and authenticated connection with the website. This prevents man-in-the-middle attacks and ensures that you are talking to the same website as before.
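As a sketch of the browser side, assuming a simple on-disk store that maps domain names to key fingerprints (all names here, like check_site, are illustrative, not a real browser API):

```python
import hashlib
import json
from pathlib import Path

STORE = Path("pinned_keys.json")  # hypothetical on-disk pin store

def load_store() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

def save_store(store: dict) -> None:
    STORE.write_text(json.dumps(store, indent=2))

def check_site(domain: str, presented_spki: bytes) -> str:
    """Classify the key a site presents: 'first-visit', 'ok' or 'mismatch'."""
    fingerprint = hashlib.sha256(presented_spki).hexdigest()
    pinned = load_store().get(domain)
    if pinned is None:
        return "first-visit"  # the browser shows the warning dialog now
    return "ok" if pinned == fingerprint else "mismatch"

def pin_site(domain: str, presented_spki: bytes) -> None:
    """Store the key; called only after the user agreed to visit the site."""
    store = load_store()
    store[domain] = hashlib.sha256(presented_spki).hexdigest()
    save_store(store)
```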
If the public key you receive when visiting a website differs from the one your browser stored for it, this means one of two things: either you are visiting a rogue site that is trying to pull off a man-in-the-middle attack, or the website you visited some time ago has changed keys in the meantime. The latter happens occasionally, because cryptographic keys need to be changed once in a while. Your browser can easily distinguish these two cases if the new key in the header of the webpage is signed with the previous key for the same website (whose public counterpart is stored by the browser). This indicates a legitimate key rollover. For additional security, websites may preannounce when they will refresh their keys, so that browsers know for which period the keys they store are valid.
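The rollover check itself is a single signature verification. A minimal sketch, assuming the site signs its new public key with its old one and uses Ed25519 keys (an arbitrary choice for this example), built on the pyca/cryptography package:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_legitimate_rollover(pinned_public_key: bytes,
                           new_public_key: bytes,
                           rollover_signature: bytes) -> bool:
    """Accept a new key silently only if the previously pinned key signed it."""
    old_key = Ed25519PublicKey.from_public_bytes(pinned_public_key)
    try:
        old_key.verify(rollover_signature, new_public_key)
        return True   # genuine rollover: update the pin, no warning needed
    except InvalidSignature:
        return False  # possible man-in-the-middle: warn the user
```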
How does this prevent phishing?

Now if you click a link to a phishing site, this site is most likely one you have never visited before. (Unless the phishers hacked a well-known and often-visited site, but that is something that cannot be solved by securing or authenticating the connection.) As a result, your browser will present you with a dialog asking whether you really want to visit this new site.
It is vital that the user is presented with clear and concise information that will help him or her make the right decision here. Of course the domain name of the site about to be visited should be clearly visible. But what else? Some UX designers should really jump in here to create a seamless yet secure user experience. One thing I can imagine is a system where, as soon as the dialog is created, information about the domain being visited is displayed from various sources (e.g. Google, or known phishing-domain databases like PhishTank) that have a global network picture and hence are in a unique position to tell very early on whether a certain domain is untrustworthy. Such sources could in turn be fed with information from users across the world using this system, by collecting their decisions when responding to the website-visited-for-the-first-time dialog.
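As a sketch of how the dialog could gather this information, assuming hypothetical reputation endpoints (the URL below is a placeholder, not PhishTank's or anyone's real API):

```python
import json
import urllib.parse
import urllib.request

REPUTATION_SOURCES = [
    "https://reputation.example/check",  # placeholder endpoint
]

def domain_reputation(domain: str) -> list[dict]:
    """Collect third-party verdicts about a never-before-seen domain."""
    verdicts = []
    for base in REPUTATION_SOURCES:
        url = base + "?" + urllib.parse.urlencode({"domain": domain})
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                verdicts.append(json.load(resp))
        except OSError:
            pass  # a slow or unreachable source must not block the dialog
    return verdicts
```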
Of course, care has to be taken to ensure that the process of identifying possibly malicious sites is transparent, and is not abused, e.g. for censorship to block undesirable content.
Some more usability aspects

We should avoid a situation where a user is presented with a large number of warning messages. The user would quickly get annoyed and no longer pay attention. The first time a user starts surfing the web, all websites will be new to him and his browser, so he will get to see quite a few warning messages. I tried to find information about the average number of websites people visit regularly (and that they have to verify once before being able to visit them securely forever), and the number of websites that they visit maybe once or twice a year (for which the above process may be too heavy-handed). I couldn’t find any, so if you know some reliable statistics, let me know and I will update this post.
To reduce the number of warning messages significantly, browsers should by default contain the public keys of the most popular websites, so that these are available right after the browser is installed. In any case, to prevent users from automatically accepting a new site without verifying the information, the default choice in the dialog should be to cancel and not visit the site.
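Seeding the pin store from the earlier sketch with such preloaded keys could look like this (the fingerprints below are placeholders, not real keys):

```python
# Pins shipped with the browser; values are placeholder fingerprints.
PRELOADED_PINS = {
    "paypal.com": "e3b0c44298fc1c14...",
    "eff.org":    "9f86d081884c7d65...",
}

def seed_store() -> None:
    store = load_store()              # from the sketch above
    for domain, fingerprint in PRELOADED_PINS.items():
        store.setdefault(domain, fingerprint)  # never overwrite a user's pin
    save_store(store)
```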
TOFU: Trust On First Use

The method outlined above implements the TOFU (Trust On First Use) principle, which is used by ssh (a secure remote access protocol), for example. The main drawback of this approach is that it shifts the problem of preventing a man-in-the-middle attack to the first time you visit a website. How do you know this new website you are visiting is genuine? How do you know you can trust it? Well… the idea of using a kind of global network view, outlined above, is supposed to detect the obviously malicious sites. But actually, building real trust is a process that takes time, within which your experiences dealing with the site (and knowing it is the same site each and every time you visit it) determine your current confidence in dealing with this particular website.
How does this differ from public key pinning?

At first sight the new approach does not appear to differ much from public key pinning. Both use the TOFU principle, and both rely on the browser to keep information about domains to detect man-in-the-middle attacks. Yet there are significant differences.
First and foremost, the new approach no longer relies on certificates at all. Given the fundamental issues with certificates discussed above, they should be avoided and abandoned altogether. Relying on certificates keeps open attack vectors that the new approach no longer suffers from. In particular, sites no longer have the option to specify a (hash of a) public key belonging to a certification authority that the site trusts to issue its certificates. Hence compromise of such a certification authority (and such compromises have happened, also for CAs that were deemed too big to fail) is no longer an issue. And if in the end the certificate can be made irrelevant, it is much cheaper and easier to no longer need the services of a CA at all.
Second, the new approach aims to leverage global network insights (from, say, Google or other large data brokers that have reliable information about the background of a particular domain) to inform users about the pedigree of a new website they are visiting. This should increase the usability of the approach and reduce the risk that users become victims of phishing scams.
Practical aspects

The approach outlined above can easily be implemented in practice. We can still use TLS as the underlying secure communication protocol. Some minor changes would be required to no longer exchange certificates and instead rely on public keys stored by the browser (or sent by the webserver the first time the site is visited). But we can avoid even those changes by relying only on (and accepting only) self-signed certificates, and letting the browser store and use the public keys contained in them.
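Under that last variant, a site operator only needs to generate a self-signed certificate once; no CA is involved. A sketch with the pyca/cryptography package (the function names are mine):

```python
import datetime
import hashlib
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def make_self_signed(domain: str):
    """Create a key pair and a one-year self-signed certificate for it."""
    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, domain)])
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed: issuer equals subject
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .sign(key, hashes.SHA256())
    )
    return key, cert

def pin_fingerprint(cert: x509.Certificate) -> str:
    """SHA-256 over the SubjectPublicKeyInfo: what the browser would store."""
    spki = cert.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    return hashlib.sha256(spki).hexdigest()
```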