One unfortunate (albeit entirely predictable) consequence of making HTTPS certificates “fast, open, automated, and free” is that good guys and bad guys alike will take advantage of the offer and obtain HTTPS certificates for their websites.
Today’s bad guys can easily turn a run-of-the-mill phishing spoof:
…into a somewhat more convincing version, by obtaining a free “domain validated” certificate and lighting up the green lock icon in the browser’s address bar:
The resulting phishing site looks almost identical to the real site:
By December 8, 2016, LetsEncrypt had issued 409 certificates containing “Paypal” in the hostname; that number is up to 709 as of this morning. Other targets include BankOfAmerica (14 certificates), Apple, Amazon, American Express, Chase Bank, Microsoft, Google, and many other major brands. LetsEncrypt validates only that (at one point in time) the certificate applicant can publish on the target domain; the CA also grudgingly checks with the SafeBrowsing service to see if the target domain has already been blocked as malicious, although they “disagree” that this should be their responsibility. LetsEncrypt’s short position paper is worth a read; many reasonable people agree with it.
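Anyone can reproduce this kind of count by searching Certificate Transparency logs (for instance, via a CT search service like crt.sh). As a minimal, hedged Python sketch of the filtering step, here is a case-insensitive brand-substring match over a set of certificate subject names; the sample records and the `count_brand_hits` helper are mine, for illustration only:

```python
# Minimal sketch: count issued certificates whose hostnames contain a
# brand string, as one would after pulling records from a CT log search.
# The sample records below are illustrative, not real certificates.
sample_certs = [
    {"issuer": "Let's Encrypt", "name": "paypal.com.secure-login.example"},
    {"issuer": "Let's Encrypt", "name": "www.paypal-account-verify.example"},
    {"issuer": "Let's Encrypt", "name": "example.org"},
]

def count_brand_hits(certs, brand):
    """Case-insensitive substring match against each certificate's hostname."""
    return sum(1 for c in certs if brand.lower() in c["name"].lower())

print(count_brand_hits(sample_certs, "PayPal"))  # → 2
```

Note that a plain substring match is exactly how coarse these counts are: it catches both abusive lookalikes and any legitimate hostname that happens to contain the brand string.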
The “race to the bottom” in validation performed by CAs before issuing certificates is what led the IE team to spearhead the development of Extended Validation certificates over a decade ago. The hope was that, by putting the CA’s name “on the line” (literally, the address line), CAs would be incentivized to do a thorough job vetting the identity of a site owner. Alas, my proposal that we prominently display the CA’s name for all types (EV, OV, DV) of certificate wasn’t implemented, so domain validated certificates are largely anonymous commodities unless a user goes through the cumbersome process of manually inspecting a site’s certificates. For a number of reasons (to be explored in a future post), EV certificates never really took off.
Of course, certificate abuse isn’t limited to LetsEncrypt―other CAs have issued domain-validated certificates to phishing sites as well:
Who’s Responsible?
Unfortunately, ownership of this mess is diffuse, and I’ve yet to encounter any sign like this:
Blame the Browser
The core proposition held by some (but not all) CAs is that combatting malicious sites is the responsibility of the user-agent (browser), not the certificate authority. It’s an argument with merit, especially in a world where we truly want encryption for all sites, not just the top sites.
That position is bolstered by the fact that some browsers don’t actively check for certificate revocation, so even if LetsEncrypt were to revoke a certificate, the browser wouldn’t even notice.
Another argument is that browsers overpromise the safety of sites by using terms like Secure in the UI―while the browser can know whether a given HTTPS connection is present and free of errors, it has no knowledge of the security of the destination site or CDN, nor its business practices. Internet Explorer’s HTTPS UX used to have a helpful “Should I trust this site?” link, but that content went away at some point. Security wording is a complicated topic because what the user really wants to know (“Is this safe?”) isn’t something a browser can ever really answer in the affirmative, and users tend to be annoyed when you tell them only the truth: “This download was not reported as not safe.”
The obvious way to address malicious sites is via phishing and malware blocklists, and indeed, you can help keep other users safe by reporting any unblocked phish you find to the Safe Browsing service; this service protects Chrome, Firefox, and Safari users. You can also forward phishing messages to scam@netcraft.com and/or PhishTank. Users of Microsoft browsers can report unblocked phish to SmartScreen (in IE, click Tools > SmartScreen > Report Unsafe Website). Known-malicious sites will get the UI treatment they deserve:
Unfortunately, there’s always latency in blocklists, and a phisher can probably turn a profit with a site that’s live for less than one hour. Phishers also have a whole bag of tricks to delay blocks, including cloaking, whereby they return an innocuous “Site not found” message when they detect that they’re being loaded by security researchers’ IP addresses, browser types, OS languages, etc.
Blame the Websites
Some argue that websites are at fault, for:
- Relying upon passwords and failing to adopt unspoofable two-factor authentication schemes which have existed for decades
- Failing to adopt HTTPS or deploy it properly until browsers started bringing out the UI sledgehammers
- Constantly changing domain names and login UIs
- Emailing users non-secure links to redirector sites
- Providing bad security advice to users
Blame the Humans
Finally, many lay blame with the user, arguing user education is the only path forward. I’ve long given up much hope on that front―the best we can hope for is raising enough awareness that some users will contribute feedback into more effective systems like automated phishing blocklists.
We’ve had literally decades of sites and “experts” telling users to “Look for the lock!” when deciding whether a site is to be trusted. Even today we have bad advice being advanced by security experts who should know better, like this message from the Twitter security team which suggests that https://twitter.com.accessupdatecenter.info is a legitimate site.
Where Do We Go From Here?
Unfortunately, I don’t think there are any silver bullets, but I also think that unsolvable problems are the most interesting