Congress has never made a law saying, “Corporations should get to decide who gets to publish truthful information about defects in their products” (and the First Amendment wouldn’t allow such a law), but that hasn’t stopped corporations from conjuring one out of thin air, and then defending it as though it were a natural right they’d had all along.
Some background: in 1986, Ronald Reagan, spooked by the Matthew Broderick movie WarGames (true story!), worked with Congress to pass a sweeping and exceedingly sloppily drafted cybercrime bill called the Computer Fraud and Abuse Act (CFAA). The CFAA makes it a felony to “exceed[] authorized access” on someone else’s computer in many instances.
Fast forward to 1998, when Bill Clinton and his Congress enacted the Digital Millennium Copyright Act (DMCA), a giant, gnarly hairball of digital copyright law that included section 1201, which bans bypassing any “technological measure” that “effectively controls access” to copyrighted works, or “traffick[ing]” in devices or services that bypass digital locks.

Notice that neither of these laws bans disclosure of defects, including security disclosures! But decades later, corporate lawyers and federal prosecutors have constructed a body of legal precedents that twist these overbroad laws into a rule that effectively gives corporations the power to decide who gets to tell the truth about flaws and bugs in their products.
Businesses and prosecutors have brought civil and criminal actions against researchers and whistleblowers who violated a company’s terms of service in the process of discovering a defect. The argument goes like this: “Our terms of service ban probing our system for security defects. When you log in to our server for that purpose, you ‘exceed your authorization,’ and that violates the Computer Fraud and Abuse Act.”
Likewise, businesses and prosecutors have used Section 1201 of the DMCA to attack researchers who exposed defects in software and hardware. Here’s how that argument goes: “We designed our products with a lock that you have to get around to discover the defects in our software. Since our software is copyrighted, that lock is an ‘access control for a copyrighted work,’ and that means that your research is prohibited, and any publication you make explaining how to replicate your findings is illegal speech, because helping other people get around our locks is ‘trafficking.’”
The First Amendment would certainly not allow Congress to enact a law that banned making true, technical disclosures, even (especially!) if those disclosures revealed security defects that the public needed to be aware of before deciding whether to trust a product or service.
But the presence of these laws has convinced the tech industry ― and corporations that have added ‘smart’ tech to their otherwise ‘dumb’ products ― that it’s only natural that they should be the sole custodians of the authority to decide who gets to embarrass or inconvenience them. The worst of these actors use threats of invoking the CFAA and DMCA 1201 to silence researchers altogether, so the first time you discover that you’ve been trusting a defective product is when it is so widely exploited by criminals and grifters that it’s impossible to keep the problem from becoming widely known.
Even the best, most responsible corporate actors get this wrong. Tech companies like Mozilla, Dropbox, and, most recently, Tesla have crafted “coordinated disclosure” policies in which they make sincere and legally enforceable promises to take security disclosures seriously and act on them within a defined period, and they even promise not to use laws like DMCA 1201 to retaliate against security researchers who follow their guidelines.
This is a great start, but it’s a late and limited solution to a much bigger problem.
The point is that almost every company is a “tech company” ― from medical implant vendors to voting machine companies ― and not all of them are as upstanding and public-spirited as Mozilla.
Many of these companies do have “coordinated disclosure” policies by which they hope to tempt security researchers into coming to them first when they discover problems with their products and services.
But these companies don’t make these policies out of the goodness of their hearts: those policies exist because they’re the companies’ best hope of keeping security researchers from embarrassing them and leaving them scrambling by just publishing the bug without warning.
If corporations can simply silence researchers who don’t play ball, we should expect them to do so. There is no shortage of CEOs who are lulling themselves to sleep tonight with fantasies about getting to shut their critics up.
EFF is currently suing the US government to invalidate DMCA 1201, and the ACLU is chipping away at the CFAA. There will come a day when we succeed, because the idea of suppressing bug reports (even ones made in disrespectful or rude ways) is totally incompatible with the First Amendment.
Rather than crafting a disclosure policy that says “We’ll stay away from these unjust and absurd interpretations of these badly written laws, provided you only tell the truth in ways we approve of,” companies that want to lead by example could do so by putting something like this in their disclosure policies:
We believe that conveying truthful warnings about defects in systems is always legal. Of course, we have a strong preference for you to use our disclosure system, where we promise to investigate your bugs and fix them in a timely manner. But we don’t believe we have the right to force you to use our system. Accordingly, we promise to NEVER invoke any statutory right ― for example, rights we are granted under trade secret law, anti-hacking law, or anti-circumvention law ― against ANYONE who makes a truthful disclosure about a defect in one of our products or services, regardless of whether they use our disclosure system.