Thread
I've got trust issues. We all do. Some infosec pros go so far as to say #TrustNoOne, a philosophy more formally known as #ZeroTrust, which holds that certain elements of your security should *never* be delegated to *any* third party. 1/
If you'd like an essay-formatted version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:

pluralistic.net/2022/11/09/infosec-blackpill/ 2/
The problem is, it's trust all the way down. Say you maintain your own cryptographic keys on your own device. How do you know the software you use to store those keys is trustworthy? Well, maybe you audit the source-code and compile it yourself. 3/
But how do you know your *compiler* is trustworthy? When Unix/C co-creator Ken Thompson received the Turing Award, he admitted (or joked) that he had hidden back doors in the compiler he'd written, which was used to compile all of the other compilers:

pluralistic.net/2022/10/11/rene-descartes-was-a-drunken-fart/ 4/
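To make the trick concrete, here's a minimal, hypothetical sketch in Python - every name in it (compile_source, check_password) is invented for illustration; the real attack lived inside a C compiler binary:

```python
# A toy model of the "trusting trust" attack. The point is the shape of
# the trick, not a real compiler.

BACKDOOR = '\nif user == "ken": return True  # invisible in the audited source'

def compile_source(source: str) -> str:
    """A toy 'compiler' that normally passes its input through unchanged."""
    if "def check_password" in source:
        # Trojan 1: compiling the login program? Splice in a backdoor.
        return source + BACKDOOR
    if "def compile_source" in source:
        # Trojan 2: compiling a compiler? Re-insert both trojans, so even
        # a clean-looking compiler source rebuilds the attack.
        return source + "\n# (trojan-insertion logic re-inserted here)"
    return source  # everything else compiles honestly

clean_login = 'def check_password(user, pw):\n    return pw == lookup(user)'
print(compile_source(clean_login))  # the *output* has a backdoor anyway
```

Audit the login program's source all you like: the backdoor only exists in the compiled output, and the compiler's own source looks clean too.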
OK, say you whittle your own compiler out of a whole log that you felled yourself in an old growth forest that no human had set foot in for a thousand years. How about your hardware? 5/
Back in 2018, @Business published a blockbuster story claiming that the server infrastructure of the biggest cloud companies had been compromised with tiny hardware interception devices:

www.bloomberg.com/news/features/2018-10-04/the-big-hack-how-china-used-a-tiny-chip-to-infiltrate-amer... 6/
The authors claimed to have verified their story in every conceivable way. The companies whose servers were said to have been compromised rejected the entire story. Four years later, we still don't know who was right. 7/
How do we trust the Bloomberg reporters? How do we trust Apple? If we ask a regulator to investigate their claims, how do we trust the regulator? Hell, how do we trust our *senses*? And even if we trust our senses, how do we trust our *reason*? 8/
I had a lurid, bizarre nightmare last night where the most surreal events seemed perfectly reasonable (tldr: while I was trying to order a paloma at the @DNAlounge, I was mugged by invisible monsters, who stole my phone and then a bicycle I had rented from the bartender). 9/
If you can't trust your senses, your reason, the authorities, your hardware, your software, your compiler, or third-party service-providers, well, shit, that's pretty frightening, isn't it (paging R. Descartes to a white courtesy phone)? 10/
There's a joke about physicists, that all of their reasoning begins with something they know isn't true: "Assume a perfectly spherical cow of uniform density on a frictionless surface..." 11/
The world of information security has a lot of these assumptions, and they get us into *trouble*. 12/
Take internet data privacy and integrity - that is, ensuring that when you send some data to someone else, the data arrives unchanged and no one except that person can read that data. 13/
In the earliest days of the internet, we operated on the assumption that the major threat here was technical: our routers and wires might corrupt or lose the data on the way. 14/
The solution was an ingenious system of error detection and retransmission layered over packet-switching, which allowed the sender to verify that the recipient had gotten all the parts of their transmission and resend any parts that disappeared en route. 15/
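In sketch form - invented names, nothing like the real TCP state machine - the idea looks like this:

```python
# A toy model of "did everything arrive? resend what didn't":
# number the packets, detect the gaps, re-request just those.

def make_packets(data: bytes, size: int = 4) -> dict[int, bytes]:
    """Split data into small, numbered packets."""
    return {seq: data[i:i + size]
            for seq, i in enumerate(range(0, len(data), size))}

def lossy_network(packets: dict[int, bytes], lost: set[int]) -> dict[int, bytes]:
    """Simulate routers dropping some packets en route."""
    return {seq: p for seq, p in packets.items() if seq not in lost}

sent = make_packets(b"hello, internet!")
received = lossy_network(sent, lost={2})

# The receiver reports the gaps; the sender retransmits only those packets.
gaps = sorted(set(sent) - set(received))
received.update({seq: sent[seq] for seq in gaps})

assert b"".join(received[s] for s in sorted(received)) == b"hello, internet!"
```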
This took care of integrity, but not privacy. We mostly just pretended that sysadmins, sysops, network engineers, and other people who *could* peek at our data "on the wire" *wouldn't*, even though we knew that, at least some of the time, this was going on. 16/
The fact that the people who provided communications infrastructure had a sense of duty and mission didn't mean they wouldn't spy on us - sometimes, that was *why* they peeked, just to be sure that we weren't planning to mess up "their" network. 17/
The internet *always* carried "sensitive" information - love letters, private discussions of health issues, political plans - but it wasn't until investors set their sights on *commerce* that the issue of data privacy came to the fore. 18/
The rise of online financial transactions goosed the fringe world of cryptography into the mainstream of internet development.

This gave rise to an epic, three-sided battle between civil libertarians, spies, and business-people. 19/
For years, the civil liberties people had battled the spy agencies over "strong encryption" (more properly called "working encryption" or just "encryption"). 20/
The spy agencies insisted that civilization would collapse if they couldn't wiretap any and every message traversing the internet, and maintained that they would neither abuse this facility, nor would they screw up and let someone else do so ("trust us," they said). 21/
The business world wanted to be able to secure their customers' data, at least to the extent that an insurer would bail them out if they leaked it; and they wanted to *actually* secure their own data from rivals and insider threats. 22/
Businesses lacked the technological sophistication to evaluate the spy agencies' claims that there was such a thing as encryption that would keep their data secure from "bad guys" but would fail completely whenever a "good guy" wanted to peek at it. 23/
In a bid to educate them on this score, @EFF co-founder John Gilmore built a $250,000 computer that could break the (already broken) cryptography the NSA and other spy agencies claimed businesses could rely on, in a matter of days. 24/
The message of this DES Cracker was that anyone with $250,000 could break into the communications of any American business:

cryptome.org/jya/des-cracker.htm 25/
Fun fact: John got tired of the bar-fridge-sized DES Cracker cluttering up his garage and he sent it to my house for safekeeping; it's in my office next to my desk in LA. If I ever move to the UK, I'll have to leave it behind because it's (probably) still illegal to export. 26/
The deadlock might have never been broken but for a key lawsuit: Cindy Cohn (now EFF's executive director) won the *Bernstein* case, which established that publishing cryptographic source-code was protected by the First Amendment:

www.eff.org/cases/bernstein-v-us-dept-justice 27/
With cryptography legalized, browser vendors set about securing the data-layer in earnest, expanding and formalizing the "public key infrastructure" (PKI) in browsers. 28/
Here's how that works: your browser ships with a list of cryptographic keys from trusted "certificate authorities." These are entities that are trusted to issue "certificates" to web-hosts, which are used to wrap up their messages to you. 29/
When you contact "foo.com," Foo presents a "certificate" - a document binding a cryptographic key to the name "foo.com" - and uses that key to secure its messages to you (the certificate certifies that its bearer is Foo). That certificate is, in turn, signed by a "Certificate Authority." 30/
Any Certificate Authority can sign a certificate - your browser ships with a list of CAs, and if any of them certifies the bearer is "Foo.com," that server can send your browser "secure" traffic. 31/
Your browser will dutifully display the data with all assurances that it arrived from one of Foo, Inc.'s servers. 32/
This means that you are trusting *all* of the Certificate Authorities that come with your browser, and you're also trusting the company that made your browser to choose good Certificate Authorities. This is a lot of trust. 33/
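You can watch that trust decision happen outside a browser. Here's a sketch using Python's ssl module, which, like a browser, validates servers against a bundled set of trusted root CAs (example.com stands in for foo.com):

```python
# Make a TLS connection the way a browser would: validate the server's
# certificate against the locally-trusted Certificate Authorities.
import socket
import ssl

ctx = ssl.create_default_context()  # loads the system's trusted root CAs

with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        # getpeercert() only returns once validation has succeeded;
        # an untrusted certificate raises SSLCertVerificationError instead.
        cert = tls.getpeercert()
        print("subject:", cert["subject"])  # who the certificate names
        print("issuer: ", cert["issuer"])   # which CA vouched for that claim
```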
If any of those CAs betrays your trust and issues a bad cert, it can be used to reveal, copy, and alter the data you send and receive from a server that presents that certificate. 34/
You'd hope that certificate authorities would be very prudent, cautious and transparent - and that browser vendors would go to great lengths to verify that they were. 35/
There are PKI models for this: for example, the "DNS root keys" that control the internet's domain-name service are updated via a formal, livestreamed ceremony:

www.cloudflare.com/dns/dnssec/root-signing-ceremony/ 36/
There are 14 people entrusted to perform this ceremony, and at least three must be present at each performance. The keys are stored at two facilities, and the attendees need to show government ID to enter them. 37/
(Is the government that issued the ID trustworthy? Do you trust the guards to verify it? Ugh, my head hurts.)

Further access to the facility is controlled by biometric locks (do you trust the lock maker? How about the person who registers the permitted handprints?). 38/
Everyone puts a wet signature in a logbook. A staffer has their retina scanned and presents a smartcard. 39/
Then the staffer opens a safe that has a "tamper proof" (read: "tamper resistant") hardware module whose manufacturer is trusted (why?) not to have made mistakes or inserted a back-door. A special laptop (also trusted) is needed to activate the safe's hardware module. 40/
The laptop "has no battery, hard disk, or even a clock backup battery, and thus can’t store state once it’s unplugged." Or, at least, the people in charge of it claim that it doesn't and can't. 41/
The ceremony continues: the safe yields a USB stick and a DVD. Each of the trusted officials hands over a smartcard that they trust, kept in a tamper-evident bag in a safe-deposit box. The special laptop is booted from the trusted DVD and mounts the trusted USB stick. 42/
The trusted cards are used to sign three months' worth of keys, and these are the basis for the next quarter's worth of secure DNS queries. 43/
All of this is published, videoed, livestreamed, etc. It's a real "defense in depth" situation where you'd need a *very* big conspiracy to subvert *all* the parts of the system that need to work in order to steal underlying secrets. 44/
Yes, bottom line, you're still trusting people, but in part you're trusting them not to be able to all keep a secret from the rest of us. 45/
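That "at least three of fourteen" rule is a classic M-of-N control. For illustration only - the real ceremony uses smartcards and a hardware security module, not this code - here's a toy k-of-n quorum built with Shamir secret sharing, one standard way to implement the idea:

```python
# Toy k-of-n quorum via Shamir secret sharing: any k shares recover the
# secret; fewer than k reveal nothing about it.
import random

P = 2**127 - 1  # a Mersenne prime; real systems use vetted parameters

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Hide `secret` in a random degree-(k-1) polynomial; shares are points."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the polynomial at x=0 to recover the secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = split(secret=0xC0FFEE, k=3, n=14)  # 14 officials, any 3 suffice
assert combine(random.sample(shares, 3)) == 0xC0FFEE
# Two shares interpolate to a (wrong) line: no quorum, no secret.
```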
The process for determining which CAs are trusted by your browser is a *lot* less transparent and, judging from experience, a lot less thorough. Many of these CAs have proven to be manifestly untrustworthy over the years. 46/
There was Diginotar, a Dutch CA whose bad security practices left it vulnerable to a hack-attack:

en.wikipedia.org/wiki/DigiNotar 47/
Some people say it was Iranian state hackers, seeking signing keys to forge certificates and spy on Iranian dissidents, who are liable to arrest, torture and execution. Other people say it was the NSA pretending to be Iranian government hackers:

www.schneier.com/blog/archives/2013/09/new_nsa_leak_sh.html 48/
In 2015, the China Internet Network Information Center was used to issue fake Google certificates, which gave hackers the power to intercept and take over Google accounts and devices linked to them (e.g. Android devices):

thenextweb.com/news/google-to-drop-chinas-cnnic-root-certificate-authority-after-trust-breach 49/
In 2019, the UAE cyber-arms dealer DarkMatter - an aggressive recruiter of American ex-spies - applied to become a trusted Certificate Authority, but was denied:

www.reuters.com/investigates/special-report/usa-spying-raven/ 50/
Browser PKI is very brittle. By design, any of the trusted CAs can compromise *every* site on the internet. 51/
An early attempt to address this was "certificate pinning," whereby browsers shipped with a database of which CAs were authorized to issue certificates for major internet companies. 52/
That meant that even though your browser trusted Crazy Joe's Discount House of Certification to issue certs for any site online, it also knew that Google didn't use Crazy Joe, and any google.com certs that Crazy Joe issued would be rejected. 53/
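In sketch form (invented names and pin data - real pin lists matched hashes of CA keys, not CA names), the check looked something like this:

```python
# A toy version of a browser's pin check. Ordinary chain validation still
# happens first; the pin is an *extra* restriction for listed sites.
PINS = {  # hypothetical pin database shipped with the browser
    "google.com": {"Example Trusted CA 1", "Example Trusted CA 2"},
}

def cert_allowed(hostname: str, issuing_ca: str, chain_valid: bool) -> bool:
    if not chain_valid:
        return False             # fails ordinary PKI validation: reject
    pinned_cas = PINS.get(hostname)
    if pinned_cas is None:
        return True              # unpinned site: any trusted CA will do
    return issuing_ca in pinned_cas  # pinned site: only the listed CAs

# Crazy Joe's google.com cert chains to a trusted root, but fails the pin:
assert not cert_allowed("google.com",
                        "Crazy Joe's Discount House of Certification", True)
```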
But pinning has a scale problem: there are billions of websites and many of them change CAs from time to time, which means that every browser now needs a massive database of CA-site pin-pairs. 54/
It also needs a means to trust the updates that site owners submit to browsers with new information about which CAs can issue their certificates.

Pinning was a stopgap. It was succeeded by a radically different approach: surveillance, not prevention. 55/
That surveillance tool is #CertificateTransparency (CT), a system designed to quickly and publicly catch untrustworthy CAs that issue bad certificates:

www.nature.com/articles/491325a 56/
Here's how Certificate Transparency works: when a CA issues a certificate, the certificate is submitted to public, append-only logs, each of which returns a signed receipt (a "Signed Certificate Timestamp") recording the date, time, and issuing CA. 57/
Web-servers hand those receipts to your browser alongside their certificates, and your browser checks them against the published keys of about a dozen public logs:

certificate.transparency.dev/logs/ 58/
These logs use a cool cryptographic technology called #MerkleTrees that makes them tamper-evident: that means that if someone alters the log (say, to remove or forge evidence of a bad cert), everyone who's got a copy of any of the log's previous entries can detect the alteration. 59/
Merkle Trees are super efficient. A modest server can easily host the eight billion or so CT records that exist to date. 60/
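Here's a toy Merkle tree - real CT logs follow RFC 6962, which differs in details like leaf prefixes - just enough to show the tamper-evidence property:

```python
# Hash pairs of entries, then pairs of hashes, up to a single root.
# Change any entry and the root changes - that's the tamper-evidence.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # odd count: carry the last node up
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

log = [b"cert: foo.com / CA: Honest Achmed", b"cert: bar.com / CA: Example CA"]
root_before = merkle_root(log)
log[0] = b"cert: foo.com / CA: Crazy Joe"  # tamper with history...
assert merkle_root(log) != root_before     # ...and every copy of the root disagrees
```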
Anyone can monitor any of these public logs, checking to see whether a CA they don't recognize has issued a certificate for their own domain, and then prove that the CA has betrayed its mission. 61/
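Here's what such monitoring might look like, using crt.sh, a public CT search front-end (its JSON interface is real, but treat the exact field names as my assumption):

```python
# Look up every logged certificate for a domain and print who issued it.
import json
import urllib.request

domain = "example.com"  # substitute the domain you want to watch
with urllib.request.urlopen(f"https://crt.sh/?q={domain}&output=json") as resp:
    entries = json.load(resp)

for entry in entries:
    # Any issuer you don't recognize here is worth investigating.
    print(entry.get("entry_timestamp"), entry.get("issuer_name"))
```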
CT works. It's how we learned that @Symantec engaged in *incredibly* reckless behavior: as part of their test-suite for verifying a new certificate-issuing server, they would issue *fake Google certificates*. 62/
These were supposed to be destroyed after creation, but at least one leaked and showed up in the CT log:

arstechnica.com/information-technology/2017/03/google-takes-symantec-to-the-woodshed-for-mis-issuing-...

It wasn't just Google - Symantec had issued *tens of thousands* of bad certs. 63/
Worse: Symantec was responsible for more than a *third* of the web's certificates. We had operated on the blithe assumption that Symantec was a trustworthy entity - a perfectly spherical cow of uniform density - but on inspection, it proved to be a sloppy, reckless mess. 64/
After the Symantec scandal, browser vendors cleaned house - they ditched Symantec from browsers' roots of trust. 65/
A lot of us assumed that this scandal would also trigger a re-evaluation of how CAs demonstrated that they were worthy of inclusion in a browser's default list of trusted entities.

If that happened, it wasn't enough. 66/
Yesterday, the @washingtonpost's @josephmenn published an in-depth investigation into @TrustCor, a certificate authority that is trusted by default by Safari, Chrome and Firefox:

www.washingtonpost.com/technology/2022/11/08/trustcor-internet-addresses-government-connections/ 67/
Menn's report is alarming. Working from reports from @UCalgary privacy researcher Joel Reardon and @ICSIatBerkeley security researcher @v0max, Menn presented a laundry list of profoundly disturbing problems with TrustCor:

groups.google.com/a/mozilla.org/g/dev-security-policy/c/oxX69KFvsm4/m/etbBho-VBQAJ 68/
CORRECTION: A previous version of this thread reported that TrustCor had the same officers as Packet Forensics; they do not; they have the same officers as Measurement Systems. I regret the error. Thread continues here:
