In a Reddit Ask Me Anything last Wednesday, Intel CEO Brian Krzanich opened the floor for questions, but notably ignored the most popular one in the thread: in light of recent NSA revelations, what will the company do to ensure that its chips don’t contain a backdoor for the NSA?
While Krzanich never answered any of the security-related questions–Intel PR says this is because the questions came late and Krzanich either missed them entirely or couldn’t reply in time–one Redditor, Bardfinn, responded at length on the issue of encryption and security.
Bardfinn’s real name is Steve Akins, and in an email correspondence he describes his interest in cryptography and Internet security as both personal and societal/political–and he’s quite literate on the subject.
“It’s an immense problem for the layman,” Akins says. “Cryptography is difficult to use, touches many parts of our lives, and has not become significantly less difficult in the past 30 years… In our tablets and smartphones, and the networks they connect to, cryptography is handled for us by the manufacturers. We never see it, never interact with it, and in many cases *cannot* interact with it.” We’re placing an immense amount of trust in the cryptography of manufacturers, Akins argues, and therefore we’re effectively “trusting them not to peek.”
Of course, not everyone can be a skilled cryptographer, and since absolute security isn’t really possible, there will always have to be some element of trust involved between manufacturers and everyday people–but Akins believes that trust needs to be verifiable, mitigated, and distributed:
The problem isn’t that we have to trust a black box in our personal devices. The problem is that we have to trust that one black box, and many black boxes on the Internet (or cellular network) which may or may not be as secure as the black box in our devices, and the ones in our computers and the ones in the networks interoperate at the lowest common denominator, and they all probably have back doors (which makes it really hard to actually trust them), and the ones on the Internet are highly targetable by the bored kids, criminals, etc: Bad Actors.
To understand the root cause of this concern, and what can be done about it, it helps to have some understanding of how your computer goes about encrypting things to ensure that prying eyes don’t see what you don’t want them to see. For your computer to lock your data up tight and send it on its way, it relies on something that computers are in reality quite bad at: randomness.
Random numbers are a necessity for building secure systems, as they’re the only way to make sure your encryption key stays secret. However, generating random numbers can be extraordinarily difficult, especially in software. Programs are driven by deterministic logic and if-then conditionals–asking them to pull numbers out of thin air, without a prescribed formula, is the sort of thing human minds can do easily but that trips up computers. The measure of that unpredictability is called entropy: the higher your entropy, the harder it is to crack your encryption.
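A small sketch makes the stakes concrete. Suppose a key is derived from a low-entropy seed–say, a coarse timestamp with only a few thousand possible values. The seed and helper names below are made up for illustration, but the arithmetic is the whole problem: the attacker only has to search the seed space, not the key space.

```python
# Illustrative sketch: why low entropy breaks encryption.
# If a 128-bit key is derived from a seed with ~10,000 possible values,
# the effective entropy is log2(10000) ~ 13 bits, not 128.
import random

def weak_keygen(seed):
    """Derive a 128-bit 'key' from a small seed -- deliberately insecure."""
    rng = random.Random(seed)          # Mersenne Twister, fully determined by seed
    return rng.getrandbits(128)

secret_seed = 4242                      # pretend this came from a coarse clock
key = weak_keygen(secret_seed)

# An attacker simply tries every possible seed until the derived key matches.
recovered = next(s for s in range(10_000) if weak_keygen(s) == key)
assert recovered == secret_seed
```

The key itself is 128 bits long, but that length is cosmetic: its strength is capped by the entropy of whatever seeded it.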
Since it’s so hard to write software that generates random numbers with enough entropy for encryption, one way to mitigate the problem is to turn to dedicated hardware in your computer’s processor. Which is where Intel comes in.
Ever since the company launched its Ivy Bridge line of processors in May of 2012, it’s included what it calls Secure Key technology for random number generation. It is, essentially, a black box–an opaque system built for that specific purpose, but with little to no outside insight into how it actually accomplishes it.
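To see why a black box is so hard to audit, consider a deliberately backdoored generator–a hypothetical sketch, not a description of how Secure Key works. Its output passes any statistical test an outside observer could run, yet anyone who holds the designer’s key can reproduce the entire stream:

```python
# Hypothetical sketch of a 'backdoored' black-box RNG. Each output is the
# SHA-256 hash of a counter mixed with a key only the designer knows: the
# bytes look perfectly random from the outside, but are fully predictable
# to whoever holds the key.
import hashlib

BACKDOOR_KEY = b"known-only-to-the-designer"   # made-up value for illustration

def blackbox_rng(counter):
    """Statistically random-looking output that the designer can replay."""
    return hashlib.sha256(BACKDOOR_KEY + counter.to_bytes(8, "big")).digest()

# A user sees 32 fresh 'random' bytes per call and has no way to tell the
# difference from true randomness...
sample = blackbox_rng(0)
# ...while the designer regenerates the identical bytes at will.
assert blackbox_rng(0) == sample
```

This is why statistical testing alone can’t clear a hardware generator: randomness tests measure how the output looks, not whether someone else can predict it.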
This became problematic last fall, when further information leaked by Edward Snowden about how the NSA and GCHQ surveillance programs cracked Internet encryption protocols was made public. While neither Intel nor any of its competitors was mentioned in the report, details about the NSA decryption program code-named BULLRUN stated that the agency had “inserted secret vulnerabilities — known as backdoors or trapdoors — into commercial encryption software” in order to get past the most common web security protocols, like HTTPS and SSL.
As such, the security community entered a state of heightened concern and paranoia, since each newly discovered exploit could now be interpreted as intentional government surveillance, or at least an invitation for it. The developers of the open-source OS FreeBSD, for instance, stated that they could not trust chip-based cryptography from Intel or its competitor Via on its own, and would instead run its output through additional random number generation software. What’s more, Linux developer Theodore Ts’o said Intel engineers had tried to pressure him into relying solely on their processor for cryptography–which he resisted for this very reason.
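The FreeBSD approach can be sketched in a few lines–this is a simplified illustration of the mixing idea, not FreeBSD’s actual implementation. Rather than using the hardware generator’s bytes directly, you fold them into a pool together with other entropy sources, so that no single source can control the final output on its own:

```python
# Sketch of the mitigation described above: hash the hardware RNG's output
# together with entropy gathered elsewhere (here, the OS kernel pool via
# os.urandom), so a backdoor in any one source can't dictate the result.
import hashlib
import os

def mixed_random(hw_bytes: bytes, n: int = 32) -> bytes:
    """Combine hardware RNG output with OS-gathered entropy."""
    sw_bytes = os.urandom(n)                     # kernel entropy pool
    return hashlib.sha256(hw_bytes + sw_bytes).digest()[:n]

# Even if hw_bytes were fully attacker-controlled, predicting the output
# would still require knowing sw_bytes as well.
out = mixed_random(b"\x00" * 32)
assert len(out) == 32
```

The design choice is defensive: the hardware source still contributes entropy if it’s honest, but a compromised source gains nothing unless every other input to the hash is compromised too.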
Internet security and cryptography expert Bruce Schneier has also expressed concern that such a backdoor could easily exist on Intel processors, functioning as a virtually undetectable means by which the entropy of the chip-based cryptography could be dramatically lowered and compromised. In an editorial for Wired, Schneier elaborates further on the backdoor problem:
In general, what we need is assurance: methodologies for ensuring that a piece of software does what it’s supposed to do and nothing more. Unfortunately, we’re terrible at this. Even worse, there’s not a lot of practical research in this area — and it’s hurting us badly right now.
When contacted about these security concerns, Intel spokesman Chuck Mulloy said that “there has never been any association between Intel and the NSA. That’s not something we do. We’ve taken a firm position–we don’t do anything to compromise the security of our technology.”
As far as assurance goes, well, they’re working on it. “You can rest assured that we’re working on addressing this. It’s clearly an important issue, clearly something we’ve been following,” Mulloy said.
For now, though, we’re just going to have to take Intel’s word for it.