This article is part of a larger series on cryptographic keycards and secrets storage.

On October 30, 2017, a group of Czech researchers from Masaryk University presented the ROCA paper at the ACM CCS conference, where it earned the Real-World Impact Award. We briefly mentioned ROCA when it was first reported, but we haven't dug into the details of the vulnerability yet. Because of its far-ranging impact, it seems important to review it in light of the recently published results.

ROCA and its impacts

As we all probably know, most modern cryptography is based on the fundamental assumption that finding the prime factors of a large number is much harder than multiplying two large primes together to produce that number. For this assumption to hold, however, the primes need to be chosen randomly; otherwise it becomes easier to guess them. The prime-generation process can take time, especially on embedded devices like cryptographic tokens.

The ROCA vulnerability occurred because Infineon, a popular smartcard manufacturer, developed its own proprietary algorithm based on the fast prime technique. If used correctly, fast prime allows the creation of randomly distributed primes faster than traditional methods, which generally consist of generating random numbers and testing them for primality. The ROCA paper shows that Infineon goofed on the implementation, and the resulting primes were not evenly distributed. This opened the possibility of recovering private keys from public key material in a reasonable time frame.
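The traditional method mentioned above can be sketched in a few lines of Python. This is a simplified illustration of the generate-and-test loop, not Infineon's algorithm and not production-quality key generation:

```python
import random

def is_probable_prime(n, rounds=40):
    # Miller-Rabin probabilistic primality test.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is composite
    return True

def random_prime(bits):
    # The traditional approach: draw random odd candidates with the
    # top bit set, and keep testing until one passes.
    while True:
        candidate = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(candidate):
            return candidate
```

Since each candidate is drawn independently at random, the resulting primes are evenly distributed; the cost is that many candidates must be tested, which is exactly what fast prime was designed to avoid.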

Private key recovery is one of the worst security vulnerabilities (short of direct cleartext recovery) that can happen in asymmetric crypto systems. It turns out that Infineon is one of only a handful of major secure chip manufacturers and therefore its chips are everywhere: in cryptographic keycards (e.g. the popular Yubikey 4), but also in Trusted Platform Modules (TPMs) that live in most modern computers; even some official identity cards, which are used in everything from banking to voting, have Infineon chips. So the impact of this vulnerability is broad: medical records, voting, OpenPGP signatures and encryption, Digital Rights Management (DRM), full disk encryption, and secure boot all become vulnerable if the wrong keycard generated the private key material.

Hacking the Estonian elections

Let's take an extreme example of identity theft to illustrate the severity of the ROCA vulnerability. Estonia used Infineon chips in its state-issued identity cards. Because those cards are used in its electronic voting system, it was speculated that votes could be forged in an election. Indeed, there was a parliamentary election in 2015, at a time when vulnerable cards were in the wild.

The Estonian government claims that leveraging the attack to commit electoral fraud in Estonia is "complicated and not cheap". That claim seems to rest on an unnamed expert quoted as saying it would "cost approximately €60 billion" to mount such an attack, a number found in this article published by the Estonian state media (ERR). Indeed, early estimates showed that cracking a single key would cost €80,000 in cloud-computing costs. Since then, however, the vulnerability has also been reviewed by cryptographers Daniel J. Bernstein and Tanja Lange, who found that it was possible to improve the performance of the attack: they stopped after a 25% improvement, but suspect even further speed-ups are possible. So let's see what effect those numbers would have on an election now.

There are 750,000 vulnerable cards, according to ERR, out of the 1.3 million cards currently in circulation, according to the National Electoral Committee. There were around 900,000 eligible voters in the 2015 parliamentary elections; about 600,000 of them voted and about 30% of voters cast their votes electronically. So, assuming the distribution of compromised cards is uniform among voters, the share of compromised cards (roughly 60%) applies to the 30% of votes cast electronically, meaning vulnerable cards could have been used in about 17% of the total votes.
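Putting the figures quoted above together, the estimate works out like this:

```python
# Back-of-the-envelope estimate using the figures quoted in the text.
vulnerable_cards = 750_000    # ERR's count of vulnerable cards
total_cards = 1_300_000       # cards in circulation (National Electoral Committee)
electronic_share = 0.30       # share of 2015 votes cast electronically

vulnerable_share = vulnerable_cards / total_cards     # ~58%, "roughly 60%"
affected_share = vulnerable_share * electronic_share  # ~17% of all votes
print(f"vulnerable cards: {vulnerable_share:.0%}, "
      f"potentially affected votes: {affected_share:.0%}")
```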

The 2015 election was pretty close: the Estonian Reform Party beat its closest rival, the Estonian Centre Party, by only 5%. So, it could have been possible to affect the result of that election, even without compromising all the cards. Bernstein and Lange were actually generous when they said that "'large-scale vote fraud' does not require breaking all of the ID cards; 10% already buys massive influence".

In fact, according to their numbers, the €80,000 required to break one card can be reduced by a factor of four thanks to various performance improvements, and by another factor of ten using dedicated hardware, which means we can expect targeted attacks to be 40 times cheaper than the original paper's estimate, bringing the cost of one vote down to around €2,000. An Ars Technica article quoted another researcher as saying: "I'm not sure whether someone can slash the cost of one key below $1,000 as of today, but I certainly see it as a possibility."

So, being generous with the exchange rates for the sake of convenience, we could use a price-per-vote range of about €1,000-2,000. This means that buying the 5% of the votes necessary to win that 2015 election (around 30,000 votes) would cost €30-60 million, a much lower number than originally announced and definitely affordable for a hostile foreign-state actor.
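The cost calculation above can be made explicit:

```python
# Cost estimate based on the figures discussed above.
original_cost_per_key = 80_000   # euros, the original cloud-computing estimate
speedup = 4 * 10                 # algorithmic improvements x dedicated hardware
cost_per_vote_high = original_cost_per_key / speedup  # 2,000 euros
cost_per_vote_low = 1_000        # optimistic figure from the Ars Technica quote

votes_needed = 30_000            # ~5% of the ~600,000 votes cast in 2015
low = votes_needed * cost_per_vote_low    # 30 million euros
high = votes_needed * cost_per_vote_high  # 60 million euros
print(f"estimated cost of the attack: {low/1e6:.0f}-{high/1e6:.0f} million euros")
```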

Now, I am not saying that this happened during the 2015 Estonian election. I am merely pointing out that the resources required to buy votes (one way or another) are much cheaper than previously thought. And while the Electoral Committee released the server-side source code in 2013, that is not where the ROCA problem lies. The vulnerability, this time, is client-side: in the identity cards based on proprietary hardware. Fortunately, the Estonian authorities took action to block the Infineon cards on November 3 and issue new certificates for the vulnerable cards in a state-wide replacement program. So far, there is no evidence of foul play or identity theft, according to state officials.

The attack and disclosure

ROCA is an acronym for "Return Of the Coppersmith Attack", which, in turn, refers to a class of attacks on RSA that exploits partial knowledge of the secret key material to recover the key in less than brute-force time. The actual mathematical details are available in the full paper [PDF] and go beyond my modest mathematical knowledge, but it certainly seems like Infineon took shortcuts in designing its random number generator. What the Czech researchers found is that the primes generated by the Infineon algorithm had a specific structure that made them easier to guess than randomly distributed primes. Indeed, all primes generated by RSALib, the library Infineon created for those chips, followed this pattern:

    p = k * M + (65537^a mod M)

Here p is one of the secret primes used to construct the public key. The secrets in the equation are k and a, while M varies depending on the chosen key size. The problem is that M is disproportionately large: it is so close to the size of p that k and a become small enough to guess. And this is where the Coppersmith method comes in: it relies on small integers being part of the equation generating the prime. It is also where I get lost in the math; interested readers can consult the papers to find out more.
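This structure is also what makes vulnerable keys detectable from the public modulus alone: since both primes follow the pattern, the modulus N = p*q is itself constrained modulo M. Here is a simplified sketch of that fingerprinting idea; the real detection tools use carefully chosen parameters per key size, so the list of small primes below is purely illustrative:

```python
def powers_of_65537(m):
    # The multiplicative subgroup generated by 65537 modulo m.
    powers, x = set(), 1
    while True:
        x = (x * 65537) % m
        powers.add(x)
        if x == 1:
            return powers

def matches_roca_fingerprint(n, small_primes=(11, 13, 17, 19, 37, 53,
                                              61, 71, 73, 79, 97)):
    # For moduli built from the pattern above, n mod m lands in the
    # subgroup generated by 65537 for every small prime m dividing M;
    # a mismatch for any m rules the key out.
    return all(n % m in powers_of_65537(m) for m in small_primes)
```

A random RSA modulus passes each per-prime check only by chance, so combining enough small primes makes accidental matches vanishingly rare; that is how the public ROCA detection tools flag suspect keys without attempting any factorization.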

Something interesting here is that Bernstein and Lange were able to reproduce the results before the paper was publicly released, using only information disclosed as part of the public advisory. As such, they raise the question of whether delaying publication of the actual paper helped contain the security vulnerability, since they were able to construct an attack within about a week of part-time work. They noted that publicizing the keyword "Coppersmith" in the title of the research paper (which was public at the time of disclosure) was especially useful in confirming they were on the right path during their research. In that sense, Bernstein and Lange imply that delaying the publication of the paper had no benefit to the security community, and may have actually benefited attackers in a critical period, while impeding the work of honest security researchers.

They also questioned the unusually long delay given to Infineon (eight months) before disclosure. Usually, "responsible disclosure" delays range from a few days to several months; for example, Google's Project Zero team gives projects 90 days to fix problems before it publicizes the results. The long delay was presumably given to provide ample time for companies and governments to organize mitigation measures. Yet, when the paper was finally released, some companies (e.g. Gemalto) still weren't clear whether their products were affected. According to Ars Technica, Gemalto's customers, including Microsoft, which uses the products for two-factor authentication, are still left wondering if their products are secure. Similarly, Estonia recently scrambled to suspend all affected cards, earlier than originally planned, even though it had been notified of the vulnerability back in August. It seems the long delay didn't work: stakeholders did not proceed with those changes promptly, and thousands if not millions of devices are still vulnerable at the time of writing.

Part of the problem is that a lot of those devices simply cannot be upgraded in the field. For example, Yubico forbids firmware changes to the Yubikey 4, so the company had to set up a free replacement program for the popular keycards. TPMs, fortunately, are often flashable, but those updates need to come from the manufacturer, which may be hard to arrange. Most often, fixes are just not deployed at all, because supply chains make it difficult to find the original manufacturer that can provide them. In fact, the ROCA paper described the impact evaluation as "far from straightforward", partly because secure hardware is often designed as an opaque system that is deliberately hard to evaluate. In particular, the researchers stated that "prevalence of the RSALib in the area of programmable smartcards is notoriously difficult to estimate". The researchers further explained that we may deal with the fallout of ROCA for a long time:

We expect to see the cards for a rather long time (several years) before all the vulnerable cards are eventually sold out, especially when dealing with low volume markets. The buyers should check the cards for the presence of fingerprinted keys and/or opt for longer key lengths if they are supported by the card hardware.

The ROCA paper also noted that medical records identification cards and electronic payment cards (EMV) may be vulnerable to this issue as well, although the researchers were not aware of publicly available sources for the public keys of those devices, a necessary condition to mount an attack.

Yet the researchers are confident their disclosure led to good results. In an email interview, Petr Svenda said:

The main driver for us to publicly announce the vulnerability including details is to force companies to really act. This hopefully happened (new firmware for TPMs, updates to Bitlocker, revocation of eID keys...)

Svenda also believes the security vulnerability is "more likely to be unintentional" because the prime structure leak was "so obvious" when compared to other cryptographic vulnerabilities like Dual_EC_DRBG. He pointed to some of the recommendations from the paper:

  1. Don't keep the design secret and have verifiable implementation

  2. Use some kind of secure multiparty computation to remove single points of failure

The latter idea is new to me, but it was proposed in the academic literature back in 1997. The idea is not to rely on a single algorithm or device to generate primes, but to generate them collaboratively among multiple parties. This makes vulnerabilities like ROCA much less likely, because the same mistake would need to happen in all of the components for the attack to be effective.
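A much simpler cousin of that principle, mixing entropy from independent sources so that no single generator is a single point of failure, can be sketched as follows. This is not the multi-party prime-generation protocol from the literature, just an illustration of the "no single point of failure" idea; `os.urandom` and `random.getrandbits` stand in for what would be independent hardware sources in a real system:

```python
import hashlib
import os
import random

def mixed_seed(*sources):
    # Hash together entropy from several independent sources; the result
    # is unpredictable as long as at least ONE source is sound, so a
    # single flawed generator cannot bias the output on its own.
    h = hashlib.sha256()
    for chunk in sources:
        h.update(chunk)
    return h.digest()

# Illustrative only: two stand-in "parties" contribute 32 bytes each.
seed = mixed_seed(os.urandom(32),
                  random.getrandbits(256).to_bytes(32, "big"))
```

The full multi-party schemes go further: no party ever sees the final primes in the clear, so even a compromised participant learns nothing about the resulting key.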

Obviously, you can also generate private keys on another computer and move them to the keycard later. But, as the Debian OpenSSL debacle showed, desktop computers are not immune to this type of vulnerability either.

The ROCA paper highlights the "dangers of keeping the design secret and the implementation closed-source, even if both are thoroughly analyzed and certified by experts". They were also critical of the certification process which "counter-intuitively 'rewards' the secrecy of design by additional certification 'points' when an implementation is difficult for potential attackers to obtain - thus favoring security by obscurity."

The paper concludes by stating that "relevant certification bodies might want to reconsider such an approach in favor of open implementations and specifications." This is a similar conclusion to the one I reached in my comparison of cryptographic tokens: we should be able to design secure, yet open, hardware and hopefully these kinds of vulnerabilities will serve as a lesson for the future.

This article first appeared in the Linux Weekly News.
