Get Your Hands Off My Laptop:
Physical Side-Channel Key-Extraction Attacks On PCs

Daniel Genkin (Technion and Tel Aviv University)
Itamar Pipman (Tel Aviv University)
Eran Tromer (Tel Aviv University)

assisted by numerous others


This work was presented in CHES 2014 and published in its proceedings. An extended version (PDF, 5.3MB) is available, archived as IACR ePrint 2014/626.

In February 2015 we published a follow-up paper, "Stealing Keys from PCs using a Radio: Cheap Electromagnetic Attacks on Windowed Exponentiation", which shows how to extract RSA and ElGamal keys from modern implementations (that use windowed exponentiation), in a few seconds, using cheap radio receivers.


We demonstrated physical side-channel attacks on a popular software implementation of RSA and ElGamal, running on laptop computers. Our attacks use novel side channels and are based on the observation that the "ground" electric potential in many computers fluctuates in a computation-dependent way. An attacker can measure this signal by touching exposed metal on the computer's chassis with a plain wire, or even with a bare hand. The signal can also be measured at the remote end of Ethernet, VGA or USB cables.

Through suitable cryptanalysis and signal processing, we have extracted 4096-bit RSA keys and 3072-bit ElGamal keys from laptops, via each of these channels, as well as via power analysis and electromagnetic probing. Despite the GHz-scale clock rate of the laptops and numerous noise sources, the full attacks require a few seconds of measurements using Medium Frequency signals (around 2 MHz), or one hour using Low Frequency signals (up to 40 kHz).

We have extracted keys from laptops of various models running GnuPG (popular open-source encryption software implementing the OpenPGP standard). The attacks exploit several novel side channels based on the chassis electric potential: measuring it directly with a plain wire, measuring it through a human touch, or measuring it from the far end of attached Ethernet, VGA or USB cables. We also revisit two traditional physical side channels, power analysis and electromagnetic probing, and demonstrate their applicability to software running on PCs.
In a recent paper, we also demonstrated attacks using acoustic emanations, i.e., using microphones to record the sound made by computers' electronics and deducing the secret keys from it.


Q1: What information is leaked?

This depends on the specific computer hardware; we have tested numerous laptop computers of various models.
A good way to visualize the leaked signal is as a spectrogram, which plots the measured power as a function of time and frequency. For example, in the following spectrogram, time runs vertically (spanning 10 seconds) and frequency runs horizontally (spanning 0-2.3 MHz). During this time, the CPU performed loops of different operations (multiplications, additions, memory accesses, etc.). One can easily discern when the CPU is executing each operation, thanks to the different spectral signatures.
[Image: spectrogram of various CPU operations]
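
A spectrogram of this kind can be computed with standard tools. The following sketch uses a synthetic trace (the real signal must of course be measured) in which two simulated "operations" with different dominant frequencies run back to back, and shows that they separate into distinct spectral bands:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic stand-in for a captured chassis-potential trace: two "CPU
# operations" with different spectral signatures, back to back.
# (Illustrative only; real traces are measured, not synthesized.)
fs = 4_600_000                       # 4.6 MHz sampling -> 0-2.3 MHz spectrum
t = np.arange(int(fs * 0.1)) / fs    # 0.1 s of samples
half = len(t) // 2
sig = np.empty_like(t)
sig[:half] = np.sin(2 * np.pi * 500_000 * t[:half])    # "operation A"
sig[half:] = np.sin(2 * np.pi * 1_200_000 * t[half:])  # "operation B"

# Measured power as a function of time and frequency.
f, times, Sxx = spectrogram(sig, fs=fs, nperseg=4096)

# The dominant frequency bin differs between the two halves, which is
# exactly what makes the operations discernible in the spectrogram.
first_peak = f[np.argmax(Sxx[:, 0])]
last_peak = f[np.argmax(Sxx[:, -1])]
```

Plotting `Sxx` (e.g., with matplotlib's `pcolormesh`) would yield a picture analogous to the one above.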

Q2: Why does this happen?

The electric potential on a laptop computer's chassis (metal panels, shields and ports) is ideally equal to that of the mains earth ground potential, but in reality it fluctuates greatly. Even when the laptop is grounded (via its power supply or via shielded cables such as Ethernet, USB or VGA), there is non-negligible impedance between the grounding point(s) and other points in the chassis. Due to currents and electromagnetic fields inside the computer, voltages of large magnitude develop across this impedance (often 10mV RMS or more, after filtering out the 50 or 60 Hz mains frequency). This is the voltage we measure.
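
As a back-of-envelope illustration of the magnitudes involved (the values below are assumed for illustration, not measured):

```python
# Back-of-envelope estimate with hypothetical values: even a modest
# ground-path impedance and internal return current yield a chassis
# voltage on the order of the 10 mV RMS quoted above.
impedance_ohms = 0.5    # assumed impedance between grounding point and chassis
current_a_rms = 0.02    # assumed 20 mA RMS of internal return current

v_rms = impedance_ohms * current_a_rms   # Ohm's law: V = I * Z
print(f"{v_rms * 1000:.0f} mV RMS")      # -> 10 mV RMS
```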

Q3: Does the attack require special equipment?

While the attack is most effective using professional lab equipment, a regular mobile phone is sometimes good enough. For example, we have used a mobile phone to measure the key-dependent chassis potential from the far side of a 10m Ethernet cable, as shown here:
[Image: mobile phone attack setup]
The above picture shows a mobile phone (Samsung Galaxy S II) being used to measure the chassis potential of a laptop from the far side of a 10-meter-long Ethernet cable (blue). An alligator clip connected to a plain wire (green) taps the shield of the Ethernet cable where it connects to an Ethernet switch. The signal passes through a simple passive filter into the microphone/earphone jack of the phone, where it is amplified and digitized. The phone itself is grounded to mains earth via its USB port. It is possible to perform the adaptive attack using this setup.
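
For a rough sense of the "simple passive filter" involved, here is the standard cutoff-frequency calculation for a first-order passive RC filter, with hypothetical component values (the actual filter used in the setup may differ):

```python
import math

# Hypothetical component values for a first-order passive RC filter of the
# kind that could feed a phone's microphone jack (illustrative only).
R = 10_000    # ohms
C = 100e-9    # farads (100 nF)

# Cutoff frequency of a first-order RC filter: f_c = 1 / (2*pi*R*C)
f_c = 1 / (2 * math.pi * R * C)
print(f"cutoff ~= {f_c:.0f} Hz")   # -> cutoff ~= 159 Hz
```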

Q4: What if I can't physically touch the computer or any of its cables and peripherals?

There are still two attacks that require only proximity, not direct contact: electromagnetic probing of the laptop, and acoustic key extraction using a nearby microphone (see Q5).

Q5: What's new since your paper on acoustic cryptanalysis?

Unlike the acoustic attack, which requires a microphone, the new attacks measure the chassis electric potential, using a plain wire, a bare hand, or the far end of attached cables; moreover, the Medium Frequency variant requires only a few seconds of measurements, rather than an hour.

Q6: Can an attacker use power analysis instead?

Yes, power analysis (measuring the current drawn from the laptop's DC power supply) is another way to perform our low-bandwidth attack.

Traditional power analysis measures power consumption at frequencies comparable to the CPU's clock rate (a few GHz), and is foiled by dampening the emanations at those frequencies. Our attack extracts the key using much lower bandwidth (a few kHz to a few MHz, depending on settings and duration), and is more resilient to filtering and noise.

Q7: How can low-frequency (kHz) leakage provide useful information about a much faster (GHz) computation?

This is the key idea behind our technique. Individual CPU operations are too fast for our measurement equipment to pick up, but long operations (e.g., modular exponentiation in RSA) can create a characteristic and detectable spectral signature over many milliseconds. Using a chosen ciphertext, we make the algorithm's own code amplify its own key leakage, creating drastic, key-dependent changes that are detectable even by low-bandwidth means.
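
A toy simulation illustrates the principle: a fast carrier (standing in for unresolvable cycle-level activity) whose amplitude is modulated over milliseconds remains clearly distinguishable after rectification and averaging down to kHz bandwidth. All parameters below are illustrative, not taken from the paper:

```python
import numpy as np

# Toy model of why a slow measurement still sees a fast computation:
# individual "cycles" form a carrier far above our bandwidth, but a long
# operation modulates the carrier's amplitude for many milliseconds.
fs = 1_000_000                          # 1 MHz simulated sampling rate
t = np.arange(int(fs * 0.02)) / fs      # 20 ms of samples
carrier = np.sin(2 * np.pi * 250_000 * t)

# Hypothetical key-dependent envelope: one internal stage for 10 ms,
# then a second stage with a different amplitude for the next 10 ms.
envelope = np.where(t < 0.01, 1.0, 1.5)
rng = np.random.default_rng(0)
trace = envelope * carrier + 0.1 * rng.standard_normal(len(t))

# Low-bandwidth "measurement": rectify and average over 1 ms windows,
# i.e., roughly a 1 kHz channel that cannot resolve single cycles.
window = fs // 1000
coarse = np.abs(trace).reshape(-1, window).mean(axis=1)
```

Even though no individual carrier cycle survives the averaging, the first ten coarse samples are clearly smaller than the last ten, so the stage boundary (and its timing) is recoverable at kHz bandwidth.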

Q8: How vulnerable is GnuPG now?

We have disclosed our attack to GnuPG developers under CVE-2013-4576 and CVE-2014-5270, suggested suitable countermeasures, and worked with the developers to test them. New versions of GnuPG 1.x and of libgcrypt (which underlies GnuPG 2.x), containing these countermeasures and resistant to the key-extraction attack described here, were released concurrently with the first public posting of these results.

GnuPG version 1.4.16 onwards, and libgcrypt 1.6.0 onwards, resist the key-extraction attack described here. Some of the effects we discovered (including RSA key distinguishability) remain present.

Q9: How vulnerable are other algorithms and cryptographic implementations?

This is an open research question. Our attack requires careful cryptographic analysis of the implementation, which so far has been conducted only for the GnuPG 1.x implementation of RSA. Implementations using ciphertext blinding (a common side channel countermeasure) appear less vulnerable.

Q10: Is there a realistic way to perform a chosen-ciphertext attack on GnuPG?

We found a way to cause GnuPG to automatically decrypt ciphertexts chosen by the attacker. The idea is to use encrypted e-mail messages following the OpenPGP and PGP/MIME protocols. For example, Enigmail (a popular plugin to the Thunderbird e-mail client) automatically decrypts incoming e-mail (for notification purposes) using GnuPG. An attacker can e-mail suitably-crafted messages to the victims, wait until they reach the target computer, and observe the target's chassis potential during their decryption (as shown above), thereby closing the attack loop.
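
The message structure such a crafted e-mail would follow is specified by PGP/MIME (RFC 3156). A minimal sketch using Python's standard `email` library, with a placeholder ciphertext and a hypothetical recipient:

```python
from email import encoders
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart

# Placeholder for the attacker-chosen, ASCII-armored OpenPGP ciphertext.
chosen_ciphertext = "-----BEGIN PGP MESSAGE-----\n...\n-----END PGP MESSAGE-----\n"

# RFC 3156 (PGP/MIME) structure: a multipart/encrypted message whose first
# part is a control part and whose second part carries the ciphertext.
msg = MIMEMultipart("encrypted", protocol="application/pgp-encrypted")
msg["To"] = "victim@example.com"        # hypothetical recipient
msg["Subject"] = "hello"

msg.attach(MIMEApplication("Version: 1\n", "pgp-encrypted", encoders.encode_noop))
msg.attach(MIMEApplication(chosen_ciphertext, "octet-stream", encoders.encode_noop))

rendered = msg.as_string()              # ready to hand to an SMTP client
```

A mail client such as Thunderbird with Enigmail would pass the second part to GnuPG for decryption on arrival, triggering the measurement window.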

Q11: What countermeasures are available?

Physical mitigation techniques include Faraday cages (against EM attacks), insulating enclosures (against chassis and touch attacks), and photoelectric decoupling or fiberoptic connections (against "far end of cable" attacks). However, inexpensive protection of consumer-grade PCs appears difficult, especially for the chassis channel.

Alternatively, the cryptographic software can be changed, and algorithmic techniques employed to render the emanations less useful to the attacker. These techniques ensure that the rough-scale behavior of the algorithm is independent of the inputs it receives; they usually carry some performance penalty, but are often used in any case to thwart other side-channel attacks. This is what we helped implement in GnuPG (see Q8).
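
One such algorithmic technique is ciphertext blinding: before the secret exponentiation, the ciphertext is multiplied by r^e for a fresh random r, so the attacker no longer controls the values the exponentiation operates on. A minimal sketch with toy RSA parameters (far too small for real use):

```python
import secrets
from math import gcd

# Toy RSA parameters, for illustration only.
p, q = 1009, 1013
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))

def blinded_decrypt(c: int) -> int:
    """RSA decryption with ciphertext blinding: the exponentiation runs
    on r^e * c mod n, a value the attacker cannot predict, so a chosen
    ciphertext no longer controls the internal operands."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    blinded = (pow(r, e, n) * c) % n
    m_blinded = pow(blinded, d, n)          # the leaky exponentiation
    return (m_blinded * pow(r, -1, n)) % n  # unblind: divide by r
```

The result is the same plaintext as ordinary decryption, but the operand-dependent leakage is randomized on every call.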

Q12: Why software countermeasures? Isn't it the hardware's responsibility to avoid physical leakage?

It is tempting to enforce proper layering, and decree that preventing physical leakage is the responsibility of the physical hardware. Unfortunately, such low-level leakage prevention is often impractical due to the very bad cost vs. security tradeoff: (1) any leakage remnants can often be amplified by suitable manipulation at the higher levels, as we indeed do in our chosen-ciphertext attack; (2) low-level mechanisms try to protect all computation, even though most of it is insensitive or does not induce easily-exploitable leakage; and (3) leakage is often an inevitable side effect of essential performance-enhancing mechanisms (e.g., consider cache attacks).

Application-layer, algorithm-specific mitigation, in contrast, prevents the (inevitably) leaked signal from bearing any useful information. It is often cheap and effective, and most cryptographic software (including GnuPG and libgcrypt) already includes various sorts of mitigation, both through explicit code and through choice of algorithms. In fact, the side-channel resistance of software implementations is nowadays a major concern in the choice of cryptographic primitives, and was an explicit evaluation criterion in NIST's AES and SHA-3 competitions.

Q13: What does the RSA leakage look like?

Here is an example of a spectrogram (which plots the measured power as a function of time and frequency) for a recording of GnuPG decrypting several RSA ciphertexts:

[Image: spectrogram of multiple GnuPG RSA decryptions]

In this spectrogram, the horizontal axis (frequency) spans 1.9 MHz to 2.6 MHz, and the vertical axis (time) spans 1.7 seconds. Each yellow arrow points to the middle of a GnuPG RSA decryption; it is easy to see where each decryption starts and ends. Notice the change in the middle of each decryption, spanning several frequency bands. This is because, internally, each GnuPG RSA decryption first exponentiates modulo the secret prime p and then modulo the secret prime q, and we can actually see the difference between these stages. Moreover, each such pair looks different because each decryption uses a different key. So in this example, simply by observing the chassis potential during decryption operations, we can distinguish between different secret keys.
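
The two stages visible in each decryption correspond to RSA decryption via the Chinese Remainder Theorem: one exponentiation modulo p, then one modulo q. A minimal sketch of that structure, with toy parameters far too small for real use:

```python
# Toy RSA-CRT parameters, for illustration only.
p, q = 1009, 1013
n = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1))
dp, dq = d % (p - 1), d % (q - 1)
q_inv = pow(q, -1, p)

def crt_decrypt(c: int) -> int:
    """RSA-CRT decryption: the two exponentiations below are the two
    stages visible in the spectrogram."""
    mp = pow(c, dp, p)   # stage 1: exponentiation modulo the secret prime p
    mq = pow(c, dq, q)   # stage 2: exponentiation modulo the secret prime q
    h = (q_inv * (mp - mq)) % p   # Garner's recombination
    return mq + h * q
```

Because both exponents and moduli are key-dependent, the spectral signatures of the two stages (and the transition between them) differ from key to key.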

Q14: How do you extract the secret key bits?

This depends on the attack type. In the paper we present two types of attacks: a non-adaptive attack, which uses Medium Frequency signals (around 2 MHz) and requires a few seconds of measurements, and an adaptive chosen-ciphertext attack, which uses Low Frequency signals (up to 40 kHz) and recovers the key bit by bit over about an hour.