I’d previously written about the Yubikey 5, the use cases it can address, and when to trust it. Personally, I think it’s a great device for corporate authentication solutions.
But…
This week, Yubico released an advisory stating that ECDSA private keys can be stolen from a Yubikey 5 running firmware older than 5.7.0 (or 2.4.0 for the YubiHSM). This is due to a flaw, discovered by NinjaLab, in a cryptographic library written by Infineon.
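If you want to check what firmware a particular key is running, the yubikey manager will tell you (output trimmed here; the serial number and version are purely illustrative):

C:\Program Files\Yubico\YubiKey Manager>ykman info
Device type: YubiKey 5 NFC
Serial number: 12345678
Firmware version: 5.4.3

Anything reporting a version below 5.7.0 contains the vulnerable library.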
What is the issue?
There’s a lot of maths involved in the NinjaLab document, but basically it boils down to a certain operation not being “constant time”. This means the time the operation takes depends on the secret values being processed. That timing variation can be observed, and this leads to a side channel attack. NinjaLab were able to do this and recover the ECDSA private key stored on the device.
This is kinda bad; the promise of security keys is that the private key can never be leaked. Now we have no guarantees!
So time to panic?
Let’s look at the different authenticators on a yubikey and how they might be impacted.
FIDO2
FIDO2 can work in two ways. When you register with a FIDO2 service a unique public/private key pair is created. The public key is sent to the server and is then used to verify you when you login.
What differs is how the private key is stored.
The first type is known as “non-discoverable” credentials. Here the private key is encrypted with a key embedded in the yubikey and the encrypted result is also sent to the server. When you attempt to login you are sent back the encrypted private key; the yubikey can then use the embedded key to decrypt this and authentication happens as normal. This is nice because it means that you can register your device with unlimited services; no local storage is needed. However to login you need to tell the remote service who you are (typically by providing a username) so it can send you the encrypted key.
The second type is known as “discoverable” credentials. Here the private key is stored directly on the yubikey. Of course this takes up space, so only a limited number can be stored. The advantage is that we don’t need to login to the remote service in order to get the private key, so it can be used for true passwordless logins.
You can see what discoverable credentials are on your key by using the yubikey manager (you need to be an Admin on Windows).
e.g. using a test token from token2.com:
C:\Program Files\Yubico\YubiKey Manager>ykman fido credentials list
Enter your PIN:
Credential ID RP ID Username Display name
63cd48c8... www.token2.com token2_user_66d9dff6baa59_445105966d9dff6baa97 token2_user_66d9dff6baa59_445105966d9dff6baa97
C:\Program Files\Yubico\YubiKey Manager>
So there’s a potential difference in exposure between these credentials, and it’s up to the server to decide what type to use.
With a discoverable token the attacker can see what sites are available for access and which may not need a password (’cos true passwordless), but they will need the PIN. If they have the PIN then they could retrieve the private key for those sites.
For non-discoverable tokens the attacker could register your key with another site; they choose the username and password and then run the attack to retrieve the private key. They would still need the PIN, but this could allow them to attack multiple sites… if they knew what those sites were and had your login details for them!
What about OTP, static secrets, PIV and PGP?
OTP doesn’t do this type of calculation. It’s based on a seed value and a generator function. This is not impacted.
Static secrets don’t do this either… but if someone has your yubikey then they can just press the button to get the secret. Don’t use static secrets!
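Static secrets live in the OTP slots, so you can at least check whether the slots are in use and clear them; note that ykman can only tell you a slot is programmed, not whether it holds a static password or a genuine OTP config (slot 2 below is just an example):

C:\Program Files\Yubico\YubiKey Manager>ykman otp info
Slot 1: programmed
Slot 2: programmed

C:\Program Files\Yubico\YubiKey Manager>ykman otp delete 2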
Now, by default, PIV and PGP modes use RSA keys for device attestation. These are not impacted. However a corporation could replace those attestation certs and so use ECDSA, but I’ve never thought this a good idea; the nice thing with using the inbuilt keys from Yubico is that you can take an off-the-shelf device straight from the packaging and send it to a user.
PIV and PGP operations, however, could be done using ECDSA keys. PIV mode requires a PIN and PGP can be configured to use one. Operations on RSA keys aren’t impacted.
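If you want to audit what’s actually loaded on a key, ykman will show the PIV slots (output trimmed; the slot contents and the ECCP256 value here are illustrative, and the exact layout varies by ykman version), and for the PGP side gpg --card-status lists the algorithms on its “Key attributes” line:

C:\Program Files\Yubico\YubiKey Manager>ykman piv info
PIV version: 5.4.3
PIN tries remaining: 3
Slot 9a:
  Algorithm: ECCP256
  ...

ECC keys (ECCP256/ECCP384) on old firmware are the ones to worry about; RSA slots are fine.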
How easy is the attack?
Firstly, in order to do this attack you need access to the yubikey itself. This isn’t as big a barrier as it sounds; indeed it’s recommended that people have a backup key stored somewhere secure in case they lose the first one, and how often is that backup key checked? But physical access is needed.
NinjaLab claim they only need access to the key for a few minutes, but that doesn’t include the time needed to take the key apart, hook up to the internals and then reseal it afterwards. Of course, if the attacker doesn’t care whether the key is noticed as missing…
You need pretty expensive equipment (maybe $11,000 worth, according to Ars Technica) to capture the data. I could see this coming down in price.
For non-discoverable credentials you need to know what service the key has been registered to.
You may need a username and password to the service, and a PIN to access the credential.
So it’s not easy to do, but it’s not out of the realm of possibility. Ars suggests “nation-state” level attackers, but there may be more than that if the return is greater than the cost.
Mitigations
I believe a PIN will help block attacks because it’s another thing the attacker needs to have in order to perform the side channel attack. As long as the PIN isn’t set to the default value, of course! Yubikeys only allow a limited number of PIN retries before the application is locked and the secrets become unreadable. PINs are harder to get than username/password information because the UI presented makes them harder to phish. But it’s not impossible, and keylogger malware would have access.
So definitely ensure any yubikeys in your environment are configured to require PINs and that the PIN isn’t set to the default. Note that “PIN” is a bit of a misnomer for a yubikey; it can contain more than numbers, so allow alphanumerics in whatever app you provide for managing PIV/PGP certs.
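Setting or changing the PINs is easy with a recent ykman; it will prompt for the current and new values if you don’t pass them on the command line:

C:\Program Files\Yubico\YubiKey Manager>ykman fido access change-pin
C:\Program Files\Yubico\YubiKey Manager>ykman piv access change-pin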
Obviously use of RSA keys, rather than ECDSA, will block this attack.
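For example, if you provision PIV slots yourself you can explicitly ask for an RSA key; a sketch, where slot 9a and the output filename are just placeholders (ykman will prompt for the management key):

C:\Program Files\Yubico\YubiKey Manager>ykman piv keys generate --algorithm RSA2048 9a pubkey.pem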
Train staff to report missing keys immediately and have processes (SOC or even helpdesk) to revoke the token.
Require staff to verify they know where the backup key is (maybe force them to use it once a month?).
Wrap keys with tamper-evident seals carrying unique numbers (this will be annoying since it means keys can’t be handed out straight off the shelf, and it’s not really enforceable, but it’d help) so that it’s obvious if a key has been taken apart.
Summary
It seems that the “big” risk is U2F/FIDO mode. Exposure could allow an attacker to authenticate as a user. But it’s not necessarily easy.
Do you need to worry about it? Or is this just yet another theoretical attack? That’s a risk management decision. Use of these “vulnerable” keys will still prevent most attacks on your users. It’s a targeted, well-financed attacker that’s the worry.
It may be that you want to take a tiered approach; staff with access to highly sensitive data may need a new key with fixed firmware, but staff who just use it to access their laptop and read email don’t need an upgrade.
Hmm, I wonder how many vulnerable devices are still in the supply chain? I assume anything bought direct from Yubico will have the new firmware, but what if you buy from Amazon; will they still have old keys warehoused?
The main question I have, though, is where else this chipset and library have been used. What other devices may be vulnerable?
After all, it’s not the first time NinjaLab have broken hardware tokens.
And, of course, remember xkcd 538 (“Security”).