Question about FIDO2 authentication

So for the chain of trust to work, the server needs to be able to trust the browser. What’s to prevent me from recompiling Firefox with a bit of code that just gives a blank check to any authentication challenge? I’m guessing they need the binary of some part of the browser to have some kind of certificate of authenticity saying that the WebAuthn part has not been modified on the user’s end.

That is NOT how it works, period. There is no trust whatsoever in the path. You don’t even need HTTPS; the HTTPS is presumably just for the return cookie that sets the user identity for the site. The security key locks away and contains the private key. The site holds the public key. This is bog-standard public key cryptography. The browser never gets access to the private key, so it doesn’t need to be trusted. (Yes, the user needs to trust it’s not also performing authentication actions on behalf of a remote attacker, but that is all standard, well-understood anti-malware territory.)

Assuming the relying party (the site the user is authenticating into) is not inclined to attack itself, there is no need for trust beyond the security device. Since that device is explicitly designed never to reveal a private key to anyone, there is no risk of your private key(s) leaking. The only issue the security device has is whether it can trust the browser to only ask for authentication on behalf of the one specific user at the keyboard. If there is malware on your system, you are still hosed, as ever.
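Roughly, the challenge-response shape being described, as a minimal sketch (illustrative only, not the actual CTAP/WebAuthn wire format; it uses the WebCrypto API available in modern browsers and Node 19+):

```ts
// Minimal sketch of the challenge-response idea behind FIDO2.
// Illustrative only; NOT the real CTAP/WebAuthn wire format.
async function demo(): Promise<void> {
  // The key pair the security device would generate at registration;
  // the private key is marked non-extractable, mirroring the hardware.
  const keyPair = await crypto.subtle.generateKey(
    { name: "ECDSA", namedCurve: "P-256" },
    false, // non-extractable private key
    ["sign", "verify"],
  );

  // 1. The site (relying party) issues a fresh random challenge.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  // 2. The security device signs the challenge with its private key.
  const signature = await crypto.subtle.sign(
    { name: "ECDSA", hash: "SHA-256" },
    keyPair.privateKey,
    challenge,
  );

  // 3. The site verifies with the public key it stored at registration.
  const ok = await crypto.subtle.verify(
    { name: "ECDSA", hash: "SHA-256" },
    keyPair.publicKey,
    signature,
    challenge,
  );
  console.log(ok); // true; the private key never travelled anywhere
}
demo();
```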

EDIT:

I did my best to draw a diagram to illustrate what I am saying at two different levels of detail:


Thanks for drawing a diagram. I’m going to co-opt it and label the “secure element” 1, the web browser 2, and the relying party 3.

There is a need for trust on the side of both the user and the server. Based on my quote from the W3C (steps 1-2), Party 3 gives Party 2 a script. This script requests an authentication assertion from Party 2. Party 2 gives Party 1 a challenge. Party 1 obtains your consent and biometric authentication. If these are provided, Party 1 responds to Party 2, completing the handshake. Party 2 verifies the challenge and sends Party 3 an authentication assertion, which says that Party 1 was queried and presented the appropriate credentials.
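As a sketch of what that script might look like on the browser side (field values here are illustrative; a real relying party would fetch the challenge and options from its server):

```ts
// Sketch of steps 1-2: the script Party 3 serves to Party 2 (the browser).
async function authenticate(): Promise<void> {
  const publicKey: PublicKeyCredentialRequestOptions = {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // really from Party 3
    rpId: "example.com",          // hypothetical relying party
    userVerification: "required", // Party 1 must obtain consent/biometrics
    timeout: 60_000,
  };

  // Party 2 relays the challenge to Party 1; the user taps/verifies;
  // Party 1 returns a signed assertion.
  const credential = (await navigator.credentials.get({
    publicKey,
  })) as PublicKeyCredential;

  const assertion = credential.response as AuthenticatorAssertionResponse;
  // assertion.signature, assertion.authenticatorData and
  // assertion.clientDataJSON are what get forwarded to Party 3.
}
```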

HTTPS is just the protocol by which the script from Party 3 and the authentication assertion provided by Party 2 travel. USB provides the transport over which the challenge verification takes place between Parties 1 and 2. But the user has to trust Party 3 to send legitimate JavaScript (phishing: solved by CAs, I’d guess, although they probably need a bit more than HTTPS alone because of CDNs serving scripts, which I suppose is why the spec includes some language about this), and Party 3 is given an attestation by Party 2 so that it can trust Party 1. So Party 3 now has to trust Party 2 that the assertion was conducted appropriately on the user’s own computer, or else the chain is broken. Otherwise, I could just write an extension that intercepts the JavaScript and spoofs an assertion of a correct response.
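For concreteness: per the spec, the clientDataJSON the browser constructs for each ceremony records the challenge and the origin, and the authenticator’s signature covers a hash of those bytes. A small sketch, reusing the `assertion` object from the sketch above:

```ts
// Decoding the clientDataJSON the browser builds for each ceremony.
// The authenticator's signature covers a hash of these bytes, so a
// spoofed assertion would have to match both the challenge and origin.
declare const assertion: AuthenticatorAssertionResponse; // from the sketch above

const clientData = JSON.parse(
  new TextDecoder().decode(assertion.clientDataJSON),
);
// Roughly:
// {
//   "type": "webauthn.get",
//   "challenge": "<base64url of the challenge Party 3 issued>",
//   "origin": "https://example.com"
// }
console.log(clientData.type, clientData.challenge, clientData.origin);
```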

It’s hard to think of exactly how to defeat this system, but it seems like the browser becomes a target for attacks on the website if it isn’t somehow verified or trusted by the server.

Edit: I only emphasize that the script runs on the user’s own computer because this represents a potential risk for Party 3. They send the script out, and they have to trust that it is executed faithfully on a device over which the user supposedly has complete control.

I say “supposedly”, because this could be the raison d’être for systems like Intel’s ME and AMD’s PSP: providing client-side processing completely out of reach of the user.

I think you’ve been betrayed by a trick of the WebAuthn documentation writers. They don’t explicitly declare a boundary around the functioning of the spec, because they know that password managers (of a sort) are already built into browsers. Instead they get very hand-wavy. But I did see this:

allowCredentials, of type sequence<PublicKeyCredentialDescriptor>, defaulting to []

This OPTIONAL member is used by the client to find authenticators eligible for this authentication ceremony. 

The key is that, in a FIDO2 implementation, “find authenticators” most probably means invoking OS support to find a USB hardware key (though other options may be possible).
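A sketch of how that member is typically populated (the credential ID and the transports hint are illustrative; the ID would come from the server’s records for the account):

```ts
// Sketch of how allowCredentials narrows the search: the descriptor's
// id is the credential ID the server recorded at registration, and
// transports hints where the client should look for the authenticator.
declare const storedCredentialId: Uint8Array; // hypothetical: from the server

const options: PublicKeyCredentialRequestOptions = {
  challenge: new Uint8Array(32), // really a random server challenge
  allowCredentials: [
    {
      type: "public-key",
      id: storedCredentialId,
      transports: ["usb", "nfc", "ble"], // hint: where the key may be found
    },
  ],
};
await navigator.credentials.get({ publicKey: options });
```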

If you poke around more in that document you’ll see that, after finding the authenticator (a.k.a. the hardware device), the client can request it to produce the correct response to an authentication challenge, which the client will then forward onward to the server to complete the authentication ceremony.

In no way will the authentication a server desires occur on the client side. The client will mediate the input of the server challenge into the correct hardware device, which will generate a response that proves its identity to the server. The client will then forward that response to the server and expect to receive a server redirect that sets a cookie, so the site can recognize the connection in an ongoing manner.
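A sketch of that mediate-and-forward step, with a hypothetical endpoint URL and JSON field names (only the overall shape follows the spec):

```ts
// Serialize the assertion and POST it back to the server, which
// verifies the signature and responds by setting a session cookie.
declare const credential: PublicKeyCredential; // from navigator.credentials.get()
const assertion = credential.response as AuthenticatorAssertionResponse;

// base64url-encode the binary fields for transport
function b64url(buf: ArrayBuffer): string {
  return btoa(String.fromCharCode(...new Uint8Array(buf)))
    .replace(/\+/g, "-")
    .replace(/\//g, "_")
    .replace(/=+$/, "");
}

await fetch("/webauthn/finish-login", { // hypothetical endpoint
  method: "POST",
  headers: { "Content-Type": "application/json" },
  credentials: "same-origin", // so the server's response can set the cookie
  body: JSON.stringify({
    id: credential.id,
    clientDataJSON: b64url(assertion.clientDataJSON),
    authenticatorData: b64url(assertion.authenticatorData),
    signature: b64url(assertion.signature),
  }),
});
```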

Hey @PHolder, I got pulled into work, so I haven’t had a chance to properly consider and RTFM. You may very well be right, but I haven’t been able to dive all the way into the docs again. It’s confusing because, based on the wording, they could delegate the verification out to the client using a chain of trust, or they could just implement a symmetric key. Who knows? Maybe they actually don’t want to say…

One of the complications I remember seeing is specifically about this allowCredentials field. It’s optional specifically to enable the Discoverable Credentials flow, which relies on the user.id stored on the authenticator. It’s possible that the spec enables different implementations depending on the type of login.
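A sketch of that variant, assuming a credential was registered as discoverable: allowCredentials is simply omitted, and the user.id comes back as the response’s userHandle:

```ts
// Discoverable-credential flow: no allowCredentials, so the
// authenticator consults its own resident keys and returns the
// user handle (the user.id stored at registration).
const cred = (await navigator.credentials.get({
  publicKey: {
    challenge: new Uint8Array(32), // really from the server
    userVerification: "required",
  },
})) as PublicKeyCredential;

const response = cred.response as AuthenticatorAssertionResponse;
console.log(response.userHandle); // the stored user.id, or null
```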

It’s so confusing because there are so many corner cases (which we love in security contexts) being allowed. The authenticators, based on what I’ve read, include USB tokens, Bluetooth-connected phones, and even the local device itself, provided it has some biometric hardware.
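For what it’s worth, the spec exposes that choice at registration time via authenticatorSelection; a sketch with placeholder rp/user details:

```ts
// Registration-time hint for which class of authenticator the site
// wants. Attachment values are from the spec; rp/user are placeholders.
const createOptions: CredentialCreationOptions = {
  publicKey: {
    challenge: new Uint8Array(32), // really from the server
    rp: { id: "example.com", name: "Example" },
    user: { id: new Uint8Array(16), name: "alice", displayName: "Alice" },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      // "cross-platform" = USB/NFC/Bluetooth key;
      // "platform" = the local device's own enclave + biometrics
      authenticatorAttachment: "cross-platform",
    },
  },
};
await navigator.credentials.create(createOptions);
```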

The cynical side of me says there’s a lot of work going on just so 50% of the population can install an extension that spoofs a “yeah, yeah, I put my finger on the reader” response. Sometimes I get cranky just waiting for my 2FA notification to push, so I can’t imagine what this will really look like in practice…

Agreed. In the case of the Yubikey, it could be NFC on a smartphone; there are Bluetooth variants out there as well, plus USB and, of course, the secure enclave of a smartphone or PC paired with authenticator software, such as the proposed Apple Passkeys, LastPass, Bitwarden, etc.

I suspect the operating systems will have some form of registration process for devices and software to register themselves as authenticators. This is already the case with the Yubikey for FIDO2, and with password managers for suggesting login credentials: 1Password, for example, can replace the Keychain on macOS and, in its new version, even provide signatures for ssh sessions in the terminal. The OS just needs to implement the relevant hook, if it isn’t already doing so.