Public key authenticated encryption and why you want it (Part III)

In Part I, we saw that authenticated encryption is usually the security goal you want in both the symmetric and public key settings. In Part II, we then looked at some ways of achieving public key authenticated encryption (PKAE), and discovered that it is not straightforward to build from separate signing and encryption methods, but it is relatively simple for Diffie-Hellman. In this final part, we will look at how existing standards approach the problem and how they could be improved.

JOSE and JWT

The JSON Object Signing and Encryption (JOSE) standards, used for JWTs, define a number of encryption modes, both symmetric and public key. The symmetric modes all provide authenticated encryption, but the public key encryption modes typically do not. Even the ECDH-ES algorithms do not, as they follow the ECIES approach that we previously showed discards sender authentication.
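
To see why, here is a rough sketch of the ECIES/ECDH-ES pattern. It uses X25519, HKDF and ChaCha20-Poly1305 from the Python cryptography package purely for illustration; the real ECDH-ES algorithms use the Concat KDF and wrap everything in the JWE format. The point is that the sender contributes only a freshly generated ephemeral key, so a valid ciphertext proves nothing about who sent it:

    # Sketch of the ECIES/ECDH-ES pattern (illustrative only).
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    import os

    bob_private = X25519PrivateKey.generate()
    bob_public = bob_private.public_key()

    # The sender uses ONLY a fresh ephemeral key -- nothing here identifies the sender.
    ephemeral = X25519PrivateKey.generate()
    shared = ephemeral.exchange(bob_public)
    key = HKDF(hashes.SHA256(), 32, salt=None, info=b"ecdh-es demo").derive(shared)

    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"You're fired!", None)
    # Anyone who knows bob_public can run exactly these steps, so a successful
    # decryption tells Bob nothing about where the message came from.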

This has led standards like OpenID Connect (OIDC) to mandate that their tokens must always be signed, and if encryption is desired then the tokens must first be signed and then encrypted. This has obvious downsides, as the resulting nested JWT can be quite bulky, especially as the inner (signed) JWT is Base64-encoded and will then be Base64-encoded again after encryption. If you use RSA signatures and encryption (which are inexplicably still popular), the resulting JWT can easily become very large.
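
As a rough back-of-the-envelope (ignoring headers, IVs, tags and Base64 padding, and assuming 2048-bit RSA for both the RS256 signature and the RSA-OAEP key wrapping), the double Base64 encoding and RSA overheads add up quickly:

    # Very rough size estimate for a nested signed-then-encrypted JWT.
    claims = 300                                    # raw JSON claims, in bytes
    signed = (claims * 4 // 3) + (256 * 4 // 3)     # base64url payload + 256-byte RS256 signature
    encrypted = (signed * 4 // 3) + (256 * 4 // 3)  # inner JWT base64url'd again + wrapped CEK
    print(signed, encrypted)                        # roughly 741 and 1329 -- over 4x the original claims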

But does this nested JWT structure even achieve what we want? We saw in Part II that no simple composition of signing and encryption achieves PKAE. For example, if Alice sends a signed-then-encrypted message to Bob saying “You’re fired!”, then Bob can decrypt the message and then re-encrypt the signed inner message to Charlie. Charlie receives an apparently authentic message from Alice, clears his desk and leaves in tears, never to return. Naughty Bob!

The situation in JWT isn’t quite so bad though, as JWT defines a number of standard claims that can be used to prevent these attacks. In particular, the standard “iss” (issuer, like “from”) and “aud” (audience, or “to”) claims would make it very hard for Bob to pull off his nasty trick, as Charlie (or his mail reader) would see that the message was intended for Bob and not himself. These claims are mandatory in OIDC. If you are using JWTs, you should generally consider these claims to be mandatory too, even if the spec says they are optional. Failing to include them, or failing to check them, almost always leads to a security weakness.
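
For illustration, here is how such checks might look with the PyJWT library (the library, algorithm, and issuer/audience names are just placeholders; any JOSE library offers equivalent options):

    # Treat "iss" and "aud" as mandatory when consuming a JWT.
    import jwt  # pip install PyJWT

    def verify_token(token, public_key):
        # The audience/issuer checks defeat Bob's re-encryption trick: a token
        # addressed to Bob ("aud": "bob") is rejected by Charlie's verifier.
        return jwt.decode(
            token,
            key=public_key,
            algorithms=["ES256"],
            audience="charlie",
            issuer="alice",
            options={"require": ["iss", "aud", "exp"]},
        )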

Improving JOSE

JOSE consists of two parts: JWS provides digital signatures and MACs, while JWE provides encryption. This seems like a sensible split, but if we look at the security properties provided by individual algorithms, things become less clear:

- Symmetric MAC algorithms provide message authentication and (strong) unforgeability.
- RSA and ECDSA signatures also provide third-party verifiability and potentially non-repudiation.
- The symmetric encryption algorithms all provide authenticated encryption.
- The public key encryption algorithms generally just provide some form of confidentiality, mostly IND-CCA2, except for RSA1_5 (which is an abomination).

This makes moving between algorithms, particularly switching between symmetric and public key algorithms, problematic as the security properties may change. As I mentioned in Part I, I have seen situations in which developers switched from symmetric encryption to RSA, without realising that they lost all authentication in the process. While this may seem obvious, the standard presents them all as valid encryption algorithms and makes them appear interchangeable.

Furthermore, when moving from simple JWS signatures or MACs to also requiring encryption, developers are suddenly faced with a lot more complexity to navigate on their own.

My proposal for improvement is that all the algorithms in JWE and JWS should be interchangeable. If they all shared the same security goals then this could be achieved. The idea in detail is that:

- The security goal for JWE should be authenticated encryption in all cases, for both symmetric and public key algorithms. Algorithms that do not provide authenticated encryption (all of the current public key encryption algorithms) should be deprecated and eventually removed in favour of authenticated replacements. (Hey, I didn’t say this was going to be a popular proposal!)
- For JWS, we should concentrate on the stronger third-party verification and non-repudiation goals of a real (public key) digital signature. That means removing the HMAC algorithms from JWS.

I have argued in this three part series that authenticated encryption is a useful and achievable security goal for encryption. By deprecating/removing the non-authenticated public key encryption schemes, we can replace them with authenticated alternatives such as the Noise one-way authenticated patterns we discussed in Part II.
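
To make that concrete, here is a loose sketch of a one-way authenticated pattern in the spirit of Noise’s “K” pattern, again using X25519, HKDF and ChaCha20-Poly1305 as stand-ins. This is not the real Noise key schedule (there is no handshake hash or chaining key); it just illustrates how mixing a static-static DH into the derived key authenticates the sender:

    # One-way authenticated public key encryption: only someone holding Alice's
    # static private key could have produced a ciphertext that Bob accepts.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    import os

    def pkae_encrypt(sender_static, recipient_public, plaintext):
        ephemeral = X25519PrivateKey.generate()
        dh1 = ephemeral.exchange(recipient_public)      # ephemeral-static: past messages stay safe if the sender's key later leaks
        dh2 = sender_static.exchange(recipient_public)  # static-static: authenticates the sender
        key = HKDF(hashes.SHA256(), 32, salt=None, info=b"pkae sketch").derive(dh1 + dh2)
        nonce = os.urandom(12)
        return ephemeral.public_key(), nonce, ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)

    def pkae_decrypt(recipient_static, sender_public, eph_pub, nonce, ciphertext):
        dh1 = recipient_static.exchange(eph_pub)
        dh2 = recipient_static.exchange(sender_public)
        key = HKDF(hashes.SHA256(), 32, salt=None, info=b"pkae sketch").derive(dh1 + dh2)
        return ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)  # fails unless the sender knew Alice's static key

    alice = X25519PrivateKey.generate()
    bob = X25519PrivateKey.generate()
    eph_pub, nonce, ct = pkae_encrypt(alice, bob.public_key(), b"You're hired!")
    assert pkae_decrypt(bob, alice.public_key(), eph_pub, nonce, ct) == b"You're hired!"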

If all JWE modes are authenticated, then we can recommend that all applications default to using JWE rather than JWS. JWS can then be reserved for cases where you genuinely want the stronger properties provided by public key signatures, for example when messages convey legal or financial transactions.

But what if you really do just want an authenticated set of claims without confidentiality, as with the current HMAC JWS algorithms? One (poor) solution would be to just put your claims in the JWE protected header and leave the payload empty. This would work, as the protected header is authenticated and integrity protected, but it forces you to mix your application data with generic metadata. A better solution would be to allow a JWE to have two payloads: one public and one private. Both would have the same content-type, but one is encrypted while the other is only authenticated (as associated data in the sense of AEAD). The JWE JSON encoding already allows such additional data in the form of the JWE AAD section, but this is currently missing from the compact encoding.
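
Mechanically this is just the standard AEAD interface: the public payload is passed as associated data, so it is integrity protected alongside the encrypted private payload. A minimal sketch of the mechanism (not the actual JWE wire format):

    # Public claims ride along as associated data; private claims are encrypted.
    # Tampering with either part makes decryption fail.
    import json, os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    key = ChaCha20Poly1305.generate_key()
    nonce = os.urandom(12)

    public_claims = json.dumps({"iss": "alice", "aud": "bob"}).encode()
    private_claims = json.dumps({"email": "bob@example.com"}).encode()

    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, private_claims, public_claims)
    # The recipient must present the identical public claims bytes to decrypt:
    plaintext = ChaCha20Poly1305(key).decrypt(nonce, ciphertext, public_claims)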

This is a useful idea in many cases anyway. Consider JWK, the standard for representing cryptographic keys as JSON documents. Currently all claims related to a key are stored in a single bag of attributes. This is problematic, as some of these claims are confidential (for instance private key material), while many are not, such as public key material or metadata including key IDs and usage constraints. Consider this example JWK for an X25519 key pair:

{ "kty": "OKP", "crv": "X25519", "x": "Mldalirlj1rJaZ88_sueClsTkOVrIgAukdp6WNEOxj8", "d": "F15VvXfZGXAg6mSzOeUw0RBb7hD6Fwb-NYj8qdy-9J4" }

Unless you are familiar with the specs or the details of elliptic curve cryptography, it may not be immediately obvious to you that the “d” claim here is actually the private key. The “x” claim is the (compressed) public key, which happens to be the x-coordinate of a point on the elliptic curve.
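
If you want to convince yourself of that relationship, the public value can be recomputed from the private one. A small sketch using the Python cryptography package (assuming the example key pair above is consistent):

    # Recompute the "x" (public) value from the "d" (private) value of the JWK above.
    import base64
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    def b64url_decode(s):
        return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

    d = b64url_decode("F15VvXfZGXAg6mSzOeUw0RBb7hD6Fwb-NYj8qdy-9J4")
    x = X25519PrivateKey.from_private_bytes(d).public_key().public_bytes_raw()
    print(base64.urlsafe_b64encode(x).rstrip(b"=").decode())  # should match the "x" value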

Mixing these all together in a single bag of attributes increases the chance of accidental disclosure of private key material, especially as JWKs are often published to publicly accessible HTTP endpoints. Imagine instead that all private/secret claims in a JWK were placed into separate public and secret key sections:

{ "kty": "OKP", "crv": "X25519", "public": { "x": "..." }, "secret": { "d": "..." }}

As a JWE, the same JWK could be written as follows, where public claims go in the “aad” block and the (encrypted) private key material in the “ciphertext” block:

{ "protected": { ... JWE Header ... }, "aad": { "kty": "OKP", "crv": "X25519", "public": { "x": "Mldalirlj1rJaZ88_sueClsTkOVrIgAukdp6WNEOxj8" } }, "ciphertext": "zuKfZSLQy7owFbuAY6W36V8SmK8W1yyuxP4uvYr2Sp2VAEmiYwEG..."}

The compact notation could also be extended to allow the extra public payload portion:

<header>.<encrypted-key>.<public>.<iv>.<private>.<tag>

With these changes, together with key-driven cryptographic agility, I think a JOSE 2.0 could start to be a much more robust standard with clearly defined security goals and fewer opportunities for mistakes.

OpenID Connect

We’ve already discussed how OpenID Connect (OIDC) mandates that ID Tokens are signed, and only optionally encrypted. So long as implementations follow the strict guidance on token validation in the spec, I think the recommended signed-then-encrypted JWTs are reasonably secure. However, it is a shame that encryption is only an optional requirement, while signatures are mandatory. I believe this is largely because of the difficulties we have discussed of combining encryption with signatures, and the resulting bloat in JWT size caused by nested signed-then-encrypted structures with multiple layers of Base64 encoding.

But this default is almost exactly the opposite of what you would want. ID Tokens quite regularly contain sensitive information about users: names, email addresses, even dates of birth or postal address information. You absolutely want these to be encrypted in most cases. On the other hand, I suspect very few people care about non-repudiation of ID Tokens. Indeed, I suspect very few implementations bother to keep the ID Token around at all after authentication has completed, let alone store it away as evidence for future legal proceedings.

This is very much a case in which the security requirements at the application layer are for authentication (of course!) and confidentiality. But we don’t get that by default because PKAE is difficult to achieve in JWTs. If PKAE modes were the norm in JWE then ID tokens could be encrypted and authenticated by default, and only signed in the rare cases that you need the additional assurances.

Authenticated API requests

There has been some interest in providing authenticated HTTP requests for enhanced API security. For example, Amazon famously requires HMAC-signed requests for AWS API calls, and there are a couple of proposals for adding signed requests to OAuth 2.0. The reasons for wanting signed API requests over and above the protections provided by HTTPS are usually given in terms of stronger authentication and integrity guarantees. None of the three documents linked above mentions non-repudiation or third-party verifiability.

Most APIs really care about (data origin) authentication and authorization: did this request come from an authorised, trusted source? Using public key signatures for this is using a sledgehammer to crack a nut. There is a reason why TLS only uses signatures during the handshake: they are expensive to compute and verify. So using genuine signed requests is very expensive in practice. To get around this, most “signed” requests, like Amazon’s, actually use symmetric HMAC authenticators instead. But this negates some of the advantages of signed requests, as both parties must know the shared secret. If we want to move away from pure bearer tokens for OAuth, partly because we are worried about the impact of compromised API servers, then a solution that requires the server to store recoverable copies of all client keys doesn’t seem like much of an improvement.
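
For reference, here is a greatly simplified sketch of what HMAC request “signing” looks like (real schemes such as AWS’s also canonicalise headers, include timestamps, and derive per-request keys). Note that both sides need the same secret:

    # Simplified HMAC request authentication (both client and server hold `secret`).
    import hmac, hashlib

    def sign_request(secret, method, path, body):
        msg = b"\n".join([method.encode(), path.encode(), body])
        return hmac.new(secret, msg, hashlib.sha256).hexdigest()

    def verify_request(secret, method, path, body, tag):
        return hmac.compare_digest(sign_request(secret, method, path, body), tag)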

Contrast this with some of the Diffie-Hellman PKAE systems we have seen in this series. Here we get a genuine public key approach, but crucially the client (and server) can cache and reuse a derived symmetric key for multiple requests. This gives us the speed of symmetric cryptography with the least-authority benefits of public key cryptography: the server shouldn’t need to store clients’ secret keys, and with PKAE it doesn’t.
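
A sketch of that caching pattern, again using X25519 and ChaCha20-Poly1305 as stand-ins (a real design would also rotate keys and manage nonces more carefully):

    # The (relatively expensive) key agreement happens once per client/server pair;
    # the derived key is then reused with a cheap AEAD for every request. The server
    # only ever needs the client's public key, never its secret key.
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
    import os

    client_static = X25519PrivateKey.generate()
    server_static = X25519PrivateKey.generate()

    # Done once and cached on both sides:
    shared = client_static.exchange(server_static.public_key())
    request_key = HKDF(hashes.SHA256(), 32, salt=None, info=b"api requests").derive(shared)

    # Per request: a single symmetric AEAD operation.
    def protect_request(body):
        nonce = os.urandom(12)
        return nonce, ChaCha20Poly1305(request_key).encrypt(nonce, body, None)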

Furthermore, as requests are now encrypted, we can gain real end-to-end encryption and authentication of requests. This provides defence in depth against failures at the TLS layer, and avoids the shortcomings of point-to-point authentication evident in this recent critical Kubernetes vulnerability. If API requests in Kubernetes were strongly authenticated and authorized at the application level, rather than merely authenticated at each hop at the transport level (TLS), then this potentially catastrophic vulnerability might have been avoided.

Of course, there are cases where you might really want the stronger guarantees of a real signature: financial transactions, for example. But those cases are the exception rather than the norm.

Summary

I have argued in this series that the right default security goal for most applications is authenticated encryption. While this goal is now widely accepted for symmetric cryptography, it is still relatively rarely adopted in the public key setting. Hopefully the examples I have given will go some way to promoting that goal.

