We have seen that passwords and PINs suffer from interception and replay: an attacker sniffing a password can reuse it arbitrarily to authenticate. This can be mitigated using OTPs, i.e., passwords that are never reused. But even in this case, if the attacker is in the middle, he can sniff the password in transit and use it to authenticate once.
The problem with passwords and PINs is that we prove their knowledge by exhibiting the secret value. Strong authentication techniques, instead, allow for proving knowledge of a secret without showing it. This can be achieved by exhibiting a value that depends on the secret but does not allow an attacker to compute it.
Symmetric key authentication protocols
We discuss strong authentication protocols based on symmetric-key cryptography. The secret shared between the claimant and the verifier is a symmetric cryptographic key. In order to prove knowledge of the key K, and thus her identity, the claimant sends to the verifier a message encrypted under K. Since generating a valid encrypted message without knowing the key is assumed to be infeasible, this proves knowledge of K.
The general idea is to have a challenge and a response:
- Challenge: the verifier challenges the claimant to send a particular message encrypted under K
- Response: the claimant sends the required message
For example, a challenge might be “send me your name encrypted under K”. The response from Alice would then be $\{A\}_K$. The verifier decrypts the message and checks that it matches the name A. However, even if this does not reveal K, if the challenge is always the same an attacker can simply intercept the response and replay it to authenticate as Alice. The challenge should therefore never be the same. A way to achieve this is to add a time-variant parameter.
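As an illustration, here is a minimal sketch of this naive challenge-response in Python, assuming Fernet authenticated encryption from the third-party `cryptography` package as a stand-in for the encryption under K (all names are illustrative):

```python
# Minimal challenge-response sketch; Fernet stands in for encryption under K.
from cryptography.fernet import Fernet, InvalidToken

K = Fernet.generate_key()            # secret shared by Alice (claimant) and Bob (verifier)
alice, bob = Fernet(K), Fernet(K)

# Challenge: "send me your name encrypted under K"; response: {A}_K
response = alice.encrypt(b"A")

# Bob decrypts and checks that the plaintext matches Alice's name
try:
    ok = bob.decrypt(response) == b"A"
except InvalidToken:
    ok = False
print("authenticated" if ok else "rejected")   # authenticated, but replayable as-is
```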
Sequence numbers
The challenge becomes “send me your name and your sequence number encrypted under K” with response $\{A, s_A\}_K$. We note the protocol as:
$A \to B : \{A, s_A\}_K$
The sequence number $s_A$ of Alice is then increased by 1 so that it never repeats. In a sense, we are following the same idea behind OTPs: we never send the same response twice, so that it cannot be replayed. The verifier has to store the last sequence number received from Alice so that he can decrypt the message and check that both A and the sequence number (increased by 1) match. In this case he accepts Alice's identity and increments the stored sequence number so that it is ready for the next authentication.
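The following sketch illustrates the verifier's check, again assuming Fernet as the encryption under K and a hypothetical stored counter `last_seq` for Alice:

```python
# Sequence-number variant: Bob accepts only the next expected counter value.
from cryptography.fernet import Fernet, InvalidToken

K = Fernet.generate_key()
alice, bob = Fernet(K), Fernet(K)

s_A = 42                                        # Alice's current sequence number
last_seq = 41                                   # Bob's stored value for Alice

response = alice.encrypt(f"A,{s_A}".encode())   # {A, s_A}_K

try:
    name, seq = bob.decrypt(response).decode().split(",")
    if name == "A" and int(seq) == last_seq + 1:
        last_seq = int(seq)                     # ready for the next authentication
        print("Alice authenticated, stored counter is now", last_seq)
    else:
        print("rejected: wrong name or stale sequence number")
except InvalidToken:
    print("rejected: not encrypted under K")
```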
Sequence numbers have the drawback of requiring the verifier to store the last sequence number of each possible claimant. Moreover, it is unclear what to do if for any reason (system or network failure) the sequence numbers go out of sync: restarting from 0 is unacceptable, since any old authentication would become replayable by an attacker. An authenticated resynchronization protocol is necessary, but of course it cannot itself be based on sequence numbers.
Timestamps
The challenge is “send me your name and a recent timestamp encrypted under K”. So the protocol is:
$A \to B : \{A, t_A\}_K$
The timestamp $t_A$ is the time at the local clock of Alice when sending the message. The verifier decrypts the message and checks that Alice's name matches. Then he extracts a local timestamp $t_B$ and verifies that $t_B - t_A \leq W$, where $W$ is the acceptance window, i.e., the maximum allowed delay for the received message. For example, if $W$ is 1 minute the timestamp $t_A$ of A has to be at most 1 minute behind the timestamp $t_B$ of B.
In order to prevent replays we should now pick $W$ so that it is big enough to accept honest messages but so small that no replay will ever be accepted. This is very hard in practice, since network delays can vary a lot. For this reason, $W$ is typically taken much bigger than the average delivery time. To avoid replays, all the received timestamps are buffered so that double reception of a valid timestamp can be easily detected. Periodically, the expired timestamps (out of $W$) are deleted from the buffer. For example, consider the following message, intercepted by the attacker:
$A \to B : \{A, t_A\}_K$
Bob accepts Alice's identity and stores $t_A$ in the buffer. The attacker E tries a replay still inside $W$ (we write E(A) to denote the attacker pretending to be Alice):
$E(A) \to B : \{A, t_A\}_K$
The timestamp is still valid but Bob finds it in the buffer and refuses authentication. Later on, when $t_A$ expires it is deleted from the buffer. Any further replay attempt from the attacker will be refused since $t_A$ is out of $W$.
Timestamps do not require storing any per-user information (such as sequence numbers). It is enough to temporarily buffer the received-and-still-valid timestamps. Moreover, in case synchrony is lost it is enough to resynchronize local clocks. It is however important to notice that this synchronization should be authenticated, as malicious changes to clocks might easily allow the attacker to make old timestamps valid again or to prevent any honest message from being accepted. As for sequence numbers, this authenticated synchronization cannot be implemented based on timestamps.
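A small sketch of the verifier's side follows, with an acceptance window `W` and a buffer `seen` of still-valid timestamps (assumptions: Fernet as encryption under K, Unix-time clocks; names are illustrative):

```python
# Timestamp-based check with acceptance window W and a replay buffer.
import time
from cryptography.fernet import Fernet, InvalidToken

K = Fernet.generate_key()
alice, bob = Fernet(K), Fernet(K)
W = 60                      # acceptance window, in seconds
seen = set()                # buffer of received-and-still-valid timestamps

def respond():
    t_A = int(time.time())                      # Alice's local clock
    return alice.encrypt(f"A,{t_A}".encode())   # {A, t_A}_K

def verify(token):
    try:
        name, t_A = bob.decrypt(token).decode().split(",")
    except InvalidToken:
        return False                            # not encrypted under K
    t_A, t_B = int(t_A), int(time.time())
    if name != "A" or not (0 <= t_B - t_A <= W):
        return False                            # wrong name or outside the window
    if t_A in seen:
        return False                            # replay inside the window
    seen.add(t_A)                               # expired entries can be purged periodically
    return True

msg = respond()
print(verify(msg))   # True: first reception within W
print(verify(msg))   # False: the timestamp is already in the buffer
```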
Nonces
We have seen that sequence numbers and timestamps are based on some form of authenticated synchronization. We thus need at least one time-variant parameter that does not assume any form of synchronization and can be used, if needed, to synchronize sequence numbers and clocks. Nonces are “Numbers used only once”. Here the challenge implies an additional message from the verifier to the claimant containing the nonce. The challenge is “send me your name and nonce $n_B$ encrypted under key K”:
1. $B \to A : n_B$
2. $A \to B : \{A, n_B\}_K$
Bob generates a random nonce $n_B$ and sends it to Alice. He then decrypts the received message and checks that both $A$ and $n_B$ match. In this case he accepts Alice's identity. The nonce is then discarded, as it is supposed to be used only once.
If nonces are big enough (e.g. 128 bits), picking them at random implies that the probability of reusing the same nonce is negligible. Thus any replay will be prevented, since the nonce will mismatch with overwhelming probability. An example follows:
1. $B \to A : n_B'$
2. $E(A) \to B : \{A, n_B\}_K$ (replay of an old response)
Bob refuses since $n_B \neq n_B'$.
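A nonce-based exchange might look as follows, assuming Fernet as the encryption under K and 128-bit random nonces generated with the standard `secrets` module:

```python
# Nonce-based challenge-response: a replayed response fails against a fresh nonce.
import secrets
from cryptography.fernet import Fernet

K = Fernet.generate_key()
alice, bob = Fernet(K), Fernet(K)

n_B = secrets.token_hex(16)                      # Bob's fresh 128-bit nonce
response = alice.encrypt(f"A,{n_B}".encode())    # {A, n_B}_K

name, nonce = bob.decrypt(response).decode().split(",")
print(name == "A" and nonce == n_B)              # True; n_B is then discarded

# In a later session Bob picks a fresh nonce, so replaying the old response fails
n_B2 = secrets.token_hex(16)
name, nonce = bob.decrypt(response).decode().split(",")
print(name == "A" and nonce == n_B2)             # False with overwhelming probability
```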
Attacks on cryptography
Since challenge-response protocols provide cryptographic material to the attacker, it is important to avoid as much as possible scenarios that facilitate cryptanalysis. For example, the protocol:
$A \to B : \{A, t_A\}_K$
is likely to provide a known-plaintext scenario, since it is not hard to guess the value of Alice's timestamp with some approximation. More critically, the protocol:
1. $B \to A : n_B$
2. $A \to B : \{A, n_B\}_K$
provides a (partial) chosen-plaintext scenario where the attacker can obtain the encryption of any plaintext of the form $\{A, n\}_K$ with arbitrary $n$. In fact, it is enough for the attacker to impersonate Bob, noted E(B), and send $n$ as the nonce:
1. $E(B) \to A : n$
2. $A \to E(B) : \{A, n\}_K$
Alice becomes a sort of “encryption oracle”.
To avoid these problems a typical countermeasure is to randomize cryptography. This can be done in different ways, for example by adding a random padding to the plaintext. At a logical level, we can think of appending a random number $r$, called a “confounder”, as follows:
$A \to B : \{A, t_A, r\}_K$
and
$A \to B : \{A, n_B, r\}_K$
In this way, the known- and chosen-plaintext scenarios above are prevented, since part of the plaintext is now random and unknown to the attacker. When decrypting, the random confounder is simply ignored by Bob. We will always assume this form of randomization at the cryptographic level, with no need of making it explicit at the protocol level.
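At the logical level, the confounder can be sketched as follows (assumption: Fernet as encryption under K; note that Fernet already randomizes ciphertexts internally, so the explicit confounder here only illustrates the idea):

```python
# Appending a random confounder r so that two encryptions of the same message differ.
import secrets
from cryptography.fernet import Fernet

K = Fernet.generate_key()
alice, bob = Fernet(K), Fernet(K)

def respond(n_B: str) -> bytes:
    r = secrets.token_hex(8)                          # random confounder
    return alice.encrypt(f"A,{n_B},{r}".encode())     # {A, n_B, r}_K

name, nonce, _r = bob.decrypt(respond("some-nonce")).decode().split(",")
print(name, nonce)                                    # the confounder _r is simply ignored
```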
Redundancy. Consider the following protocol based on sequence numbers:
$A \to B : A, \{s_A\}_K$
The identifier A is sent in the clear while the sequence number is encrypted. Assume that Bob only checks the monotonicity of $s_A$, so as to deal more flexibly with network delays (even if some message is lost, the next will be accepted as its sequence number will be bigger than the stored one). In this case, if the attacker sends an arbitrary message:
$E(A) \to B : A, z$
the probability that the decryption of z is a valid number bigger than the stored one is very high (especially if we start from small sequence numbers). Encrypting arbitrary numbers with no format or message redundancy makes it impossible to check the integrity of the decryption: a random z always decrypts to some arbitrary, apparently meaningful number. Encrypting the identifier together with the sequence number mitigates this problem, since z, once decrypted, should at least match A. If A is n bits long, the probability that this happens by chance is $2^{-n}$.
As for randomization, there are standard techniques to add redundancy. A simple one is to enclose a one-way hash of the encrypted message, called a witness, as in $A, \{s_A, h(s_A)\}_K$. The hash is a proof that the sender knows the content of the message. When Bob decrypts, he checks that the hash of the decrypted message matches the received one. The attacker might send an arbitrary $z$, but with a hash of 128 bits the probability of passing the test would be $2^{-128}$, thus negligible. The fact that the hash is one-way makes this technique applicable even when it is important to preserve the secrecy of the sent message.
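A sketch of the witness technique follows, assuming Fernet as the encryption under K and SHA-256 as the one-way hash (with authenticated encryption the witness is redundant, so this only illustrates the logical check):

```python
# Witness: the hash of the message is encrypted together with it as redundancy.
import hashlib
from cryptography.fernet import Fernet, InvalidToken

K = Fernet.generate_key()
alice, bob = Fernet(K), Fernet(K)

def send(s_A: int) -> bytes:
    w = hashlib.sha256(str(s_A).encode()).hexdigest()   # witness h(s_A)
    ct = alice.encrypt(f"{s_A};{w}".encode())           # {s_A, h(s_A)}_K
    return b"A," + ct                                   # the identifier A travels in the clear

def check(msg: bytes) -> bool:
    name, ct = msg.split(b",", 1)
    try:
        s, w = bob.decrypt(ct).decode().split(";")
    except InvalidToken:
        return False
    return name == b"A" and hashlib.sha256(s.encode()).hexdigest() == w

print(check(send(7)))    # True: the decrypted value matches its witness
```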
Reflection
We now discuss another reason why it is important to have the identifier encrypted together with a time variant parameter. Consider the protocol
$A \to B : \{t_A\}_K$
Suppose that Bob is allowed to run the same protocol to authenticate with Alice using the same shared key K:
$B \to A : \{t_B\}_K$
The attacker can pretend to be Bob, written E(B), as follows:
1. $A \to E(B) : \{t_A\}_K$
2. $E(B) \to A : \{t_A\}_K$
The message from Alice trying to authenticate with Bob is sent back (reflected) to Alice in a second session where the attacker pretends to be Bob. If this is fast enough to fall within the acceptance window, Alice accepts the identity of Bob (who is instead the attacker). Notice that this is not a replay: Alice has never received the timestamp $t_A$ before. She has generated it, in fact, but never received it.
This attack shows that the symmetry of the key is dangerous if there is no information in the ciphertext about who the intended sender and receiver are. As a matter of fact, it is enough to specify A or B, as long as Alice and Bob agree on what they expect to see in the message (even one bit would suffice, to indicate the first in alphabetical order, for example).
ISO/IEC 9798-2 protocols
Protocols from the standard ISO/IEC 9798-2 are exactly like the ones discussed above, except that they enclose the identifier of the verifier B (to prevent reflection) instead of A.
One-pass unilateral authentication.
$A \to B : \{t_A, B\}_K$
where $t_A$ is a timestamp or a sequence number.
Two-pass unilateral authentication.
1. $B \to A : n_B$
2. $A \to B : \{n_B, B\}_K$
Two-pass mutual authentication.
Here Alice and Bob authenticate each other.
1. $A \to B : \{t_A, B\}_K$
2. $B \to A : \{t_B, A\}_K$
where $t_A, t_B$ are either timestamps or sequence numbers. Notice that this is just the composition of two independent unilateral authentication protocols.
Three-pass mutual authentication.
1. $B \to A : n_B$
2. $A \to B : \{n_A, n_B, B\}_K$
3. $B \to A : \{n_B, n_A\}_K$
This protocol can be understood starting from the composition of two unilateral authentications based on nonces:
1. $B \to A : n_B$
2. $A \to B : \{n_B, B\}_K$
3. $A \to B : n_A$
4. $B \to A : \{n_A, A\}_K$
Now, including the nonce $n_A$ in the encryption of the second message is harmless and ties the two unilateral authentications into a unique mutual authentication session. The same holds for adding $n_B$ in the third message. Moreover, the fact that the intended receiver (Bob) is specified in the second message together with the challenge $n_B$ (which is now encrypted) makes it possible to remove $A$ from the last message, as this is enough to prevent reflections.
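A sketch of the resulting three-pass exchange follows, again assuming Fernet as the encryption under K and hex-encoded random nonces (the message encoding is illustrative):

```python
# ISO/IEC 9798-2 three-pass mutual authentication, sketched with Fernet.
import secrets
from cryptography.fernet import Fernet

K = Fernet.generate_key()
alice, bob = Fernet(K), Fernet(K)

n_B = secrets.token_hex(16)                              # 1. B -> A : n_B
n_A = secrets.token_hex(16)
msg2 = alice.encrypt(f"{n_A},{n_B},B".encode())          # 2. A -> B : {n_A, n_B, B}_K

na, nb, ident = bob.decrypt(msg2).decode().split(",")
assert nb == n_B and ident == "B"                        # Bob authenticates Alice
msg3 = bob.encrypt(f"{n_B},{na}".encode())               # 3. B -> A : {n_B, n_A}_K

nb2, na2 = alice.decrypt(msg3).decode().split(",")
assert nb2 == n_B and na2 == n_A                         # Alice authenticates Bob
print("mutual authentication completed")
```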
Key-exchange
Authentication is always preliminary to some other task that requires identification. However, it is useless to adopt a strong-authentication protocol and then start an unprotected remote session:
1. $B \to A : n_B$
2. $A \to B : \{A, n_B\}_K$
3. $A \to B : m$
The attacker can intercept the message $m$ and substitute it with a different one. If used in this way, strong authentication gives the same guarantees as OTPs: the attacker, if in the middle, can impersonate the claimant once.
This can be easily solved by exchanging a new (session) key while identifying. Since strong authentication is based on encrypted responses, one simple technique is to enclose the new key inside the ciphertext. Notice that this is not possible with password-based authentication, unless we use passwords to derive cryptographic keys. We obtain the following, where $k$ is a fresh session key chosen by Alice:
1. $B \to A : n_B$
2. $A \to B : \{A, n_B, k\}_K$
3. $A \to B : \{m\}_k$
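A sketch of this key establishment follows, assuming Fernet keys both for the long-term key K and for the session key:

```python
# Session-key establishment piggy-backed on the authentication response.
import secrets
from cryptography.fernet import Fernet

K = Fernet.generate_key()                     # long-term key shared by Alice and Bob
alice, bob = Fernet(K), Fernet(K)

n_B = secrets.token_hex(16)                   # Bob's challenge
k = Fernet.generate_key()                     # fresh session key chosen by Alice
msg = alice.encrypt(b"A," + n_B.encode() + b"," + k)     # {A, n_B, k}_K

name, nonce, k_recv = bob.decrypt(msg).split(b",", 2)
assert name == b"A" and nonce == n_B.encode()
session = Fernet(k_recv)                      # subsequent traffic is protected under k
print(session.decrypt(Fernet(k).encrypt(b"payload")))    # b'payload'
```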
ISO/IEC 11770-2 protocols
One-pass unilateral key-establishment.
$A \to B : \{t_A, B, k\}_K$
where $t_A$ is a timestamp or a sequence number and $k$ is the new session key chosen by Alice.
Two-pass unilateral key-establishment.
1. $B \to A : n_B$
2. $A \to B : \{n_B, B, k\}_K$
Two-pass mutual key-establishment.
Here Alice and Bob authenticate each other.
1. $A \to B : \{t_A, B, k_A\}_K$
2. $B \to A : \{t_B, A, k_B\}_K$
where $t_A, t_B$ are either timestamps or sequence numbers. The session key $k$ is then computed as a function (for example a bitwise XOR) of the two subkeys $k_A$ and $k_B$.
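A sketch of the subkey combination, assuming both subkeys are random byte strings of the same length:

```python
# Deriving the session key as the bitwise XOR of the two exchanged subkeys.
import secrets

k_A = secrets.token_bytes(32)                # subkey chosen by Alice
k_B = secrets.token_bytes(32)                # subkey chosen by Bob

k = bytes(a ^ b for a, b in zip(k_A, k_B))   # session key k = k_A XOR k_B
print(len(k), "byte session key derived from both contributions")
```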
Three-pass mutual key-establishment.
1. $B \to A : n_B$
2. $A \to B : \{n_A, n_B, B, k_A\}_K$
3. $B \to A : \{n_B, n_A, k_B\}_K$
As above, the session key is computed as $k = k_A \oplus k_B$.
Server-based protocols
The point-to-point protocols we have studied so far assume a pre-shared key K between Alice and Bob. This does not scale if we have many users, as a different key is needed for each possible pair of users (i.e., a number of keys that grows quadratically with the number of users). Moreover, it is unclear how a new user might establish a new key with each existing user.
For these reasons, in practice we need a centralized trusted server that behaves as a Key Distribution Center (KDC), a service for distributing new session keys to any pair of users asking for them. The KDC shares one long-term key with each user U, noted $K_U$, and users do not directly share keys among them. When a new user Alice is added she just needs to register with the KDC and get her key $K_A$.
The Needham-Schroeder shared-key protocol
One of the most famous key-establishment protocols based on symmetric-key cryptography and on a KDC is the Needham-Schroeder shared-key protocol:
1. $A \to KDC : A, B, n_A$
2. $KDC \to A : \{n_A, B, k, \{k, A\}_{K_B}\}_{K_A}$
3. $A \to B : \{k, A\}_{K_B}$
4. $B \to A : \{n_B\}_k$
5. $A \to B : \{n_B - 1\}_k$
We describe the protocol in detail:
- Alice sends a request to the KDC for communicating with B. The request contains a nonce $n_A$
- The KDC sends a message encrypted under Alice's key $K_A$ containing a new session key $k$, the name of Bob and the nonce $n_A$ (which are both checked by Alice), plus a message $\{k, A\}_{K_B}$ encrypted for Bob
- Alice forwards to Bob the message $\{k, A\}_{K_B}$ encrypted under his key
- Bob decrypts the message and obtains the pair $(k, A)$ representing a new session key $k$ to be used with user A. He sends a nonce $n_B$ encrypted under the new session key to check that Alice knows the key (this is called the key confirmation step)
- Alice decrypts the nonce sent by Bob, decrements it by 1 and sends it back encrypted. In this way she proves knowledge of the session key $k$. A sketch of the whole message flow is given below.
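The sketch uses Fernet for the long-term keys $K_A$, $K_B$ and for the session key $k$; message encodings and error handling are simplified and purely illustrative:

```python
# Needham-Schroeder shared-key message flow, sketched end to end.
import secrets
from cryptography.fernet import Fernet

K_A, K_B = Fernet.generate_key(), Fernet.generate_key()   # long-term keys shared with the KDC
k = Fernet.generate_key()                                  # fresh session key chosen by the KDC

n_A = secrets.token_hex(16)                                # 1. A -> KDC : A, B, n_A

ticket = Fernet(K_B).encrypt(b"A," + k)                    # {k, A}_{K_B}
msg2 = Fernet(K_A).encrypt(n_A.encode() + b",B," + k + b"|" + ticket)   # 2. KDC -> A

# Alice checks n_A and B, then forwards the ticket:        3. A -> B : {k, A}_{K_B}
body, ticket_fwd = Fernet(K_A).decrypt(msg2).split(b"|")
nonce, ident, k_alice = body.split(b",", 2)
assert nonce == n_A.encode() and ident == b"B"

# Bob decrypts the ticket and runs key confirmation (messages 4 and 5)
name, k_bob = Fernet(K_B).decrypt(ticket_fwd).split(b",", 1)
n_B = secrets.randbits(64)
msg4 = Fernet(k_bob).encrypt(str(n_B).encode())            # 4. B -> A : {n_B}_k
answer = int(Fernet(k_alice).decrypt(msg4)) - 1
msg5 = Fernet(k_alice).encrypt(str(answer).encode())       # 5. A -> B : {n_B - 1}_k
assert int(Fernet(k_bob).decrypt(msg5)) == n_B - 1
print("session key established with", name.decode(), "and confirmed")
```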
To understand what security guarantees are provided by the protocol we reason on the various entities involved:
- KDC: it generates two messages encrypted under Alice's and Bob's keys. In this way the KDC is guaranteed that the new session key will only be accessed by Alice and Bob, and each of them will know who is the expected counterpart, since the corresponding identifier is encrypted together with the session key. For example, when Alice decrypts her message she knows that the intended party is Bob, since B is included in the encryption.
- Alice: she challenges the KDC with the nonce $n_A$ to prevent an attacker in the middle from replaying old messages from the KDC. In this way she is guaranteed that the key is a new one and that the KDC is in fact answering her request. The first two messages are similar to the two-pass unilateral key-establishment we already discussed.
- Bob: he receives a message (from Alice) encrypted by the KDC. This message does not contain any time-variant parameter, so it might be a replay of an old message. To mitigate this, Bob challenges Alice to prove she knows the session key by sending a nonce encrypted under that key. Only by knowing the session key is it possible to decrypt the nonce and send back its value decremented by 1.
Known-key attack. This protocol is secure only if we assume that session keys are never broken by the attacker. This, however, is a fairly strong assumption: we never want the security of a system to depend on its security in the past, and breaking a session key should never compromise future sessions. To see how this happens, think of an attacker breaking an old session key $k$ and also storing the corresponding key-exchange message $\{k, A\}_{K_B}$. He can easily pretend to be Alice as follows:
3. $E(A) \to B : \{k, A\}_{K_B}$
4. $B \to E(A) : \{n_B\}_k$
5. $E(A) \to B : \{n_B - 1\}_k$
The fact that the attacker has broken the session key allows him to easily answer the key-confirmation challenge by Bob (messages 4 and 5). Notice, in fact, that this step only aims at checking that the other party knows the session key, which is indeed the case if the key has been discovered. Thus the attacker can store an old session, work on it for whatever time he needs to break the key, and then re-establish the same session key pretending to be Alice. Notice that the presence of the KDC is not required here.
Kerberos
Kerberos is a distributed authentication service originating from the Massachusetts Institute of Technology (MIT) Project Athena. Its core key-establishment and authentication protocol derives from the Needham-Schroeder protocol above, and fixes the problem discussed above by adding lifetimes, i.e., validity periods, to keys. Moreover, the key-confirmation step is improved so as to provide mutual confirmation.
The protocol is the following:
1. $A \to KDC : A, B, n_A$
2. $KDC \to A : \{n_A, B, k, L\}_{K_A}, \{k, A, L\}_{K_B}$
3. $A \to B : \{k, A, L\}_{K_B}, \{A, t_A\}_k$
4. $B \to A : \{t_A\}_k$
We describe the protocol in detail:
- This is the same as in the Needham-Schroeder protocol;
- This extends Needham-Schroeder by adding a lifetime L to the session key. Moreover, the message encrypted for Bob is not nested inside the one for Alice (as the nesting was unnecessary for security);
- Alice forwards the message encrypted for Bob (called the ticket) to Bob and also encrypts under the session key her name and a valid timestamp $t_A$. This additional message, called the authenticator, proves that she knows the session key and strongly authenticates her to Bob, avoiding replays. It replaces the key confirmation performed in the Needham-Schroeder protocol by messages 4 and 5;
- Bob decrypts the message and obtains the triplet $(k, A, L)$ representing a new session key $k$ to be used with user A up to time L. He checks the time validity of the key and decrypts the second part. If the timestamp is also valid, he re-encrypts it under $k$ and sends it back to Alice to confirm his knowledge of the key. The timestamp is used as a nonce in this case: Alice will check that it matches the one sent out in the previous message. A sketch of Bob's checks is given below.
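The sketch assumes Fernet for both $K_B$ and the session key $k$, Unix-time timestamps and an acceptance window `W`; message encodings are illustrative:

```python
# Bob's checks on a Kerberos-style ticket and authenticator.
import time
from cryptography.fernet import Fernet

K_B = Fernet.generate_key()                  # long-term key shared by the KDC and Bob
k = Fernet.generate_key()                    # session key chosen by the KDC
W = 300                                      # Bob's acceptance window (seconds)
L = int(time.time()) + 3600                  # lifetime of the session key

# What Alice forwards to Bob:
ticket = Fernet(K_B).encrypt(f"{k.decode()},A,{L}".encode())          # {k, A, L}_{K_B}
authenticator = Fernet(k).encrypt(f"A,{int(time.time())}".encode())   # {A, t_A}_k

# Bob's checks
k_recv, name, lifetime = Fernet(K_B).decrypt(ticket).decode().split(",")
assert int(lifetime) > time.time(), "session key expired"
name2, t_A = Fernet(k_recv.encode()).decrypt(authenticator).decode().split(",")
assert name2 == name == "A" and 0 <= time.time() - int(t_A) <= W, "invalid authenticator"
confirmation = Fernet(k_recv.encode()).encrypt(t_A.encode())          # {t_A}_k back to Alice
print("ticket and authenticator accepted, confirmation sent")
```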
The known-key attack discussed above is prevented by setting a lifetime short enough to make any analysis of previous sessions infeasible. The lifetime is also important to repeat authentications without involving the KDC: Alice can re-authenticate by repeating the last two messages with a fresh timestamp:
3. $A \to B : \{k, A, L\}_{K_B}, \{A, t_A'\}_k$
4. $B \to A : \{t_A'\}_k$
This can be done until the lifetime L of $k$ expires.
We finally discuss an extension of the protocol that allows for distributing a new session key using $k$. It is enough to exchange two subkeys $k_A$ and $k_B$ as follows:
3. $A \to B : \{k, A, L\}_{K_B}, \{A, t_A, k_A\}_k$
4. $B \to A : \{t_A, k_B\}_k$
and compute the new session key as $k_A \oplus k_B$. In this way $k$ behaves as a middle-term key shared between Alice and Bob that can be used for authentication and session-key establishment up to time L.