CS 513 Lecture 5 notes


Exchanging secrets without shared keys

Having a key distribution center à la Kerberos makes key exchange scalable, but it doesn't remove the fundamental problem of trust: for a principal A to exchange keys via the KDC, A must already have exchanged a key kA with the KDC, and kA must be at least as trusted as any key the KDC generates. The vulnerability is merely moved from the key exchange problem between two communicating principals to the key exchange problem between each principal and the KDC.

Consider the following puzzle. Two people A and B on distant desert islands want to exchange treasure securely. Unfortunately, their only shipment method is a pirate who wants to steal the treasure. They do, however, have a strong box that the pirate is unable to open whenever there is a lock on it that he doesn't have the key to. And each of A and B has their own key and lock initially (in fact, a supply of as many keys and locks as they want), but not the key to the other's locks. Can they ship the treasure without the pirate being able to steal it?

This is a reasonable analogy to the problem of key exchange, or more generally, to the problem of sending messages without a prior established key. The treasure is some secret that is being transmitted. It could be a key itself that A and B want to share. In the mid-70's, various people started to realize that these kinds of problems can be solved.

Here is how the puzzle can be solved.

  1. Alice puts a secret in a box, which she locks with her own lock. Only Alice has the key to this lock.
  2. Alice then ships the box to Bob.
  3. Bob adds his own lock to this box in parallel, so that now the box has two locks.
  4. Bob then ships the box back to Alice.
  5. Alice, knowing that the box is secure with Bob's lock, then takes her own lock off the box (with her key).
  6. Alice sends the box back to Bob.
  7. Bob then removes his lock and receives the secret.

What makes this solution possible? It's the fact that applying and removing Alice's lock commutes with the same operations on Bob's lock. If we can find a cryptographic method with the same kinds of commutation properties, we ought to be able to solve these kinds of problems.
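The lock-box idea corresponds to Shamir's three-pass protocol, where "locking" is exponentiation modulo a public prime p and "unlocking" is exponentiation by the inverse exponent mod p−1. The sketch below is a toy illustration (the prime, the secret value, and the helper names are all made up for this example), not a production protocol:

```python
import random

# Toy three-pass ("lock box") protocol: exponentiation mod a prime acts as
# a commuting lock. Each lock is a pair (e, d) with e*d = 1 (mod p-1), so
# raising to the e-th power locks and raising to the d-th power unlocks.
p = 2 ** 127 - 1  # a Mersenne prime, playing the role of the public modulus

def make_lock():
    # pick e coprime to p-1 so the lock can later be removed
    while True:
        e = random.randrange(3, p - 1)
        try:
            d = pow(e, -1, p - 1)   # modular inverse (Python 3.8+)
            return e, d
        except ValueError:
            continue                # e not invertible; try again

secret = 424242                     # the "treasure" (must be 0 < secret < p)
eA, dA = make_lock()                # Alice's lock and key
eB, dB = make_lock()                # Bob's lock and key

m1 = pow(secret, eA, p)   # 1-2. Alice locks the box, ships it to Bob
m2 = pow(m1, eB, p)       # 3-4. Bob adds his lock, ships it back
m3 = pow(m2, dA, p)       # 5-6. Alice removes her lock (the locks commute)
out = pow(m3, dB, p)      # 7.   Bob removes his lock and opens the box
assert out == secret
```

The commutation property is exactly that (x^eA)^eB = (x^eB)^eA mod p, so the locks can be removed in either order.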

Diffie–Hellman key exchange

Diffie–Hellman key exchange allows two principals to agree on a shared key even though they exchange messages in public. In the protocol given below, there is no authentication, so either side could be spoofed by an active wiretapper. The protocol can be extended into one that also implements the necessary authentication.

The first step is to choose a large prime number p (around 512 bits). The second is to choose an integer g with 1 < g < p (with a further technical restriction: g must be a generator of the multiplicative group mod p). Both p and g are public. The protocol works as follows:

  1. A chooses a secret value sA and computes tA = g^sA mod p.
  2. B chooses a secret value sB and computes tB = g^sB mod p.
  3. A→B : tA
  4. B→A : tB

At this point, A can compute tB^sA mod p = g^(sA·sB) mod p. Similarly, B can compute tA^sB mod p = g^(sA·sB) mod p.

Now A and B have a shared secret, the value g^(sA·sB) mod p. They can use this as a shared key for further cryptography.
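The exchange can be sketched with toy numbers (p = 23 and g = 5, a textbook choice where 5 generates the group mod 23; real deployments use primes of 2048 bits or more):

```python
import random

# Toy Diffie-Hellman exchange. p = 23, g = 5 (5 is a generator mod 23).
# The exponents sA and sB are secret; tA and tB travel in the clear.
p, g = 23, 5

sA = random.randrange(1, p - 1)   # A's secret exponent
sB = random.randrange(1, p - 1)   # B's secret exponent

tA = pow(g, sA, p)   # A -> B, sent in public
tB = pow(g, sB, p)   # B -> A, sent in public

keyA = pow(tB, sA, p)   # A computes (g^sB)^sA mod p
keyB = pow(tA, sB, p)   # B computes (g^sA)^sB mod p
assert keyA == keyB      # both now hold g^(sA*sB) mod p
```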

A wiretapper can see all the messages that are sent. If this attacker could compute sA from tA, g, and p, that is, by computing the discrete logarithm log_g tA in the group mod p, then the key would be compromised. However, this computation, solving the discrete logarithm problem, is thought to be computationally hard.
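At toy sizes the discrete logarithm is trivial to find by exhaustive search; the point is that the cost of this search grows exponentially in the bit length of p. A naive search, for illustration only:

```python
def dlog(g, t, p):
    # Naive discrete log: find s with g^s = t (mod p) by trying every
    # exponent in turn. Time is O(p), hopeless once p is hundreds of bits.
    x = 1
    for s in range(p):
        if x == t:
            return s
        x = (x * g) % p
    return None

# With p = 23 and g = 5, recover the exponent 7 from g^7 mod p:
p, g = 23, 5
assert dlog(g, pow(g, 7, p), p) == 7
```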

One problem with Diffie-Hellman is that it does not generalize to sending arbitrary messages. But it was a first step toward that goal, and its development led to full public-key cryptography. Diffie-Hellman can be used to exchange a shared key, and then the communicating principals can use shared-key crypto to exchange messages securely. However, public-key cryptography has some additional advantages, like the ability to do digital signing.

Public-key cryptography

The idea of a public key cryptosystem is to have two keys: a private (secret) key k and a public key K. Anyone can know the public key. Plaintext to a principal B is encrypted using B's public key, KB. B decrypts the enciphered text using its private key, kB. As long as B is the only one who knows the private key, then only B can decrypt messages encrypted under B's public key.

c = E(K, p)
p = D(k, c)

Public-key cryptography was introduced by Diffie in 1975, though Merkle also concurrently developed the idea. By 1977, there was a strong, practical public-key cryptosystem, the RSA cryptosystem. However, the idea had been developed a few years earlier by cryptographers in the UK Government Communications Headquarters, GCHQ (the successor to Bletchley Park, where the Enigma machine was cracked): Clifford Cocks developed what is essentially RSA in 1973, and another GCHQ cryptographer developed the equivalent of Diffie-Hellman in 1974 while trying to find weaknesses in Cocks's scheme. The fact that these cryptosystems had been developed earlier by the government was not known for decades. The Code Book gives a nice history of the development of public-key cryptography.

Other than RSA, the other public-key cryptosystem especially worth knowing about is ElGamal, described by Taher Elgamal in 1984. It is based loosely on Diffie-Hellman key agreement. It assumes that solving the discrete logarithm problem is intractable. There are actually several related cryptosystems. We'll look at ElGamal more later.

Uses of public-key cryptography


If A wants to send any message to B, he just sends it under B's public key, obtaining confidentiality:
A→B : E(KB, p) = {p}KB
B: compute D(kB, E(kB, p)) = p

Since the public key K is known to everyone, it is important that the plaintext p not be short or predictable. Otherwise an exhaustive search can be conducted by anyone. For example, if p is either "yes" or "no", an attacker can just compute E(K, "yes") and E(K, "no") and see which one matches. This is known as a dictionary attack. The simple solution is to pad out the cleartext with enough random noise to make guessing the plaintext infeasible.
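The dictionary attack can be demonstrated with textbook (deterministic, unpadded) RSA; the toy keypair below is invented for the example. Real schemes avoid this by randomized padding, so two encryptions of the same plaintext differ:

```python
# Dictionary attack on deterministic public-key encryption. With textbook
# (unpadded) RSA, anyone can encrypt candidate plaintexts under the public
# key and compare against the ciphertext. Toy keypair: n = 3233 = 61*53.
n, e = 3233, 17          # public key, known to everyone
d = 2753                 # private key, held only by the recipient (unused here)

def enc(m):
    return pow(m, e, n)

plaintexts = {b"yes": 5, b"no": 7}   # tiny message space, encoded as ints
c = enc(plaintexts[b"yes"])          # the victim's ciphertext

# The attacker knows only the public key and the candidate messages:
recovered = [w for w, m in plaintexts.items() if enc(m) == c]
assert recovered == [b"yes"]
```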

Key distribution

In public key cryptography, the public keys are known to everyone, solving the key distribution problem. In fact, we can also use public-key encryption to share keys, by making the message p be the key to be shared.

You might wonder why we need to bother with sharing keys once we have public-key cryptography. The main reason is that public key cryptography is usually much slower than secret key cryptography. Typically it is about 1000 times slower or worse. Therefore, public-key cryptography is rarely used to encrypt long messages. Typically, a message is encrypted using shared key cryptography (with a secret key). That secret key is then encrypted using public key cryptography, and the encrypted message and key are sent. This is known as hybrid encryption.

As an example, we might send a long message p as follows:

A→B : {p}k {k}KB
B: compute D(kB, {k}KB) = k
B: compute D(k, {p}k) = p   (shared-key D)

To produce messages that are readable by several principals, we can just encrypt the secret key k under the various public keys:

A→B,C : {p}k {k}KB {k}KC
B: compute D(kB, {k}KB) = k
B: compute D(k, {p}k) = p
C: compute D(kC, {k}KC) = k
C: compute D(k, {p}k) = p
Here we have a message readable by B or C but no one else.
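The hybrid, multi-recipient pattern can be sketched end to end. The sketch uses toy textbook-RSA keypairs and an XOR-with-SHA-256-keystream stand-in for the shared-key cipher (a real system would use something like AES-GCM); all the concrete numbers are invented for the example:

```python
import hashlib

# Hybrid encryption sketch: wrap a fresh symmetric key k under each
# recipient's public key, and encrypt the long message only once under k.
def stream_xor(key, data):
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    # Stand-in for a real scheme such as AES-GCM; same call decrypts.
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, out))

# Toy RSA keypairs for B and C: (n, e) public, (n, d) private.
B_pub, B_priv = (3233, 17), (3233, 2753)
C_pub, C_priv = (2537, 13), (2537, 937)

def wrap(k_int, pub):       # {k}K : textbook RSA encryption of the key
    n, e = pub
    return pow(k_int, e, n)

def unwrap(c, priv):        # recover k with the private key
    n, d = priv
    return pow(c, d, n)

msg = b"a long message " * 10
k_int = 1234                          # fresh symmetric key, as a small int
k = k_int.to_bytes(16, "big")

ct = stream_xor(k, msg)               # {p}k  : message under the shared key
wrapped_B = wrap(k_int, B_pub)        # {k}KB : key wrapped for B
wrapped_C = wrap(k_int, C_pub)        # {k}KC : key wrapped for C

kB = unwrap(wrapped_B, B_priv).to_bytes(16, "big")   # B recovers k
assert stream_xor(kB, ct) == msg
kC = unwrap(wrapped_C, C_priv).to_bytes(16, "big")   # C recovers k
assert stream_xor(kC, ct) == msg
```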


To authenticate a principal, it is not necessary to share a key with that principal. Instead, a challenge can be sent that allows the remote principal to prove that he has the private key corresponding to the public key. In fact, a principal essentially becomes a public key. The public key is the public name of the principal, and the private key is the secret identity.

To make a principal B authenticate himself, A sends an encrypted challenge, e.g.:

1. A→B: {r}KB = m1
2. B: compute D(kB, m1) = r
3. B→A: r

This is actually implementing a decryption service, so we wouldn't want to implement it this way. We could use a one-way hash function and a second nonce to protect against a principal A who doesn't know r:

1. A→B: {r}KB
2. B→A: h(r, r2), r2

If A knows r, she can check whether the hash is correct. Otherwise she knows nothing. B could include additional information in the message sent back as long as that information is included in the hash, and A would know that that information comes from B.
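The hashed challenge-response can be sketched with a toy textbook-RSA keypair for B (the numbers are invented for the example); B returns h(r, r2) rather than r itself, so B never acts as a general decryption service:

```python
import hashlib
import secrets

# Challenge-response sketch. A encrypts a random nonce r under B's public
# key; B proves knowledge of the private key by returning h(r, r2) together
# with a fresh nonce r2.
n, e = 3233, 17          # B's public key KB (toy textbook RSA)
d = 2753                 # B's private key kB

def h(*parts):
    # One-way hash over a sequence of small integers.
    m = hashlib.sha256()
    for x in parts:
        m.update(x.to_bytes(8, "big"))
    return m.hexdigest()

# A:
r = secrets.randbelow(n)
m1 = pow(r, e, n)                 # 1. A -> B : {r}KB

# B:
r_dec = pow(m1, d, n)             # 2. B recovers r using kB
r2 = secrets.randbelow(n)
response = (h(r_dec, r2), r2)     #    B -> A : h(r, r2), r2

# A checks the response against the r she chose:
assert response[0] == h(r, response[1])
```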

Digital signatures

Some public key cryptography schemes allow plaintext to be run through the decryption algorithm (using the private key). What is produced is referred to as signed text and it can be "deciphered" by anyone using the public key:

E(K, D(k, p)) = p

Only the possessor of a private key can create text that is decipherable using the public key, so the ability to compute D(k,p) proves that the sender is the principal corresponding to the public key K. The functionality of signed text cannot be replicated using shared-key cryptography.

In practice you probably don't want to encrypt all of p with the private key, because that is too slow. Instead, you can make use of a one-way hash function, and only sign the digest:

1. A→ B: p, D(kA, h(p))
2. B: compute h' = E(KA, D(kA, h(p))), check that h' = h(p)
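Hash-then-sign can be sketched with the same kind of toy textbook-RSA keypair (the digest is reduced mod n only because the toy modulus is tiny; real moduli are far larger than the hash output):

```python
import hashlib

# Hash-then-sign sketch with toy textbook RSA: A signs the digest h(p),
# and B verifies by "encrypting" the signature with A's public key.
n, e = 3233, 17          # A's public key KA
d = 2753                 # A's private key kA

def digest(msg):
    # h(p), reduced mod n because the toy modulus is so small
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

p = b"pay Bob $10"
sig = pow(digest(p), d, n)        # 1. A -> B : p, D(kA, h(p))

# 2. B computes E(KA, sig) and compares it with h(p):
assert pow(sig, e, n) == digest(p)
```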

In fact, using the hash is crucial for some popular public-key cryptosystems. For example, in RSA, if you sign two values presented by an attacker, the attacker can use the signed values to construct a signature of a third value.
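This forgery follows from RSA's multiplicative structure: since signing is s = m^d mod n, the product of two signatures is a valid signature on the product of the messages. A toy demonstration (keypair invented for the example):

```python
# RSA signature forgery from two chosen signatures: because textbook RSA
# signing is s = m^d mod n, sig(m1) * sig(m2) = sig(m1 * m2) (mod n).
# Hashing the message before signing destroys this structure.
n, e, d = 3233, 17, 2753          # toy keypair

def sign(m):
    return pow(m, d, n)

def verify(m, s):
    return pow(s, e, n) == m % n

s1, s2 = sign(12), sign(34)       # victim signs two attacker-chosen values
forged = (s1 * s2) % n            # attacker multiplies the signatures

# A valid signature on 12*34, which the victim never signed:
assert verify((12 * 34) % n, forged)
```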

Example: RSA

RSA (Rivest, Shamir, Adleman) was developed in 1977. Its security is based on the difficulty of factoring large numbers. It turns out to be easy to find large prime numbers, but once two prime numbers p and q are multiplied, it seems to be hard to recover them from their product. At least, mathematicians have been working on the factoring problem for centuries without finding any efficient way to do it.

The recipe works as follows:

  1. Choose two large prime numbers p and q, and let n = pq.
  2. Compute φ(n) = (p−1)(q−1).
  3. Choose an encryption exponent e that is relatively prime to φ(n).
  4. Compute the decryption exponent d such that ed mod φ(n) = 1.
  5. The public key is the pair (n, e); the private key is d.

To encrypt a message m (with m < n), compute m^e mod n and send the result as ciphertext. To decrypt ciphertext c: m = c^d mod n. RSA can also be used for digital signatures. To sign a message m: s = m^d mod n. To check a signature: m = s^e mod n.
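The whole recipe can be run with toy primes (real keys use primes of roughly 1024 bits or more; the numbers below are illustrative only):

```python
from math import gcd

# RSA key generation and use with toy primes.
p, q = 61, 53
n = p * q                    # modulus, part of both keys
phi = (p - 1) * (q - 1)      # phi(n) = (p-1)(q-1)

e = 17                       # public exponent, relatively prime to phi(n)
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi(n))

m = 65                       # message, must satisfy m < n
c = pow(m, e, n)             # encrypt: c = m^e mod n
assert pow(c, d, n) == m     # decrypt: m = c^d mod n

s = pow(m, d, n)             # sign:    s = m^d mod n
assert pow(s, e, n) == m     # verify:  m = s^e mod n
```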

A fair amount of number theory is needed to prove that this technique works. The key theorem is this:

m = (m^e mod n)^d mod n = (m^d mod n)^e mod n   (for m < n)

The reason this works is Euler's Totient Theorem, which says that if x and n are relatively prime and x < n,

x^φ(n) mod n = 1

where φ(n) is the number of positive integers less than n that are relatively prime to it. For example, φ(6) = 2 because only 1 and 5 are relatively prime to 6, and indeed 5^2 mod 6 = 25 mod 6 = 1 and 1^2 mod 6 = 1. If you think about it, you'll see that for a product of two distinct primes n = pq, we have φ(n) = (p−1)(q−1).
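The claims about φ can be checked by brute force at small sizes:

```python
from math import gcd

# phi(n) by brute force: count the integers in 1..n-1 coprime to n.
def phi(n):
    return sum(1 for x in range(1, n) if gcd(x, n) == 1)

assert phi(6) == 2                          # only 1 and 5 are coprime to 6
assert phi(55) == 40 == (5 - 1) * (11 - 1)  # phi(pq) = (p-1)(q-1)
assert pow(5, phi(6), 6) == 1               # Euler: x^phi(n) mod n = 1
```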

If e and d are multiplicative inverses modulo φ(n), then we have ed mod φ(n) = 1, or ed = 1 + kφ(n) for some k. Therefore, if we raise a message m (relatively prime to n, with m < n) to the ed power, we have

m^(ed) mod n = m^(1 + kφ(n)) mod n
= (m · (m^φ(n))^k) mod n
= (m · 1^k) mod n        (by Euler's Totient Theorem)
= m mod n = m


Suppose p = 5, q = 11, n = 55. Then φ(n) = 4*10 = 40. If we choose e = 7, then d = 23 because 7*23 = 161 = 1 (mod 40). Try m = 2. Then m^e = 2^7 = 128 = 18 (mod 55). And 18^23 = 2 (mod 55).
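The arithmetic in the worked example can be verified directly:

```python
# Checking the worked example: p = 5, q = 11, n = 55, phi(n) = 40,
# e = 7, d = 23, m = 2.
assert (7 * 23) % 40 == 1       # e and d are inverses mod phi(n)
assert pow(2, 7, 55) == 18      # encryption: m^e mod n
assert pow(18, 23, 55) == 2     # decryption recovers m: c^d mod n
```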