# Suggested Solutions: Information Security FS13

## Table of Contents

- 1 Multiple Choice
- 2 Cryptographic Primitives
- 3 Pseudo-Randomness
- 4 Authentication Mechanisms
- 5 Number-Theoretic Cryptography
- 6 Game-Based Security of Encryption
- 7 CPA Security of ElGamal
- 8 Trust-Free Security Transformations
- 9 Trust and Authentication
- 10 Dolev-Yao Derivations
- 11 Secrecy
- 12 Authentication
- 13 Traces
- 14 Network Security
- 15 Role-Based Access Control
- 16 Mandatory Access Control
- 17 Data Usage
- 18 Privacy

# Multiple Choice

- False. Choose the identity function for both Enc and Dec.
- True. Otherwise the same key could not be used for Enc on one side and Dec on the other.
- No clue
- No clue

# Cryptographic Primitives

## a)

It is true. Proof by reduction:

I construct a TM $M$ that inverts the first function by using a TM $M'$ that is able to invert the second. On its input, $M$ prepares (choosing any required randomness uniformly at random) an input for $M'$. Since $M'$ can invert, $M$ obtains a preimage and outputs the corresponding part of it. $M$'s own work takes constant time, and $M'$ is efficient and called only polynomially many times, so $M$ is efficient as well. Clearly $M$ inverts with the same probability with which $M'$ inverts, and since $M'$ succeeds with non-negligible probability, so does $M$.

## b)

A single hash function maps the infinite set of possible inputs to a finite set of outputs. Therefore, by the pigeon-hole principle there must be some inputs that map to the same output, which is called a collision. So every single hash function has collisions, which can, once known, be output by an adversary in constant time. To prevent this, the adversary is given some identifier that refers to one hash function out of a family of functions, where each function has different collisions. So the adversary has to compute a collision while he is running, rather than having one hard-coded inside.
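As a small illustration (my own toy example, not from the exam): truncating SHA-256 to 16 bits gives a "single hash function" whose collisions a birthday-style brute-force search finds almost instantly.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Toy 'single hash function': SHA-256 truncated to 16 bits."""
    return hashlib.sha256(data).digest()[:2]

def find_collision():
    """Birthday search: with a 16-bit output a collision appears
    after a few hundred inputs (pigeon-hole principle)."""
    seen = {}
    i = 0
    while True:
        x = i.to_bytes(8, "big")
        d = h(x)
        if d in seen:
            return seen[d], x
        seen[d] = x
        i += 1

a, b = find_collision()
assert a != b and h(a) == h(b)
```

Once `a` and `b` are known, an adversary against this single function just prints them — which is exactly why the security definition uses a keyed family.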

# Pseudo-Randomness

## a)

The distinguisher $D$ gets a string $w$ as input and has to determine whether it is a truly random string or an output of the generator $G$. It simply runs $G$ on every possible seed. If there is a seed $s$ such that $G(s) = w$, it outputs 1 for "pseudo-random", otherwise 0. If $w$ was a pseudo-random string, $D$ outputs 1 with probability 1. If $w$ was a truly random string, $D$ outputs 1 with probability at most $2^{|s|}/2^{|w|}$, the fraction of strings that $G$ can output at all. This sums up to a non-negligible distinguishing advantage (the distinguisher is, of course, not efficient).

Alternative: randomly output 0 or 1 with probability $\frac{1}{2}$ each. We guess correctly with probability $\frac{1}{2}$.
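The brute-force distinguisher can be sketched concretely (toy parameters of my own choosing: a 16-bit seed stretched to 32 bits). Note that enumerating all seeds is exponential in the seed length, which is exactly why this distinguisher is not efficient.

```python
import hashlib

SEED_BITS = 16

def G(seed: int) -> bytes:
    """Hypothetical toy PRG stretching a 16-bit seed to 32 bits."""
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()[:4]

# The range of G contains at most 2^16 of the 2^32 possible 32-bit strings.
RANGE = {G(s) for s in range(1 << SEED_BITS)}

def D(w: bytes) -> int:
    """Brute-force distinguisher: output 1 iff some seed maps to w."""
    return 1 if w in RANGE else 0

assert D(G(12345)) == 1   # pseudo-random inputs are always detected
# A truly random 32-bit string is flagged with probability <= 2^16 / 2^32.
```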

## b)

Proof by reduction:

Let B be a TM that distinguishes functions in G from truly random functions and A be the TM that reduces F to G. A now impersonates the oracle for B by composing the function given by A's own oracle twice. Hmm... not sure how to proceed from here.

Another approach:

Assume the functions in G are not pseudo-random, and let B be a poly-time adversary who can distinguish functions in G from truly random functions. If a string is the output of such a function, then B can distinguish it from a truly random string of the same length with non-negligible probability.

B can now also distinguish functions in F from random functions:

- B selects a random input $x$ and a random function $f$ from F and calculates $f(x)$.

- B sends $x$ to the oracle O, which chooses a random function and returns either its value at $x$ or a truly random string.

- If the function's value is returned, then it equals the value produced by a function from G, and B can differentiate it from a truly random string. B wins the game, so F is not a family of pseudo-random functions.

Note: I believe this alternative approach does not work because you can't be sure that the oracle chooses the same function as the one B picked.

# Authentication Mechanisms

## a)

A MAC is existentially unforgeable under an adaptive chosen-message attack if for all PPT adversaries A

$$\Pr[\text{Mac-forge}_{A,\Pi}(n) = 1] \le \varepsilon(n)$$

for some negligible function $\varepsilon$, where the Mac-forge experiment is as follows:

- A key is generated.
- The adversary can query an oracle (owning the key) to tag arbitrary messages.
- A wins the experiment if he can create a valid tag for a message he has not queried the oracle for.

## b)

The adversary queries his oracle for the tag of one suitably chosen message, computes from that tag a valid tag for a different message, and outputs this second message together with the computed tag, which is a valid forgery.

## c)

In practice, all spaces involved are of finite size: for example, the key space is limited by the key length defined by the scheme. If an adversary doesn't need to be efficient, brute force becomes an option, and since all spaces are finite in practice, they can all be brute-forced by trying all possibilities.

EDIT: When doing brute force, how does the adversary know when to stop? I think he needs to be able to make some kind of evaluation of each try and therefore needs to know something about the content of the signed message. But what would that be?

Comment on EDIT: The adversary could query the oracle to sign a few known messages. For each candidate key he signs these known messages himself and compares the results to the oracle's answers. If they all match, he has guessed the right key with very high probability.
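This key-recovery strategy can be sketched with a toy MAC whose key space is artificially small (my own construction, purely illustrative):

```python
import hmac, hashlib

def mac(key: bytes, msg: bytes) -> bytes:
    """Toy MAC with an artificially small (16-bit) key space."""
    return hmac.new(key, msg, hashlib.sha256).digest()[:8]

secret = (31337).to_bytes(2, "big")   # unknown to the attacker
# Oracle answers for a few known messages:
known = [(b"msg-%d" % i, mac(secret, b"msg-%d" % i)) for i in range(3)]

def brute_force(pairs):
    """Try every key; keep the one consistent with all oracle answers."""
    for k in range(1 << 16):
        key = k.to_bytes(2, "big")
        if all(mac(key, m) == t for m, t in pairs):
            return key
    return None

assert brute_force(known) == secret
```

Three matching message/tag pairs already make a false positive extremely unlikely, which is the "very high probability" argument above.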

# Number-Theoretic Cryptography

## a)

If $y = g^x$ is in the group of quadratic residues of $G$, then its discrete logarithm $x$ is even. By Euler's criterion, $y$ is in the group of quadratic residues iff $y^m = 1$, where $2m$ is the size of the group. So $x$ is even iff $y^m = 1$ (even modulo $2m$, of course).
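A quick numerical check of this criterion (example values are my own): in $\mathbb{Z}_{23}^*$, which has order $2m = 22$, a power of a generator is a quadratic residue exactly when its discrete logarithm is even.

```python
# p = 23 is prime; 5 generates the multiplicative group Z_23^*,
# which has order 2m = 22, so m = 11.
p, g = 23, 5
m = (p - 1) // 2

for x in range(p - 1):
    y = pow(g, x, p)
    is_qr = pow(y, m, p) == 1      # Euler's criterion
    assert is_qr == (x % 2 == 0)   # QR exactly when the dlog is even
```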

## b)

We know $c$, $n$, $e$ and $\phi(n)$. We can find the secret key $d$ by using the extended Euclidean algorithm: EEA$(e, \phi(n))$ gives us $d$ and $y$ such that $e \cdot d + y \cdot \phi(n) = 1$, i.e. $d = e^{-1} \bmod \phi(n)$. Then compute $m = c^d \bmod n$.

To compute $p$ and $q$ we start with the following system of equations:

$$\begin{aligned} n &= p \cdot q \Longleftrightarrow p = \frac{n}{q}\\ \phi(n) &= (p-1)(q-1) \end{aligned}$$

Now we insert the first equation into the second and obtain a quadratic equation in $q$:

$$q^2 - (n - \phi(n) + 1)\,q + n = 0.$$ Solving it yields $q$, and reinserting into the first equation yields $p$.
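A worked sketch with small made-up numbers (the exercise's actual values are not reproduced here); `pow(e, -1, phi)` computes the modular inverse, i.e. the EEA step, in Python 3.8+:

```python
from math import isqrt

# Small illustrative RSA instance (values invented for the example).
p0, q0 = 61, 53
n = p0 * q0                  # 3233
phi = (p0 - 1) * (q0 - 1)    # 3120
e = 17
c = pow(42, e, n)            # ciphertext of m = 42

# Secret key: d = e^{-1} mod phi(n), found by the extended Euclidean algorithm.
d = pow(e, -1, phi)
assert pow(c, d, n) == 42    # decryption recovers m

# Factoring n from n and phi(n): q^2 - (n - phi + 1) q + n = 0.
s = n - phi + 1              # equals p + q
disc = isqrt(s * s - 4 * n)
p, q = (s + disc) // 2, (s - disc) // 2
assert {p, q} == {p0, q0}
```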

# Game-Based Security of Encryption

## a)

Numbers refer to the boxes from left to right and top to bottom:

- The adversary gets $n$ and oracle access to $Enc_k(\cdot)$; he outputs two messages $m_0, m_1$.
- Run Gen to generate a key $k$.
- Choose $b \in \{0,1\}$ uar.
- Return $Enc_k(m_b)$.
- Output 1 iff the adversary's guess $b'$ satisfies $b' = b$.

The security definition requires the probability that the experiment outputs 1 to be at most $\frac{1}{2} + \varepsilon(n)$, where $\varepsilon(n)$ is a negligible function in $n$.

## b)

COA: The adversary has no information about the plaintext: he gets no oracle access, and the challenge message is chosen by the challenger. The adversary must output the correct plaintext to win the game.

CCA: The adversary is additionally given a decryption oracle for that key. He can use both oracles until he outputs b', but must not query the decryption oracle for the challenge ciphertext.

For a public-key cryptosystem it is not necessary to give oracle access to the adversary, since he can be given the public key and compute the ciphertexts himself.

## c)

No, it is not CPA-secure. The adversary queries the encryption oracle for the two challenge messages and remembers the resulting ciphertexts. Then he sends both messages to the challenger and compares the returned challenge ciphertext with the two stored ciphertexts. Since encryption is deterministic, he wins the game with probability 1.
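A sketch of the attack against any deterministic scheme (the "cipher" below is a made-up stand-in; only its determinism matters for the attack):

```python
import hashlib, secrets

key = secrets.token_bytes(16)

def enc(m: bytes) -> bytes:
    """Stand-in for any deterministic encryption: same input -> same output.
    (Not decryptable; only determinism matters here.)"""
    return hashlib.sha256(key + m).digest()

m0, m1 = b"attack at dawn", b"attack at dusk"

# Phase 1: the adversary queries the oracle on both candidate messages.
c0, c1 = enc(m0), enc(m1)

# Challenger picks b and returns enc(m_b); the adversary recognises it.
b = secrets.randbelow(2)
challenge = enc([m0, m1][b])
guess = 0 if challenge == c0 else 1
assert guess == b   # wins with probability 1
```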

## d)

The problem is that the IV used is part of the ciphertext. When encrypting a one-block message $m$ using a block cipher in CBC mode, we get back $(IV, c_1)$ with $c_1 = F_k(m \oplus IV)$. We use that fact to our advantage and construct the following attacker. First, we send any two distinct $m_0$ and $m_1$ (both exactly one block long). As a result, we get back the challenge ciphertext $(IV, c_1)$. We now fix any $\Delta \neq 0$ and ask the decryption oracle to decrypt $(IV \oplus \Delta, c_1)$ for us (this ciphertext is different from the challenge ciphertext, so we are allowed to ask this query). Looking at the definition of CBC decryption, we see that we get back $F_k^{-1}(c_1) \oplus IV \oplus \Delta = m_b \oplus \Delta$. We can immediately "xor out" $\Delta$ and compare the result with our originals $m_0$ and $m_1$.
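The attack can be replayed concretely with a toy block cipher (a 4-round Feistel network standing in for $F_k$ — my own stand-in, not the exam's cipher):

```python
import hashlib, secrets

BLOCK = 16  # bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def _f(half: bytes, key: bytes, i: int) -> bytes:
    return hashlib.sha256(key + bytes([i]) + half).digest()[:BLOCK // 2]

def F(block: bytes, key: bytes) -> bytes:
    """Toy keyed permutation: 4-round Feistel network."""
    L, R = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in range(4):
        L, R = R, xor(L, _f(R, key, i))
    return L + R

def F_inv(block: bytes, key: bytes) -> bytes:
    """Run the Feistel rounds in reverse to invert F."""
    L, R = block[:BLOCK // 2], block[BLOCK // 2:]
    for i in reversed(range(4)):
        L, R = xor(R, _f(L, key, i)), L
    return L + R

def cbc_enc(key, iv, m):
    return iv, F(xor(m, iv), key)      # c1 = F_k(m XOR IV)

def cbc_dec(key, iv, c1):
    return xor(F_inv(c1, key), iv)     # m  = F_k^{-1}(c1) XOR IV

key = secrets.token_bytes(16)
m0, m1 = b"block-number-000", b"block-number-111"
b = secrets.randbelow(2)
iv0, c1 = cbc_enc(key, secrets.token_bytes(BLOCK), [m0, m1][b])

# CCA attack from above: query (IV XOR delta, c1) -- a *different*
# ciphertext, so the decryption query is legal -- then xor delta out.
delta = b"\x01" * BLOCK
mb = xor(cbc_dec(key, xor(iv0, delta), c1), delta)
guess = 0 if mb == m0 else 1
assert guess == b
```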

# CPA Security of ElGamal

## a)

- $n$, the group description $(\mathbb{G}, q, g)$
- Choose $x, y, z \in \mathbb{Z}_q$ uar.
- Return $(g^x, g^y, g^{xy})$ or $(g^x, g^y, g^z)$
- Output 1 iff the distinguisher's guess is correct.

The distinguisher D needs to tell, for a triple $(g^x, g^y, g^z)$ with $x$, $y$ and $z$ from $\mathbb{Z}_q$, whether $z = xy$ (we then output a 1) or not (in which case we output a 0).

## b)

In the case where the last component is a uniformly random group element $g^z$, the distinguisher performs an idealized encryption. To see that, recall the one-time pad generalized to arbitrary groups, where the key is shared in advance and encryption / decryption is done using the group operation of $\mathbb{G}$ on the message $m$.

The ciphertext is $(g^y, g^z \cdot m)$ for random $y$ and $z$. Since the distribution of the second element of the ciphertext is uniform over all group elements, and in particular independent of the message, it follows that said probability is exactly $\frac{1}{2}$.

## c)

Considering the ElGamal cryptosystem, the adversary wins the game with probability $\frac{1}{2} + \varepsilon(n)$ where $\varepsilon(n)$ is not negligible, since we assume that he can attack ElGamal. As already argued in b), in the idealized game the adversary wins with probability exactly $\frac{1}{2}$. Our distinguisher outputs 1 iff the adversary wins his game. Therefore the inequality of DDH reads as follows:

$$\begin{aligned}&\left|\Pr[\mathcal{A}(\mathbb{G}, q, g, g^x, g^y, g^z) = 1] - \Pr[\mathcal{A}(\mathbb{G}, q, g, g^x, g^y, g^{xy}) = 1]\right|\\ =&\ \left|\tfrac{1}{2} - \tfrac{1}{2} - \varepsilon(n)\right|\\ =&\ \varepsilon(n)\end{aligned}$$

This is not negligible in $n$ and thus breaks the DDH assumption. Therefore I conclude that ElGamal is secure if the DDH assumption holds.

# Trust-Free Security Transformations

Not covered in FS15

# Trust and Authentication

Not covered in FS15

# Dolev-Yao Derivations

I don't know whether the syntactic format is as wanted:

## a)

It is contained:

$$\begin{aligned} &sk(I) \in IK \land \{K_2\}_{pk(I)}\in IK \Longrightarrow K_2\in IK\\ &\{N_1, K_1\}_{K_2} \in IK \land K_2\in IK \Longrightarrow K_1 \in IK \end{aligned}$$

## b)

It is contained. $K_1, K_2$ are contained in IK, see a):

$$\begin{aligned} &K_1, K_2 \in IK \Longrightarrow h(K_1, K_2) \in IK\\ &h(K_1, K_2) \in IK\land \{N_2\}_{h(K_1, K_2)} \in IK\Longrightarrow N_2 \in IK\\ &A, B, N_2\in IK \Longrightarrow h(A, B, N_2) \in IK \end{aligned}$$

## c)

It cannot be derived. The only message containing $N_3$ is encrypted for A. Since no message contains A's secret key and it is not known initially, the intruder cannot decrypt the message and therefore cannot learn $N_3$.

## d)

It is contained:

$$\begin{aligned} &sk(I) \in IK \land \{K_2\}_{pk(I)}\in IK \Longrightarrow K_2\in IK\\ &\{N_1, K_1\}_{K_2} \in IK \land K_2\in IK \Longrightarrow N_1, K_1 \in IK\\ &K_1, K_2 \in IK \Longrightarrow h(K_1, K_2) \in IK\\ &h(K_1, K_2) \in IK\land \{N_2\}_{h(K_1, K_2)} \in IK\Longrightarrow N_2 \in IK\\ &N_1, N_2 \in IK \Longrightarrow h(N_2, N_1) \in IK\\ &h(N_2, N_1), sk(I) \in IK \Longrightarrow \{h(N_2, N_1)\}_{sk(I)}\in IK \end{aligned}$$
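The derivations above can be checked mechanically. Below is a small sketch of a Dolev-Yao derivation engine; the term encoding is my own (atoms are strings, `('enc', m, k)` is $\{m\}_k$, `('pair', a, b)` pairing, `('h', ...)` hashing, `('pk', X)` / `('sk', X)` the key pair of agent X), not the exam's formal syntax.

```python
def can_derive(t, K):
    """Composition: t is derivable if known, or buildable from derivable parts."""
    if t in K:
        return True
    if isinstance(t, tuple) and t[0] in ('pair', 'h', 'enc'):
        return all(can_derive(x, K) for x in t[1:])
    return False

def close(knowledge):
    """Decomposition closure: split pairs, decrypt when the key is derivable."""
    K = set(knowledge)
    while True:
        new = set()
        for t in K:
            if isinstance(t, tuple) and t[0] == 'pair':
                new |= {t[1], t[2]}
            elif isinstance(t, tuple) and t[0] == 'enc':
                k = t[2]
                dk = ('sk', k[1]) if isinstance(k, tuple) and k[0] == 'pk' else k
                if can_derive(dk, K):
                    new.add(t[1])
        if new <= K:
            return K
        K |= new

IK = close({
    'A', 'B', ('sk', 'I'),
    ('enc', 'K2', ('pk', 'I')),            # {K2}_pk(I)
    ('enc', ('pair', 'N1', 'K1'), 'K2'),   # {N1, K1}_K2
    ('enc', 'N2', ('h', 'K1', 'K2')),      # {N2}_h(K1,K2)
    ('enc', 'N3', ('pk', 'A')),            # {N3}_pk(A) -- sk(A) unknown
})

assert can_derive('K1', IK)                                     # a)
assert can_derive(('h', 'A', 'B', 'N2'), IK)                    # b)
assert not can_derive('N3', IK)                                 # c)
assert can_derive(('enc', ('h', 'N2', 'N1'), ('sk', 'I')), IK)  # d)
```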

# Secrecy

## a)

Copied from the slides...

$$\begin{gathered}\forall t \in \text{Trace}(P)\ \forall i \in \mathbb{N}\ \forall X\in \text{Thread}\ \forall x\in \text{Term}\ \forall a\in\text{Agents}.\\ X \text{ tries to communicate with non-compromised agents in } t\land t(i) = \text{claim}(A, \text{secret}, x)\#X\longrightarrow x\not\in \text{has}(IK(t))\end{gathered}$$

## b)

The intruder I can impersonate B, generate kb himself, and thus knows it right away. The protocol run reads like this:

$$\begin{aligned} A\rightarrow S:&\ A, B, \{ka\}_{pk(S)}\\ S\rightarrow I(B):&\ A\\ I(B)\rightarrow S:&\ B, A, \{kb\}_{pk(S)}\\ S\rightarrow A:&\ B, \{kb\}_{ka} \end{aligned}$$

## c)

The intruder I can impersonate A and gets the key from the server:

$$\begin{aligned} I(A)\rightarrow S:&\ A, B, \{ka\}_{pk(S)}\\ S\rightarrow B:&\ A\\ B\rightarrow S:&\ B, A, \{kb\}_{pk(S)}\\ S\rightarrow I(A):&\ B, \{kb\}_{ka} \end{aligned}$$

# Authentication

## a)

- Weak aliveness holds as B has encrypted something with its secret key at some point.
- Non-injective agreement holds as we know that at some point A has been apparently communicating with B and B has been apparently communicating with A, and they agree on all messages.

## b)

- Weak aliveness holds as B has encrypted something with its secret key at some point.
- Non-injective agreement does not hold, since the message might just be a replay from a previous run of the protocol between B and C. So B did not necessarily ever run the protocol with A.

## c)

- No, as replays are possible.
- No, as there is no non-injective agreement.

# Traces

## a)

No, there is no possible continuation. In the trace there is already one agent who has played his A role beyond the send event of the protocol run. Also, the only responder that is able to receive a packet has already performed his send operation. Note that Dave cannot send yet, because a receive event must precede the sending.

## b)

Yes it exists and looks like this:

Alternative:

(Because of tr(3))

(Because of tr(5))

(another fake message)

## c)

It is the step presented in b)

I don't think so, since possibly $n^* \neq n$.

```
tr(6): create(B, {B->Bob, A->Alice})#4
tr(7): recv({Alice,n}pk(Bob))#4
tr(8): send(n)#4
tr(9): recv(n)#1
```

Alternative:

(Because of tr(3))

# Network Security

## a)

PFS is the property that an attacker who records an encrypted conversation cannot later decrypt it even if he compromises each side's long-term keys. An example: the adversary compromises the server's systems and steals the private key. PFS means that although he has the private key, he cannot decrypt past conversations.

## b)

The attacker can generate a series of random values (which is computationally easy) and send those to the server (spoofed with different source addresses). The server doesn't know that these are not "real" requests, so it performs an expensive computation in addition to storing all the received values.

One possibility to defend against this attack is to demand a response from the sender's address before doing any computation (like another handshake). That way, the attacker would need to actually sit at the spoofed sender's address to respond to the challenge sent by the server.
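A minimal sketch of such a defence, in the spirit of SYN/IKE cookies (my own construction, not a concrete protocol's wire format): the server stores no per-request state and only MACs the client's address; expensive work happens only after the cookie comes back.

```python
import hmac, hashlib, os, time

SERVER_SECRET = os.urandom(16)

def make_cookie(client_addr: str) -> bytes:
    """Stateless challenge: MAC over the client address plus a coarse
    timestamp. The server remembers nothing per request."""
    epoch = int(time.time()) // 60
    return hmac.new(SERVER_SECRET, f"{client_addr}|{epoch}".encode(),
                    hashlib.sha256).digest()

def check_cookie(client_addr: str, cookie: bytes) -> bool:
    """Only after a valid round trip does the server do expensive work."""
    return hmac.compare_digest(cookie, make_cookie(client_addr))

c = make_cookie("198.51.100.7")
assert check_cookie("198.51.100.7", c)      # real client can echo the cookie
assert not check_cookie("203.0.113.9", c)   # a spoofed source never sees it
```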

## c)

We can defend against the attack by authenticating public keys using certificates.

## d)

- Message authentication and encryption.
- Protect packet headers (addrs & ports) to thwart traffic analysis.

## e)

- IPSec: Setting up a VPN (Virtual Private Network).
- SSL/TLS: e-banking over a HTTPS connection to the bank.

# Role-Based Access Control

## a)

We define three sets $A$, $R$ and $P$ of agents, roles and permissions. First we have a membership relation $UA \subseteq A \times R$ which defines which agent plays what role in the RBAC system. Then we need an authorisation relation $PA \subseteq R \times P$ that tells which role has what permissions in the system. A user $u$ is then permitted to perform an action $p$ by checking whether $\exists r \in R: (u, r) \in UA \land (r, p) \in PA$ holds.

Advantages (probably not formal enough):

- The relations $UA$ and $PA$ are both smaller in a typical use case of RBAC than a full $A \times P$ relation, and thus more efficient in many ways (access time, possibility to distribute, ...), since usually $|R|$ is small while $|A|$ and $|P|$ are large.
- In a typical RBAC system, users only belong to one or a few roles, so there are few entries in $UA$. Adding or editing a user just takes editing $UA$, and thus only a few changes, compared to an AC matrix model, where each user usually holds many permissions.
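The check can be sketched in a few lines, using relation names `UA` and `PA` (my notation) and made-up example data:

```python
# Tiny RBAC sketch: membership relation UA (agent, role) and
# authorisation relation PA (role, permission).
UA = {("alice", "admin"), ("alice", "staff"), ("bob", "staff")}
PA = {("admin", "delete_user"), ("staff", "read_reports")}

def permitted(agent: str, permission: str) -> bool:
    """agent may perform permission iff some role connects the two relations."""
    roles = {r for _, r in UA}
    return any((agent, r) in UA and (r, permission) in PA for r in roles)

assert permitted("alice", "delete_user")
assert permitted("bob", "read_reports")
assert not permitted("bob", "delete_user")
```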

## b)

We extend the set of permissions to , extend the set of roles to and extend the relation on RxP to

## c)

No. We can model a Turing machine with the given users, roles and access permissions and then reduce the halting problem to deciding the safety problem.

I don't think so, even if I cannot reason about it formally. On slide 43 of access control 1 in FS15 it says that AC is decidable for fixed number of subjects and objects. Since RBAC is only a simplification of ACMs it should be decidable...

Another thought: in a finite universe we have finitely many possible states. From each state there are only finitely many state transitions, so we can decide the safety problem by exploring all reachable states. In particular, contrary to the argument above, the problem can be modeled with a linear bounded automaton (LBA), a restriction of TMs with finite tape, for which the halting problem is decidable.

# Mandatory Access Control

## a)

In the lattice-based security model, a subject $s$ with label $l(s)$ may only read documents $d$ with label $l(d) \le l(s)$ (no read-up). He can only write to documents with security label $l(d) \ge l(s)$ (no write-down).
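A sketch of both checks over a small made-up (totally ordered) lattice of levels:

```python
# Bell-LaPadula-style checks; the levels and labels are invented examples.
LEVELS = {"public": 0, "confidential": 1, "secret": 2, "topsecret": 3}

def can_read(subject_label: str, object_label: str) -> bool:
    """No read-up: the subject's label must dominate the object's."""
    return LEVELS[object_label] <= LEVELS[subject_label]

def can_write(subject_label: str, object_label: str) -> bool:
    """No write-down: the object's label must dominate the subject's."""
    return LEVELS[object_label] >= LEVELS[subject_label]

assert can_read("secret", "public")
assert not can_read("confidential", "secret")
assert can_write("confidential", "secret")
assert not can_write("secret", "public")
```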

## b)

- One can temporarily downgrade subjects to enable them to write to lower-level documents.
- One can declare trusted subjects that are allowed to violate the write-up property.

## c)

- error messages
- Timing behavior based on CPU usage. To send a 0-bit use 100% of CPU, to send a 1-bit, sleep.
- Locking and unlocking files.
- Encode information in the invoice sent for services rendered.

## d)

No, it is not. Obviously, if the admin can list the files in a folder, he can also read the files' names, which might leak information. Also, if the admin adds a high file himself, he may have read the content of that file before upgrading it.

# Data Usage

# Privacy

## a)

Anonymity is a state in which the identity of a communicating party is secret.

## b)

Perhaps if it is impossible to map the pseudonym to a real name of that person.

## c)

Size credit card (cc) and store loyalty card (slc):

Size no cc and slc:

Size cc and no slc:

Size no cc and no slc:

Alternative: I think whenever a credit card is used, since it records the customer's name, the anonymity set is 1.