# Lösungsvorschlag Information Security FS13


# Multiple Choice

• False. Choose the identity function as both Enc and Dec.
• True. Otherwise the two sides could not use the same key for Enc and Dec.
• No clue
• No clue

# Cryptographic Primitives

## a)

It is true. Proof by reduction:

I construct a TM ${\displaystyle A}$ that inverts ${\displaystyle f}$ by using a TM ${\displaystyle B}$ that is able to invert ${\displaystyle g}$. On input ${\displaystyle f(x_{1})}$, ${\displaystyle A}$ passes ${\displaystyle (f(x_{1}),x_{2})}$, where ${\displaystyle x_{2}}$ is chosen uniformly at random, as input to ${\displaystyle B}$. Since ${\displaystyle B}$ can invert ${\displaystyle g}$, we get ${\displaystyle (x_{1}',x_{2}')}$ such that ${\displaystyle f(x_{1}')=f(x_{1})}$, and ${\displaystyle A}$ outputs ${\displaystyle x_{1}'}$. Since ${\displaystyle A}$ only adds constant overhead and calls the efficient ${\displaystyle B}$ once, ${\displaystyle A}$ is efficient as well. Moreover, ${\displaystyle A}$ inverts ${\displaystyle f}$ with the same probability with which ${\displaystyle B}$ inverts ${\displaystyle g}$. Since ${\displaystyle B}$ breaks ${\displaystyle g}$ with non-negligible probability, ${\displaystyle A}$ breaks ${\displaystyle f}$ with non-negligible probability.

## b)

A single hash function maps the infinite set ${\displaystyle \{0,1\}^{*}}$ to the finite set ${\displaystyle \{0,1\}^{n}}$. Therefore, by the pigeonhole principle there must be inputs that map to the same output, which is called a collision. So every single hash function has collisions, which an adversary can, once they are known, output in constant time. To prevent this, the adversary is given an identifier that refers to one hash function out of a family of functions, each of which has different collisions. The adversary therefore has to compute a collision at run time rather than having a collision hard-coded inside.

# Pseudo-Randomness

## a)

The distinguisher ${\displaystyle D}$ gets as input ${\displaystyle U_{m}}$ and has to determine whether it is a truly random string or an output of ${\displaystyle g}$. It simply runs ${\displaystyle g}$ on every input in ${\displaystyle \{0,1\}^{k}}$. If there is an input ${\displaystyle x}$ such that ${\displaystyle g(x)=U_{m}}$, it outputs ${\displaystyle 1}$ for "pseudo-random", otherwise ${\displaystyle 0}$. If ${\displaystyle U_{m}}$ is a pseudo-random string, ${\displaystyle D}$ outputs ${\displaystyle 1}$ with probability ${\displaystyle 1}$. If ${\displaystyle U_{m}}$ is a truly random string, ${\displaystyle D}$ outputs ${\displaystyle 0}$ with probability at least ${\displaystyle 1-{\frac {|\{0,1\}^{k}|}{|\{0,1\}^{m}|}}=1-2^{k-m}}$, since at most ${\displaystyle 2^{k}}$ of the ${\displaystyle 2^{m}}$ strings are images of ${\displaystyle g}$. Overall, ${\displaystyle D}$ guesses correctly with probability at least ${\displaystyle {\frac {1}{2}}\cdot 1+{\frac {1}{2}}\cdot (1-2^{k-m})={\frac {1}{2}}+{\frac {1}{2}}\cdot (1-2^{k-m})>{\frac {1}{2}}}$.
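This brute-force distinguisher can be sketched in Python. The parameters and the hash-based stand-in for ${\displaystyle g}$ below are purely illustrative (a real PRG would of course not be modeled this way); the point is only that enumerating all ${\displaystyle 2^{k}}$ seeds separates the image of ${\displaystyle g}$ from uniform strings.

```python
import hashlib

K, M = 8, 16  # toy parameters: seed length k = 8 bits, output length m = 16 bits

def g(seed: int) -> int:
    """Toy stand-in for a length-doubling 'PRG' (illustrative, not a real PRG)."""
    digest = hashlib.sha256(seed.to_bytes(1, "big")).digest()
    return int.from_bytes(digest[:2], "big")  # 16-bit output

# The distinguisher enumerates all 2^k seeds once.
image = {g(s) for s in range(2 ** K)}

def distinguisher(sample: int) -> int:
    """Output 1 ('pseudo-random') iff some seed maps to the sample."""
    return 1 if sample in image else 0

# Every genuine output of g is recognised with probability 1...
assert all(distinguisher(g(s)) == 1 for s in range(2 ** K))
# ...while at most 2^k of the 2^m uniform strings can fool the distinguisher.
false_positives = sum(distinguisher(x) for x in range(2 ** M))
assert false_positives <= 2 ** K
```

Note that this distinguisher runs in time exponential in ${\displaystyle k}$, which is exactly why the exercise drops the efficiency requirement.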

Alternative: Randomly output ${\displaystyle 0}$ or ${\displaystyle 1}$ with probability ${\displaystyle {\frac {1}{2}}}$ each. We guess correctly with probability exactly ${\displaystyle {\frac {1}{2}}}$, which satisfies ${\displaystyle \geq {\frac {1}{2}}}$ (but yields no distinguishing advantage).

## b)

Proof by reduction:

Let ${\displaystyle B}$ be a TM that distinguishes functions in ${\displaystyle G}$ from truly random functions, and let ${\displaystyle A}$ be the TM that reduces ${\displaystyle F}$ to ${\displaystyle G}$. ${\displaystyle A}$ now impersonates the oracle for ${\displaystyle B}$ by composing the function represented by ${\displaystyle A}$'s own oracle twice. (Unclear how to proceed from here.)

Another approach:

Assume that the functions in G are not pseudo-random, and let B be a poly-time adversary that can distinguish functions in G from truly random functions. If ${\displaystyle g_{(k_{1},k_{2})}(x)=f_{k_{2}}(f_{k_{1}}(x))}$ is the output of such a function, then B can distinguish it from a truly random string of the same length with non-negligible probability.

B can now also distinguish functions in F from random functions:

- B selects a random string ${\displaystyle x}$ and a random function ${\displaystyle f_{k_{1}}}$ from F and computes ${\displaystyle f_{k_{1}}(x)}$.

- B sends ${\displaystyle f_{k_{1}}(x)}$ to the oracle O, which chooses a random ${\displaystyle k_{2}}$ and returns either ${\displaystyle f_{k_{2}}(f_{k_{1}}(x))}$ or a truly random string.

- If ${\displaystyle f_{k_{2}}(f_{k_{1}}(x))}$ is returned, then it equals the value produced by a function from G, and B can distinguish it from a truly random value. B wins the game, so F is not a family of pseudo-random functions.

Note: I believe this alternative approach does not work, because you cannot be sure that the oracle chooses the same ${\displaystyle k_{2}}$ as the ${\displaystyle k_{2}}$ in ${\displaystyle g_{(k_{1},k_{2})}(x)}$.

# Authentication Mechanisms

## a)

A MAC is existentially unforgeable under an adaptive chosen-message attack if for all adversaries A

${\displaystyle P[{\text{Mac-forge}}_{{\mathcal {A}},\Pi }(n)=1]\leq {\text{negl}}(n)}$ where the Mac-forge experiment is as follows:

• A key is generated.
• The adversary can query an oracle (owning the key) to tag arbitrary messages.
• A wins the experiment if he can create a valid tag for a message he has not queried the oracle for.

## b)

The adversary queries his oracle for the tag ${\displaystyle t}$ of a message ${\displaystyle x}$ with ${\displaystyle |x|\leq 2^{n}-2n}$. Then he computes ${\displaystyle t':=h(t\ ||\ |x|+n)}$ and outputs ${\displaystyle (x\ ||\ |x|,\ t')}$, which is a valid forgery since ${\displaystyle x\ ||\ |x|}$ was never queried.

## c)

In practice, all environments are of finite size. For example, the key space is limited by the key length defined by the scheme. If an adversary doesn't need to be efficient, brute force becomes an option: since all key spaces are finite in practice, they can be searched exhaustively.

EDIT: When doing brute force, how does the adversary know when to stop? I think he needs to be able to make some kind of evaluation of each try and therefore needs to know something about the content of the signed message. But what would that be?

Comment on EDIT: The adversary could query the oracle to sign a few known messages. For each candidate key, the adversary signs these known messages and compares the results to the answers from the oracle. If they all match, the adversary has found the right key with very high probability.
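The idea in the comment can be sketched in Python. HMAC-SHA256 with an artificially tiny 16-bit key stands in for the unspecified MAC scheme, and all keys and messages are illustrative; the point is only that a few known message/tag pairs suffice to recognise the right key during exhaustive search.

```python
import hashlib
import hmac
import itertools

SECRET_KEY = b"\x13\x37"  # 16-bit key, tiny on purpose so brute force is feasible

def mac(key: bytes, msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

# The adversary first queries the oracle on a few known messages...
known = [b"hello", b"world"]
answers = [mac(SECRET_KEY, m) for m in known]

# ...then tries every candidate key and keeps the one matching all answers.
def brute_force() -> bytes:
    for kb in itertools.product(range(256), repeat=2):
        cand = bytes(kb)
        if all(mac(cand, m) == t for m, t in zip(known, answers)):
            return cand
    raise RuntimeError("exhausted key space without a match")

assert brute_force() == SECRET_KEY
```

With two 256-bit tags to match, a wrong key survives the check only with vanishingly small probability, which is the "very high probability" claim above.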

# Number-Theoretic Cryptography

## a)

If ${\displaystyle y}$ is a quadratic residue in G, then its discrete logarithm is even: ${\displaystyle y=a^{2}=(g^{i})^{2}=g^{2i}=g^{x}\Rightarrow x=2i}$. By Euler's criterion, adapted to a cyclic group of order ${\displaystyle 2m}$, ${\displaystyle y}$ is a quadratic residue iff ${\displaystyle y^{2m/2}=y^{m}=1}$. So ${\displaystyle x}$ is even iff ${\displaystyle y^{m}=1}$ (exponents taken modulo the group order ${\displaystyle 2m}$, of course).
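A quick sanity check of this parity criterion in Python, using the toy group ${\displaystyle \mathbb {Z} _{23}^{*}}$ (my choice of example group; it is cyclic of order ${\displaystyle 22=2m}$ with generator 5):

```python
p = 23              # prime; Z_p^* is cyclic of order p - 1 = 2m
m = (p - 1) // 2    # here m = 11
g = 5               # a generator of Z_23^*

def dlog_is_even(y: int) -> bool:
    """x in y = g^x is even  iff  y is a quadratic residue  iff  y^m = 1 (mod p)."""
    return pow(y, m, p) == 1

# Check the criterion against every exponent of the group.
for x in range(p - 1):
    assert dlog_is_even(pow(g, x, p)) == (x % 2 == 0)
```

For odd ${\displaystyle x}$ the test value is ${\displaystyle g^{mx}=g^{m}=-1\neq 1}$, which is exactly why the criterion separates the two cases.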

## b)

We know ${\displaystyle c}$, ${\displaystyle n}$, ${\displaystyle e}$ and ${\displaystyle \phi (n)}$. We can find the secret key ${\displaystyle d}$ by using the extended Euclidean algorithm: EEA(${\displaystyle e,\phi (n)}$) gives us ${\displaystyle d}$ and ${\displaystyle y}$ such that ${\displaystyle ed+y\phi (n)=1}$, i.e., ${\displaystyle ed\equiv 1{\pmod {\phi (n)}}}$. Then compute ${\displaystyle m=c^{d}\mod n}$.

To compute p and q we start with the equation system:

${\displaystyle {\begin{split}n&=p\cdot q\Longleftrightarrow p={\frac {n}{q}}\\\phi (n)&=(p-1)(q-1)\end{split}}}$

Now we insert the first equation into the second and obtain a quadratic equation:

${\displaystyle \phi (n)=\left({\frac {n}{q}}-1\right)(q-1)\Longleftrightarrow q^{2}-(n-\phi (n)+1)q+n=0}$

Solving yields q, and reinserting into the first equation yields p.
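Both steps can be sketched in Python with toy RSA numbers (p=11, q=13 are my illustrative choices, not from the exam):

```python
from math import isqrt

def egcd(a: int, b: int):
    """Extended Euclid: returns (g, s, t) with a*s + b*t = g."""
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

def recover(n: int, e: int, phi: int):
    _, d, _ = egcd(e, phi)
    d %= phi                          # secret exponent: e*d = 1 (mod phi)
    s = n - phi + 1                   # p + q, from phi = n - (p + q) + 1
    root = isqrt(s * s - 4 * n)       # discriminant of q^2 - s*q + n = 0
    p, q = (s + root) // 2, (s - root) // 2
    return d, p, q

d, p, q = recover(143, 7, 120)        # toy RSA modulus n = 11 * 13
assert (7 * d) % 120 == 1 and p * q == 143
c = pow(42, 7, 143)                   # encrypt m = 42
assert pow(c, d, 143) == 42           # decrypt with the recovered d
```

The quadratic has integer roots by construction, so `isqrt` suffices; for real-world sizes the same formulas work with Python's arbitrary-precision integers.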

# Game-Based Security of Encryption

## a)

Numbers refer to the boxes from left to right and top to bottom:

1. ${\displaystyle n}$ and oracle access to ${\displaystyle Enc_{k}(\cdot )}$
2. Run Gen to generate a key ${\displaystyle k}$
3. Choose ${\displaystyle b\in \{0,1\}}$ uar and return ${\displaystyle c=Enc_{k}(m_{b})}$ for the submitted ${\displaystyle m_{0},m_{1}}$
4. Output a guess ${\displaystyle b'}$
5. Output 1 iff ${\displaystyle b=b'}$

The security definition requires that the adversary's success probability is at most ${\displaystyle {\frac {1}{2}}+\varepsilon (n)}$, where ${\displaystyle \varepsilon (n)}$ is a negligible function in n.

## b)

COA: The adversary has no information about the plaintext, so no oracle access, and the message for the challenge is chosen by the challenger. The adversary must output the correct plaintext to win the game.

CCA: The adversary is additionally given a decryption oracle for that key. He can use both oracles until he outputs b', but must not query the decryption oracle for the challenge ciphertext.

For a public-key cryptosystem it is not necessary to give oracle access to the adversary, since he can be given the public key and compute the ciphertexts himself.

## c)

No, it is not CPA-secure. The adversary queries the oracle for the ciphertexts of two distinct messages ${\displaystyle m_{0},m_{1}}$ and remembers the results. Then he sends ${\displaystyle m_{0},m_{1}}$ to the challenger and compares the returned challenge ciphertext with the two stored ciphertexts. Since encryption is deterministic, the challenge ciphertext equals one of them, so he wins the game with probability 1.
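The attack can be sketched in Python. The keyed-hash "encryption" below is only an illustrative stand-in for an arbitrary deterministic scheme; the messages are made up.

```python
import secrets
from hashlib import sha256

key = secrets.token_bytes(16)

def enc(m: bytes) -> bytes:
    """Toy deterministic 'encryption': the same m always yields the same c."""
    return sha256(key + m).digest()

m0, m1 = b"attack at dawn", b"attack at dusk"
c0, c1 = enc(m0), enc(m1)            # the adversary's oracle queries

b = secrets.randbelow(2)
challenge = enc([m0, m1][b])         # challenger encrypts m_b

guess = 0 if challenge == c0 else 1  # compare against the stored ciphertexts
assert guess == b                    # the adversary wins with probability 1
```

Any deterministic scheme falls to this comparison, which is why CPA security requires randomized encryption.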

## d)

The problem is that the IV used is part of the ciphertext. When encrypting a one-block message ${\displaystyle m}$ using a block cipher in CBC mode, we get back ${\displaystyle (IV,\ F_{k}(m\oplus IV))}$. We use that fact to our advantage and construct the following attacker. First, we send any two distinct messages ${\displaystyle m_{0}}$ and ${\displaystyle m_{1}}$ (both exactly one block long). As challenge, we get back ${\displaystyle (IV,\ c)}$ with ${\displaystyle c=F_{k}(m_{b}\oplus IV)}$. We now fix any ${\displaystyle \delta \neq 0}$ and ask the decryption oracle to decrypt ${\displaystyle (IV\oplus \delta ,\ c)}$; this ciphertext differs from the challenge ciphertext, so we are allowed to make that query. By the definition of CBC decryption we get back ${\displaystyle F_{k}^{-1}(F_{k}(m_{b}\oplus IV))\oplus (IV\oplus \delta )=m_{b}\oplus IV\oplus IV\oplus \delta =m_{b}\oplus \delta }$. We can now "xor out" ${\displaystyle \delta }$ and compare the result with our original ${\displaystyle m_{0}}$ and ${\displaystyle m_{1}}$.
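A Python sketch of this CCA attack on one-block CBC. An invertible affine map stands in for the block cipher ${\displaystyle F_{k}}$ (it is NOT a real cipher, just any keyed permutation so the XOR algebra is visible); everything else follows the derivation above.

```python
import secrets

A_MUL = 0x1000000000000000000000000000000f  # odd => invertible mod 2^128
K_ADD = secrets.randbits(128)

def F(x: int) -> int:
    """Toy stand-in for the block cipher F_k: an affine permutation of Z_{2^128}."""
    return (x * A_MUL + K_ADD) % (1 << 128)

def F_inv(y: int) -> int:
    return ((y - K_ADD) * pow(A_MUL, -1, 1 << 128)) % (1 << 128)

def enc(m: int) -> tuple[int, int]:   # one-block CBC: c = F(m xor IV)
    iv = secrets.randbits(128)
    return iv, F(m ^ iv)

def dec(iv: int, c: int) -> int:      # CBC decryption: F^{-1}(c) xor IV
    return F_inv(c) ^ iv

# Attack: given the challenge (iv, c) for m_b, decrypt the modified (iv ^ delta, c).
m0, m1 = 0, (1 << 128) - 1
iv, c = enc(m0)                       # suppose the challenger picked b = 0
delta = 1
leaked = dec(iv ^ delta, c)           # allowed: (iv ^ delta, c) != (iv, c)
recovered = leaked ^ delta            # = m_b xor delta xor delta = m_b
assert recovered == m0
```

The oracle query is legal because flipping the IV changes the ciphertext, yet CBC decryption translates that flip directly into the plaintext.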

# CPA Security of ElGamal

## a)

1. ${\displaystyle n}$ and the triple ${\displaystyle (h_{1},h_{2},h_{3})}$
2. Choose ${\displaystyle b\in \{0,1\}}$ uar and return ${\displaystyle c=(h_{2},h_{3}*m_{b})}$
3. Output a guess ${\displaystyle b'}$
4. Output 1 iff ${\displaystyle b=b'}$

The distinguisher D needs to tell, for the triple ${\displaystyle (h_{1},h_{2},h_{3})}$ with ${\displaystyle h_{1}=g^{x}}$ for some ${\displaystyle x}$, ${\displaystyle h_{2}=g^{y}}$ for some ${\displaystyle y}$ and ${\displaystyle h_{3}=g^{z}}$ for some ${\displaystyle z}$, whether ${\displaystyle z=x*y}$ (in which case we output a 1) or not (in which case we output a 0).
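To make the objects in this game concrete, here is a toy ElGamal implementation in Python. The group ${\displaystyle \mathbb {Z} _{23}^{*}}$ and all values are illustrative (and far too small to be secure); the ciphertext has exactly the shape ${\displaystyle (h_{2},h_{3}*m_{b})}$ used above.

```python
import secrets

p, g = 23, 5        # toy group Z_23^* with generator g (insecure toy parameters)

x = 1 + secrets.randbelow(p - 2)   # secret key
h1 = pow(g, x, p)                  # public key h_1 = g^x

def enc(m: int) -> tuple[int, int]:
    y = 1 + secrets.randbelow(p - 2)
    return pow(g, y, p), (pow(h1, y, p) * m) % p   # (h_2, h_3 * m), h_3 = g^{xy}

def dec(h2: int, c: int) -> int:
    return (c * pow(pow(h2, x, p), -1, p)) % p     # divide out h_2^x = g^{xy}

m = 7
assert dec(*enc(m)) == m
```

The DDH question is then exactly whether the third element ${\displaystyle h_{3}}$ seen by the distinguisher is ${\displaystyle g^{xy}}$ (a real ElGamal mask) or an unrelated ${\displaystyle g^{z}}$.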

## b)

In the case where ${\displaystyle h_{3}=g^{z}}$ for a uniformly random ${\displaystyle z}$, the challenger performs an idealized encryption. To see that, recall the one-time pad generalized to arbitrary groups, where a uniformly random group element ${\displaystyle g^{z}}$ acts as the shared key and encryption / decryption is the group operation applied to key and message: ${\displaystyle Enc(m)=(g^{y},g^{z}*m)}$.

Since ${\displaystyle z}$ is uniformly random, the distribution of the second element of the ciphertext ${\displaystyle g^{z}*m_{b}}$ is uniform over all group elements and, in particular, independent of the message ${\displaystyle m_{b}}$. It follows that the probability of guessing ${\displaystyle b}$ correctly is exactly ${\displaystyle {\frac {1}{2}}}$.

## c)

Considering the ElGamal cryptosystem, the adversary wins the game with probability ${\displaystyle {\frac {1}{2}}+\varepsilon (n)}$ where ${\displaystyle \varepsilon (n)}$ is not negligible, since we assume that he can attack ElGamal. As already argued in b), in the idealized game the adversary wins with probability exactly ${\displaystyle {\frac {1}{2}}}$. Our distinguisher outputs 1 iff ${\displaystyle b'=b}$. Therefore the inequality of DDH reads as follows:

${\displaystyle {\begin{split}&|{\text{Pr}}[{\mathcal {A}}(\mathbb {G} ,q,g,g^{x},g^{y},g^{z})=1]-{\text{Pr}}[{\mathcal {A}}(\mathbb {G} ,q,g,g^{x},g^{y},g^{xy})=1]|\\=&\left|{\frac {1}{2}}-{\frac {1}{2}}-\varepsilon (n)\right|\\=&\varepsilon (n)\end{split}}}$

This is not negligible in n and thus breaks the DDH assumption. Therefore I conclude that ElGamal is secure only if the DDH assumption holds.

# Trust-Free Security Transformations

Not covered in FS15

# Trust and Authentication

Not covered in FS15

# Dolev-Yao Derivations

I don't know whether the syntactic format is as wanted:

## a)

It is contained:

${\displaystyle {\begin{split}&sk(I)\in IK\land \{K_{2}\}_{pk(I)}\in IK\Longrightarrow K_{2}\in IK\\&\{N_{1},K_{1}\}_{K_{2}}\in IK\land K_{2}\in IK\Longrightarrow K_{1}\in IK\end{split}}}$

## b)

It is contained. ${\displaystyle K_{1}}$ and ${\displaystyle K_{2}}$ are contained in IK, see a):

${\displaystyle {\begin{split}&K_{1},K_{2}\in IK\Longrightarrow h(K_{1},K_{2})\in IK\\&h(K_{1},K_{2})\in IK\land \{N_{2}\}_{h(K_{1},K_{2})}\in IK\Longrightarrow N_{2}\in IK\\&A,B,N_{2}\in IK\Longrightarrow h(A,B,N_{2})\in IK\end{split}}}$

## c)

It is not possible to derive. The only message containing N3 is encrypted for A. Since no message contains A's secret key and it is not known initially, the intruder cannot decrypt the message and therefore cannot learn N3.

## d)

It is contained:

${\displaystyle {\begin{split}&sk(I)\in IK\land \{K_{2}\}_{pk(I)}\in IK\Longrightarrow K_{2}\in IK\\&\{N_{1},K_{1}\}_{K_{2}}\in IK\land K_{2}\in IK\Longrightarrow N_{1},K_{1}\in IK\\&K_{1},K_{2}\in IK\Longrightarrow h(K_{1},K_{2})\in IK\\&h(K_{1},K_{2})\in IK\land \{N_{2}\}_{h(K_{1},K_{2})}\in IK\Longrightarrow N_{2}\in IK\\&N_{1},N_{2}\in IK\Longrightarrow h(N_{2},N_{1})\in IK\\&h(N_{2},N_{1}),sk(I)\in IK\Longrightarrow \{h(N_{2},N_{1})\}_{sk(I)}\in IK\end{split}}}$

# Secrecy

## a)

Copied from the slides...

${\displaystyle \forall t\in {\text{Trace}}(P)\ \forall i\in \mathbb {N} \ \forall X\in {\text{Thread}}\ \forall x\in {\text{Term}}\ \forall a\in {\text{Agents}}.\ X{\text{ tries to communicate with non-compromised agents in }}t\land t(i)={\text{claim}}(A,{\text{secret}},x)\#X\longrightarrow x\not \in {\text{has}}(IK(t))}$

## b)

The intruder I can impersonate B, generate kb himself, and thus know it right away. The protocol run reads like this:

${\displaystyle {\begin{split}A\rightarrow S:&\ A,B,\{ka\}_{pk(S)}\\S\rightarrow I(B):&\ A\\I(B)\rightarrow S:&\ B,A,\{kb\}_{pk(S)}\\S\rightarrow A:&\ B,\{kb\}_{ka}\end{split}}}$

## c)

The intruder I can impersonate A and gets the key from the server:

${\displaystyle {\begin{split}I(A)\rightarrow S:&\ A,B,\{ka\}_{pk(S)}\\S\rightarrow B:&\ A\\B\rightarrow S:&\ B,A,\{kb\}_{pk(S)}\\S\rightarrow I(A):&\ B,\{kb\}_{ka}\end{split}}}$

# Authentication

• a)
• Weak aliveness holds as B has encrypted something with its secret key at some point.
• Non-injective agreement holds, as we know that at some point A apparently has been communicating with B and B apparently has been communicating with A, and they agree on all messages.
• b)
• Weak aliveness holds as B has encrypted something with its secret key at some point.
• Non-injective agreement does not hold, since the message ${\displaystyle tr(7)={\text{send}}(n)\#2}$ might just be a replay from a previous run of the protocol between B and C. So B did not necessarily ever run the protocol with A.
• c)
• ${\displaystyle tr(8)={\text{recv}}(n)\#1}$: no, as replays are possible.
• Injective agreement: no, since already non-injective agreement does not hold.

# Traces

## a)

No, there is no possible continuation. In the trace there is already one agent who has played his A role beyond the send event of the protocol run. Also, the only responder that is able to receive a packet has already performed his send operation. Note that Dave cannot send yet, because a receive event must precede the sending.

## b)

Yes it exists and looks like this:

${\displaystyle A,R,P}$

Alternative:

${\displaystyle \rho _{M}\subseteq A\times R}$ (Because of tr(3))

${\displaystyle \rho _{A}\subseteq R\times P}$ (Because of tr(5))

${\displaystyle u}$ (another fake message)

## c)

It is the step presented in b)

I don't think so, since the two occurrences of n need not be equal.

 tr(6): create(B, {B->Bob, A->Alice})#4
 tr(7): recv({Alice,n}pk(Bob))#4
 tr(8): send(n)#4
 tr(9): recv(n)#1

Alternative:

${\displaystyle a}$ (Because of tr(3))

${\displaystyle u\ (\rho _{M}\!\circ \!\rho _{A})\ a}$

${\displaystyle \rho _{M}}$

# Network Security

## a)

PFS is the property in which an attacker who records an encrypted conversation cannot later decrypt it even if he has compromised each side’s long-term keys. An example would be that the adversary compromises the server’s systems and steals the private key. PFS now means, that although he has the private key he can not decrypt past conversations.

## b)

The attacker can generate a series of random values (which is computationally easy) and send those to the server (spoofed with different source addresses). The server doesn't know that these are not "real" requests and performs an expensive computation in addition to storing all the received values.

One possibility to defend against this attack is to demand a response from the sender's address before doing any computation (like another handshake). That way, the attacker would need to be reachable at the spoofed sender's address and respond to the challenge sent by the server.

## c)

We can defend the attack by authenticating public-keys using certificates.

## d)

• Message authentication and encryption.
• Protect packet headers (addrs & ports) to thwart traffic analysis.

## e)

• IPSec: Setting up a VPN (Virtual Private Network).
• SSL/TLS: e-banking over a HTTPS connection to the bank.

# Role-Based Access Control

## a)

We define the three sets ${\displaystyle A}$, ${\displaystyle R}$ and ${\displaystyle P}$ as agents, roles and permissions. First we have a membership relation ${\displaystyle \rho _{M}\subseteq A\times R}$ which defines which agent plays what role in the RBAC system. Then we need an authorisation relation ${\displaystyle \rho _{A}\subseteq R\times P}$ that tells which role has what permissions in the system. A user ${\displaystyle a\in A}$ is then permitted to perform an action ${\displaystyle p\in P}$ iff there is a role ${\displaystyle r\in R}$ such that ${\displaystyle (a,r)\in \rho _{M}\land (r,p)\in \rho _{A}}$ holds.
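This permission check is a one-liner over the two relations. A minimal Python sketch with made-up example agents, roles and permissions:

```python
# Minimal RBAC sketch; all agents, roles and permissions here are hypothetical.
rho_M = {("alice", "doctor"), ("bob", "nurse")}          # membership:    A x R
rho_A = {("doctor", "prescribe"), ("doctor", "read"),    # authorisation: R x P
         ("nurse", "read")}

roles = {r for _, r in rho_M} | {r for r, _ in rho_A}

def permitted(agent: str, perm: str) -> bool:
    """Agent may perform perm iff some role links them in rho_M and rho_A."""
    return any((agent, r) in rho_M and (r, perm) in rho_A for r in roles)

assert permitted("alice", "prescribe")
assert permitted("bob", "read")
assert not permitted("bob", "prescribe")
```

Storing the two small relations instead of the full ${\displaystyle A\times P}$ matrix is exactly the efficiency advantage argued below.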

Advantages (probably not formal enough):

• The relations ${\displaystyle \rho _{M}\subseteq A\times R}$ and ${\displaystyle \rho _{A}\subseteq R\times P}$ are both smaller in a typical use case of RBAC than a single relation on ${\displaystyle A\times P}$, and thus more efficient in many ways (access time, possibility to distribute, ...), since usually |R| is small while |A| and |P| are large.
• In a typical RBAC system, users only belong to one or a few roles, so there are few entries in ${\displaystyle \rho _{M}}$. Adding a new user or editing a user only requires editing ${\displaystyle \rho _{M}}$, and thus only few changes, in comparison to an AC matrix model, where each user usually has many permissions.

## b)

We extend the set of permissions to ${\displaystyle P'=P\cup \{addrole,revokerole,addpermission,revokepermission\}}$, extend the set of roles to ${\displaystyle R'=R\cup \{admin\}}$ and extend the relation on ${\displaystyle R\times P}$ to ${\displaystyle RP'=RP\cup \{(admin,addrole),(admin,revokerole),(admin,addpermission),(admin,revokepermission)\}}$

## c)

No. We can model a Turing machine with the given users, roles and access permissions and then reduce the halting problem to deciding the safety problem.

I don't think so, even if I cannot reason about it formally. On slide 43 of Access Control 1 in FS15 it says that AC is decidable for a fixed number of subjects and objects. Since RBAC is only a simplification of ACMs, it should be decidable...
-> How about: in a finite universe we have finitely many possible states. From each state there are only finitely many state transitions, so we can decide the safety problem by looking at all possible state transitions. In particular, in opposition to the argument above, the problem can be modeled with a linear bounded automaton (LBA, a TM restricted to a finite tape), for which the halting problem is decidable.

# Mandatory Access Control

## a)

In the lattice-based security model a subject s with label ${\displaystyle x_{s}}$ may only read documents with label ${\displaystyle x_{d}\leq x_{s}}$ and may only write to documents with label ${\displaystyle x_{d}\geq x_{s}}$.

## b)

1. One can temporarily downgrade subjects to enable them to write to lower-level documents.
2. One can declare trusted subjects that are allowed to violate the write-up property.

## c)

• error messages
• Timing behavior based on CPU usage. To send a 0-bit use 100% of CPU, to send a 1-bit, sleep.
• Locking and unlocking files.
• Encode information in the invoice sent for services rendered.

## d)

No, it is not. Obviously, if the admin can list the files in a folder, he can also read the file names, which might leak information. Also, if the admin adds a high file himself, he could have read the content of that file before upgrading it.

# Privacy

## a)

Anonymity is a state in which the identity of a communicating party is secret.

## b)

Perhaps if it is impossible to map the pseudonym to a real name of that person.

## c)

Size credit card (cc) and store loyalty card (slc): ${\displaystyle x\cdot 0.8}$

Size no cc and slc: ${\displaystyle y\cdot 0.75}$

Size cc and no slc: ${\displaystyle x\cdot 0.2}$

Size no cc and no slc: ${\displaystyle y\cdot 0.25}$

Alternative: I think whenever a credit card is used, since it records the customer's name, the anonymity set is 1.