# Lösungsvorschlag Information Security FS12

## Security Definitions

#### (a)

${\displaystyle R{\overset {\pi }{\Longrightarrow }}S{\overset {\pi '}{\Longrightarrow }}T\Longrightarrow R{\overset {\pi \circ \pi '}{\Longrightarrow }}T}$

Suppose ${\displaystyle \pi }$ realizes the transformation of an insecure channel into an authentic channel and ${\displaystyle \pi '}$ realizes the transformation of an authentic channel into a secure channel; then ${\displaystyle \pi \circ \pi '}$ realizes the transformation of an insecure channel into a secure channel. Composability thus enables the construction of security systems from smaller security subsystems (e.g. security primitives).

#### (b)

As Proposition 2.2 states, no deterministic cryptographic system can satisfy COA-security for multiple applications. Suppose we are arguing about a deterministic symmetric scheme (e.g. a re-used one-time pad). In the COA-security game, the attacker submits the two messages ${\displaystyle m_{0}=(m,m)}$ and ${\displaystyle m_{1}=(m,m')}$, which the challenger encrypts component-wise. Since encryption is deterministic, the two ciphertext components are equal exactly when ${\displaystyle m_{0}}$ was encrypted, so the attacker easily distinguishes ${\displaystyle m_{0}}$ from ${\displaystyle m_{1}}$. Hence, COA-secure systems intended to be used multiple times have to be probabilistic.
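This distinguishing attack can be sketched in Python with a toy deterministic scheme; the XOR-based encryption and the function names are illustrative choices, not part of the exam setting:

```python
import os

def det_encrypt(key: bytes, m: bytes) -> bytes:
    # Deterministic encryption: same key and same message always give the
    # same ciphertext (here: XOR with a re-used pad).
    return bytes(k ^ b for k, b in zip(key, m))

def coa_distinguisher(c1: bytes, c2: bytes) -> int:
    # Guess b = 0 if both components encrypt to the same ciphertext, else b = 1.
    return 0 if c1 == c2 else 1

key = os.urandom(16)
m, m_prime = b"attack at dawn!!", b"attack at dusk!!"
# Challenger encrypts m_0 = (m, m) component-wise:
assert coa_distinguisher(det_encrypt(key, m), det_encrypt(key, m)) == 0
# Challenger encrypts m_1 = (m, m'):
assert coa_distinguisher(det_encrypt(key, m), det_encrypt(key, m_prime)) == 1
```

The attacker wins with certainty, which is exactly why determinism rules out multi-use COA security.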

Another view on the word "probabilistic": information-theoretic security is impractical, since the key has to be as long as the message itself. We therefore weaken the definition and only require that the adversary break the system with negligible probability. This makes it possible to develop weaker, but still sufficiently strong, ciphers.

## Information-Theoretic vs. Computational Security

#### (a)

• A system that remains secure even under the assumption that all entities have unbounded computing power is called an information-theoretically secure system.
• A system that is secure under the assumption that any entity trying to break it has computing power below a certain upper bound is called a computationally secure system.

#### (b)

##### (1)

There exists a symmetric encryption scheme which is information-theoretically secure: encryption with the one-time pad. If the scheme is used correctly (the key is uniformly random, as long as the message, statistically independent of the message, and used only once), one can show that it is information-theoretically secure.
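A minimal one-time-pad sketch in Python; the function names are ours, but the scheme itself is the standard XOR construction:

```python
import os

def otp_encrypt(key: bytes, m: bytes) -> bytes:
    # The key must be uniformly random, as long as the message,
    # and used for exactly one message.
    assert len(key) == len(m)
    return bytes(k ^ b for k, b in zip(key, m))

otp_decrypt = otp_encrypt  # XOR is its own inverse

m = b"top secret"
key = os.urandom(len(m))
c = otp_encrypt(key, m)
assert otp_decrypt(key, c) == m
```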

##### (2)

Suppose Bob communicates with Alice using public-key encryption. Bob knows Alice's public key, and so could an eavesdropper, say Eve. In the setting of information-theoretically secure systems, an attacker has unbounded computing power, so Eve can simply try all possibilities (e.g. all plaintexts or secret keys) to break the ciphertext; hence such a system cannot be information-theoretically secure.

## Keyless Cryptographic Functions

#### (a)

Reduction proof. We construct an adversary that inverts the function f, given access to an oracle (i.e. an efficient algorithm) which, on input ${\displaystyle y:=f(x_{1})||MD5(x_{2})}$, computes a preimage ${\displaystyle x_{1}||x_{2}}$ under ${\displaystyle g}$.

• The attacker gets as input ${\displaystyle f(x_{1})}$.
• He chooses some ${\displaystyle x_{2}}$ and computes the MD5 of it.
• He gives ${\displaystyle f(x_{1})\ ||\ {\text{MD5}}(x_{2})}$ to the oracle for g.
• The oracle returns some ${\displaystyle x_{1}'\ ||x_{2}'}$ such that ${\displaystyle f(x_{1})=f(x_{1}')}$.
• The attacker outputs ${\displaystyle x_{1}'}$.

Since the oracle finds such an ${\displaystyle x_{1}'\ ||\ x_{2}'}$ with non-negligible probability, our attacker succeeds with the same probability. This shows that if f is one-way, then so is g.

${\displaystyle \Box }$
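The reduction above can be sketched in Python. The oracle for g is hypothetical (it is assumed only for the sake of the argument), so the demo plugs in a "cheating" stand-in that happens to know one preimage; SHA-256 merely plays the role of the one-way function f:

```python
import hashlib

def f(x1: bytes) -> bytes:
    # Stand-in for the one-way function f (SHA-256 is only an illustration).
    return hashlib.sha256(x1).digest()

def g(x1: bytes, x2: bytes) -> bytes:
    # g(x1 || x2) = f(x1) || MD5(x2)
    return f(x1) + hashlib.md5(x2).digest()

def invert_f(y1: bytes, g_oracle) -> bytes:
    """The reduction: invert f using a (hypothetical) oracle that inverts g."""
    x2 = b"arbitrary"                  # step 2: choose any x2
    y = y1 + hashlib.md5(x2).digest()  # step 3: build an image of g
    x1_prime, _ = g_oracle(y)          # step 4: oracle yields a preimage of g
    return x1_prime                    # step 5: output x1'

# Demo with a "cheating" oracle that knows one preimage:
known = (b"secret input", b"arbitrary")
def toy_oracle(y):
    assert y == g(*known)
    return known

y1 = f(b"secret input")
assert f(invert_f(y1, toy_oracle)) == y1
```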

OTHER USER: I am not quite sure about this. Because MD5 is a production hash function, it does not have a random key, but one fixed key across all usages. This means that we could just hardcode a fixed collision into our adversary and let it try that collision against g.

#### (b)

Since the hash function is fixed and publicly known, we can always come up with an algorithm that outputs a collision in constant time: search for a collision offline, for as long as necessary, and then "hardcode" the found pair into said algorithm.
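A sketch of this adversary with a deliberately weak toy hash (SHA-256 truncated to one byte, our choice for illustration). The search loop is the offline phase; a real adversary would run it once ahead of time and ship the resulting pair as constants:

```python
import hashlib
from itertools import count

def toy_hash(x: bytes) -> bytes:
    # A keyless toy hash: first byte of SHA-256. Collisions are easy to find.
    return hashlib.sha256(x).digest()[:1]

# "Offline" phase: search for a collision once. A constant-time adversary
# would hardcode the pair found here instead of searching at run time.
seen = {}
for i in count():
    x = str(i).encode()
    h = toy_hash(x)
    if h in seen:
        COLLISION = (seen[h], x)  # the pair the adversary would output
        break
    seen[h] = x

x1, x2 = COLLISION
assert x1 != x2 and toy_hash(x1) == toy_hash(x2)
```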

#### (c)

No. We can easily implement an algorithm that, given a value y, outputs a preimage x such that f(x)=y: exhaustively try all 2^k possible inputs x in {0,1}^k. Since k is fixed, this takes constant time with respect to the security parameter, so f cannot be one-way.
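A sketch of the exhaustive-search inverter, with a toy fixed-length function standing in for f (both the function and k = 8 are illustrative choices):

```python
from itertools import product

def invert(f, y, k):
    # Exhaustive search over all 2^k inputs of the fixed-length function f.
    # Since k is a fixed constant, this is constant time with respect to
    # any security parameter.
    for x in product((0, 1), repeat=k):
        if f(x) == y:
            return x
    return None

k = 8
def toy_f(x):
    # A toy function on k-bit tuples; any fixed-length f could be plugged in.
    return tuple(x[i] ^ x[(i + 1) % k] for i in range(k))

y = toy_f((1, 0, 1, 1, 0, 0, 1, 0))
x = invert(toy_f, y, k)
assert x is not None and toy_f(x) == y
```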

## Range Extension of Pseudo-Random Generators

Assume, for contradiction, that h is not a PRG. Then there exists a distinguisher D that distinguishes h's output from a uniformly random string with non-negligible advantage. We can use D to break f: given a challenge string that is either f(x) or random, continue the construction of h on it (treating the challenge as the output of f), which yields exactly h(x) in the first case. Feeding the result to D, we inherit D's non-negligible advantage and thus distinguish f's output from random, i.e. f is not a PRG. This contradicts the assumption on f, so h is a PRG.

## Trust-Free Security Transformations

#### (a)

${\displaystyle \left.{\begin{matrix}A{\overset {t_{1}}{==}}\bullet B\\A\bullet {\overset {t_{2}\quad t_{3}}{\longrightarrow }}B\\t_{2}\geq t_{1}\end{matrix}}\right\}A\bullet {\overset {t_{2}\quad t_{3}}{\longrightarrow }}\bullet B}$

#### (b)

##### (i)

${\displaystyle \left.{\begin{matrix}A\bullet {\overset {t'}{\longrightarrow }}B\\A{\overset {t}{\longrightarrow }}B\\t\geq t'\end{matrix}}\right\}A\bullet {\overset {t}{\longrightarrow }}B}$

Yes, this transformation can be achieved by digital signatures.

##### (ii)

No, this transformation is not possible, as trust-free security transformations cannot move a ${\displaystyle \bullet }$ from one channel-end to another.

##### (iii)

${\displaystyle \left.{\begin{matrix}A\bullet {\overset {t'}{\longrightarrow }}B\\A{\overset {t}{\longleftarrow }}B\\t\geq t'\end{matrix}}\right\}A\bullet {\overset {t}{\longleftarrow }}B}$

Yes, this transformation can be achieved by public-key cryptography.

## Diffie-Hellman Key Agreement Protocol

#### (a)

Let G = <S,*> be a group over the set S with binary operation *. First, A and B agree on a group; most of the time this is <Z*_p,*> for a prime p. Let m = |G| be the group order and let g be a generator of G, i.e. {g^n | n in {0,...,m-1}} = S. The DH protocol then works as follows: A chooses x uniformly at random from {1,...,m-1}, and B chooses y uniformly at random from {1,...,m-1}. A sends h1 = g^x to B, and B sends h2 = g^y to A. A computes h2^x = (g^y)^x = g^(xy) = k, and B computes h1^y = (g^x)^y = g^(xy) = k. They thus agree on the same key k.
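The protocol steps can be sketched in Python. The parameters are toy-sized and g = 7 is merely assumed to be a suitable generator for this illustration; real deployments use standardized groups of at least 2048 bits:

```python
import secrets

p = 2_147_483_647   # a Mersenne prime (2^31 - 1); toy-sized on purpose
g = 7               # assumed generator of Z*_p for this sketch
m = p - 1           # group order of Z*_p

# A and B each pick a secret exponent and exchange g^x and g^y:
x = secrets.randbelow(m - 1) + 1   # A's secret from {1, ..., m-1}
y = secrets.randbelow(m - 1) + 1   # B's secret from {1, ..., m-1}
h1 = pow(g, x, p)                  # A -> B
h2 = pow(g, y, p)                  # B -> A

k_A = pow(h2, x, p)                # A computes (g^y)^x
k_B = pow(h1, y, p)                # B computes (g^x)^y
assert k_A == k_B                  # both hold g^(xy)
```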

#### (b)

##### (1)

We are given y in G, where y = g^x for some x in {0,...,m-1}, and want the least significant bit (parity) of x. First we test whether y is a quadratic residue; efficient algorithms exist for this (e.g. Euler's criterion in Z*_p). If y is a QR, there exists a z such that z^2=y. Since z is also an element of the group G, there must be a u such that g^u=z, hence y=g^x=z^2=g^(2u). Because the group order m = p-1 is even, x ≡ 2u (mod m) implies that x is even; conversely, if x is even, then y is a QR. The QR test therefore yields the parity bit of x.
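The QR test can be carried out with Euler's criterion: y^((p-1)/2) mod p equals 1 for residues and p-1 otherwise. A small Python check (p = 23 with generator g = 5 is our illustrative choice):

```python
def dlog_parity(y: int, p: int) -> int:
    # Euler's criterion: for g a generator of Z*_p, y = g^x is a quadratic
    # residue iff x is even; y^((p-1)/2) mod p is 1 for QRs, p-1 otherwise.
    return 0 if pow(y, (p - 1) // 2, p) == 1 else 1

p, g = 23, 5   # small example; 5 generates Z*_23
for x in range(1, p - 1):
    assert dlog_parity(pow(g, x, p), p) == x % 2
```

The loop confirms that the parity of the discrete logarithm is recovered for every exponent.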

##### (2)

Yes. We can compute the LSB of x and the LSB of y as above, and LSB(x*y) = LSB(x)*LSB(y), so the LSB of x*y is easy to determine.

#### (c)

Since the attacker learns one bit of x*y, the effective key space is reduced by half.

## Authenticity and Trust in PGP

#### (a)

Let the other entity be B.

• A must check that the received public key really originates from B (e.g. by verifying the public-key fingerprint over the phone)
• A must decide how much it trusts B's judgement when B certifies other entities (full/marginal trust)

#### (b)

A certificate issued by A for X means that A has convinced itself of the authenticity of X and of X's ownership of the public key; it is realized by signing X's public key with A's secret key.

If A wants to rely on a certificate issued by B for C, it must have full trust in B.

To issue a certificate, one imports another entity's public key into the PGP keyring, states the level of trust in that entity, and then signs that public key. The certificate can then be uploaded to a keyserver. Revocation is done by creating a revocation certificate and uploading it to the keyserver, notifying other entities not to use the certificate anymore.

#### (c)

##### (1)

In order to achieve ${\displaystyle Aut_{A,B}}$, A must derive the authenticity of either X or Z, as they hold certificates for B's public key and A has full trust in them. The only way for A to derive the authenticity of X or Z is via W or Y (respectively). But since A has only marginal trust in both W and Y, and a key needs certificates from two marginally trusted entities (or one fully trusted one), A cannot rely on their certificates for X and Z.

##### (2)

A solid line from A to X, or one from A to Z, would solve the authentication problem.

#### (d)

• ${\displaystyle \forall X:Aut_{A,X},Full_{A,X},Cert_{X,Y}\vdash Aut_{A,Y}}$
• ${\displaystyle \forall S,T\quad {\text{with}}\quad S\neq T:Aut_{A,S},Aut_{A,T},Marg_{A,S},Marg_{A,T},Cert_{S,Y},Cert_{T,Y}\vdash Aut_{A,Y}}$

## Security Protocols

#### (a)

The protocol does not make clear who communicates with whom. Most importantly, the names of the actors should be included in the last, signed message.

#### (b)

The protocol can suffer from a man-in-the-middle attack:

M1.  A -> I   {A,K1}pk(I)             I -> B   {A,K1}pk(B)
M2.  B -> I   {K2,NB}K1               I -> A   {K2,NB}K1
M3.  A -> I   {hash(K1,K2,NB)}sk(A)   I -> B   {hash(K1,K2,NB)}sk(A)


During M1, the intruder learns K1, which he then uses during M2 to decrypt the message {K2,NB}K1, thereby learning K2 and NB. The last protocol step M3 is not necessary for learning the secret parameters, but it gives B the impression that everything worked correctly.

#### (c)

The intended secrecy of K1, K2 and NB is violated for B. In addition, non-injective agreement is also violated for B. Neither property is violated for A, because A intended to run the protocol with the attacker.

#### (d)

Aliveness requires that whenever A completes a run of the protocol, apparently with B, then B has previously been running the protocol (though not necessarily with A). Non-injective agreement additionally requires that if A completes a run, apparently with a non-compromised party B, then B has been running the protocol, apparently with A, and A and B agree on the transmitted messages. Since B has been running the protocol, aliveness is satisfied for B, while non-injective agreement is not.

## Dolev-Yao intruder

#### (a)

The Dolev-Yao intruder is an attacker model that assumes the following capabilities:

• The attacker is active, i.e. he can intercept, generate and send messages
• The attacker can decompose messages into their components, but cryptography itself remains secure (he cannot break encryption without the key)
• The attacker can compromise entities and learn their security parameters

#### (b)

• ${\displaystyle t\in u\Longrightarrow t\in has(u)}$
• ${\displaystyle t_{1}\in has(u)\wedge t_{2}\in has(u)\Longrightarrow (t_{1},t_{2})\in has(u)}$
• ${\displaystyle (t_{1},t_{2})\in has(u)\Longrightarrow t_{i}\in has(u)\quad (i\in \{1,2\})}$
• ${\displaystyle t\in has(u)\Longrightarrow hash(t)\in has(u)}$
• ${\displaystyle \{t_{1}\}_{t_{2}}\in has(u)\wedge t_{2}^{-1}\in has(u)\Longrightarrow t_{1}\in has(u)}$
• ${\displaystyle t_{1}\in has(u)\wedge t_{2}\in has(u)\Longrightarrow \{t_{1}\}_{t_{2}}\in has(u)}$
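The rules can be sketched as a small closure computation in Python; the tuple encoding of terms ("pair", "hash", "enc") and the key-inversion convention are our own choices:

```python
def inv(k):
    # Key inversion: pk(A) <-> sk(A); symmetric keys are their own inverse.
    if isinstance(k, tuple) and k[0] == "pk":
        return ("sk", k[1])
    if isinstance(k, tuple) and k[0] == "sk":
        return ("pk", k[1])
    return k

def derivable(t, kb):
    # Composition rules, applied top-down on the target term t.
    if t in kb:
        return True
    if isinstance(t, tuple) and t[0] in ("pair", "enc"):
        return derivable(t[1], kb) and derivable(t[2], kb)
    if isinstance(t, tuple) and t[0] == "hash":
        return derivable(t[1], kb)
    return False

def analyze(knowledge):
    # Decomposition fixpoint: split pairs, decrypt when the inverse key is derivable.
    kb = set(knowledge)
    changed = True
    while changed:
        changed = False
        for t in list(kb):
            new = set()
            if isinstance(t, tuple) and t[0] == "pair":
                new = {t[1], t[2]} - kb
            elif isinstance(t, tuple) and t[0] == "enc" and derivable(inv(t[2]), kb):
                new = {t[1]} - kb
            if new:
                kb |= new
                changed = True
    return kb

# Example: the intruder knows sk(I) and observes {(N1, K)}_{pk(I)}:
kb = analyze({("enc", ("pair", "N1", "K"), ("pk", "I")), ("sk", "I")})
assert "N1" in kb and "K" in kb
assert derivable(("hash", ("pair", "N1", "K")), kb)
```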

#### (c)

##### (1)
• ${\displaystyle pk(A)\in has(IK_{0}\cup T)\wedge \{sk(A),N_{2}\}_{sk(A)}\in has(IK_{0}\cup T)\Longrightarrow (sk(A),N_{2})\in has(IK_{0}\cup T)}$
• ${\displaystyle (sk(A),N_{2})\in has(IK_{0}\cup T)\Longrightarrow sk(A)\in has(IK_{0}\cup T)\wedge N_{2}\in has(IK_{0}\cup T)}$
• ${\displaystyle A\in has(IK_{0}\cup T)\wedge B\in has(IK_{0}\cup T)\wedge N_{2}\in has(IK_{0}\cup T)\Longrightarrow (N_{2},A,B)\in has(IK_{0}\cup T)}$
• ${\displaystyle (N_{2},A,B)\in has(IK_{0}\cup T)\wedge sk(A)\in has(IK_{0}\cup T)\Longrightarrow \{N_{2},A,B\}_{sk(A)}\in has(IK_{0}\cup T)}$
##### (2)

In order to get ${\displaystyle N_{3}}$ we would need ${\displaystyle K_{1}}$, but ${\displaystyle K_{1}}$ is not derivable from any of the available messages.

##### (3)

Although we can derive ${\displaystyle hash(N_{3})}$, we cannot recover ${\displaystyle N_{3}}$ from it, so this message is not derivable, for the same reason as in (2).

##### (4)
• ${\displaystyle pk(A)\in has(IK_{0}\cup T)\wedge \{sk(A),N_{2}\}_{sk(A)}\in has(IK_{0}\cup T)\Longrightarrow (sk(A),N_{2})\in has(IK_{0}\cup T)}$
• ${\displaystyle (sk(A),N_{2})\in has(IK_{0}\cup T)\Longrightarrow sk(A)\in has(IK_{0}\cup T)\wedge N_{2}\in has(IK_{0}\cup T)}$
• ${\displaystyle sk(A)\in has(IK_{0}\cup T)\wedge \{(N_{1},K_{2})\}_{pk(A)}\in has(IK_{0}\cup T)\Longrightarrow (N_{1},K_{2})\in has(IK_{0}\cup T)}$
• ${\displaystyle (N_{1},K_{2})\in has(IK_{0}\cup T)\Longrightarrow N_{1}\in has(IK_{0}\cup T)\wedge K_{2}\in has(IK_{0}\cup T)}$
• ${\displaystyle N_{1}\in has(IK_{0}\cup T)\wedge N_{2}\in has(IK_{0}\cup T)\Longrightarrow (N_{1},N_{2})\in has(IK_{0}\cup T)}$
• ${\displaystyle (N_{1},N_{2})\in has(IK_{0}\cup T)\Longrightarrow hash(N_{1},N_{2})\in has(IK_{0}\cup T)}$
• ${\displaystyle \{N_{4}\}_{hash(N_{1},N_{2})}\in has(IK_{0}\cup T)\wedge hash(N_{1},N_{2})\in has(IK_{0}\cup T)\Longrightarrow N_{4}\in has(IK_{0}\cup T)}$
• ${\displaystyle C\in has(IK_{0}\cup T)\wedge N_{4}\in has(IK_{0}\cup T)\Longrightarrow (C,N_{4})\in has(IK_{0}\cup T)}$
• ${\displaystyle (C,N_{4})\in has(IK_{0}\cup T)\wedge sk(I)\in has(IK_{0}\cup T)\Longrightarrow \{C,N_{4}\}_{sk(I)}\in has(IK_{0}\cup T)}$

#### (d)

##### (1)

${\displaystyle K\in has(T)\wedge M\in has(T)\Longrightarrow [M]_{K}\in has(T)}$

## Network security and PKIs

#### (a)

The main applications of SSL include secure web browsing (e.g. online banking, online voting, ...) and secure e-mail. An SSL connection with client certificates provides the following security guarantees:

• authenticity of both client and server
• confidentiality of the data transmitted between client and server (in both directions)

i.e. an SSL/TLS connection establishes a secure bidirectional channel between client and server.

#### (b)

If SSL is used without client certificates, the server cannot verify the authenticity of the client. Most systems using SSL/TLS solve this by authenticating the client later over the established secure channel (e.g. with passwords or TAN systems).

#### (c)

PGP relies on a web of trust. Each entity in the web of trust can issue certificates; whether to use one depends on the level of trust placed in its issuer. This models the intuitive idea of trust: you would probably sign a friend's public key, but not a stranger's. SSL/TLS instead depends on certificate authorities (CAs), which are organized hierarchically.

If a PGP client gets corrupted, entities which originally trusted that client will issue a revocation certificate and upload it to a keyserver. When other PGP clients try to derive the authenticity of the corrupted client, they have to check whether a revocation certificate exists. This can be problematic, as clients may not notice the corruption, and there is more than one keyserver. On the positive side, suppose the corrupted client had certified another client that is known to be uncompromised: there may exist multiple certification chains to that client besides the one via the corrupted client, so uncompromised clients do not necessarily become untrusted.

If a CA gets corrupted, all certificates it issued become invalid (horror!), and all entities certified by the compromised CA are no longer trusted.

## RBAC policies

#### (a)

Let the following privileges be given:

• p1 = read file a
• p2 = write file b
• p3 = execute file c
• p4 = write file d

In traditional AC this would yield the following access control matrix:

|  | ${\displaystyle f_{a}}$ | ${\displaystyle f_{b}}$ | ${\displaystyle f_{c}}$ | ${\displaystyle f_{d}}$ |
| --- | --- | --- | --- | --- |
| ${\displaystyle u_{1}}$ | r | w | e |  |
| ${\displaystyle u_{2}}$ | r | w | e |  |
| ${\displaystyle u_{3}}$ | r | w | e |  |
| ${\displaystyle u_{4}}$ |  |  | e | w |

i.e. many of the stored entries are redundant. This leads to high overhead when updating the matrix, e.g. when deleting objects or users. Role-Based Access Control (RBAC) solves the problem of redundant information by decoupling users and permissions through the introduction of roles.
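A minimal RBAC sketch in Python, using the users and privileges from (a); the assignment of users to the two roles r1 and r2 is our illustrative choice:

```python
# UA assigns users to roles, PA assigns permissions to roles;
# a user's permissions are the union over all of its roles.
UA = {"u1": {"r1"}, "u2": {"r1"}, "u3": {"r1"}, "u4": {"r2"}}
PA = {"r1": {"p1", "p2", "p3"}, "r2": {"p3", "p4"}}

def permissions(user):
    return set().union(*(PA[r] for r in UA[user]))

assert permissions("u1") == {"p1", "p2", "p3"}
assert permissions("u4") == {"p3", "p4"}

# Revoking p3 from all members of r1 is a single update to PA,
# instead of one update per user in the access control matrix:
PA["r1"].discard("p3")
assert "p3" not in permissions("u2")
```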

#### (b)

##### (1)

Role hierarchies are partial orders ${\displaystyle \geq }$ on roles. Larger roles inherit the permissions of all smaller roles (e.g. a superuser has all rights of an ordinary user).

##### (2)

When the given role hierarchy is introduced, the pair ${\displaystyle (r_{2},p_{3})}$ in the PA relation becomes redundant, as ${\displaystyle p_{3}}$ is already assigned to the smaller role ${\displaystyle r_{1}}$.

## Mandatory access control

##### (a)

The two characteristic properties of the Bell-LaPadula model are

• no write-down (*-property): a subject may not write to an object with a lower classification, as this could declassify confidential information
• no read-up (simple security property): a subject may not read an object with a higher classification, as this would allow lowly cleared subjects to read more highly classified information

Both properties together ensure confidentiality.
##### (b)
The security labels (level, category set) are ordered componentwise, which yields the following lattice:

```
Top:     {confidential, {normal, emergency}}
Level 2: {confidential, {normal}}   {confidential, {emergency}}   {public, {normal, emergency}}
Level 1: {confidential, {}}         {public, {normal}}            {public, {emergency}}
Bottom:  {public, {}}

Edges of the Hasse diagram (upper element dominates lower):
{confidential, {normal, emergency}} -- {confidential, {normal}}, {confidential, {emergency}}, {public, {normal, emergency}}
{confidential, {normal}}            -- {confidential, {}}, {public, {normal}}
{confidential, {emergency}}         -- {confidential, {}}, {public, {emergency}}
{public, {normal, emergency}}       -- {public, {normal}}, {public, {emergency}}
{confidential, {}}                  -- {public, {}}
{public, {normal}}                  -- {public, {}}
{public, {emergency}}               -- {public, {}}
```
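Comparing two labels in this lattice is just componentwise dominance; a small Python sketch (the integer encoding of the levels is our own choice):

```python
LEVELS = {"public": 0, "confidential": 1}

def dominates(l1, l2):
    # (level, categories) dominates componentwise: higher-or-equal level
    # and a superset of categories.
    (lev1, cats1), (lev2, cats2) = l1, l2
    return LEVELS[lev1] >= LEVELS[lev2] and cats1 >= cats2

top = ("confidential", {"normal", "emergency"})
bottom = ("public", set())
assert dominates(top, bottom)
assert dominates(("confidential", {"normal"}), ("public", {"normal"}))
# An incomparable pair from the middle of the lattice:
assert not dominates(("confidential", set()), ("public", {"normal", "emergency"}))
assert not dominates(("public", {"normal", "emergency"}), ("confidential", set()))
```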

##### (c)
• Dave can read the document (read-down), but he cannot write to it (no write-downs)
• Paulina can read and write to the document (same level as her clearance)
##### (d)

The least privilege that a subject needs is {confidential, {normal,emergency}}

##### (e)

Subjects actually carry 2 labels: curlevel(s) and maxlevel(s), with curlevel(s) <= maxlevel(s).

• maxlevel(s) is the maximum clearance of the subject
• curlevel(s) is the highest level the subject has read so far (the high-water mark); it rises with every read of more highly classified data
• A read of an object is granted if the object's level is at most maxlevel(s), and it raises curlevel(s) accordingly; a write is granted only to objects at level curlevel(s) or above

An obvious advantage is that this enables the head nurse to write to the patient record before she has read any higher-classified data (clearly a legitimate situation), without allowing her to write down data she has already read at a higher level. A drawback is that high-water marks only ever rise, so the system eventually runs into the same problem situation as before.
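The high-water-mark behaviour can be sketched as follows (the integer level encoding, 0 = public and 1 = confidential, and the class layout are our own choices):

```python
class Subject:
    # High-water mark: curlevel starts at the bottom and rises to the highest
    # level read so far; writes are allowed only at or above curlevel.
    def __init__(self, maxlevel):
        self.maxlevel = maxlevel
        self.curlevel = 0

    def read(self, objlevel):
        if objlevel > self.maxlevel:
            return False                              # no read-up beyond clearance
        self.curlevel = max(self.curlevel, objlevel)  # the water mark rises
        return True

    def write(self, objlevel):
        return objlevel >= self.curlevel              # no write-down below the mark

nurse = Subject(maxlevel=1)
assert nurse.write(0)      # before reading anything confidential, writing low is fine
assert nurse.read(1)       # reads a confidential record...
assert not nurse.write(0)  # ...and can no longer write down
```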

Other alternatives:

• Trusted subjects: some subjects are allowed to perform write-downs.
  +: write-downs are not enabled for everyone; only a few people, vetted for confidentiality first, are added to this special list
  -: classified information can still leak if such a subject is untrustworthy, careless or not adequately trained
  -: does not cover every case where a write-down could be necessary
• Temporary downgrade of the subject's level:
  +: the blocking problem cannot occur anymore
  -: classified information is likely to leak if some users are untrustworthy, careless or not adequately trained

## Security Automata

#### (a)

We define a security automaton ${\displaystyle {\mathcal {A}}=(Q,\Sigma ,Q_{0},\delta )}$ as follows:

• Let ${\displaystyle U}$ denote the set of users and ${\displaystyle F}$ denote the set of files
• We assume the existence of a function ${\displaystyle p:U\cup F\rightarrow {\text{security labels}}}$ that yields the security level of an entity.
• ${\displaystyle Q:={\mathcal {P}}(\{({\text{ticket}},i,f,v)\ |\ i,v\in U,f\in F\}\cup \{({\text{unapproved}},i,f)\ |\ i\in U,f\in F\})}$
• ${\displaystyle \Sigma :=\{{\text{read}}(u,f)\ |\ u\in U,f\in F\}\cup \{{\text{ticket}}(i,f,v)\ |\ i,v\in U,f\in F\}\cup \{{\text{write}}(u,f)\ |\ u\in U,f\in F\}\cup \{{\text{approve}}(i,f)\ |\ i\in U,f\in F\}}$
• ${\displaystyle Q_{0}:=\emptyset }$

The definition of the transition function follows:

${\displaystyle {\begin{aligned}\delta (q,{\text{read}}(u,f))&={\begin{cases}q&{\text{if }}p(u)\geq p(f)\land \forall i\in U\ ({\text{unapproved}},i,f)\not \in q\\\emptyset &{\text{else}}\end{cases}}\\\delta (q,{\text{ticket}}(i,f,v))&={\begin{cases}q\cup \{({\text{ticket}},i,f,v)\}&{\text{if }}p(i)\geq p(f)\\\emptyset &{\text{else}}\end{cases}}\\\delta (q,{\text{write}}(u,f))&={\begin{cases}q\cup \{({\text{unapproved}},i,f)\}\setminus \{({\text{ticket}},i,f,u)\}&{\text{if }}({\text{ticket}},i,f,u)\in q\land ({\text{unapproved}},i,f)\not \in q\\\emptyset &{\text{else}}\end{cases}}\\\delta (q,{\text{approve}}(i,f))&={\begin{cases}q\setminus \{({\text{unapproved}},i,f)\}&{\text{if }}({\text{unapproved}},i,f)\in q\\\emptyset &{\text{else}}\end{cases}}\end{aligned}}}$
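The transition function can be sketched directly in Python; the set-of-tuples state encoding mirrors the definition of Q, None plays the role of the rejecting empty state set, and the example labeling p is assumed for the demo:

```python
def delta(q, event):
    # q is a set of ("ticket", i, f, v) and ("unapproved", i, f) tuples;
    # p maps users and files to integer security levels.
    kind = event[0]
    if kind == "read":
        _, u, f = event
        blocked = any(t[0] == "unapproved" and t[2] == f for t in q)
        return q if p[u] >= p[f] and not blocked else None
    if kind == "ticket":
        _, i, f, v = event
        return q | {("ticket", i, f, v)} if p[i] >= p[f] else None
    if kind == "write":
        _, u, f = event
        for t in q:
            if t[0] == "ticket" and t[2] == f and t[3] == u:
                if ("unapproved", t[1], f) not in q:
                    return (q - {t}) | {("unapproved", t[1], f)}
        return None
    if kind == "approve":
        _, i, f = event
        return q - {("unapproved", i, f)} if ("unapproved", i, f) in q else None

p = {"alice": 1, "bob": 0, "doc": 0}   # assumed labeling for this demo
q = delta(set(), ("ticket", "alice", "doc", "bob"))
q = delta(q, ("write", "bob", "doc"))
assert delta(q, ("read", "bob", "doc")) is None  # unapproved change blocks reading
q = delta(q, ("approve", "alice", "doc"))
assert delta(q, ("read", "bob", "doc")) == q     # after approval, reading works again
```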

#### (b)

• As long as a change has not been approved, no one can read the file
• Highly cleared users can issue tickets for themselves to write to a low-classified file, thereby leaking information they learnt from a high-classified file
• Tickets have no expiry date, may be forgotten, and pollute the state
• There is no mechanism for labeling files and users with the correct label in the first place (no administrator)