# Proposed Solutions, Computer Networks FS09

Solutions provided without any guarantees.

# 1 Network architecture and performance

## 1 Circuits vs. packets

Circuit switching uses a connection setup to create a "connection" between two hosts (as in telephony); packet switching just forwards each packet toward its destination, making a routing decision for every packet.

Circuit switching:

• it is easy to charge customers for their connections
• it allows guaranteeing bandwidth

Packet switching:

• It's flexible and can use redundant connections from host to host because it is not tied to a single path

## 2 Latency

• Propagation delay (time to travel through medium)
• Time to transmit data (bits/bandwidth)
• Queueing delays (e.g. in routers)
• Processing time (in routers, hosts,...)

## 3 Throughput

```
Time to transmit = 0.1 s + 125 kB / 12.5 MB/s = 0.1 s + 0.01 s = 0.11 s
Throughput = transfer size / transfer time = 125 kB / 0.11 s ≈ 1.14 MB/s
```
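The arithmetic can be checked with a few lines; the 0.1 s latency, 12.5 MB/s bandwidth, and 125 kB transfer size are the values quoted in the formula above:

```python
# Values from the task: one-way latency, link bandwidth, transfer size.
latency_s = 0.1
bandwidth_kB_per_s = 12.5e3   # 12.5 MB/s = 12500 kB/s
size_kB = 125.0

transfer_time_s = latency_s + size_kB / bandwidth_kB_per_s
throughput_kB_per_s = size_kB / transfer_time_s

print(f"transfer time = {transfer_time_s:.2f} s")              # 0.11 s
print(f"throughput = {throughput_kB_per_s / 1000:.2f} MB/s")   # 1.14 MB/s
```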

## 4 More delay

```
Propagation delay = distance/speed = 2000 m / (2*10^8 m/s) = 10^-5 s
Transmission time = size/bandwidth => 10^-5 s = 100 bytes / x => x = 10^7 bytes/s = 80 Mbps
```

For 512-byte packets the result just scales by 5.12, giving roughly 410 Mbps.
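The same rearrangement in code, using the distance and propagation speed quoted above (bandwidth = size * speed / distance follows from setting transmission time equal to propagation delay):

```python
# Values from the task: 2000 m link, 2*10^8 m/s propagation speed.
distance_m = 2000.0
speed_m_per_s = 2e8
prop_delay_s = distance_m / speed_m_per_s   # 1e-05 s

# Bandwidth so that transmitting one packet takes exactly one propagation delay.
bw_100B_mbps = 100 * speed_m_per_s / distance_m * 8 / 1e6
bw_512B_mbps = 512 * speed_m_per_s / distance_m * 8 / 1e6
print(bw_100B_mbps, bw_512B_mbps)   # 80.0 409.6
```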

## 5 Width

```
1 Gb stretches over 2.3*10^8 m (the distance the signal travels in 1 s)
10^9 bits = 2.3*10^8 m
1 bit = 0.23 m
```
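As a one-line check (2.3*10^8 m/s propagation speed and 1 Gbps taken from the calculation above):

```python
# One bit's physical length on the wire = propagation speed / bandwidth.
bit_width_m = 2.3e8 / 1e9   # (m/s) / (bits/s) = m per bit
print(bit_width_m)          # 0.23
```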

## 6 Store and forward

### a)

Assumption: rule of thumb (10 Mbps ≈ 1 MB/s)

```
A -> fully in router: transmit time = 1/1000 s = 1 ms; 1 ms + 1 ms (prop. delay) = 2 ms
Router -> fully in B: same time, by symmetry = 2 ms
Total time: 2 ms + 2 ms = 4 ms
```

### b)

```
Transmit time B -> router for ACK: 0.01 ms + 1 ms (prop. delay) = 1.01 ms
Again by symmetry, total: 2.02 ms for the ACK transfer
In the long run each 1 kB packet takes the 4 ms from a) plus the 2.02 ms ACK time from b) = 6.02 ms per 1 kB
Throughput: 1 kB / 6.02 ms = 1 MB / 6.02 s = 10 Mb / 6.02 s ≈ 1.66 Mbps
```
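The computation from a) and b) can be sketched as follows; the 10-byte ACK size is an assumption chosen to match the 0.01 ms ACK transmit time above:

```python
# Assumed numbers: 1 kB packets, 10 Mbps ≈ 1 MB/s links,
# 1 ms propagation delay per link, 10-byte (0.01 kB) ACKs.
link_MBps = 1.0
prop_ms = 1.0
packet_kB = 1.0
ack_kB = 0.01

def hop_ms(size_kB):
    """One store-and-forward hop: transmit the whole frame, then propagate."""
    transmit_ms = size_kB / link_MBps   # kB / (MB/s) = ms
    return transmit_ms + prop_ms

data_ms = 2 * hop_ms(packet_kB)   # A -> router -> B
ack_ms = 2 * hop_ms(ack_kB)       # B -> router -> A
cycle_ms = data_ms + ack_ms
print(f"{cycle_ms:.2f} ms per 1 kB")                         # 6.02 ms per 1 kB
print(f"{packet_kB / cycle_ms * 1000:.2f} kB/s throughput")  # 166.11 kB/s
```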

# 2 Application layer protocols

## 1 HTTP

Since HTTP runs on top of TCP, it needs a connection setup. Persistent means the TCP connection stays open even after the current HTTP request has been served. Non-persistent means that once the HTTP request is finished, the TCP connection is closed as well.

Since most HTTP-based applications (such as websites) today refer to other objects that again require an HTTP exchange (for example an image embedded in a page), it is very slow to open an entirely new TCP connection for every request to the same server. Persistent connections allow the client to send multiple requests sequentially over one connection, which improves performance on both sides (client and server).

## 2 FTP

FTP uses two separate TCP connections per session: one passes control data such as get/put commands or username and password; the other carries the actual file content. The advantage is that the two communication partners can still exchange commands even while a large share of the bandwidth is in use by a file transfer. It also makes it easy to distinguish command data from file data.

## 3 DNS

• Work overload: the root servers for the TLDs would be overwhelmed if everybody resolving a .com domain first had to ask the root server for .com.
• Performance: because every lookup would have to go to the root servers, the Internet would be a lot slower than it is today with (DNS) caching.
• Single point of failure: if the root server failed (or the connection to it), the entire Internet would effectively go down, which is completely unnecessary: the destination servers would still be running, you just could not translate names into IPs.

# 3 Transport protocols

## 1 TCP functionality

• Congestion control
• Reliability
• In-order arrival of packets

## 2 TCP flags

SYN is required to set up a connection, FIN to tear one down. ACK is used in both processes to acknowledge the request.
To setup a connection it works as follows:

```
SYN -> SYN + ACK -> ACK
```

Connection teardown: (1) FIN -> (2) ACK, also sending its own FIN -> (1) ACK, (2 closed)

## 3 Fast retransmit

After receiving three duplicate ACKs (= 4 ACKs for the same sequence number) the sender immediately retransmits the packet. Advantage: much faster recovery than waiting for the timeout to happen.

## 4 Flow control

In its acknowledgement packets, the receiver advertises the number of free bytes in its receive window, i.e. how many more bytes it is able to buffer. If the sender sent more than that, the receiver would have to drop packets, which would lead to loss and retransmissions. So the sender never sends more than the last advertised number of bytes the receiver can still buffer. Using this number the receiver can slow down a sender that is too fast; if the receiver is faster than the sender, the mechanism has no effect on the sender.

## 5 Congestion control

The sender always uses a window size of min(receiver_window, congestion_window), where receiver_window is what was explained in the previous task. The congestion_window keeps growing as long as there is no packet loss, i.e. as long as the sender does not have to retransmit packets. As soon as it has to retransmit one (because of a timeout), it sets threshold = old_congestion_window / 2 and congestion_window = 1, and then grows the window again. While the window is below the threshold it grows exponentially (doubling each time); once it is above the threshold it grows linearly (+1). This behaviour leads to the so-called "slow start" of TCP, as the window size is small (1) at the beginning, even though no congestion has happened yet (it is the start...).
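A toy simulation of this window evolution; the initial threshold, the round in which the timeout happens, and the per-round granularity are invented for illustration:

```python
# One step of the congestion-window rule described above.
def evolve(cwnd, threshold, loss):
    """Return (cwnd, threshold) after one round."""
    if loss:                       # timeout: halve threshold, restart from 1
        return 1, max(cwnd // 2, 1)
    if cwnd < threshold:           # slow start: exponential growth
        return cwnd * 2, threshold
    return cwnd + 1, threshold     # above threshold: linear growth

cwnd, threshold = 1, 16            # assumed starting values
losses = {8}                       # assume a single timeout in round 8
history = []
for rnd in range(12):
    history.append(cwnd)
    cwnd, threshold = evolve(cwnd, threshold, rnd in losses)

print(history)   # [1, 2, 4, 8, 16, 17, 18, 19, 20, 1, 2, 4]
```

Note how the window doubles up to the threshold (16), then grows by 1, collapses back to 1 after the timeout, and restarts in slow start with the halved threshold (10).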

# 4 Network layer protocols

## 1 Routing protocol types

Distance vector protocols only use information from their neighbours to build their routing tables. Link state protocols distribute the entire topology of the network to every router, and each router then computes its routing table from this information on its own.

## 2 Count to infinity

```
A <--1--> B <--1--> C
```

After a certain time the routing tables look like:

```
A:: to B: B,1 ;; to C: B,2
B:: to A: A,1 ;; to C: C,1
C:: omitted
```

Now let's say the connection between B and C breaks, so B is updated:

```
B:: to A: A,1 ;; to C: ??
```

Then B receives information from A, which says "I can reach C at cost 2", and so B updates:

```
B:: to A: A,1 ;; to C: A,3
```

It announces the new table to its neighbours, which then update:

```
A:: to B: B,1 ;; to C: B,4
```

So A announces the update, then B again, and so on; they keep increasing the cost towards infinity.
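The loop above can be sketched as a toy simulation; variable names are invented, and each round represents one exchange of distance-vector updates:

```python
# A <--1--> B; B's link to C has just broken, but A still advertises
# "C at cost 2" (a route that actually goes back through B).
a_to_c = 2          # A's current cost to C
b_to_c = None       # B just lost its direct link to C

rounds = []
for _ in range(4):
    b_to_c = a_to_c + 1   # B: "A reaches C at cost a_to_c, A is 1 hop away"
    a_to_c = b_to_c + 1   # A: "B reaches C at cost b_to_c, B is 1 hop away"
    rounds.append((b_to_c, a_to_c))

print(rounds)   # costs keep climbing: [(3, 4), (5, 6), (7, 8), (9, 10)]
```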

## 3 Dijkstra's algorithm

```
(D,0,-)
(D,0,-) ;; (B,6,B), (C,3,C) -- search for next node
(D,0,-), (C,3,C) ;; (B,6,B) -- adding C
(D,0,-), (C,3,C) ;; (B,6,B), (A,9,C) -- search for next node, found A
(D,0,-), (C,3,C), (B,6,B) ;; (A,9,C) -- adding B
(D,0,-), (C,3,C), (B,6,B) ;; (A,8,B) -- found shorter path to A
(D,0,-), (C,3,C), (B,6,B), (A,8,B) -- adding A, finished
```
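The trace above can be reproduced with a standard Dijkstra implementation. The edge weights below (D-B: 6, D-C: 3, C-A: 6, B-A: 2) are inferred from the costs in the trace, not given explicitly here:

```python
import heapq

# Undirected graph reconstructed from the trace (an assumption).
graph = {
    "A": {"B": 2, "C": 6},
    "B": {"A": 2, "D": 6},
    "C": {"A": 6, "D": 3},
    "D": {"B": 6, "C": 3},
}

def dijkstra(graph, source):
    dist = {source: 0}
    prev = {source: None}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    return dist, prev

dist, prev = dijkstra(graph, "D")
print(dist)          # shortest distances from D: A=8, B=6, C=3, D=0
print(prev["A"])     # A is reached via B, matching (A,8,B)
```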

# 5 Link layer protocols

## 1 Wire encodings

NRZ: 1 = 1, 0 = 0

NRZI: switch between values = 1, staying = 0

Manchester: switch low-high = 0, switch high-low = 1

The picture for NRZI is wrong!
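A minimal sketch of the three encodings as 'H'/'L' level lists (two half-bit levels per bit for Manchester); the NRZI starting level and the function names are assumptions:

```python
def nrz(bits):
    # NRZ: 1 = high, 0 = low
    return ["H" if b == "1" else "L" for b in bits]

def nrzi(bits):
    # NRZI: a 1 toggles the level, a 0 keeps it (assumed to start low)
    level, out = "L", []
    for b in bits:
        if b == "1":
            level = "H" if level == "L" else "L"
        out.append(level)
    return out

def manchester(bits):
    # Manchester (convention above): 0 = low-to-high, 1 = high-to-low
    out = []
    for b in bits:
        out += ["L", "H"] if b == "0" else ["H", "L"]
    return out

print(nrzi("0110"))       # ['L', 'H', 'L', 'L']
print(manchester("01"))   # ['L', 'H', 'H', 'L']
```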

## 2 Framing

Because a mechanism like flagging is required to separate frames, byte-level or bit-level framing is needed. The difference: in bit framing you insert just a single bit (bit stuffing) whenever a bit sequence in the data looks like the flag; in byte-level framing you insert an entire byte.

Example byte framing: the flag is 0111'1110 and we want to send the data byte 0111'1110 => send the same byte twice. When the receiver sees the flag byte, it checks the next byte: if it is the same byte again, it is treated as data and the first byte is discarded; otherwise it really was the flag.

Example bit framing: the flag is 0111'1110 and we want to send the data 0111'1110 => insert a 0 after every 5 consecutive 1s, so we send 0111'1101'0. The receiver treats 0111'1110 as the flag and removes every 0 that follows five 1s. Careful: you must also insert a 0 when there are exactly five 1s in a row in the data, because otherwise the receiver would remove the following data bit 0, which you don't want it to do!
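The bit-stuffing rule can be sketched as follows (function names are illustrative; bits are handled as strings for readability):

```python
def stuff(bits):
    # Insert a 0 after every five consecutive 1s.
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            out.append("0")   # stuffed bit
            ones = 0
    return "".join(out)

def unstuff(bits):
    # Remove every 0 that follows five consecutive 1s.
    out, ones, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        ones = ones + 1 if b == "1" else 0
        if ones == 5:
            i += 1            # skip the stuffed 0
            ones = 0
        i += 1
    return "".join(out)

data = "01111110"             # data that looks like the flag
print(stuff(data))            # 011111010
assert unstuff(stuff(data)) == data
```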

## 3 Ethernet

*I'm not sure what this question asks, but I gave it a try... Davids 17:17, 6. Jun. 2010 (CEST)*

Because it's a shared medium, every adapter listens to the medium anyway. So each packet reaches every adapter, whether it is a broadcast packet or not; the only difference is whether an adapter actually processes it. The bandwidth therefore equals the bandwidth of the slowest link in this collision domain.

Another try: on shared-medium Ethernet no two computers may send at the same time, or else the packets overlap. Classic Ethernet uses CSMA/CD, and it takes 2 * propagation delay until the involved computers can be sure that no collision occurred: when two signals overlap, the distorted signal has to propagate back to both parties for them to notice it. The worst case is 2 * propagation delay: suppose A sends to B, and right before A's frame arrives, B starts sending to A; the frames overlap, and A only notices after 2 * propagation delay.

## 4 Cyclic redundancy checks

```
x^3 + 1 = 1001
100101101000 / 1001 = 100001100
1001
----
0000
----
0001
----
0011
----
0110
----
1101
1001
----
1000
1001
----
0010
----
0100
----
100
```

So the remainder, and therefore the checksum, is 100.
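The division can be cross-checked with a small shift-register implementation of the same mod-2 division (function name illustrative):

```python
def crc_remainder(message_bits, generator_bits):
    # Bit-by-bit mod-2 division: append len(generator)-1 zero bits,
    # shift each bit into a register, XOR with the generator whenever
    # the register's top bit is set.
    gen = int(generator_bits, 2)
    top = 1 << (len(generator_bits) - 1)
    reg = 0
    for b in message_bits + "0" * (len(generator_bits) - 1):
        reg = (reg << 1) | int(b)
        if reg & top:
            reg ^= gen                  # mod-2 subtraction
    return format(reg, "0{}b".format(len(generator_bits) - 1))

# Generator x^3 + 1 = 1001, message 100101101 as in the division above.
print(crc_remainder("100101101", "1001"))   # 100
```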