Lösungsvorschlag Computer Networks FS08
Solutions provided without any guarantees. They may or may not express the opinions of the professor. If you use these solutions to prepare for an exam, you must be crazy.
1.1 fefour applees and efivee peearsf (byte stuffing with flag byte 'f' and escape byte 'e': every literal 'f' or 'e' in "four apples and five pears" is prefixed with 'e', and the frame is delimited by 'f')
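The stuffing rule can be sketched in a few lines (the function name `stuff` is our own):

```python
def stuff(payload: str, flag: str = "f", esc: str = "e") -> str:
    """Byte-stuff payload: escape every flag/escape char, wrap in flag bytes."""
    out = []
    for ch in payload:
        if ch in (flag, esc):
            out.append(esc)          # prefix special characters with the escape char
        out.append(ch)
    return flag + "".join(out) + flag

print(stuff("four apples and five pears"))  # → fefour applees and efivee peearsf
```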
1.3 Bandwidth in a circuit-switched network can be shared between users via time-division multiplexing
1.4 Message framing
1.5 HTTP 1.0
First the server sends a cookie to the client via the Set-Cookie header. For all later requests the client transmits the cookie back using the Cookie header. The server can alter the client's cookie with another Set-Cookie header.
- domain (the domain for which the cookie is valid; useful for covering subdomains)
- path (the cookie is only sent for pages whose path starts with the given path)
- expires (the date until which the cookie is valid)
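A quick sketch of these attributes using Python's stdlib http.cookies module (the cookie name "session" and all attribute values are made up):

```python
from http.cookies import SimpleCookie

# Server side: build a Set-Cookie header carrying the attributes above
cookie = SimpleCookie()
cookie["session"] = "abc123"                      # hypothetical name/value
cookie["session"]["domain"] = ".example.com"      # valid for subdomains too
cookie["session"]["path"] = "/shop"               # only sent below /shop
cookie["session"]["expires"] = "Wed, 01 Jan 2025 00:00:00 GMT"

print(cookie.output())
# e.g. Set-Cookie: session=abc123; Domain=.example.com; expires=...; Path=/shop
```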
1. HTTP live video streaming
2. VoIP, UDP based Video streaming
3. HTTP file downloads
4. File transfer over UDP (?), [According to slides "Web documents" but that does not really make sense...]
Jitter is the variation of the arrival times of packets. It causes interruptions in the audio stream if the playout buffer is too small, because packets arrive too late to be played.
Bandwidth: the amount of data that can be transferred in a certain time.
Latency: the time to transfer data from one endpoint to the other.
For guaranteed bandwidth applications (e.g. live streaming) there is no benefit in increasing bandwidth, as they cannot take advantage of it, given that the guaranteed bandwidth is provided.
Also, if applications communicate with each other and wait for each other's responses, latency is often more important than bandwidth.
- May not provide latest version of a web page.
- Can be single point of failure for organization's web traffic.
Fast retransmit: the receiver tells the sender that a segment was not received by ACKing the last segment received in order (duplicate ACKs). After three duplicate ACKs (four identical ACKs in total) the sender assumes the segment was lost and resends it.
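The duplicate-ACK counting can be sketched like this (`dup_ack_retransmit` is our own helper, not a real API):

```python
def dup_ack_retransmit(acks, threshold=3):
    """Return the ACK numbers that trigger a fast retransmit (sketch).

    A retransmit fires once `threshold` duplicate ACKs have been seen,
    i.e. threshold + 1 identical ACKs in total."""
    last, dups, retransmits = None, 0, []
    for ack in acks:
        if ack == last:
            dups += 1
            if dups == threshold:
                retransmits.append(ack)
        else:
            last, dups = ack, 0
    return retransmits

# segment 2 was lost: the receiver keeps ACKing segment 1
print(dup_ack_retransmit([1, 1, 1, 1]))  # → [1]
```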
Nice, a typo in the exam (who proofreads exams at ETH? (probably nobody...))
Both result in performance degradation.
- Too low a timeout: packets are retransmitted even though they would still arrive, wasting bandwidth on superfluous retransmissions.
- Too high a timeout: the client has to wait longer before lost data is retransmitted.
RTT estimation (sucks having to remember these formulas):
SampleRTT: measured time from segment transmission until ACK receipt
EstimatedRTT = (1-α)· EstimatedRTT + α · SampleRTT (weighted moving average)
typical alpha = 0.125
Setting retransmission timeout:
Deviation=(1-β) ·Deviation + β · |SampleRTT-EstimatedRTT|
Timeout = EstimatedRTT + 4 · Deviation
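The formulas can be checked numerically (a sketch; the slides leave open whether Deviation uses the old or the updated EstimatedRTT — here the updated value is used):

```python
ALPHA, BETA = 0.125, 0.25   # typical smoothing factors

def update(est_rtt, deviation, sample_rtt):
    """One update step of the EWMA formulas above (all values in ms)."""
    est_rtt = (1 - ALPHA) * est_rtt + ALPHA * sample_rtt
    deviation = (1 - BETA) * deviation + BETA * abs(sample_rtt - est_rtt)
    timeout = est_rtt + 4 * deviation
    return est_rtt, deviation, timeout

print(update(100.0, 10.0, 120.0))  # → (102.5, 11.875, 150.0)
```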
CRC polynomial C(x) = x^3 + 1, i.e. the divisor is G = 1001
100101101000 / 1001 = 100001100
1001
----
 0000
  0001
   0011
    0110
     1101
     1001
     ----
      1000
      1001
      ----
       0010
        0100
         100
i.e. our CRC checksum is 100
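The division above can be reproduced in a few lines (`crc_remainder` is our own helper name):

```python
def crc_remainder(message: str, generator: str) -> str:
    """CRC of a bit-string message: XOR long division by the generator."""
    n = len(generator) - 1
    bits = [int(b) for b in message + "0" * n]   # append n zero bits
    gen = [int(b) for b in generator]
    for i in range(len(message)):
        if bits[i]:                               # leading 1: XOR-subtract generator
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return "".join(map(str, bits[-n:]))           # last n bits are the remainder

print(crc_remainder("100101101", "1001"))  # → 100
```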
It can be a good idea:
- If the distance between sender and receiver is very large, errors can be detected much faster at intermediate hops rather than always traversing the entire path.
- If the links are very unreliable, so you don't want corrupted data to travel the entire length of the network before the error is noticed.
It can be a bad idea:
- You duplicate work => checking for correctness both on link and transport layer
- It can slow things down, if you re-check at each router/switch
- The NICs need a way to resend frames if they are corrupted.
Because of the propagation delay between them (which depends on the distance and the medium), a sender A does not immediately see when another sender B starts transmitting. Both can therefore sense at the same time that the medium is idle and start sending. While sending, each detects that the other has also started and aborts, additionally sending a JAM signal to ensure everyone notices the collision. Collision handling therefore still works, as long as frames are long enough (the transmission must last longer than twice the propagation delay).
I assume the network (abstracted) looks like this:
timothy [192.168.1.10] <---> switch <---> gustavo [192.168.1.53]
1. timothy sends an Ethernet frame (the ARP request) to the broadcast MAC address ff:ff:ff:ff:ff:ff, asking for gustavo's IP address, with his own MAC and IP addresses as source.
2. The switch learns which port timothy's MAC address is attached to and forwards the broadcast frame to all other ports.
3. Everyone except gustavo ignores the frame, because the requested IP does not match theirs. gustavo sees that the request asks for his IP, so he sends an Ethernet frame back to timothy (he knows timothy's MAC address because he just learned it from the request), with his own MAC and IP addresses as source.
4. The switch forwards this reply to the port timothy is on; it can do that because it already learned that port from the broadcast frame. It also learns which port gustavo's MAC address is on from this second frame.
5. timothy has now received the reply and knows gustavo's MAC address (he caches the IP-to-MAC mapping in his ARP table).
6. timothy can now send his IP packet wrapped in an Ethernet frame addressed to gustavo's MAC address.
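The switch's learning behaviour in steps 2 and 4 can be sketched as a toy model (port numbers and the shortened MAC addresses are made up):

```python
class Switch:
    """Learning switch sketch: remembers source MAC → port, floods unknowns."""
    def __init__(self, ports):
        self.table = {}      # MAC address → port
        self.ports = ports

    def forward(self, in_port, src_mac, dst_mac):
        self.table[src_mac] = in_port                   # learn where src lives
        if dst_mac in self.table:                       # known destination
            return [self.table[dst_mac]]
        return [p for p in self.ports if p != in_port]  # unknown/broadcast: flood

sw = Switch(ports=[1, 2, 3])
# timothy (port 1) broadcasts the ARP request:
print(sw.forward(1, "aa:aa", "ff:ff:ff:ff:ff:ff"))  # → [2, 3]  (flooded)
# gustavo (port 2) replies unicast to timothy:
print(sw.forward(2, "bb:bb", "aa:aa"))              # → [1]
```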
I don't know how you can write here an answer which should give you 10 (!!) points... Davids 12:36, 5. Jun. 2010 (CEST) Tried to improve it --Thomas.st 01:04, 6. Jun. 2010 (CEST)
- Can always react to incoming data on any connection very quickly (a thread-per-connection server may first have to schedule the right thread).
- Can reach very high CPU utilization, because it only selects tasks on which progress can be made; no time is wasted scheduling blocked threads.
- Scales better because less overhead than threads.
- Easier to program behaviour involving multiple sockets (e.g. broadcast a received packet to all other sockets), as no IPC/synchronization needed.
- Cannot benefit from multicore machines (at least not directly)
- Somewhat harder to decide what to do next with a socket that select reports as ready (is it a request? are you required to answer? do you have to calculate something?), because the current state of a single connection cannot simply be stored in a thread's context.
- Difficult to program long-running computations
- Program flow less readable due to asynchronous handling (e.g. you can't have code which looks like: receive(x); send(toupper(x)), instead you need callbacks)
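A minimal single-threaded event loop using Python's stdlib selectors module, with a socketpair standing in for a real client connection (`echo_once` is our own helper):

```python
import selectors
import socket

def echo_once(payload: bytes) -> bytes:
    """One iteration of an event loop: the selector multiplexes readiness."""
    sel = selectors.DefaultSelector()
    a, b = socket.socketpair()
    sel.register(b, selectors.EVENT_READ)
    a.send(payload)
    for key, _ in sel.select(timeout=1):
        data = key.fileobj.recv(1024)     # socket is ready: recv won't block
        key.fileobj.send(data.upper())    # echo back, uppercased
    reply = a.recv(1024)
    sel.close(); a.close(); b.close()
    return reply

print(echo_once(b"hello"))  # → b'HELLO'
```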
- Scales easily on multicore machines / across several machines
- Easier to keep track of what you are doing/need to do (can be stored in thread)
- Nicer program flow
- Easier to do computations
- Wasting resources on context switching between threads
- More memory overhead
- You cannot know in advance whether a thread can make progress or is blocked, so you may (often) context-switch into a thread that cannot make progress, which also wastes time.
1. Assuming: No queueing delay, no inter-processing delay in router etc.
latency = propagation delay + size/bandwidth = 10 ms + 1 ms = 11 ms
2. Use result of 1. for sending time.
Min. timeout = sending time + ACK receiving time = 11 ms + (10 ms + 0.1 ms) = 21.1 ms
3. Use result of 2.
One 1 kB packet is delivered every 21.1 ms => throughput = 1 kB / (21.1 · 10^-3 s) ≈ 1 MB / 21.1 s
=> The link is only busy transmitting for 1 ms out of every 21.1 ms, so utilization = 1/21.1 ≈ 1/20 = 5%
4. Using 2. => Average time to send with no corruption = 21.1ms
=> latency = 0.8 * (21.1ms) + 0.2 * (21.1ms + latency)
=> latency = 21.1/0.8 ms
The formula can also be written as an expected value: sum(0.2^i · 0.8 · (i+1) · 21.1 ms, i = 0..inf) = 26.375 ms
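The four results can be checked numerically (all values in ms, matching the assumptions above):

```python
prop, trans = 10.0, 1.0            # propagation delay, 1 kB transmission time [ms]

latency = prop + trans             # 1) one-way latency: 11 ms
timeout = latency + prop + 0.1     # 2) + ACK path (10 ms prop + 0.1 ms trans): 21.1 ms
utilization = trans / timeout      # 3) link busy 1 ms of every 21.1 ms ≈ 5 %
expected = timeout / 0.8           # 4) with 20 % corruption: 21.1/0.8 = 26.375 ms

print(latency, timeout, round(100 * utilization, 1), round(expected, 3))
# → 11.0 21.1 4.7 26.375
```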
2 States: IDLE, WAITING.
Transitions, syntax is (state, trigger, action performed) => new state:
- (IDLE, data from application to send, send the data) => WAITING
- (IDLE, data received and no corruption, send ACK) => IDLE
- (IDLE, data received and corruption, send NACK) => IDLE
- (WAITING, NACK received, resend the data) => WAITING
- (WAITING, ACK received, nothing to do) => IDLE
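The transition table above maps directly onto a dictionary (a sketch; the event names are our own):

```python
# (state, event) → (action, next state), copied from the transitions above
FSM = {
    ("IDLE",    "app_data"):     ("send data",   "WAITING"),
    ("IDLE",    "recv_ok"):      ("send ACK",    "IDLE"),
    ("IDLE",    "recv_corrupt"): ("send NACK",   "IDLE"),
    ("WAITING", "nack"):         ("resend data", "WAITING"),
    ("WAITING", "ack"):          ("nothing",     "IDLE"),
}

state = "IDLE"
for event in ["app_data", "nack", "ack"]:
    action, state = FSM[(state, event)]
    print(event, "->", action, "| now in", state)
```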
We need two stations A and B and an AP placed midway between them. A and B are hidden terminals iff their respective transmission ranges are large enough to reach the AP, but not each other. So they cannot hear each other, but their signals collide at the AP. This problem is solved by the MACA scheme: before sending the actual data, a station, say A, emits an RTS (request to send) to the AP. The AP replies with a CTS (clear to send), which is received by both A and B. Now B knows it has to remain silent in order not to collide at the AP with some other station, namely A.
1: 2,A | 1,A | inf | 4,A | inf
2: 2,A | inf | 2,C | inf
3: 4,B | 2,C | 7,B
4: 3,E | 6,E
A <--1--> B <--1--> C
At the beginning it is like:
A table: to B via B = 1 ;; to C via B = 2
B table: to A via A = 1 ;; to C via C = 1
C table: don't care
1. Now assume the link B-C breaks. B updates its routing table: to A via A = 1 ;; to C = ??
2. It checks whether a neighbour has a path to C and finds: A says "I have a path, cost = 2"
3. It adds it to its table, adding 1 as cost B->A, so B table is now: to A via A = 1 ;; to C via A = 3
4. It broadcasts this new path, A sees: "to C is now from B at cost 3, my path goes to B to get to C, so I update" => A table: ... ;; to C via B = 4 (3+1)
5. A broadcasts, B updates, and so on: they keep updating each other, counting the cost up towards infinity
For a worked example, see https://de.wikipedia.org/wiki/Distanzvektoralgorithmus (in German).
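The count-to-infinity rounds in steps 1-5 can be simulated directly (a sketch, assuming A and B alternate updates and all link costs are 1):

```python
INF = float("inf")

# Distance-vector sketch for A <-1-> B <-1-> C after the link B-C breaks.
# Each node advertises its own cost to C; the other sets cost = 1 + advertised.
a_to_c, b_to_c = 2, INF      # B just lost its direct path to C; A's entry is stale

rounds = []
for _ in range(4):
    b_to_c = a_to_c + 1      # B adopts A's (stale) path, +1 for the hop B->A
    a_to_c = b_to_c + 1      # A routes via B, so it updates too
    rounds.append((b_to_c, a_to_c))

print(rounds)  # costs climb without bound: [(3, 4), (5, 6), (7, 8), (9, 10)]
```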