Suggested Solutions for Operating Systems And Computer Networks FS11

From VISki

Solutions provided without any guarantees.

1 Network performance

Question 1.1

  • Propagation delay (time to travel through medium)
  • Transmission delay (message size in bits / bandwidth)
  • Queueing delays (e.g. in routers)
  • Processing time (in routers, hosts,...)

Question 1.2

Total latency

= transmission delay + propagation delay
= (msg size / bandwidth) + propagation delay
= (50 KB / 1000 KB/s) + 10 ms
= 50 ms + 10 ms
= 60 ms
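The computation above as a short sketch (function name is ours; the values — 50 KB message, 1000 KB/s bandwidth, 10 ms propagation delay — are from the question):

```python
# Total latency = transmission delay + propagation delay.

def total_latency_ms(msg_size_kb, bandwidth_kb_per_s, prop_delay_ms):
    transmission_ms = msg_size_kb / bandwidth_kb_per_s * 1000  # s -> ms
    return transmission_ms + prop_delay_ms

print(total_latency_ms(50, 1000, 10))  # -> 60.0
```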

Question 1.3

Transmission delay becomes the greatest factor when

packet size / bandwidth > propagation delay

-> packet size > propagation delay * bandwidth

In our case:

packet size > 0.01 s * 1000 KB/s = 10 KB
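As a quick sanity check (function name ours, values from the question):

```python
# Transmission delay dominates once the packet is larger than
# propagation delay * bandwidth.

def dominating_packet_size_kb(prop_delay_s, bandwidth_kb_per_s):
    return prop_delay_s * bandwidth_kb_per_s

print(dominating_packet_size_kb(0.01, 1000))  # -> 10.0
```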

Question 1.4


= msg transmission delay + propagation delay + acknowledgement transmission delay + propagation delay
= (msg size / bandwidth) + (200 bytes / bandwidth) + 2 * propagation delay
= ((msg size + 0.2) / 1000) s + 0.02 s
= (msg size + 20.2) ms

(Message Size in KB)
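A sketch of this formula (names ours; the 0.2 KB acknowledgement and 10 ms one-way propagation delay are from the question):

```python
# Stop-and-wait total time: transmit message, wait for the acknowledgement.

def total_time_ms(msg_size_kb, ack_size_kb=0.2, bandwidth_kb_per_s=1000,
                  prop_delay_ms=10):
    transmission_ms = (msg_size_kb + ack_size_kb) / bandwidth_kb_per_s * 1000
    return transmission_ms + 2 * prop_delay_ms

# For a 50 KB message: (50 + 0.2) ms + 20 ms = 70.2 ms
print(total_time_ms(50))
```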

Question 1.5

Utilization U = (msg size / (transfer time of msg + transfer time of acknowledgement)) / bandwidth

             = (msg size / (2 * propagation delay + (msg size + ack size) / bandwidth)) / bandwidth
             = (msg size / (0.02 s + (msg size + 0.2 KB) / 1000 KB/s)) / 1000 KB/s
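The same quantities in code (names ours; rtt_prop_s is the 2 × 10 ms propagation term):

```python
# Utilization = effective throughput / link bandwidth for stop-and-wait.

def utilization(msg_size_kb, ack_size_kb=0.2, bandwidth_kb_per_s=1000,
                rtt_prop_s=0.02):
    total_time_s = rtt_prop_s + (msg_size_kb + ack_size_kb) / bandwidth_kb_per_s
    return (msg_size_kb / total_time_s) / bandwidth_kb_per_s

print(utilization(50))
```

For msg size = 50 KB this gives 50 / 70.2 ≈ 0.71, i.e. the link is idle almost 30% of the time.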

2 HyperText Transfer Protocol

Question 2.1

When the client sends an HTTP request without presenting a cookie, it receives one from the server in a "Set-Cookie: <value>" response header. The server can then associate user preferences, previous choices and previous authentication with that cookie value, i.e. keep "state". When the client sends a new HTTP request to the server at a later time, it presents the cookie to the server in a "Cookie" header, and the server restores the "state" of the previous session.

The three important attributes of a cookie are:

  1. Expiration: when the cookie is no longer valid and can be thrown away.
  2. Domain: to which domain this cookie should be presented.
  3. Path: with which URL paths to present this cookie.
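A sketch of such an exchange (the header values — example.com, id=1678 — are illustrative, not from the exam):

```
Client:  GET /index.html HTTP/1.1
         Host: example.com

Server:  HTTP/1.1 200 OK
         Set-Cookie: id=1678; Expires=Wed, 21 Dec 2011 07:28:00 GMT; Domain=example.com; Path=/

Client:  GET /basket.html HTTP/1.1      (later request; server restores the "state")
         Host: example.com
         Cookie: id=1678
```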

Question 2.2

  • Non-persistent connection: The server parses the request, responds and closes the TCP connection. This is repeated for every requested object, costing an extra RTT per object. In addition, each transfer suffers from TCP's initial slow start. Therefore, many browsers open multiple parallel connections. Standard for HTTP/1.0.
  • Persistent connection: The server parses the request, responds and directly parses the next request without closing the TCP connection. This way, the client can request all referenced objects once it has received the base HTML. This means fewer RTTs and fewer slow starts. Standard for HTTP/1.1.

3 Reliable Transfer

Question 3.1

Check slides here on page 9. Note that low-level actions like make_pkt do not need to be included in the solution.

Question 3.2

Check slides here on page 15. Note that low-level actions like make_pkt do not need to be included in the solution.


4 Congestion and Flow Control

Question 4.1

How the actual transmission rate is calculated:

min(CC, FC), where CC and FC are the sending rates proposed by the congestion control and flow control algorithms, respectively

Problems solved by choosing transmission rate in that way:

  1. Congestion
  2. Overloading the network: Fast detection of a good sending rate
  3. Changes in connection quality: Fast reaction

Question 4.2

  • Initially, we have no idea which sending rate is suitable. If we start too slow, we lose performance. If we start too fast, we lose packets and therefore also lose performance.
  • The solution is an exponential increase of the congestion window, until we lose packets, or the threshold is reached.

Isn't this more about slow start after packet loss?
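Either way, the exponential phase itself can be sketched (a toy model, not the full TCP state machine; names and values are ours):

```python
# Slow start: the congestion window doubles every round trip until it
# reaches the slow-start threshold; afterwards it grows linearly
# (congestion avoidance).

def window_trace(ssthresh, rounds, cwnd=1):
    trace = []
    for _ in range(rounds):
        trace.append(cwnd)
        cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1
    return trace

print(window_trace(ssthresh=16, rounds=7))  # -> [1, 2, 4, 8, 16, 17, 18]
```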

5 Routing

Question 5.1

When node A routes paths to some receivers over node B, and node B routes a subset of these receivers over node A, then packets loop forever, or until the routing conditions change (compare: count to infinity).

Question 5.2

Poisoned reverse: If A's route to C goes over B, then A tells B, if asked, that its distance to C is infinity. This avoids loops between two nodes, but no larger ones, and it adds unnecessary routing-update load on the network.

Question 5.3


  1. The link B=>C breaks
  2. B looks for a new route to C and chooses the route over node A.
  3. A updates its route because it has been notified by node B.
  4. B updates its route because it has been notified by node A.
  5. Return to step 3.

Result: A and B never know that C is unreachable, A and B count to infinity.
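The loop above can be simulated in a toy model (names and starting distances are ours):

```python
# After the link B=>C breaks, B routes to C via A and A via B; each
# exchange increases both distances by one, so they count towards infinity.

def count_to_infinity(dist_a, dist_b, rounds):
    for _ in range(rounds):
        dist_b = 1 + dist_a  # B picks the route over A (steps 2/4)
        dist_a = 1 + dist_b  # A updates after B's notification (step 3)
    return dist_a, dist_b

print(count_to_infinity(dist_a=2, dist_b=1, rounds=3))  # distances keep growing
```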

Question 5.4

The full path to the receiver is stored with the table entry. If an advertised path already contains the node itself (or a broken link), it is invalid and discarded. This makes the count-to-infinity problem impossible.

6 Network Layer

Question 6.1

Why does every network or network type have its own MTU?

  1. Circuit-switched networks do not have an MTU! :)
  2. The chance of corruption is different for different networks
  3. The speed is optimized if the MTU is as big as possible for a certain network
  4. The MTU enforces fairness => no one can send for an arbitrary amount of time
  5. The buffer size of the switches in between limits the MTU => different for different networks

What is the MTU?

Packets in a network are not allowed to be bigger than the network's maximum packet size, the maximum transmission unit (MTU).

What are the problems being addressed by fixing a maximum size?

see first question

7 Link-layer mechanisms

Question 7.1

Interpret the polynomial as a binary string: x^3 + 1 => 1001

Take the message and append as many zero bits as the degree of the CRC generator polynomial (here 3), then perform the following binary polynomial (XOR) division.

 100101101 000 / 1001 = 1    (rightmost column: quotient bit of each step)
  00001101 000          0
   0001101 000          0
    001101 000          0
     01101 000          0
      1101 000          1
       100 000          1
        00 100          0
         0 100          0
 remainder: 100

The remainder 100 is the CRC checksum.

Check: 100101101 100 / 1001 => remainder 0; equivalently, 1001 * 100001100 = 100101101100
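The division can be written as a short function (our own sketch, not from the exam):

```python
# CRC: append `degree` zero bits, then XOR the generator into every
# leading 1 bit, left to right; the last `degree` bits are the remainder.

def crc_remainder(message, generator):
    degree = len(generator) - 1
    bits = [int(b) for b in message] + [0] * degree
    gen = [int(b) for b in generator]
    for i in range(len(message)):
        if bits[i]:
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return ''.join(map(str, bits[-degree:]))

print(crc_remainder('100101101', '1001'))  # -> 100
```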

Question 7.2

see Chapter 9 here, slide 10 and following

8 Processes and Threads

Question 8.1

  1. A timer interrupt arrives
  2. Processor switches to kernel mode and executes the interrupt handler
    1. Save the registers on the process's kernel stack
    2. Call the scheduler to determine the next process
    3. Load the registers from the new process's kernel stack
  3. Switch to the new process

Question 8.2

  1. A timer interrupt arrives
  2. Processor switches to kernel mode and executes the interrupt handler
    1. Save the registers on the process's user-mode stack (calculate the address)
    2. Call the scheduler to determine the next process
    3. Load the registers from the new process's user-mode stack
  3. Switch to the new process
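The steps above can be modeled as a toy sketch (all names and register values are illustrative, not from the exam):

```python
# Toy context switch: save the current registers on the process's stack,
# ask the scheduler for the next process, load that process's saved registers.

def timer_interrupt(cpu_registers, saved_stacks, current, pick_next):
    saved_stacks[current] = dict(cpu_registers)      # 1. save registers
    next_proc = pick_next(saved_stacks, current)     # 2. run the scheduler
    return next_proc, dict(saved_stacks[next_proc])  # 3. load its registers

stacks = {'A': {}, 'B': {'pc': 0x40, 'sp': 0x7F00}}
round_robin = lambda stacks, current: 'B' if current == 'A' else 'A'

proc, regs = timer_interrupt({'pc': 0x10, 'sp': 0x7E00}, stacks, 'A', round_robin)
print(proc, regs)
```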

Question 8.3

The answers wouldn't be modified at all. The only difference is the scheduler.

Question 8.4

Advantages of pure user-level threads:

  1. Cheap to create and destroy
  2. Fast to context switch

Advantages of kernel threads:

  1. Easier to schedule
  2. Nicely handles blocking (a blocking system call does not block the whole process)

9 Scheduling

Question 9.1

  1. Fairness
  2. Power consumption
  3. Policy (Responsiveness, different kind of real-time scheduling )
  4. Balance (between io-bound, cpu-bound, ...)

Question 9.2

Scheduling overhead:

  1. 1/2 Context switch (A to kernel)
  2. Scheduling
  3. 1/2 Context switch (kernel to B)

Response time: time between an action and its corresponding reaction.

Tradeoff between response time and scheduling overhead:

If we never do a context switch, scheduling overhead is 0, but response time may be unbounded for some processes.

If we optimize response time, scheduling overhead grows.

Question 9.3

Unix has a set of priority levels, and each level can hold a number of threads. The scheduler executes the threads with higher priority first. If a thread uses its quantum entirely, it loses priority; the longer a process has not been scheduled, the more priority it gains. Within the same priority level, processes are scheduled by another scheduling algorithm (RR, FCFS).

This prefers I/O-bound over CPU-bound processes.
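A rough sketch of this mechanism (not the real Unix scheduler; names, priorities and quantum behaviour are ours):

```python
# Highest priority runs; burning the full quantum lowers priority, waiting
# raises it, so I/O-bound processes tend to be preferred.

def schedule(priorities):
    return max(priorities, key=priorities.get)

def tick(priorities, ran, used_full_quantum):
    for name in priorities:
        if name == ran and used_full_quantum:
            priorities[name] -= 1  # CPU hog loses priority
        elif name != ran:
            priorities[name] += 1  # waiting process gains priority

priorities = {'io_bound': 5, 'cpu_bound': 5}
order = []
for _ in range(4):
    p = schedule(priorities)
    order.append(p)
    # assume the I/O-bound process always blocks before its quantum expires
    tick(priorities, p, used_full_quantum=(p == 'cpu_bound'))
print(order)
```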

10 Virtual Memory and Demand Paging

Question 10.1

Belady's anomaly: adding more frames can lead to more (instead of fewer or equally many) page faults

Algorithm: FIFO

Question 10.2

Reference string: 1 2 4 3 2 3 4 6 7 1 2 4 7  (FIFO, 4 frames)

ref      1  2  4  3  2  3  4  6  7  1  2  4  7
frame 1  1  1  1  1  1  1  1  6  6  6  6  4  4
frame 2     2  2  2  2  2  2  2  7  7  7  7  7
frame 3        4  4  4  4  4  4  4  1  1  1  1
frame 4           3  3  3  3  3  3  3  2  2  2
fault    *  *  *  *           *  *  *  *  *

Result: 9 page faults (marked *)
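The simulation as code (our own sketch, assuming FIFO replacement as in Question 10.1):

```python
from collections import deque

# FIFO page replacement: on a fault, evict the page resident the longest.

def fifo_faults(refs, num_frames):
    frames = deque()
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()  # evict the oldest page
            frames.append(page)
    return faults

print(fifo_faults([1, 2, 4, 3, 2, 3, 4, 6, 7, 1, 2, 4, 7], 4))  # -> 9
```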

Question 10.3

Working set: the set of pages a process has referenced in the last k instructions (or during a time interval)

Useful to minimize thrashing and to optimize the degree of parallelism
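As a sketch of the definition (helper name ours; here k counts references rather than instructions):

```python
# Working set = distinct pages among the last k references.

def working_set(refs, k):
    return set(refs[-k:])

print(working_set([1, 2, 4, 3, 2, 3, 4], k=4))  # -> {2, 3, 4}
```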

11 Input/Output

Question 11.1

  1. Interrupt handler
  2. Driver thread
  3. User process

Problem that it attempts to solve: Mediation between interrupt-driven hardware and blocking user processes.

Question 11.2

  1. User process blocks (System call)
  2. Wait
  3. NIC transfers packet to kernel memory via DMA, usually using a ring buffer
  4. NIC sends interrupt to OS
    1. Mask interrupts
    2. Check MAC address (if not ours => discard packet)
    3. Check IP address (if not ours => discard packet)
    4. Unblock driver thread
  5. Driver thread
    1. Packet processing
      1. Demultiplex packets, based on (destination IP, UDP destination port)
    2. Unblock processes
    3. Unmask interrupts
  6. Data is copied from kernel memory to user memory
  7. Blocking syscall returns