Suggested Solutions: Operating Systems and Computer Networks FS17

From VISki

Part 1: Operating Systems

Assignment 1: Processes


Name a system call which is used for this. Answer: fork(). How can each of the two processes determine whether it is the parent or the child? fork() returns in both the parent and the child. In the parent process, fork() returns the PID (process ID) of the child, or -1 on error. In the child process, fork() returns 0.
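A minimal sketch of this behaviour in Python, whose `os.fork()` is a thin wrapper around the C system call (the exit code 42 is just an arbitrary marker):

```python
import os

def spawn_and_reap():
    """Fork once; the child exits with a marker code, the parent reaps it."""
    pid = os.fork()
    if pid == 0:
        # Child: fork() returned 0.
        os._exit(42)
    # Parent: fork() returned the child's PID (a positive number).
    reaped, status = os.waitpid(pid, 0)
    assert reaped == pid
    return os.WEXITSTATUS(status)

print(spawn_and_reap())  # prints 42
```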


[X] True [] False:
[] True [X] False: Children with terminated parents are orphans, which will get adopted by init (which has PID = 1).
[X] True [] False: see process tree


Messages 1 and 3 will be printed. In the parent process, we print "Message 3". In the child process, we execute the execlp command (one variant of exec()). It runs echo "Message 1" and replaces the current process image, so execution never reaches the line where "Message 2" is printed.
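The replacement behaviour can be sketched in Python (`os.execlp` mirrors the C call; the `true` program is assumed to be on the PATH): if exec succeeds, the line after it never runs, which is exactly why "Message 2" is never printed.

```python
import os

def exec_replaces_image():
    """Fork a child that replaces itself with the `true` program."""
    pid = os.fork()
    if pid == 0:
        os.execlp("true", "true")  # replaces the child's process image
        os._exit(9)                # unreachable if execlp succeeded
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)

print(exec_replaces_image())  # prints 0 (true's exit code), never 9
```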

Assignment 2: Scheduling


Scheduling jobs:
00: 2
10: 1
20: 4
30: 3
40: 2 (done)
50: 5
60: 1 (done)
70: 4 (done)
80: 3
90: 5 (done)
100: 3 (done)

Wait time for each job:
1) 60 ms 2) 50 ms 3) 80 ms 4) 60 ms 5) 50 ms
Note: Wait time is counted from the moment a job becomes available (entry time) until its completion. In this case no two jobs entered at the same time, so no time was "wasted".

Average waiting time:
300 ms / 5 = 60 ms

Turnaround time for each job:
According to the Computer Systems script from HS18, wait time is the same as turnaround time, so see answer above.
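The numbers above can be reproduced from the schedule. The exam's arrival times are not reproduced here, so the ones below (job 2 at 0 ms, job 1 at 10 ms, job 4 at 20 ms, job 3 at 30 ms, job 5 at 50 ms) are inferred from the stated wait times:

```python
# Gantt chart from above: (start time in ms, job), 10 ms quantum.
slots = [(0, 2), (10, 1), (20, 4), (30, 3), (40, 2), (50, 5),
         (60, 1), (70, 4), (80, 3), (90, 5), (100, 3)]
arrival = {2: 0, 1: 10, 4: 20, 3: 30, 5: 50}   # assumed arrival times

# A job completes 10 ms after the start of its last slot; the dict
# comprehension keeps only the last slot of each job.
completion = {job: start + 10 for start, job in slots}

turnaround = {job: completion[job] - arrival[job] for job in arrival}
average = sum(turnaround.values()) / len(turnaround)
print(turnaround, average)  # waits 60/50/80/60/50 ms, average 60.0 ms
```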


Advantage of SRTF:
If a long job has already been running for some time, it will not get unnecessarily preempted by a job with a shorter overall running time, but longer remaining running time compared to the already dispatched job.

Disadvantage of SRTF:
It is still possible for short jobs to get in the way of long jobs. SRTF does not solve the problems of SJF, it only mitigates them.


Here we talk about scheduling sequential programs on a multiprocessor machine.

Instead of having a single system-wide queue of jobs, out of which cores pick processes to run, affinity-based scheduling tries to keep each job on one core as much as possible. Each core has its own run queue, and jobs are periodically re-balanced between the individual queues. This is much more efficient: there is no need to take a global lock on a system-wide queue whenever a core reschedules, processes are not moved around arbitrarily, and the reduced migration improves cache locality. However, it is no longer work-conserving: a core with an empty run queue may sit idle while runnable jobs wait in other queues. To fix this, idle cores can steal work from other cores' queues (work stealing).
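A toy sketch of the idea (per-core deques plus stealing; the names are invented for illustration):

```python
from collections import deque

class Core:
    def __init__(self):
        self.runq = deque()

def pick_next(core, all_cores):
    """Prefer the local queue; steal from another core if it is empty."""
    if core.runq:
        return core.runq.popleft()           # local job: good cache locality
    for other in all_cores:                  # work stealing keeps us busy
        if other is not core and other.runq:
            return other.runq.pop()          # steal from the far end
    return None                              # truly nothing runnable

cores = [Core(), Core()]
cores[0].runq.extend(["job_a", "job_b"])
assert pick_next(cores[0], cores) == "job_a"   # local job first
assert pick_next(cores[1], cores) == "job_b"   # stolen, core 1 is not idle
```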


Interactive workloads:
Use round-robin scheduling. Interactive workloads consist of long-running jobs that are blocked most of the time, waiting for some event (I/O). RR gives every job the same share of CPU time and treats all tasks equally, which keeps response times low.

Batch workloads:
Use preemptive SJF/SRTF. Batch workloads are often low-priority background tasks, so minimizing average turnaround time matters more than response time.

Real-time workloads:
Use earliest-deadline-first (EDF) scheduling. Real-time workloads must be prioritized by their deadlines; EDF always runs the job whose deadline is nearest and produces a feasible schedule whenever one exists (for total utilization up to 1).

Assignment 3: Memory Management


Accesses: 2 1 3 4 5 1 2 5 2 3 4 6 8 4 8 2 3 8
Page faults (optimal): X X X X X O O O O O X X X O O O X O
Page faults (LRU): X X X X X X X O O X X X X O O X X O
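The LRU row can be reproduced with a short simulation, assuming a resident set of 3 frames (the exam's frame count is not reproduced here):

```python
def lru_faults(accesses, frames):
    """Return the hit/fault pattern (O = hit, X = fault) for LRU."""
    cache = []                      # least recently used at the front
    pattern = []
    for page in accesses:
        if page in cache:
            cache.remove(page)      # hit: refresh recency
            pattern.append("O")
        else:
            if len(cache) == frames:
                cache.pop(0)        # evict the least recently used page
            pattern.append("X")
        cache.append(page)          # most recently used at the end
    return "".join(pattern)

accesses = [2, 1, 3, 4, 5, 1, 2, 5, 2, 3, 4, 6, 8, 4, 8, 2, 3, 8]
print(lru_faults(accesses, 3))  # XXXXXXXOOXXXXOOXXO
```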


Bélady's anomaly: increasing the cache size can reduce the hit rate.

Affected replacement strategy: FIFO.

Reference string 1 2 3 4 1 2 5 1 2 3 4 5
Cache size 3 will result in 9 page faults.
Cache size 4 will result in 10 page faults.
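These counts can be checked with a small FIFO simulation:

```python
def fifo_faults(accesses, frames):
    """Count page faults under FIFO replacement."""
    cache, faults = [], 0
    for page in accesses:
        if page not in cache:
            faults += 1
            if len(cache) == frames:
                cache.pop(0)        # evict the oldest resident page
            cache.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3), fifo_faults(ref, 4))  # 9 10
```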


Base + Limit:
+) Each program can be compiled to run at the same address (using relocation registers)
-) Code/data can't be shared between processes. Each process has its own single region of memory

Segmentation:
+) Fast, high locality
+) Easy sharing
-) External fragmentation

Paging:
+) Solves the external fragmentation introduced by segmentation
-) Less efficient than segmentation


Assignment 4: Filesystem/IO





Assignment 5: Virtual Machines


All of them.


[X] True [] False
[X] True [] False
[] True [X] False

Assignment 6: Sampler


Advantages: there is no need to copy the entire physical address space on fork(). New physical pages are allocated (and new mappings created) only when either process modifies a page. This is very efficient if either process makes only small changes to the address space. The costs are the page faults taken on first write.


Yes: priorities never change, so low-priority tasks can starve behind runnable high-priority tasks that never block. A solution is aging: the longer a process waits to be scheduled, the higher its priority gets. Priorities have to be reset periodically.
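A toy sketch of aging (task names and priority values are invented for illustration; higher numbers run first):

```python
def schedule_with_aging(tasks, rounds):
    """tasks: {name: base_priority}. Each round runs the task with the
    highest effective priority; every waiting task ages by 1 per round,
    and a task's age resets when it runs."""
    age = {name: 0 for name in tasks}
    order = []
    for _ in range(rounds):
        run = max(tasks, key=lambda t: tasks[t] + age[t])
        order.append(run)
        for t in tasks:
            age[t] += 1         # everyone ages ...
        age[run] = 0            # ... except the task that just ran
    return order

# Without aging, "low" would starve forever behind "high".
print(schedule_with_aging({"high": 10, "low": 1}, 12))
```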


Access Control Lists: each object has a list of principals that have rights over it, including which rights they have. The list can be stored with the object's/file's metadata and used to query and change the rights of any principal.

Capabilities: each principal holds a set of unforgeable tokens (capabilities), each naming an object and the rights the holder has over it. Presenting the capability is sufficient to perform the operation; no per-object lookup of the principal is needed.


Thrashing: in a demand-paging scenario, thrashing occurs when the working set (= the set of virtual pages a process references within some time interval) is larger than the available physical memory. Almost every memory access then causes a page fault and performance drops to near zero: almost all time is spent paging pages in and out of memory and none actually executing the program.

Part 2: Networks

Assignment 7: The Basics

1) ii. Data Link Layer
2) i.
3) iv. Most DNS root servers are reachable by IP anycast
4) i. connect()
5) i. ... in-sequence delivery, iv. ... a connection-oriented service
6) i. Source IP address, iv. Source port, v. IP checksum
7) ii. 23 ms

Explanations: 2) Since the host doesn't know anything except the gateway, it should just send everything there

Host  Switch  Host
  A --- S --- B

   |\   |    |  |
   |\\  |    |  |
   | \\ |    |  |
   |  \\|    | 10ms until first bit of packet 1 at S
   |   \|    | 1ms until packet 1 at S && first bit of packet 2 at S
   |    |\   |  |
   |    |\\  |  |
   |    | \\ |  |
   |    |  \\| 10ms until first bit of packet 1 at B
   |    |   \| 1ms until packet 1 at B && first bit of packet 2 at B
   |    |    | 1ms until packet 2 at B

total transmission time: 23 ms
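The 23 ms follow directly from the diagram: 1 ms transmission per packet, 10 ms propagation per link, and one store-and-forward switch between A and B:

```python
def total_latency(n_packets, n_links, trans_ms, prop_ms):
    """Store-and-forward: the first packet crosses every link in full;
    the remaining packets pipeline behind it, one transmission each."""
    first = n_links * (trans_ms + prop_ms)
    rest = (n_packets - 1) * trans_ms
    return first + rest

print(total_latency(2, 2, 1, 10))  # 23
```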

Assignment 8: Link Layer


 1) Bridge 4, port to LAN C
    Bridge 3, port to LAN E
2) B, C, D, E. Since all forwarding tables are initially empty, the frame is broadcast to the whole network.
3) A, D . After receiving a frame from F, all forwarding tables now contain an entry to properly forward a packet to F. (LAN B -> Bridge 1 -> LAN A -> Bridge 2 -> LAN D -> Bridge 3 -> LAN F -> Host F )
4) Keep track of what messages have been sent out of each switch and don't resend (based on message sequence number)


 1) F -> E 
    D -> E
    E -> F
Explanation: D, E, and F can send without interfering with A's packet at B. E and F do not see A's packet, so it is possible to send to them.

 2) E -> F
Explanation: F is the only host that does not see the packet from B. Since E does not interfere with A when sending, E can send to F.

Explanations: a)

 1) LAN C would be reached from Bridges 2 and 4 at the same time; 2 wins, so 4 closes its port to LAN C.
    LAN E is reached by Bridge 4 in step 2, while Bridge 3 only reaches it after 3 steps. Thus Bridge 4 gets LAN E and Bridge 3 closes its port.
2) Since the bridges don't know where Host A is, they flood the frame from Host F everywhere. As a side effect, Bridges 1, 2 and 3 learn the direction of Host F.
3) Since Bridge 4 doesn't know about F yet, it forwards the frame to Host E; Bridge 1, on the other hand, has a known path to F, so it sends it directly without flooding the full path.

Question: Is this correct? Or will, for example, in a) 3) Hosts A and D also get the frame, since they are in the LAN, while Host C won't because Bridge 2 knows to send the traffic to LAN D? I kind of assumed that each LAN is a switch that can directly route traffic from one bridge to another without touching the hosts, once it knows the path. What do you guys think?

Assignment 9: Network Layer


 1) A could announce to its neighbors that it has a route to D with cost 3 (A->C->D), while C has just dropped its link to D. C would accept this route and add it to its table as `Route to D, through A, cost 5 (3+2)`. It would then re-announce this to A, which would update its route through C to cost 7, at which point it prefers the route through B at cost 4. This gets re-announced to C, which updates its table to cost 6. Until this has converged, packets from A and C to D are dropped.
2) First set the weight to 3; this way A will prefer the route via B (cost 4) over C (cost 5). Once this has propagated, set the weight to 7; now C will also go through A (cost 6) instead of the direct link (cost 7). The administrator can then safely take the link C-D offline.
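The cost arithmetic in 1) can be traced numerically, assuming (per the text) an A-C link cost of 2 and an alternative A->D route via B of cost 4:

```python
# Trace of the temporary routing loop described above.
AC = 2                      # assumed cost of the A-C link
a_via_b = 4                 # assumed cost of A's route to D through B

a_to_d = 3                  # A's stale route A->C->D
c_to_d = a_to_d + AC        # C accepts A's announcement: cost 5
a_to_d = min(c_to_d + AC, a_via_b)   # A updates: min(7, 4) = 4, via B
c_to_d = a_to_d + AC        # C settles on cost 6 through A
print(a_to_d, c_to_d)       # 4 6
```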


| Prefix            | Interface |
| 0x (last byte)    | eth1      |
| 0101x (last byte) | eth0      |
| default           | eth2      |

Note: This is the same exercise as "SS2018 Computer Networks - Exercise 8 Assignment 2"

1)
| AS | Route |
| 1  | {1 D} |
| 2  |       |
| 3  |       |
| 4  | {4 D} |

2)
| AS | Route   |
| 1  | {1 D}   |
| 2  | {2 1 D} |
| 3  | {3 4 D} |
| 4  | {4 D}   |

3)
| AS | Route     |
| 1  | {1 D}     |
| 2  | {2 3 4 D} |
| 3  | {3 2 1 D} |
| 4  | {4 D}     |

4)
| AS | Route   |
| 1  | {1 D}   |
| 2  | {2 1 D} |
| 3  | {3 4 D} |
| 4  | {4 D}   |

5)
| AS | Route     |
| 1  | {1 D}     |
| 2  | {2 3 4 D} |
| 3  | {3 2 1 D} |
| 4  | {4 D}     |

 6) It oscillates between the states in `2)` and `3)`. This leads to unstable connections from AS 2 and AS 3 to D, as the route is updated continuously and packets may get dropped in between.

Explanations: b)
We only look at the last byte of the IP address: 64/3 = 010x -> eth0; 0/1 = 0x -> eth1; 0/2 = 00x -> eth1; 64/4 = 0100x -> eth1.

We see that 010x has to go to eth0 and all other 0x traffic to eth1. We further see that 0100x traffic has to go to eth1, leaving only 0101x for eth0.
Thus: all 0101x traffic goes to eth0, all other 0x traffic goes to eth1, and everything else to eth2.
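A sketch of the resulting longest-prefix match on the last byte (the example addresses are made up):

```python
def route(last_byte):
    """Longest-prefix match on the last byte of the destination IP."""
    bits = format(last_byte, "08b")
    if bits.startswith("0101"):   # 0101x (80-95) -> eth0
        return "eth0"
    if bits.startswith("0"):      # all other 0x (0-127) -> eth1
        return "eth1"
    return "eth2"                 # default route

assert route(85) == "eth0"   # 0101_0101
assert route(64) == "eth1"   # 0100_0000
assert route(200) == "eth2"  # 1100_1000
```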

Assignment 10: Transport Layer


   1s     1s     1s     1s
A ---- B ---- C ---- D ---- E
 1) RTT = A->E + E->A = 4*1s + 4*1s = 8s
2) A waits for E's ACK before sending the next packet => 1/8 packet/s (one packet takes 8 seconds)
3) Min. window size: 8; throughput: 1 packet/second
4) Recv buffer: [2, 4] (assuming the sender sends one packet every second)
Alternative (according to Telegram discussion): 4) Recv Buff: [2, 4, 6]
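The arithmetic behind 2) and 3), assuming each link forwards one packet per second:

```python
rtt = 8                     # seconds: 4 links there + 4 links back
link_rate = 1               # packets per second per link (assumed)

stop_and_wait = 1 / rtt             # one packet per RTT: 0.125 pkt/s
window = rtt * link_rate            # packets in flight to fill the pipe: 8
throughput = window / rtt           # with window 8: 1 pkt/s
print(stop_and_wait, window, throughput)
```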

b) (I'm not sure about this one, does anyone have a better idea?) Discussion

 1) It has received 3 duplicate ACKs for packet 21, indicating that 23, 24 and 25 have made it; thus it does a fast retransmit of packet 22.
2) We would be allowed to send 22, 23 and 24, but we know that 23, 24 and 25 have already made it (otherwise we wouldn't have gotten 3 duplicate ACKs), so we only need to send 22.
3) 26, 27, 28 and 29
4) Window size 1, packet no. 4 (however I think this is wrong, anyone?)
5) 14th; only packets 39 and 40