Suggested Solutions Operating Systems And Computer Networks FS14

Note: this is only a solution proposal created by students, which may contain errors.

Problem 1: Short Questions

1)

 7  Application Layer
 
 6  Presentation Layer
 
 5  Session Layer
 
 4  Transport Layer 
 
 3  Network Layer
 
 2  Data Link Layer
 
 1  Physical Layer

2)

To make sure that the sent packets have actually arrived.

Stop and Wait: after each packet sent, wait for an acknowledgement and only then send the next packet. Uses one bit as sequence number.

Sliding Window: generalisation of Stop-and-Wait: uses n bits as sequence number, which makes several outstanding packets distinguishable and allows them to be in flight at the same time.

Go-Back-N: wait for an acknowledgement only after N packets. On an error (no acknowledgement after a certain time) the last N packets are sent again (i.e. the whole window starting from the first unacknowledged packet).

Selective Repeat: on an error, only those packets for which no acknowledgement was received are sent again.
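
The following is a minimal Python sketch (not part of the exam; the lossy channel and frame count are made up) illustrating the Go-Back-N idea from above: the sender keeps a window of unacknowledged frames and, once a frame is lost, retransmits everything from the first unacknowledged frame onwards.

 # Minimal Go-Back-N sender simulation (illustrative sketch only).
 # The fake channel drops frame 2 on its first transmission; the sender
 # then goes back and retransmits everything from frame 2 onwards.

 WINDOW = 3
 NUM_FRAMES = 5
 dropped_once = set()

 def channel_delivers(seq):
     """Pretend channel: loses frame 2 exactly once."""
     if seq == 2 and seq not in dropped_once:
         dropped_once.add(seq)
         return False
     return True

 base = 0                                  # oldest unacknowledged frame
 while base < NUM_FRAMES:
     window = range(base, min(base + WINDOW, NUM_FRAMES))
     print("sending", list(window))
     expected = base                       # receiver accepts only in-order frames
     for seq in window:
         if seq == expected and channel_delivers(seq):
             expected += 1
     if expected == base:
         print("timeout, no progress, resending the whole window")
     base = expected                       # cumulative ACK up to expected - 1

 print("all", NUM_FRAMES, "frames acknowledged")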

3)

A hub operates on the physical layer, whereas a switch operates on the link layer (partly also on the network layer). (See slide 91) [1]

Both connect network segments, but a hub broadcasts a received frame to all ports, so bandwidth is consumed on all ports. A switch, on the other hand, remembers the MAC addresses of the devices attached to each port and can therefore forward a received frame to the correct port only. [2]
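
A small Python sketch of this MAC-learning behaviour (illustrative only; the MAC addresses and port numbers are made up). An unknown destination is flooded like on a hub; once an address has been learned from a frame's source field, later frames are forwarded on a single port:

 # Sketch of MAC learning in a switch: flood until the destination is known.

 mac_table = {}                                   # MAC address -> port number

 def handle_frame(src_mac, dst_mac, in_port, num_ports=4):
     mac_table[src_mac] = in_port                 # learn where src is attached
     if dst_mac in mac_table:
         print(f"forward {src_mac} -> {dst_mac} on port {mac_table[dst_mac]}")
     else:                                        # unknown: behave like a hub
         out = [p for p in range(num_ports) if p != in_port]
         print(f"flood {src_mac} -> {dst_mac} on ports {out}")

 handle_frame("aa:aa", "bb:bb", in_port=1)        # bb:bb unknown -> flood
 handle_frame("bb:bb", "aa:aa", in_port=2)        # aa:aa known   -> port 1 only
 handle_frame("aa:aa", "bb:bb", in_port=1)        # bb:bb known   -> port 2 only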

4)

HIDDEN TERMINAL

           xxxxxxxxxxxxxxxx
 +---------x----+         x         x = Range of C
 | A ====> x  B | <==== C x         - = Range of A
 +---------x----+         x
           xxxxxxxxxxxxxxxx

Nodes A and C are hidden terminals when sending to B. They can't hear each other yet they collide. [3]

EXPOSED TERMINAL

           xxxxxxxxxxxxxxxxxxxxxxxxxxxx
 +---------x--------------+           x         x = Range of C
 | A <==== x  B         C |  ====> D  x         - = Range of B
 +---------x--------------+           x
           xxxxxxxxxxxxxxxxxxxxxxxxxxxx

Nodes B and C are exposed terminals when sending to A and D. They can hear each other yet they don't collide at receivers A and D.[4]

5)

  1. Root server (.) (stored by all name servers: a.root-servers.net to m.root-servers.net; returns the IP address of the com server)
  2. com server (returns the IP address for AAA.com)


6)

 TCP                                               UDP
 Connections                                       Datagrams
 Bytes are delivered once, reliably and in order   Messages may be lost, reordered, duplicated
 Arbitrary length content                          Limited message size
 Flow control matches sender to receiver           Can send regardless of receiver state
 Congestion control matches sender to network      Can send regardless of network state
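
As a small illustration of the first rows of the table (a sketch, not part of the exam answer; 127.0.0.1:9999 is an arbitrary, presumably unused address): UDP can send a datagram regardless of the receiver's state, while TCP refuses to transfer any byte before a connection is established.

 import socket

 # UDP: connectionless datagrams, can be sent regardless of receiver state
 # (the datagram is simply lost if nobody listens on that port).
 udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 udp.sendto(b"hello", ("127.0.0.1", 9999))        # no connection needed
 udp.close()

 # TCP: connection-oriented byte stream, sending needs an established connection.
 tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
 try:
     tcp.connect(("127.0.0.1", 9999))             # fails if no server is listening
     tcp.sendall(b"hello")
 except ConnectionRefusedError:
     print("TCP needs a listening peer before any byte can be sent")
 finally:
     tcp.close()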

Problem 2: Layer 2

1)

(a)

Only … is removed. (Spanning tree algorithm: slide 102 [5])

(b)

 Transmission    B1         B2         B3         B4
 A sends to C    A on …     A on …     A on …     A on La
 C sends to A    C on …     C on Lc    C on …
 D sends to C    D on Ld    D on …     D on …
Edit:

For "A sends to C": isn't link12 missing at B1?

Reasoning: B2 performs a broadcast that reaches both B3 and B1.

Edit2:

Same problem with `D sends to C`: shouldn't B4 actually contain 'D on link24', since B2 performs a broadcast?

Answer: No. B2 does not perform a broadcast, because it has learned from "C sends to A" that the next hop from B2 towards C is B3. Therefore a broadcast is no longer necessary.

Edit3:

In my view, the problem statement does not make clear whether the previous results count as "learned". It can probably be solved either way.

Answer to Edit3: If the bridges could not learn, the entries would be pointless.


Conclusion: the solution is correct.

2)

a

The routing tables would be huge. For IP addresses, routers store IP prefixes to route packets. For MAC addresses there is no such thing, so a "router" would always have to store all possible MAC addresses in order to route a packet. The reason MAC addresses cannot be grouped into prefixes is that each device has a globally unique address, and that device can be used anywhere in the world. So it is very unlikely that all devices in a "range" (for some definition of range) sit behind the same router.

b

MAC learning is still used. When a new device is connected to the network, its neighbours have to learn the new MAC address in order to forward packets efficiently. When a packet is sent to a MAC address that is not in the forwarding table, it is flooded on all ports.

DHCP is for assigning IP addresses to hosts and ARP is for resolving IP addresses to their MAC addresses. So neither is used anymore if there are no IP addresses.

c

A device could move from one AS to another. This could cause its public IP to change, for example when moving from one private network to another. MAC addresses are fixed and do not change.

Problem 3: Layer 3

1)

(a) R5 (b) R5 (c) R3 (d) R4

 Destination Network   Next Hop   Range Start     Range End
 139.179.200.0/23      R1         139.179.200.0   139.179.201.255
 139.179.128.0/18      R2         139.179.128.0   139.179.191.255
 139.179.120.0/20      R3         139.179.112.0   139.179.127.255
 139.179.220.0/21      R4         139.179.216.0   139.179.223.255
 139.179.0.0/16        R5         139.179.0.0     139.179.255.255
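
The range columns can be double-checked with Python's ipaddress module (a verification sketch, not part of the exam answer; strict=False accepts the prefixes exactly as written above even where host bits are set):

 import ipaddress

 routes = {
     "139.179.200.0/23": "R1",
     "139.179.128.0/18": "R2",
     "139.179.120.0/20": "R3",
     "139.179.220.0/21": "R4",
     "139.179.0.0/16":   "R5",
 }
 for prefix, nexthop in routes.items():
     net = ipaddress.ip_network(prefix, strict=False)
     # first and last address of the prefix = Range Start / Range End
     print(f"{prefix:>18} via {nexthop}: {net.network_address} - {net.broadcast_address}")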

2)

 Prefix           Outgoing Interface
 128.128.0.0/9    eth0
 128.160.0.0/12   eth1
 default          eth2

The entry for 128.192.0.0/10 is not necessary, because its whole range (128.192.0.0 - 128.255.255.255) lies within the range of 128.128.0.0/9 (128.128.0.0 - 128.255.255.255), while the addresses that go through eth1 (128.160.0.0 - 128.191.255.255) do not lie within the /10 range.

The range 128.176.0.0/12 also goes through eth0, so we can exclude it from 128.160.0.0/11 by restricting the range that goes through eth1 to 128.160.0.0/12.
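
A small longest-prefix-match sketch over the three remaining entries (illustrative only; the test addresses are made up):

 import ipaddress

 # Forwarding table from above; the longest matching prefix decides the interface.
 table = [
     (ipaddress.ip_network("128.128.0.0/9"),  "eth0"),
     (ipaddress.ip_network("128.160.0.0/12"), "eth1"),
 ]

 def lookup(addr):
     ip = ipaddress.ip_address(addr)
     matches = [(net, ifc) for net, ifc in table if ip in net]
     if not matches:
         return "eth2"                                  # default route
     return max(matches, key=lambda m: m[0].prefixlen)[1]

 print(lookup("128.200.1.1"))   # only in /9             -> eth0
 print(lookup("128.165.1.1"))   # in /9 and /12          -> eth1 (longer prefix wins)
 print(lookup("128.176.1.1"))   # in /9, not in /12      -> eth0
 print(lookup("10.0.0.1"))      # matches nothing        -> eth2 (default)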

3)

(a)

Prefix: 13.13.0.0/16

Path: {AS5,AS1}

SentTo: AS4, AS7, AS6

(b)

Prefix: 13.13.0.0/16

Path: {AS7,AS5,AS1}

SentTo: AS6, AS8

(c)

{AS8,AS7,AS5,AS1}, {AS8,AS4,AS5,AS1}

{AS8,AS7,AS6,AS1} is not a valid path, because AS7 does not forward the route learned via AS6: it wants to earn money on link 5-7 and does not want to hand the traffic to AS6 for free over the peer-to-peer link.

(d)

{AS8,AS4,AS5,AS1}, because AS4 is its customer and AS8 thus earns something, whereas it would earn nothing via the other option, since it has a peer-to-peer connection to AS7.

(e)

No. AS7 would act as a transit AS although it is connected with peering links to both AS8 and AS6.

Problem 4: Congestion Control

1) a) Green

b) Red

SS14 4.jpg


2) Suggestion: SS14 42.jpg => third duplicate ACK --> resend the packet that was lost earlier: packet with seqno 3600

A side remark: shouldn't it be four times the same ACK number before fast retransmit, which is not the case here? No: "three duplicate ACKs" means four ACKs with the same number in total, the original ACK plus three duplicates.
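
A tiny sketch of that counting rule (the ACK numbers are made up to loosely match the 3600 example; not from the exam):

 # Fast retransmit: retransmit after the third duplicate ACK, i.e. after
 # four ACKs carrying the same ACK number in total.

 def fast_retransmit(acks):
     last, dup = None, 0
     for ack in acks:
         if ack == last:
             dup += 1
             if dup == 3:
                 print(f"3rd duplicate ACK for {ack}: retransmit segment {ack}")
                 dup = 0
         else:
             last, dup = ack, 0

 # original ACK 3600 plus three duplicates -> retransmit the segment at 3600
 fast_retransmit([2400, 3600, 3600, 3600, 3600])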

Problem 5: P2P Communication and NATs

1

When the client behind the NAT opens a connection to the server 192.33.93.143:10000, the NAT generates a temporary port number for that connection. The server sees a connection from 77.56.10.10 with that generated port number. The NAT makes sure that all responses sent to that generated port are redirected to the client (which has assigned its own port to that connection). So the NAT keeps a translation table of the form CLIENT_IP:CLIENT_PORT to PUBLIC_PORT.
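
A sketch of such a translation table (made-up addresses and port range, illustrative only):

 import itertools

 # NAT translation table: (client_ip, client_port) <-> public_port
 public_ports = itertools.count(40000)        # made-up pool of temporary ports
 out_map = {}                                 # (client_ip, client_port) -> public_port
 in_map  = {}                                 # public_port -> (client_ip, client_port)

 def outgoing(client_ip, client_port):
     key = (client_ip, client_port)
     if key not in out_map:
         port = next(public_ports)            # allocate a temporary public port
         out_map[key] = port
         in_map[port] = key
     return out_map[key]                      # source port the server will see

 def incoming(public_port):
     return in_map.get(public_port)           # None -> no mapping, packet dropped

 p = outgoing("10.0.0.5", 51515)              # client behind the NAT connects out
 print("server sees 77.56.10.10 port", p)
 print("reply on that port goes to", incoming(p))
 print("unsolicited packet:", incoming(12345))   # dropped (see sub-question 2)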

2

The translation tables of both NATs are (without configuration) initially empty. So when A tries to send a message to B's public IP, B's NAT will discard that message since it doesn't know where to redirect it. So A and B cannot communicate with each other.

3

This technique is called "hole punching". Both A and B establish a connection with the rendezvous server. So A's and B's NATs will each have an entry for that connection, consisting of the external port number and the private IP and port number. The rendezvous server obviously knows both A's and B's public IP and port numbers. The rendezvous server now provides A with B's public port number and B with A's public port number. A and B can now both send messages to that port, and the NATs will properly redirect those messages.
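
A minimal UDP hole-punching sketch (assuming the rendezvous step has already told this peer the other peer's public ip:port; the addresses and ports are placeholders, and real implementations add keep-alives and retries):

 import socket

 # After the rendezvous exchange, each peer sends datagrams to the other's
 # public endpoint. The outgoing packet installs/refreshes the mapping in the
 # local NAT ("punches the hole"), so the peer's packets are forwarded inwards.

 PEER = ("203.0.113.7", 40001)      # placeholder: other peer's public endpoint

 sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
 sock.bind(("0.0.0.0", 50000))      # keep the same local port the rendezvous saw
 sock.settimeout(1.0)

 for _ in range(5):                 # keep trying; early packets may be dropped
     sock.sendto(b"hello", PEER)    # this creates the NAT mapping towards PEER
     try:
         data, addr = sock.recvfrom(1500)
         print("got", data, "from", addr)
         break
     except socket.timeout:
         pass                       # the peer's hole may not be open yet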

Problem 6: Threads and processes

Q: What is the difference between threads and processes?

A: A process is the execution of a program with restricted rights. It consists of a virtual processor, the program text, program data and OS "stuff" like open file descriptors, sockets, etc. A thread can be implemented within a process, as we did in the exercise sessions, i.e. a thread belongs to a process. One of the main differences is that every process has its own protected address space, while threads of the same process share their address space (in particular the heap).


Q: Name three methods to communicate between two processes:

A: Pipes, named pipes, file system, signals, shared memory...
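
For illustration, a minimal pipe between a parent and a child process (a Unix-only Python sketch, not exam material):

 import os

 # Simplest pipe-based IPC: the parent writes, the forked child reads.
 r, w = os.pipe()
 pid = os.fork()

 if pid == 0:                       # child process
     os.close(w)
     msg = os.read(r, 100)
     print("child received:", msg.decode())
     os._exit(0)
 else:                              # parent process
     os.close(r)
     os.write(w, b"hello from the parent")
     os.close(w)
     os.waitpid(pid, 0)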


Q: Explain why copy-on-write is a useful optimization in the context of fork() system call.

A: fork() creates a copy of a process. But if it actually copied all the memory (program text, data, ...), it would take far too long. Instead, copy-on-write is used: all pages are marked read-only. If one of the two processes tries to modify one of its pages, a page fault occurs. The handler can then copy the page, make it writable and restart the instruction. Thus copy-on-write is a performance optimization that copies memory only when it is really needed.
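
A small Unix-only sketch of the semantics that copy-on-write has to preserve (whether the kernel copies pages eagerly or lazily is invisible to the program; illustrative, not exam material):

 import os

 # fork() gives the child a logically private copy of the parent's memory.
 # With copy-on-write the kernel shares the physical pages read-only and
 # copies a page only when one side writes to it.

 data = bytearray(b"AAAA")

 pid = os.fork()
 if pid == 0:                            # child: the write triggers the (lazy) copy
     data[0:4] = b"BBBB"
     print("child sees :", bytes(data))
     os._exit(0)
 else:
     os.waitpid(pid, 0)
     print("parent sees:", bytes(data))  # still b'AAAA', unaffected by the child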


Process Tree:

The number on top is the value of cnt just after the process was created.

                                           0             
                                         +----+          
                                         | p0 |          
                                +--------+-+--+------+   
                                |          |         |   
                                |          |         |   
                               1|          |2        |3  
                             +----+      +----+    +----+
                             | p1 |      | p5 |    | p7 |
                             +----+      +----+    +----+
                             |    |           |          
                        +----+    +--+        |          
                        |            |        +--+       
                       2|           3|           |3      
                     +----+       +----+         +----+  
                +----+ p2 |       | p3 |         | p6 |  
                |    +----+       +----+         +----+  
              3 |                                        
              +-+--+                                     
              | p4 |                                     
              +----+

Problem 7: Memory Management

1)

  • Copy-on-write / sharing: reuse the same physical memory for multiple applications as long as it is only read
  • Paging: memory can be paged out to disk (e.g. Linux swap)
  • Devices: virtual addresses can be mapped to other hardware (memory-mapped devices)

2)

 PDBR                       (flags column ignored)
   Entry 0   0xC0FFE
   Entry 1   0xBEEF1
   Entry 2   0xFEFEF
   Entry 3   0x01337
   Entry 4   0xDEAD0
   ...       ...

 0x01337                    (flags column ignored)
   Entry 0   0xABCD0
   Entry 1   0xABCD1
   Entry 2   0xABCD2
   Entry 3   0xF00DD
   Entry 4   0xABCD4
   ...       ...

 0xFEFEF000                 (flags column ignored)
   Entry 0   0xCA151
   Entry 1   0xABCDE
   Entry 2   0xCC151
   Entry 3   0xCD151
   Entry 4   0xCD151
   ...       ...

3)

Both page tables map the address 0xfefefefe to the same physical address. The OS can write-protect the page containing the shared code. If a process then tries to modify the code, the CPU traps and the OS can take countermeasures against the process (kill it!).

4) Page Faults

LRU

 Page accessed       1  2  1  3  4  3  2  5  3  1  1  3  7  8  2  3  3  2
 Page fault          X  X     X  X        X     X        X  X  X
 Pages in memory     1  2     3  4        5     1        7  8  2
                        1     1  3        2     5        3  7  8
                              2  1        3     3        1  3  7
                                 2        4     2        5  1  3

Optimal

 Page accessed       1  2  1  3  4  3  2  5  3  1  1  3  7  8  2  3  3  2
 Page fault          X  X     X  X        X              X  X
 Pages in memory     1  2     3  4        5              7  8
                        1     2  3        3              5  7
                              1  2        2              3  3
                                 1        1              2  2
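
The fault counts of both tables can be reproduced with a small simulator (a sketch assuming 4 page frames, as in the tables above):

 # Page-replacement sketch reproducing the fault counts above (4 frames).

 REFS = [1, 2, 1, 3, 4, 3, 2, 5, 3, 1, 1, 3, 7, 8, 2, 3, 3, 2]
 FRAMES = 4

 def lru(refs):
     mem, faults = [], 0                       # mem[0] is the most recently used page
     for p in refs:
         if p in mem:
             mem.remove(p)                     # hit: just refresh recency
         else:
             faults += 1
             if len(mem) == FRAMES:
                 mem.pop()                     # evict the least recently used page
         mem.insert(0, p)
     return faults

 def opt(refs):
     mem, faults = set(), 0
     for i, p in enumerate(refs):
         if p in mem:
             continue
         faults += 1
         if len(mem) == FRAMES:
             # evict the page whose next use is farthest in the future (or never)
             def next_use(q):
                 rest = refs[i + 1:]
                 return rest.index(q) if q in rest else len(refs)
             mem.remove(max(mem, key=next_use))
         mem.add(p)
     return faults

 print("LRU faults:", lru(REFS))               # 9, as in the LRU table
 print("OPT faults:", opt(REFS))               # 7, as in the optimal table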

Problem 8: Scheduling

1)

Hardware timer.

2)

(Gantt chart from the exam showing tasks 1-4 on a time axis 0-10; not reproduced here)

Shortest-job-first, because at time 20 ms Task 4 is executed, which has a shorter execution time than Task 3, while Task 3 has the earlier deadline. This rules out earliest-deadline-first. Round robin can also be ruled out, because at time 10 ms Task 1 is still being executed instead of switching to Task 4, which is also already ready.

(Gantt chart from the exam showing tasks 1-4 on a time axis 0-10; not reproduced here)

Round robin, because at time 0 Task 4 is executed, which has both a longer execution time than Task 1 (which is also already ready) and a later deadline than Task 1; hence the schedule can be neither shortest-job-first nor earliest-deadline-first.

(Gantt chart from the exam showing tasks 1-4 on a time axis 0-10; not reproduced here)

Earliest-deadline-first, because at time 20 ms Task 3 is executed, which has an earlier deadline than Task 4 (which would also be ready) but the longer execution time; this rules out shortest-job-first. Round robin can also be ruled out, because at time 10 ms Task 1 is still being executed instead of switching to Task 4, which is also already ready.

3)

Deadline - Entry Time - Execution Time

Problem 9: File Systems

a

  • Compute the number of the block: block = floor(offset / block size)
  • Look up the address of that block in the inode (potentially following the indirect blocks)
  • Access the byte at offset (offset mod block size) relative to the block address found above to get the right byte

b

  • Compute the number of the block: block = floor(offset / block size)
  • Follow the next-pointer in the FAT that many times, starting from the file's first block
  • Access the byte at offset (offset mod block size) relative to the start of that block to get the right byte
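
A sketch of the two computations (an assumed block size of 4096 bytes and a toy FAT chain; not from the exam). The same offset split is used for FFS, only the block address then comes from the inode instead of the FAT chain:

 BLOCK_SIZE = 4096                       # assumed block size for this sketch

 def locate(offset):
     """Split a byte offset into (logical block number, offset within block)."""
     return offset // BLOCK_SIZE, offset % BLOCK_SIZE

 def fat_block(fat, first_block, offset):
     """FAT: follow the next-pointer chain block_no times from the first block."""
     block_no, in_block = locate(offset)
     blk = first_block
     for _ in range(block_no):           # O(n) pointer chasing through the FAT
         blk = fat[blk]
     return blk, in_block

 # toy FAT: the file starts at block 7, chain 7 -> 3 -> 9
 fat = {7: 3, 3: 9, 9: None}
 print(locate(9000))                     # (2, 808): third block, byte 808 within it
 print(fat_block(fat, first_block=7, offset=9000))   # (9, 808)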

c

  • FFS computes the block address in (essentially) constant time, following at most a few indirections
  • FAT allows for arbitrarily sized files (of course smaller than the disk capacity) (ahm, really? See discussion)

d

  • Internal fragmentation: a file uses only one byte of a huge block that is exclusively allocated to that file.
  • External fragmentation: assume a file system which stores each file in contiguous blocks on the disk. If we add files until the disk is full and then remove every second file, half of the disk is free again, but there is no space to store a file of half the disk's size, since there is no contiguous free region of that size.

Problem 10: Virtual Machine Monitors

a

There are 17 instructions (on x86) that can be executed in both user and kernel mode of the processor but behave differently; they do not trap in user mode. Therefore a VMM cannot easily tell whether the user-mode instruction should simply be executed or the kernel-mode behaviour should be emulated.

b

  • Emulate all kernel code
  • Replace evil instructions by explicit trap instructions when building the kernel (paravirtualisation)
  • Dynamically scan newly loaded kernel code pages for evil instructions and replace them (binary rewriting)
  • Hardware support

c

  • The VMM installs a balloon driver in the guest kernel.
  • The VMM demands memory from the balloon driver (upcall).
  • The balloon driver demands memory from the guest kernel.
  • The kernel provides the address of the allocated block.
  • The balloon driver returns the address of the allocated block to the VMM (hypercall).

d

The guest OS tries to modify the PTBR, which causes a trap into hypervisor mode. The hypervisor finds out which process has been dispatched and loads the corresponding shadow page table of that process. Then it performs the changes requested by the guest OS and returns control to the guest.

The guest OS does a write to its page table which causes a trap into hypervisor mode. The hypervisor marks the page as paged out in the shadow page table as well and emulates the page table modification.

The guest OS tries to read its page table. This causes a trap and the hypervisor looks up the virtual address in the shadow page table. It provides the value contained at the physical address to the application in the guest and returns control.