Advanced Systems Lab Sample Exam Questions HS10/11
- 1 Sample Exam HS2010/11
- 2 Solution to Sample Exam HS2010/11
- 2.1 Question 1
- 2.2 Question 2
- 2.3 Question 3
- 2.4 Question 4
- 2.5 Question 5
- 2.6 Question 6
- 2.7 Question 7
- 2.8 Question 8
- 2.9 Question 9
- 2.10 Question 10
- 2.11 Question 11
- 2.12 Question 12
- 2.13 Question 13
- 2.14 Question 14
- 2.15 Question 15
- 2.16 Question 16
- 2.17 Question 17
- 2.18 Question 18
- 2.19 Question 19
- 2.20 Question 20
- 2.21 Question 21
- 2.22 Question 22
- 2.23 Question 23
- 2.24 Question 24
- 2.25 Question 25
- 2.26 Question 26
- 2.27 Question 27
- 2.28 Question 28
Sample Exam HS2010/11
Solution to Sample Exam HS2010/11
The system is stable if λ < m * μ (i.e. the utilization ρ = λ / (m * μ) is below 1).
One can add fewer than 100 additional operations per second before the system becomes unstable.
λ = ρ * m * μ
μ = 1 / 0.01 = 100
λ = 1 * 2 * 100 = 200
max. request rate with a stable system = 199
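The arithmetic above can be sketched directly, using the numbers from the answer (m = 2 servers, 0.01 s mean service time per operation):

```python
# M/M/m stability bound with the numbers from the answer above
service_time = 0.01
m = 2

mu = 1 / service_time   # per-server service rate: 100 ops/s
capacity = m * mu       # stable while lambda < m * mu = 200 ops/s

print(capacity)           # 200.0
print(int(capacity) - 1)  # 199, the largest integer rate that stays stable
```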
For this case, the arrival rate is lower than the service rate, and thus any incoming requests can be serviced in time.
When the system is saturated, the rate of incoming requests equals the service rate. Random effects (interarrival and service times are assumed to be exponentially distributed) can then lead to unstable behaviour.
No: in a closed system, the request rate equals the throughput, because a client only issues the next request once it has received an answer. A drop like that therefore cannot be observed; if the response time grows, each client simply sends fewer requests per second.
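This closed-system coupling is the interactive response time law, X = N / (R + Z): each of the N clients waits out R (response time) plus Z (think time) between its own requests, so a longer response time automatically lowers the offered rate. A minimal sketch with illustrative numbers:

```python
def throughput(n_clients, resp_time, think_time=0.0):
    """Interactive response time law: X = N / (R + Z)."""
    return n_clients / (resp_time + think_time)

# 10 clients, no think time: doubling the response time halves the rate
print(throughput(10, 0.1))  # 100.0 requests/s at R = 100 ms
print(throughput(10, 0.2))  # 50.0 requests/s at R = 200 ms
```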
Refer to , Page 9 and 10.
Refer to , Page 46 and 47.
It depends ;)
As long as the management overhead of the queue is negligible compared to the workload itself, a single queue with m consumers is more efficient, because if there is a queued request and an idle consumer, the request will be assigned to this consumer. With multiple queues with a single consumer, one queue can be overloaded while the others are empty.
If the queue management overhead is large compared to the workload itself, multiple queues can be more efficient, as they avoid contention on the single shared queue.
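The load-imbalance argument can be illustrated with a toy schedule (job sizes are hypothetical; round-robin dispatch stands in for static assignment to private queues):

```python
jobs = [5, 1, 5, 1, 5, 1]  # service times; long and short jobs alternate

def round_robin_makespan(jobs, m=2):
    """Static dispatch: job i goes to private queue i % m."""
    finish = [0] * m
    for i, job in enumerate(jobs):
        finish[i % m] += job
    return max(finish)

def shared_queue_makespan(jobs, m=2):
    """Single shared queue: the next idle worker pulls the next job."""
    finish = [0] * m
    for job in jobs:
        k = finish.index(min(finish))  # earliest-idle worker grabs the job
        finish[k] += job
    return max(finish)

print(round_robin_makespan(jobs))   # 15: one private queue got all long jobs
print(shared_queue_makespan(jobs))  # 11: the shared queue balances the work
```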
The System under Test (SUT) includes all aspects involved in providing the services of the system, including hardware and software.
The Component under Study (CUS) refers to the component of the SUT that we want to study, e.g. to determine its effect on a certain service of the system.
From the Slides (Set 5, Slide 3):
- We apply load to the SUT
- We measure performance of SUT
- We want to understand the CUS!
We want to determine the effect the amount of RAM in the server machine has on the performance of a PostgreSQL installation with respect to a dataset. We set up PostgreSQL populated with the TPC-H dataset and run the specified queries locally in a defined succession. We vary the amount of RAM in the server and measure throughput and response time.
The System under Test consists of the server machine, the OS, and the PostgreSQL parameters. The CUS is the RAM.
See Question 10
Resolution: The "fine-grained-ness" of measurements.
Input Width: Amount of data/information logged/traced/measured
- Stop / Abort
To obtain meaningful monitoring data, you need to provide some sort of common (synchronized) time, or at least ensure that causality can be observed. Furthermore, it is usually necessary to merge the measurements, which can itself be a challenge.
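Once the timestamps are on a common clock, merging per-node traces is a sorted-merge problem. A minimal sketch with made-up event names, assuming each node's log is already sorted locally:

```python
import heapq

# Two locally sorted measurement logs, timestamps on a common clock
node_a = [(0.10, "a: request received"), (0.35, "a: reply sent")]
node_b = [(0.20, "b: work started"), (0.30, "b: work done")]

# heapq.merge lazily merges sorted inputs into one time-ordered trace
merged = list(heapq.merge(node_a, node_b))
for ts, event in merged:
    print(ts, event)
```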
A and B are non-interacting:
- A1/B1-A2/B1 = A1/B2-A2/B2 = 5
- A1/B1-A1/B2 = A2/B1-A2/B2 = 6
C and D are (additively) interacting:
- C1/D1-C2/D1 = 11 ≠ C1/D2-C2/D2 = 12
- C1/D1-C1/D2 = 6 ≠ C2/D1-C2/D2 = 7
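The criterion can be expressed as a small check. The response values below are hypothetical, chosen only to reproduce the differences quoted above (5/5 and 6/6 for A/B; 11/12 and 6/7 for C/D):

```python
def interacts(y11, y12, y21, y22):
    """yij = response at level i of the first factor, level j of the second.
    Two factors do not interact if the effect of the first factor is the
    same at both levels of the second."""
    return (y11 - y21) != (y12 - y22)

# A/B: effects 20-15 = 5 and 14-9 = 5 -> no interaction
print(interacts(20, 14, 15, 9))   # False
# C/D: effects 30-19 = 11 and 24-12 = 12 -> interaction
print(interacts(30, 24, 19, 12))  # True
```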
Factors: A, B, C
Levels: -1, 1
| Column | Content |
|--------|---------|
| I | always 1 |
| A, B, C | vary over all possible value combinations |
| AB | = A*B |
| AC | = A*C |
| BC | = B*C |
| ABC | = A*B*C |
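The full 2^3 sign table can be generated mechanically; the columns correspond to the header above (I, A, B, C, AB, AC, BC, ABC):

```python
from itertools import product

# Sign table of a 2^3 factorial design with factors A, B, C at levels -1, 1
header = ("I", "A", "B", "C", "AB", "AC", "BC", "ABC")
rows = [(1, a, b, c, a * b, a * c, b * c, a * b * c)
        for a, b, c in product((-1, 1), repeat=3)]

print(header)
for row in rows:
    print(row)

# Sanity check: every column except I sums to zero (balanced design)
assert all(sum(r[i] for r in rows) == 0 for i in range(1, 8))
```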
The warm-up phase denotes the time during which the system does not yet behave as in the average case, due to effects such as cold caches.
The cool-down phase denotes the time during which the clients start shutting down and completing their runs. Response time and throughput may decrease.
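In practice this means trimming both ends of the trace before computing statistics. A minimal sketch with a hypothetical per-second throughput trace:

```python
import statistics

# Hypothetical per-second throughput trace: the first samples are the
# warm-up (cold caches), the last ones the cool-down (clients draining)
trace = [12, 25, 33, 40, 41, 39, 40, 42, 40, 30, 18, 9]
warmup, cooldown = 3, 3

steady = trace[warmup:len(trace) - cooldown]
print(statistics.mean(steady))  # mean over the steady-state window only
```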
The system can be modelled as a simple M/M/m queue. The service rate μ of a dequeuer thread can be expected to depend on the number of DB connections, giving a total service rate of m * μ.
Throughput is maximal if the service rate is maximal.
The two graphs do not match. When the throughput stays constant, the response time has to increase with an increasing number of clients. But the response-time graph shows a constant response time, which in turn would mean that the throughput has to increase with an increasing number of clients. Therefore this makes no sense.
Graph a) shows around 40 queries/minute, which means one query takes more than 1 second. But graph b) shows a response time of around 80 ms, which is clearly less than 1 second. This is a sign that these two graphs cannot even belong to the same system.
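The arithmetic behind this argument (following the answer's reasoning that clients operate in a closed loop, so completion time per query is the inverse of the throughput):

```python
# 40 queries/minute implies 1.5 s per query, which contradicts
# the 80 ms response time shown in graph b)
throughput = 40 / 60            # queries per second
time_per_query = 1 / throughput
print(time_per_query)           # 1.5 seconds, i.e. > 1 s
```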
The system can handle around 40 queries/minute with a guaranteed response time of 80 ms. To make sure that this response time is also guaranteed when the load increases, the system queue is finite and drops all requests that would lead to an overload of the system.
90% of the values lie in the interval
System B is better in every aspect that we can analyse in this graph.
It has better average performance and the standard deviation is relatively smaller for all cases except for one client.
This means that we get a more consistent and higher performance.
    while CI_width > 2 * s:
        x_new = doExperiment()
        n = n + 1
        mean = (mean * (n - 1) + x_new) / n
        t = t(two_sided, n - 1, 95%)   // table look-up
        CI = mean +- t * s / sqrt(n)   // CI_width = 2 * t * s / sqrt(n)
    end
Alternative Solution (python):
    def getVariance(xs, mean):
        sum = 0.0
        for x in xs:
            sum = sum + (x - mean)**2
        return sum / (len(xs) - 1)

    # Initialization
    xs = []
    mean = 0

    # Actual Code
    while True:
        x = doExperiment()
        xs.append(x)
        mean = ((mean * (len(xs) - 1)) + x) / len(xs)
        if len(xs) == 1:  # We need to have at least two runs
            continue
        s = getVariance(xs, mean)**0.5
        t = table_lookup_t('two_sided', len(xs) - 1, 0.95)
        if t / (len(xs)**0.5) <= 1:
            break
Where we used:
    gap = (mean + t * s / (n**0.5)) - (mean - t * s / (n**0.5))
    # Is the same as:
    gap = 2 * t * s / (n**0.5)

    if gap <= 2 * s:
    # Is the same as:
    if 2 * t * s / (n**0.5) <= 2 * s:
    # Is the same as:
    if t / (n**0.5) <= 1:
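A runnable version of the same loop. The hard-coded table of standard two-sided 95% Student-t quantiles stands in for `table_lookup_t`, and `doExperiment` is simulated with a random draw; both are assumptions for illustration:

```python
import random
import statistics

# Two-sided 95% Student-t quantiles for small degrees of freedom;
# for larger samples we fall back to the normal quantile 1.96
T_TABLE = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
           6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def t_lookup(df):
    return T_TABLE.get(df, 1.96)

def run_until_tight(experiment):
    """Repeat `experiment` until the 95% CI is no wider than 2 * s,
    i.e. until t(n-1) / sqrt(n) <= 1 (the s cancels out)."""
    xs = []
    while True:
        xs.append(experiment())
        n = len(xs)
        if n < 2:
            continue  # need at least two runs for a variance
        if t_lookup(n - 1) / n**0.5 <= 1:
            return statistics.mean(xs), n

random.seed(0)
mean, n = run_until_tight(lambda: random.gauss(100, 10))
print(n)  # the criterion is data-independent here: stops at n = 7
```

Note that with this particular termination criterion the number of runs does not depend on the data at all, since s cancels out of the comparison.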
N: Number of clients
1 <= N <= 8: System behaves normally, is not yet saturated.
Response time is constant.
9 <= N <= 18: System behaves normally, is saturated.
Response time grows linearly with the number of clients.
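The first two phases can be sketched as a simple asymptotic model (hypothetical numbers: m = 8 servers, per-request service time 0.1 s, no think time):

```python
def response_time(n_clients, m=8, service_time=0.1):
    """Toy model of the first two phases: constant response time while
    every client has a server, then linear growth once saturated."""
    if n_clients <= m:
        return service_time                   # unsaturated: R is constant
    return n_clients * service_time / m       # saturated: slope s/m per client

print(response_time(4))   # unsaturated: 0.1 s
print(response_time(16))  # saturated: 0.2 s
```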
19 <= N: System is thrashing. Throughput collapses.
Response time might keep growing linearly, or be small and constant if only one client is successfully served by the system and we only count the successful queries.
In general, the system behaves unpredictably and is unstable.