Lösungsvorschlag Computational Intelligence Lab FS12 (Repetition)

Question 1: Dimensionality Reduction

a)

  - No (we have zero error only if we use ALL eigenvalues or if the unused eigenvalues are zero)
  - No ( for symmetric matrices, so  in SVD --> . In PCA, , with )
  - No (we do not have zero error in general)
  - No (PCA also needs to select the k largest eigenvalues of the cov. matrix)
  - Yes (the PCA objective can be formulated as minimizing the L_2 approximation error)

b)

  1. Calculate the SVD A = U*Sigma*V^T by hand:
     1. Compute A^T*A.
     2. Compute its eigenvalues (or just check that the squares of the elements of Sigma, which should be the singular values of A, are indeed its eigenvalues), take the roots of their absolute values and sort them in descending order.
     3. Construct Sigma and place the singular values along its diagonal.
     4. Compute the eigenvectors of A^T*A using the eigenvalues. Place them along the columns of V (in the correct order).
  2. Sigma is not invertible, therefore we cannot simply use U = A*V*Sigma^-1; instead calculate U from A*V = U*Sigma. The left side is computable; on the right side place variables for the unknown entries of U.
  For the remaining undefined unknowns, calculate them using the fact that U is orthogonal (U*U^T = I).
  This determines the first columns of U; the last column needs to be orthogonal to the first two (e.g. via the cross product). A numpy sketch of the whole procedure is given below.
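
The recipe can be checked numerically. The exam's matrix is not reproduced on this page, so the sketch below uses a made-up 3x2 matrix A; everything else follows the steps above (eigendecompose A^T*A, build Sigma and V, recover U from A*V = U*Sigma, complete the last column with a cross product).

```python
import numpy as np

# Made-up 3x2 example matrix (the exam's matrix is not reproduced here).
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

# Steps 1-2: eigendecompose A^T A; the singular values are the square roots
# of its (nonnegative) eigenvalues, sorted in descending order.
evals, V = np.linalg.eigh(A.T @ A)            # eigh returns ascending order
order = np.argsort(evals)[::-1]
evals, V = evals[order], V[:, order]
sing = np.sqrt(np.clip(evals, 0.0, None))

# Step 3: place the singular values on the diagonal of Sigma (3x2 here).
Sigma = np.zeros_like(A)
Sigma[:len(sing), :len(sing)] = np.diag(sing)

# Item 2: the columns of U belonging to nonzero singular values follow from
# A V = U Sigma, i.e. u_i = A v_i / sigma_i; the remaining column is chosen
# orthogonal to the others (cross product, as in the solution above).
U = np.zeros((A.shape[0], A.shape[0]))
for i, s in enumerate(sing):
    if s > 1e-12:
        U[:, i] = A @ V[:, i] / s
U[:, 2] = np.cross(U[:, 0], U[:, 1])

assert np.allclose(U @ Sigma @ V.T, A)        # valid SVD
assert np.allclose(U.T @ U, np.eye(3))        # U is orthogonal
```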

c)

  1. each vector has 150 entries, there are 10*10 = 100 patches, so X is 150 x 100, and the covariance matrix is 150 x 150 (X*X^T dimensions)
  2. Compute X' = X - M, where M contains the mean in each column. 
     Compute the Covariance Matrix Sigma = (1/N)*X'*X'^T
     Perform Eigenvalue Decomposition on Sigma, getting U*D*U^T.
     For k <= 20 (*), let U_k be the matrix of the first k eigenvectors only
     Compress X by calculating U_k^T*X' = Z'
     Reconstruct X (with error) by calculating X'~ = U_k*Z', X~ = X'~ + M
     (*) So that we do not exceed 5000 bytes (at one byte per stored entry). The stored pieces have dimensions U_k: D x k, Z': k x N, and the mean: D values, with D = 150, N = 100.
     Without saving the dictionary U_k and the mean, only Z' is stored, so k <= 5000 / N = 5000 / 100 = 50.
     With the dictionary and the mean: k*N + k*D + D <= 5000 <=> 250*k <= 4850 <=> k <= 19.
     A numpy sketch of this compression scheme is given below.
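
A minimal numpy sketch of the scheme above, with random data standing in for the 100 patches of dimension 150 (the variable names U_k, Z, m mirror the notation used in the steps):

```python
import numpy as np

# Random data stands in for the 10*10 = 100 patches with 150 entries each.
D, N, k = 150, 100, 19
X = np.random.rand(D, N)

# Center: m holds the per-dimension mean (the column repeated in M above).
m = X.mean(axis=1, keepdims=True)            # D x 1
Xc = X - m                                   # X' in the notation above

# Covariance matrix and its eigendecomposition.
Sigma = (Xc @ Xc.T) / N                      # D x D
evals, U = np.linalg.eigh(Sigma)             # ascending eigenvalues
U_k = U[:, np.argsort(evals)[::-1][:k]]      # first k principal directions

# Compress and reconstruct (lossy).
Z = U_k.T @ Xc                               # k x N, the compressed code Z'
X_rec = U_k @ Z + m                          # D x N, the reconstruction X~

# Storage check at one byte per entry: code + dictionary + mean.
print(k * N + k * D + D)                     # 4900 <= 5000
```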

d)

A = U*D*U^T (since A is symmetric it can be factorized using the eigendecomposition; U is orthogonal). To show: if all values of D (the eigenvalues) are nonnegative, then A is PSD.

Consider any x. We can rewrite x^T*A*x = x^T*U*D*U^T*x = y^T*D*y with y := U^T*x, i.e. we perform a basis transformation into the eigenvector space of A. Then y^T*D*y = sum_i d_ii*y_i^2 >= 0, since D is a diagonal matrix with all values nonnegative.

Alternative Solution

Since all d_ii >= 0, there exists a diagonal matrix D^(1/2) with D^(1/2)*D^(1/2) = D, such that A can be written as a Gram matrix.
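
Spelled out (the name B for the factor is introduced here only for readability):

$$
A = U D^{1/2} D^{1/2} U^\top = (D^{1/2} U^\top)^\top (D^{1/2} U^\top) = B^\top B,
\qquad x^\top A x = \| B x \|_2^2 \ge 0 \quad \text{for all } x.
$$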

Question 2: Clustering

a)

  - False (on the contrary, EM is much slower)
  - False (soft assignment is a more general notion than GMMs; it applies to other mixture models as well)
  - True (the BIC penalty scales logarithmically with the size of the data set while the AIC penalty does not, so BIC > AIC for large enough N)

b)

a: Since the two clusters at the top are close together, a stable clustering algorithm may merge them into one cluster; it will then always group those two together and never mix them with the bottom one. Thus the result is stable, but consists of only 2 clusters. In b, all clusters are roughly equidistant, so a result with only 2 clusters cannot be stable (different clusters get merged together, changing the result every time).


c)

1

We have a mixture of K component distributions, so we need latent variables for the probabilities that a certain document is drawn from a particular distribution. We define indicator variables z_ij in {0, 1} (document i belongs to component j) with sum_j z_ij = 1, and P(z_ij = 1) = pi_j.

2

By the expectation of the latent variables, they mean E[z_ij | x_i], i.e. the expectation given the observed data. Since z_ij is binary, E[z_ij | x_i] = P(z_ij = 1 | x_i), so we need to apply Bayes' rule:

q_ij := E[z_ij | x_i] = (pi_j * p(x_i | theta_j)) / (sum_l pi_l * p(x_i | theta_l))

That is, in our case, the same formula with p(x_i | theta_j) given by the component distribution from the exercise.

This is analogous to the GMM case, but the mixture components are not Gaussians (but our special distribution).

3

By calculating the parameters, they mean those that maximize the (log-)likelihood. So we first rewrite the log-likelihood using our values for q_ij from 2.

Then we differentiate the Lagrangian consisting of this log-likelihood and the normalization condition sum_j pi_j = 1, for which we need a Lagrange multiplier lambda.

With the derivative with respect to pi_j set to zero we get sum_i q_ij = -lambda * pi_j.

And summing both sides over j: N = -lambda.

Therefore pi_j = (1/N) * sum_i q_ij.
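
For intuition, here is a minimal numpy sketch of the resulting EM iteration. It assumes, purely as an illustration and not as the exam's exact setup, a mixture of K categorical word distributions over a vocabulary of W words, with documents given as count vectors; the parameter names pi and beta are chosen here for readability.

```python
import numpy as np

# Illustrative data: N documents as word-count vectors over W words.
rng = np.random.default_rng(0)
N, W, K = 200, 100, 3
X = rng.integers(0, 5, size=(N, W)).astype(float)

pi = np.full(K, 1.0 / K)                   # mixture weights, sum to 1
beta = rng.dirichlet(np.ones(W), size=K)   # K x W word probabilities, rows sum to 1

for _ in range(50):
    # E-step: q[i, j] = P(document i belongs to component j | data), as in part 2.
    log_p = X @ np.log(beta).T + np.log(pi)        # N x K unnormalized log posteriors
    log_p -= log_p.max(axis=1, keepdims=True)      # numerical stabilization
    q = np.exp(log_p)
    q /= q.sum(axis=1, keepdims=True)

    # M-step: maximize the expected log-likelihood under the normalization
    # constraints (the Lagrangian step from part 3).
    pi = q.mean(axis=0)                            # pi_j = (1/N) * sum_i q_ij
    beta = q.T @ X + 1e-12                         # small smoothing for stability
    beta /= beta.sum(axis=1, keepdims=True)
```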

4

  • Each component distribution is described by a vector of 100 elements, each being a probability. Due to the hint in 3.) we know that these elements sum to 1. Therefore, only 99 of the 100 elements are unknown/free variables (per component).
  • We have K pi's. Due to sum_j pi_j = 1, one of them is not a free variable but can again be determined from the others. Further, there is another one that is fixed. This leaves us with K - 2 free pi's.

Based on both observations we get kappa = 99*K + (K - 2) = 100*K - 2 free parameters.

K = 3: kappa = 298, plug into AIC and BIC.
K = 5: kappa = 498, plug into AIC and BIC.
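
The numerical AIC and BIC values additionally require the maximized log-likelihood, which is not reproduced on this page. For reference, with kappa free parameters and N documents, the usual textbook definitions are (the lecture may scale the penalty terms differently):

$$
\mathrm{AIC} = -2 \ln \hat{L} + 2\kappa,
\qquad
\mathrm{BIC} = -2 \ln \hat{L} + \kappa \ln N.
$$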

d)

  1. 
  1. (Alternative)  Explanation: The probability that at least 1 entry in u (of those marked by z) is 1.
  2. 

Question 3: Sparse Coding

a)

 1. U^-1 = U^T, so computing the inverse transform is efficient (no matrix inversion needed).
 2. The transformation is energy (norm) preserving, i.e. ||U^T*x||_2 = ||x||_2.

(I consider 1. to be the important point here, since the question is about a "computational issue". Inverting a matrix is usually inefficient and maybe even ill-conditioned, so this is an issue, while 2. isn't really)

b)

  - Yes (U is invertible with U^-1 = U^T, so rank(U) = D)
  - No (once again since U is invertible)
  - Yes (since U is orthogonal, multiplication with U does not change a vector's norm)
  - No
  - Yes

c)

  - Yes (these should be sparse in some frequency domain)
  - Yes (sparse in a component or eigendecomposition)
  - No (this would mean compressing a random sequence, which is impossible)
  - Yes (these follow patterns and are somewhat predictable, so not random)
  - No (a good tennis player will try to be as unpredictable, i.e. random, as possible)

d)

It was discussed in the exam revision session how to solve this type of exercise.

1.) Represent the signal as a vector, [4 2 2 5].

2.) Perform a basis transformation into the Haar wavelet basis by taking scalar products with the Haar basis vectors [1 1 1 1], [1 1 -1 -1], [1 -1 0 0], [0 0 1 -1] and dividing by their squared norms. This gives the coefficients 13/4, -1/4, 1, -3/2.

3.) Remove the basis function whose coefficient has the smallest absolute value. This is [1 1 -1 -1] in this case (coefficient -1/4).
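
A quick numpy check of these numbers (note that, as the alternative solution below points out, the drop-the-smallest-coefficient rule is only safe for an orthonormal basis; for this particular signal both conventions single out the same basis vector):

```python
import numpy as np

# Project the signal onto the (unnormalized) Haar basis vectors, dividing by
# their squared norms, then drop the coefficient smallest in magnitude.
x = np.array([4.0, 2.0, 2.0, 5.0])
B = np.array([[1, 1, 1, 1],
              [1, 1, -1, -1],
              [1, -1, 0, 0],
              [0, 0, 1, -1]], dtype=float)

coeffs = B @ x / (B * B).sum(axis=1)       # [13/4, -1/4, 1, -3/2]
drop = np.argmin(np.abs(coeffs))           # index 1 -> basis vector [1 1 -1 -1]
print(coeffs, drop)

# Reconstruction without the dropped component:
coeffs[drop] = 0.0
x_hat = B.T @ coeffs
print(x_hat)                               # approximation of [4 2 2 5]
```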

Alternative Solution

I'm (.gregor) relatively sure the above solution is wrong. However, someone with expertise (I don't have any) should select the right one. The following statements are in line with G. Strang's Linear Algebra book and with the way basis transformations were done in the lecture. According to http://grail.cs.washington.edu/pub/stoll/wavelet1.pdf the method of just throwing away the coefficient with the smallest absolute value only works if we use the orthonormal basis.

  • The values of the given vector are [4, 2, 2, 5] (read off the value of each section between the vertical lines).
  • The matrix of normalized wavelet basis vectors is the basis in which we want to represent our measurement.
  • To represent our measurement in this basis we need to calculate the coefficients.
  • This results in c = (13/2, -1/2, sqrt(2), -3/sqrt(2)).
  • The smallest absolute value in c is |-1/2| and corresponds to the second basis vector, i.e. the normalized version of [1 1 -1 -1]. This is the one we throw away. Again, we can only use the coefficient with the smallest magnitude if we use an orthonormal basis.

My two cents, if you are still around: notice that the orthonormal basis is just a (per-vector) rescaling of the orthogonal basis used by the previous answer. Thus, the basis vector for which the smallest value (in magnitude) is achieved with the scaling (in our case, the second one) is the same as the one for which the smallest value is obtained without the scaling. This is also consistent with the given hint: computing the actual coefficients is not necessary. Draw the sequence.

e)

1. Yes

2. , with , then and a fourth vector orthogonal to that.

3.

4.

5. The problem reduces to minimizing the cosine of the angle between the existing orthonormal vectors and the new vector (since it is normalized). This is achieved by setting the angle to 45 degrees, as otherwise one of the two angles between the new vector and an existing one has a larger cosine. Then, adding a fourth vector orthogonal to the third does not further increase the coherence. Since a global solution cannot do better than first minimize the coherence for the one added vector and then not increase it any more, the solutions for 3. and 4. are the same.
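
As a sanity check on point 5, a small numpy sketch that computes the coherence m(U) = max_{i != j} |u_i^T u_j| of a dictionary with unit-norm atoms; the example dictionary (the standard basis of R^4 plus one atom at 45 degrees between the first two basis vectors) is only an illustration and gives coherence 1/sqrt(2):

```python
import numpy as np

# Coherence of a dictionary with unit-norm atoms (columns):
#   m(U) = max_{i != j} |u_i^T u_j|
extra = np.array([1.0, 1.0, 0.0, 0.0]) / np.sqrt(2)
U = np.column_stack([np.eye(4), extra])

G = np.abs(U.T @ U)          # absolute pairwise inner products
np.fill_diagonal(G, 0.0)     # ignore the trivial i == j entries
print(G.max())               # 1/sqrt(2) ~ 0.7071, the 45-degree case from point 5
```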

Question 4: Robust PCA

a)

1. We need to stack the (vectorized) frames as the columns of a matrix X. It is then decomposed as X ~= L + S, where the background ends up in L, since it is not moving and hence low-rank (it is basically the same column repeated over and over), and the foreground ends up in S, since it is moving but takes up little space, so it is sparse.

2. The background has to take up most of the frame and must not change much, so that the columns representing the background form the low-rank component. The foreground must not take up much space (so it is sparse), but it must be moving and changing, so it is not low-rank. We need the guarantee that neither L nor S is both low-rank AND sparse. In particular, the coherence condition states that the principal components of L must not be sparse. This can be a problem if the background color/brightness differs too much within the video. For exact recovery, the conditions of the 'exact recovery theorem' must be fulfilled. (Do we need to give the formulas?)
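
A minimal sketch of the decomposition itself, assuming cvxpy is available and using the principal component pursuit formulation min ||L||_* + lambda*||S||_1 s.t. L + S = X. The synthetic rank-1 background plus a few "moving" pixels only stand in for a real video, and lambda = 1/sqrt(max(n, m)) is the usual choice from the robust PCA literature.

```python
import numpy as np
import cvxpy as cp

# Synthetic stand-in for a video: columns are frames, a rank-1 static
# background plus a few sparse "foreground" pixels.
rng = np.random.default_rng(0)
n_pixels, n_frames = 50, 30
background = np.outer(rng.random(n_pixels), np.ones(n_frames))
foreground = np.zeros((n_pixels, n_frames))
foreground[rng.integers(0, n_pixels, 20), rng.integers(0, n_frames, 20)] = 1.0
X = background + foreground

# Principal component pursuit: nuclear norm + weighted elementwise l1 norm.
L = cp.Variable((n_pixels, n_frames))
S = cp.Variable((n_pixels, n_frames))
lam = 1.0 / np.sqrt(max(n_pixels, n_frames))
problem = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                     [L + S == X])
problem.solve()

# L should be approximately rank 1, S should have few large entries.
print(np.linalg.svd(L.value, compute_uv=False)[:3])
print(np.sum(np.abs(S.value) > 1e-3))
```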


b)

Convex.

First, show that "- b" does not matter:

So it cancels out on each side of the inequality.

Now, we need to show that taking the maximum element of a vector is convex:

Let x_i and y_i denote the entries of x and y at the index i that maximizes lambda*x + (1 - lambda)*y.

Then, the left side is lambda*x_i + (1 - lambda)*y_i.

We have x_i <= max_i'(x_i') and y_i <= max_i'(y_i'), since the maximum bounds any single entry.

Combining these inequalities (using lambda >= 0 and 1 - lambda >= 0), we get lambda*x_i + (1 - lambda)*y_i <= lambda*max_i'(x_i') + (1 - lambda)*max_i'(y_i'),

as desired.
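
Written as a single chain (under the reading, suggested by the argument above, that the function in question is the maximum entry of a vector shifted by the constant b):

$$
\max_i\big(\lambda x_i + (1-\lambda) y_i\big) - b
\;\le\; \lambda \max_i x_i + (1-\lambda)\max_i y_i - b
\;=\; \lambda\big(\max_i x_i - b\big) + (1-\lambda)\big(\max_i y_i - b\big),
\qquad \lambda \in [0, 1].
$$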

Alternative Solution

We directly show the defining inequality of convexity, f(lambda*x + (1 - lambda)*y) <= lambda*f(x) + (1 - lambda)*f(y).

c)

1. We still want to minimize ||L||_* + lambda*||S||_1, but the constraint must only hold for the observed values, i.e., only for observed (i, j).

2. In the original Robust PCA, X = L + S must hold for all entries of X. When entries are missing, the corresponding entries of L and S are unconstrained, so there they only have to minimize the main objective.

3. X is the original matrix. L is the low-rank component and S is the sparse component. The observed values are the ones for which we know the user ratings. lambda is a factor that determines how much a nonzero element of S costs us, i.e., it determines how much we care about sparseness of S.
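
Concretely, writing Omega for the set of observed index pairs (a notational choice made here, not necessarily the exam's), the modified program from point 1 reads:

$$
\min_{L, S} \; \|L\|_* + \lambda \|S\|_1
\quad \text{s.t.} \quad L_{ij} + S_{ij} = X_{ij} \;\; \text{for all } (i, j) \in \Omega.
$$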

d)

This question corresponds pretty much to Problem 1 in Series 6 (SS 2015).

1.

First, we minimize the Lagrangian over the primal variable by setting its derivative to zero.

This leads us to:

The dual function:

2.

Using the dual function found in the previous step we see that

3.

Therefore we want to maximize the dual function subject to the dual constraints.

NOTE: The following steps are only valid if the matrix in question is invertible. We do not know that! So we must assume that it is not invertible.