Grading (points / grade):
  0-15: 0
 16-18: 1
 19-21: 2
 22-24: 3
 25-27: 4
 28-30: 5
You must get at least grade 1 to pass the course.

Problems:

1.1 max 3p | A discrete
memoryless source emits a sequence of statistically independent binary
digits with probabilities p(1) = 0.005 and p(0) = 0.995. The digits
are taken 100 at a time and a binary codeword is provided for every
sequence of 100 digits containing three or fewer ones.
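The original solution did not survive on this page. Assuming, as in the standard version of this exercise, that the question asks for the length of a fixed-length codeword and for the probability that a sequence receives no codeword, a minimal sketch:

```python
from math import comb, ceil, log2

n, p1 = 100, 0.005
# Number of length-100 binary sequences containing three or fewer ones
count = sum(comb(n, k) for k in range(4))    # 1 + 100 + 4950 + 161700
# Bits needed for a fixed-length codeword over these sequences
bits = ceil(log2(count))
# Probability that a sequence has four or more ones, i.e. gets no codeword
p_no_code = 1 - sum(comb(n, k) * p1**k * (1 - p1)**(n - k) for k in range(4))
```

With these numbers count = 166751, so 18-bit codewords suffice, and the probability of an uncoded sequence is below 0.2 percent.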

1.2 max 2p | Show that the
entropy of the probability distribution, (p_{1}, ...,
p_{i}, ..., p_{j}, ..., p_{m}), is less than
the entropy of the distribution (p_{1}, ...,
(p_{i}+p_{j})/2, ..., (p_{i}+p_{j})/2,
..., p_{m}). Show that in general any transfer of probability
that makes the distribution more uniform increases the entropy.
We only need to look at the difference between the entropies of the two distributions. Call the original distribution p and the averaged one q, and define f(p) = -p log p. Since the distributions differ only in positions i and j, the difference is

H(p) - H(q) = f(p_i) + f(p_j) - 2 f((p_i + p_j)/2),

which we need to show to be less than or equal to zero. This follows from

f((p_i + p_j)/2) >= (f(p_i) + f(p_j))/2,

which holds if f is concave. Since df/dp = -log p - 1, it follows that d^2 f/dp^2 = -1/p < 0 when p > 0, hence f is concave.

For the general claim we need to define "more uniform". This is conveniently done using the Kullback-Leibler divergence D(p, u), where u is the uniform distribution with m discrete probabilities 1/m. If D(p, u) > D(q, u), it means that q is "more uniform" than p. Since

D(q, u) = sum_i q_i log(q_i / (1/m)) = log m - H(q),

the inequality D(p, u) > D(q, u) turns out to be -H(p) > -H(q), or H(p) < H(q), as desired.
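A quick numerical check of the claim, using an arbitrary example distribution (a sketch, not part of the original solution):

```python
import math

def H(p):
    """Shannon entropy in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

p = [0.5, 0.3, 0.1, 0.1]
q = list(p)
# Replace p_i and p_j by their average: the distribution becomes more uniform
q[0] = q[1] = (p[0] + p[1]) / 2
assert H(q) > H(p)

# "More uniform" via KL divergence to the uniform distribution u_i = 1/m:
m = len(p)
D = lambda r: math.log(m) - H(r)
assert D(q) < D(p)
```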

2.1 max 2p |
Determine a Huffman code for the text of this problem.
Answer omitted; the code varies depending on whether you include spaces, letter case, etc.
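As an illustration, a minimal Huffman sketch in Python (the text string and the decision to keep it lowercase are arbitrary choices, so the resulting code differs from any "official" answer):

```python
import heapq
import math
from collections import Counter

def huffman_lengths(freqs):
    """Codeword length per symbol for a binary Huffman code."""
    heap = [(f, i, [s]) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    length = {s: 0 for s in freqs}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, syms1 = heapq.heappop(heap)
        f2, _, syms2 = heapq.heappop(heap)
        for s in syms1 + syms2:   # every merge adds one bit to these symbols
            length[s] += 1
        heapq.heappush(heap, (f1 + f2, tie, syms1 + syms2))
        tie += 1
    return length

text = "determine a huffman code for the text of this problem"
freqs = Counter(text)
length = huffman_lengths(freqs)

total = sum(freqs.values())
entropy = -sum(f / total * math.log2(f / total) for f in freqs.values())
avg_len = sum(freqs[s] * length[s] for s in freqs) / total
kraft = sum(2.0 ** -l for l in length.values())
```

The Kraft sum of a Huffman code is exactly 1, and the average codeword length lies within one bit of the entropy.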

2.2 max 2p |
Consider all possible 256×256 pixel images with 8 bit grayscale
pixels. How many images can be compressed by 20 bits or more using a
uniquely decodable code? Compare the result with the number of all
possible images.
Each image has n = 256*256*8 bits, so there are 2^n possible images. Denote l = n - 20; this is the largest code length an image can have while still being compressed by at least 20 bits. The Kraft inequality then gives an upper bound k on the number of such images: k * 2^(-l) <= 1, or k = 2^l = 2^(n-20). This is an upper bound, since shorter code lengths m < l would correspond to larger terms 2^(-m) in the Kraft inequality.

To compare k with 2^n, divide: k / 2^n = 2^(n-20) / 2^n = 2^(-20), which is approximately 0.000095 percent. So compression is difficult!
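The arithmetic of the comparison, as a sketch:

```python
n = 256 * 256 * 8        # bits in one raw image; 2**n images in total
# Kraft: k * 2**-(n - 20) <= 1, so at most k = 2**(n - 20) images
# can be compressed by 20 bits or more
ratio = 2.0 ** -20       # k / 2**n, the compressible fraction
percent = 100 * ratio    # roughly 0.000095 percent
```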

2.3 max 2p | Assume
there is a source emitting symbols a and b with probability P(a |
ß)=ß. Given a sequence s with N_{a} symbols a and
N_{b} symbols b, what is the probability P(a | s) if p(ß) =
[ß(1-ß)]^{-1/2} / pi?
The problem is otherwise identical to the one solved in MacKay's
book in section 4.2.4, except that the prior is different. The
solution is P(a | s) = (N_a + 1/2)/(N_a + N_b + 1).
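For this Beta-type prior the standard closed form is P(a | s) = (N_a + 1/2)/(N_a + N_b + 1). A numerical sanity check with illustrative counts (N_a = 3, N_b = 2 are arbitrary):

```python
import math

Na, Nb = 3, 2   # illustrative counts of a's and b's in the sequence s

def weight(b):
    """Likelihood times the prior [b(1-b)]^{-1/2} / pi."""
    return b**Na * (1 - b)**Nb / (math.pi * math.sqrt(b * (1 - b)))

# Midpoint rule for P(a | s) = E[beta | s]
M = 100000
num = den = 0.0
for i in range(M):
    b = (i + 0.5) / M
    num += b * weight(b)
    den += weight(b)
p_pred = num / den
closed_form = (Na + 0.5) / (Na + Nb + 1)
```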

3.1 max 2p | The
noisy channel coding theorem states that for a discrete memoryless
channel, it is possible to achieve a finite rate of transmission with
arbitrarily small error probability. Discuss the reasons why in real
applications higher error probabilities are unavoidable.
Some reasons: the noisy channel coding theorem assumes that the
code is a block code and that the block length can be chosen arbitrarily
large. This is not reasonable in practice, since both the latency and
the decoding complexity grow with the block length. Moreover, real
channels are rarely memoryless or even stationary.

3.2 max 2p | Which
distributions p(x) maximize the differential entropy of x for each of
the following constraints:
- E(|x|)=1
- Absolute value |x| is at most one
- E(x^2)=1
The task is to maximize the integral of -p(x) log p(x) under constraints, each of which can be expressed as an integral of p(x) f(x) equal to a constant. The first constraint is always that p(x) integrates to unity (f(x) = 1). In the first problem the additional constraint has f(x) = |x|, and in the third problem f(x) = x^2. In the second problem the domain of x is [-1, 1] and there are no other constraints.

Using the method of Lagrange multipliers one can show that the general form of the solution is p(x) = exp(c_0 + c_1 f(x)). The solutions are:
- p(x) = exp(-|x|)/2 (Laplace distribution), entropy ln 2e.
- p(x) = 1/2 on [-1, 1] (uniform distribution), entropy ln 2.
- p(x) = exp(-x^2 / 2)/sqrt(2 pi) (normal distribution), entropy 1/2 ln 2 pi e.
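A numerical check for the first case (a sketch): the Laplace density satisfies its constraint E|x| = 1 and has differential entropy ln 2e = 1 + ln 2.

```python
import math

M, L = 200000, 30.0        # midpoint grid over [-L, L]; tail mass ~ e^-30
dx = 2 * L / M
h = e_abs = 0.0
for i in range(M):
    x = -L + (i + 0.5) * dx
    p = math.exp(-abs(x)) / 2          # Laplace density
    h += -p * math.log(p) * dx         # differential entropy in nats
    e_abs += abs(x) * p * dx           # E|x|
```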

4.1 max 2p |
Recompute the posterior probability (10.33) in the book by using the
given conjugate priors for the mean and variance.
Use the conjugate priors and substitute them into the posterior probability according to Bayes' theorem. Details omitted here.
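Equation (10.33) from the book is not reproduced here. As a generic illustration of the conjugate update for a Gaussian with unknown mean and variance, a normal-inverse-gamma sketch (all hyperparameter values and data below are made up), checked against brute-force grid integration of prior times likelihood:

```python
import math

# Made-up prior hyperparameters and data
mu0, kappa0, alpha0, beta0 = 0.0, 1.0, 2.0, 2.0
data = [1.2, 0.7, 1.5, 0.9, 1.1]
n = len(data)
xbar = sum(data) / n
ss = sum((x - xbar) ** 2 for x in data)

# Standard normal-inverse-gamma posterior parameters
kappan = kappa0 + n
mun = (kappa0 * mu0 + n * xbar) / kappan
alphan = alpha0 + n / 2
betan = beta0 + 0.5 * ss + kappa0 * n * (xbar - mu0) ** 2 / (2 * kappan)

# Check E[mu | D] = mun by grid integration of prior * likelihood
def log_post(mu, s2):
    lp = -(alpha0 + 1.5) * math.log(s2) \
         - (2 * beta0 + kappa0 * (mu - mu0) ** 2) / (2 * s2)
    ll = -0.5 * n * math.log(s2) - sum((x - mu) ** 2 for x in data) / (2 * s2)
    return lp + ll

num = den = 0.0
for i in range(600):
    mu = -3 + 6 * i / 600
    for j in range(400):
        s2 = 0.01 + 5 * j / 400
        w = math.exp(log_post(mu, s2))
        num += mu * w
        den += w
```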

4.2 max 2p | With Laplace's method, what is the approximation
for the posterior probability density if the posterior has n local
maxima at x_{1}, x_{2}, ..., x_{n}, the
corresponding joint densities of x_{i} and the data D are
p(x_{1}, D), ..., p(x_{n}, D) and the corresponding
Hessian matrices of -log p(x_{i}, D) are H_{1}, ...,
H_{n}.
The solution is obtained as explained in the supplement to the problem: each mode x_i contributes a Gaussian term, giving

p(x | D) ≈ [sum_i p(x_i, D) exp(-(x - x_i)^T H_i (x - x_i)/2)] / [sum_i p(x_i, D) (2 pi)^{k/2} det(H_i)^{-1/2}],

where k is the dimension of x.
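A one-dimensional numerical illustration (the density below is made up for the demo): two modes at x = ±1, Hessians of -log p equal to 8a at both modes, and the Laplace estimate of the normalizer compared against direct integration.

```python
import math

a = 50.0
def p_joint(x):
    """Unnormalized bimodal density with modes at x = -1 and x = +1."""
    return math.exp(-a * (x * x - 1) ** 2)

# d^2/dx^2 [a (x^2 - 1)^2] = a (12 x^2 - 4) = 8a at x = +-1
hess = 8 * a
modes = [-1.0, 1.0]
laplace_Z = sum(p_joint(m) * math.sqrt(2 * math.pi / hess) for m in modes)

# Direct numerical normalizer by the midpoint rule
M, Lim = 200000, 3.0
dx = 2 * Lim / M
Z = sum(p_joint(-Lim + (i + 0.5) * dx) * dx for i in range(M))
```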

5.1 max 2p |
Estimate in bits the total sensory experience that you have had in
your life - visual information, auditory information etc. Estimate
how much information you have memorized. Compare these with the
capacity of your brain assuming you have 10^{11} neurons
each making 1000 synaptic connections and that the capacity result
for one neuron (two bits per connection) applies. Is your brain full yet?
You need to use heavy approximations. First, ignore everything but
visual information; other sensory information should be a small
fraction of the visual information. Estimate your lifetime in seconds
and an effective bit rate for your visual input, using only one
significant digit.
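One possible set of round numbers (every figure below is an assumption, accurate to one significant digit at best):

```python
lifetime_s = 1e9        # roughly 30 years, in seconds
visual_rate = 1e7       # assumed effective visual input, bits per second
sensed = lifetime_s * visual_rate          # ~1e16 bits of sensory experience

neurons, synapses, bits_per_synapse = 1e11, 1000, 2
capacity = neurons * synapses * bits_per_synapse   # 2e14 bits

# The raw sensory stream exceeds this capacity by a factor of ~50, so the
# brain cannot store everything; what is memorized must be far less.
```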

6.1 max 2p | In the
MML principle the coding cost is measured by assuming a random
discretisation of the parameters w_{i} with accuracies
e_{i}. The coding cost of the accuracies is neglected. Show
that the MML principle can be seen as a special case of ensemble
learning and that the bits-back argument also justifies neglecting
the coding cost of the accuracies. (Hint: consider what kind of
ensemble distribution for the weights is implicitly used in the MML
principle.)
The ensemble distribution implicitly assumed in the MML method is a hyper-box whose sides have lengths e_i, centred at the chosen parameter values.

The bits-back argument assumes that the sender picks the parameters w from a distribution Q(w) and encodes them with an accuracy approaching infinity; the bits used for making this random choice are recovered by the receiver.

Choosing Q(w) to be the same hyper-box as in MML yields an identical coding length, since -E_Q[log p(w, D)] + E_Q[log Q(w)] ≈ -log p(w_0, D) - sum_i log e_i, which is exactly the MML code length.

In the MML scheme the coding cost of the discretisation accuracies e_i can therefore be neglected: in the bits-back scheme the bits spent on choosing w within the box are refunded, so the accuracies carry no net cost.

7.1 max 1p |
Exercise 20.1 in the book, page 387.
Only the sign of the constant n is significant: the activation is a linear combination of the weights, and the constant appears as a multiplier of the activation. The state is updated using a step function of the activation, and a step function depends only on the *sign* of its argument.
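The point can be demonstrated in a few lines (the weights and state below are arbitrary):

```python
def step(a):
    """Threshold activation: +1 for non-negative input, -1 otherwise."""
    return 1 if a >= 0 else -1

w = [0.3, -1.2, 0.7]
x = [1, -1, 1]
activation = sum(wi * xi for wi, xi in zip(w, x))   # 0.3 + 1.2 + 0.7 = 2.2

# Any positive constant multiplying the activation leaves the update unchanged
for n in (0.5, 2.0, 10.0):
    assert step(n * activation) == step(activation)
# A negative constant flips the update
assert step(-1.0 * activation) != step(activation)
```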

7.2 max 2p |
Exercise 21.3 in the book, page 403.
Straightforward but tedious calculation, omitted here.

8.1 max 2p |
Exercise 19.1 in the book, page 379.
Straightforward but tedious calculation, omitted here.

8.2 max 2p |
Exercise 13.3 in the book, page 276. Straightforward calculation, omitted here.

This page is maintained by Harri.Lappalainen@hut.fi.

The page was last updated on 26 February 1999.

URL: http://www.cis.hut.fi/cis/edu/Tik-61.181/solutions.html