%% Cell type:code id: tags:
``` python
from __future__ import division, print_function
import matplotlib
import matplotlib.pyplot as plt
from IPython.display import Image, display, clear_output
%matplotlib inline
import numpy as np
import sklearn.datasets
import tensorflow as tf
from tensorflow.python.framework.ops import reset_default_graph
```
%% Cell type:markdown id: tags:
# Auto-encoders 101
In this notebook you will implement a simple auto-encoder (AE). We assume that you are already familiar with the basics of neural networks. We'll start by defining an AE similar to the one used for the finetuning step by [Geoffrey Hinton and Ruslan Salakhutdinov](https://www.cs.toronto.edu/~hinton/science.pdf). We'll experiment with the AE setup and try to run it on the MNIST dataset. There has been a wide range of research on auto-encoders, and the technique you're about to learn is simple compared to recent advances (e.g. [the Ladder network](https://arxiv.org/abs/1507.02672) and [VAEs](https://arxiv.org/abs/1312.6114)), but the basic idea remains the same.
AEs are used within unsupervised learning, where you do not have a target $y$. Instead, an AE *encodes* an input $x$ into a latent state $z$ and *decodes* $z$ into a reconstruction $\hat{x}$. This way the parameters of the network can be optimized w.r.t. the difference between $x$ and $\hat{x}$. Depending on the input distribution, this difference can be measured in various ways, e.g. with the mean squared error (MSE). In many applications the auto-encoder learns a latent representation of each data point that corresponds to meaningful features. So if we model the MNIST dataset, one could expect the latent state to capture the digit class and/or the shape.
*The exercises are found at the bottom of the notebook*
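To make the encode/decode idea concrete, below is a minimal NumPy sketch of a single forward pass through an untrained AE. The weights are random and the layer sizes are illustrative only; this is not the model we build later in the notebook.
%% Cell type:code id: tags:
``` python
# Minimal NumPy sketch of an auto-encoder forward pass (illustrative only).
x = np.random.rand(784)                        # a flattened 28x28 "image"
W_enc = np.random.randn(784, 2) * 0.01         # encoder weights (to a 2-d latent space)
W_dec = np.random.randn(2, 784) * 0.01         # decoder weights (back to pixel space)

z = np.maximum(0., np.dot(x, W_enc))           # encode: relu bottleneck, z has 2 units
x_hat = 1. / (1. + np.exp(-np.dot(z, W_dec)))  # decode: sigmoid reconstruction

mse = np.mean((x - x_hat)**2)                  # the reconstruction error we minimize
print("MSE of the untrained AE: %.4f" % mse)
```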
%% Cell type:markdown id: tags:
## MNIST
First let us load the MNIST dataset and plot a few examples. We load only a limited number of classes to speed up training.
%% Cell type:code id: tags:
``` python
from sklearn.utils import shuffle
# To speed up training we'll only work on a subset of the data containing only the numbers 0, 1.
data = np.load('../lab1_FFN/mnist.npz')
num_classes = 2
idxs_train = []
idxs_valid = []
idxs_test = []
for i in range(num_classes):
    idxs_train += np.where(data['y_train'] == i)[0].tolist()
    idxs_valid += np.where(data['y_valid'] == i)[0].tolist()
    idxs_test += np.where(data['y_test'] == i)[0].tolist()
x_train = data['X_train'][idxs_train].astype('float32')
# Since this is unsupervised, the targets are only used for validation.
targets_train = data['y_train'][idxs_train].astype('int32')
x_train, targets_train = shuffle(x_train, targets_train, random_state=1234)
x_valid = data['X_valid'][idxs_valid].astype('float32')
targets_valid = data['y_valid'][idxs_valid].astype('int32')
x_test = data['X_test'][idxs_test].astype('float32')
targets_test = data['y_test'][idxs_test].astype('int32')
print("training set dim(%i, %i)." % x_train.shape)
print("validation set dim(%i, %i)." % x_valid.shape)
print("test set dim(%i, %i)." % x_test.shape)
```
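%% Cell type:markdown id: tags:
As an optional sanity check, we can confirm that the subset contains only the two selected classes and that the pixel intensities lie in $[0, 1]$:
%% Cell type:code id: tags:
``` python
# Sanity check: only classes 0 and 1 should remain, with pixels in [0, 1].
print("classes:", np.unique(targets_train))
print("class counts:", np.bincount(targets_train))
print("pixel range: [%.2f, %.2f]" % (x_train.min(), x_train.max()))
```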
%% Cell type:code id: tags:
``` python
#plot a few MNIST examples
idx = 0
canvas = np.zeros((28*10, 10*28))
for i in range(10):
    for j in range(10):
        canvas[i*28:(i+1)*28, j*28:(j+1)*28] = x_train[idx].reshape((28, 28))
        idx += 1
plt.figure(figsize=(7, 7))
plt.axis('off')
plt.imshow(canvas, cmap='gray')
plt.title('MNIST handwritten digits')
```
%% Cell type:markdown id: tags:
### Building the model
When defining the model, the latent layer $z$ must act as a bottleneck of information. We initialize the AE with one hidden layer in the encoder and one in the decoder, using relu units as non-linearities. The latent layer has a dimensionality of 2 in order to make it easy to visualise. Since the elements of $x$ are pixel intensities normalized between 0 and 1, we use the sigmoid non-linearity to model the reconstruction.
%% Cell type:code id: tags:
``` python
from tensorflow.contrib.layers import fully_connected
from tensorflow.python.ops.nn import relu, sigmoid
```
%% Cell type:code id: tags:
``` python
# define in/output size
num_features = x_train.shape[1]
# reset graph
reset_default_graph()
# define the model
x_pl = tf.placeholder(tf.float32, [None, num_features], 'x_pl')
l_enc = fully_connected(inputs=x_pl, num_outputs=128, activation_fn=relu, scope='l_enc')
l_z = fully_connected(inputs=l_enc, num_outputs=2, activation_fn=None, scope='l_z') # None indicates a linear output.
l_dec = fully_connected(inputs=l_z, num_outputs=128, activation_fn=relu, scope='l_dec')
l_out = fully_connected(inputs=l_dec, num_outputs=num_features, activation_fn=sigmoid) # iid pixel intensities between 0 and 1.
```
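%% Cell type:markdown id: tags:
As a quick check (not part of the original setup), we can print the static shape of each layer to verify the 784 → 128 → 2 → 128 → 784 bottleneck:
%% Cell type:code id: tags:
``` python
# Print static shapes to confirm the bottleneck architecture.
for name, tensor in [('x_pl', x_pl), ('l_enc', l_enc), ('l_z', l_z),
                     ('l_dec', l_dec), ('l_out', l_out)]:
    print("%-6s %s" % (name, tensor.get_shape()))
```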
%% Cell type:markdown id: tags:
Next, we define the TensorFlow operations for training and evaluation.
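Concretely, for a mini-batch of $M$ images with $N$ pixels each, the loss we minimize is the mean squared error over all pixels in the batch:

$$\mathcal{L}(x, \hat{x}) = \frac{1}{MN} \sum_{j=1}^{M} \sum_{i=1}^{N} \left( \hat{x}_{j,i} - x_{j,i} \right)^2$$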
%% Cell type:code id: tags:
``` python
# calculate loss
loss_per_pixel = tf.square(l_out - x_pl)  # squared reconstruction error per pixel
loss = tf.reduce_mean(loss_per_pixel, name="mean_square_error")
# if you want regularization
#reg_scale = 0.0005
#regularize = tf.contrib.layers.l2_regularizer(reg_scale)
#params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)
#reg_term = sum([regularize(param) for param in params])
#loss += reg_term
# define our optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.25)
# make training op for applying the gradients
train_op = optimizer.minimize(loss)
```
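%% Cell type:markdown id: tags:
Optionally, we can count the trainable parameters as a rough measure of model capacity (a small addition for illustration; it does not affect training):
%% Cell type:code id: tags:
``` python
# Count the trainable parameters (weights and biases of the four layers).
num_params = sum(np.prod(p.get_shape().as_list()) for p in tf.trainable_variables())
print("number of trainable parameters: %d" % num_params)
```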
%% Cell type:code id: tags:
``` python
# test the forward pass
_x_test = np.zeros(shape=(32, num_features))
# initialize the Session
sess = tf.Session()
# initialize all model parameters
sess.run(tf.global_variables_initializer())
feed_dict = {x_pl: _x_test}
res_forward_pass = sess.run(fetches=[l_out], feed_dict=feed_dict)
print("l_out", res_forward_pass[0].shape)
```
%% Cell type:markdown id: tags:
In the training loop we sample a random mini-batch for every parameter update and evaluate the loss, latent space, and reconstructions on the validation set once per epoch.
%% Cell type:code id: tags:
``` python
batch_size = 100
num_epochs = 100
num_samples_train = x_train.shape[0]
num_batches_train = num_samples_train // batch_size
num_samples_valid = x_valid.shape[0]
num_batches_valid = num_samples_valid // batch_size
updates = []
train_loss = []
valid_loss = []
cur_loss = 0
plt.figure(figsize=(12, 24))
try:
    for epoch in range(num_epochs):
        # Forward -> Backprop -> Update params
        cur_loss = []
        for i in range(num_batches_train):
            idxs = np.random.choice(range(x_train.shape[0]), size=batch_size, replace=False)
            x_batch = x_train[idxs]
            # set up what to fetch; we also fetch l_out and l_z to inspect reconstructions and the latent space
            fetches_train = [train_op, loss, l_out, l_z]
            feed_dict_train = {x_pl: x_batch}
            # do the complete backprop pass
            res_train = sess.run(fetches_train, feed_dict_train)
            _, batch_loss, train_out, train_z = tuple(res_train)
            cur_loss += [batch_loss]
        train_loss += [np.mean(cur_loss)]
        updates += [batch_size*num_batches_train*(epoch+1)]

        # evaluate on the validation set
        fetches_eval = [loss, l_out, l_z]
        feed_dict_eval = {x_pl: x_valid}
        res_valid = sess.run(fetches_eval, feed_dict_eval)
        eval_loss, eval_out, eval_z = tuple(res_valid)
        valid_loss += [eval_loss]