"In this notebook you will implement a simple auto-encoder (AE). We assume that you are already familiar with the basics of neural networks. We'll start by defining an AE similar to the one used for the finetuning step by [Geoffrey Hinton and Ruslan Salakhutdinov](https://www.cs.toronto.edu/~hinton/science.pdf). We'll experiment with the AE setup and try to run it on the MNIST dataset. There has been a wide variety of research into the field of auto-encoders and the technique that you're about to learn is very simple compared to recent advances (e.g. [the Ladder network](https://arxiv.org/abs/1507.02672) and [VAEs](https://arxiv.org/abs/1312.6114)). However, the basic idea stays the same.\n",
"\n",
"AEs are used within unsupervised learning, in which you do not have a target $y$. Instead it *encodes* an input $x$ into a latent state $z$ and decodes $z$ into a reconstruction $\\hat{x}$. This way the parameters of the network can be optimized w.r.t. the difference between $x$ and $\\hat{x}$. Depending on the input distribution, the difference can be measured in various ways, e.g. mean squared error (MSE). In many applications the auto-encoder will find an internal state of each data point corresponding to a feature. So if we are to model the MNIST dataset, one could expect that the internal state would correspond to a digit-class and/or the shape.\n",
"\n",
"*The exercises are found at the bottom of the notebook*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## MNIST\n",
"First let us load the MNIST dataset and plot a few examples. We only load a limited amount of classes to speed up training."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"from sklearn.utils import shuffle\n",
"\n",
"# To speed up training we'll only work on a subset of the data containing only the numbers 0, 1.\n",
"When defining the model the latent layer $z$ must act as a bottleneck of information. We initialize the AE with 1 hidden layer in the encoder and decoder using relu units as non-linearities. The latent layer has a dimensionality of 2 in order to make it easy to visualise. Since $x$ are pixel intensities that are normalized between 0 and 1, we use the sigmoid non-linearity to model the reconstruction."
"1. The above implementation of an AE is very simple.\n",
" - *Experiment with the number of layers and non-linearities in order to improve the reconstructions.*\n",
" - *What happens with the network when we change the non-linearities in the latent layer (e.g. sigmoid)?*\n",
" - *Try to increase the number of digit classes in the training set and analyze the results.*\n",
" - *Test different optimization algorithms and decide whether you should use regularizers*.\n",
" \n",
"2. Currently we optimize w.r.t. mean squared error. \n",
" - *Find another error function that could fit this problem better.* \n",
" - *Evaluate whether the error function is a better choice and explain your findings.*\n",
"\n",
"3. Complexity of the bottleneck.\n",
" - *Increase the number of units in the latent layer and train.*\n",
" - *Visualize by using [PCA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) or [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html).*"
]
},
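{
"cell_type": "markdown",
"metadata": {},
"source": [
"*Note: the cell below is a minimal sketch of how the latent codes could be visualised with scikit-learn for exercise 3. It assumes a TensorFlow session `sess` in which the model above has been trained, plus the (hypothetical) `x_train` / `targets_train` arrays from the data-loading sketch.*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"from sklearn.decomposition import PCA\n",
"from sklearn.manifold import TSNE\n",
"\n",
"# Encode the training data into the latent space (assumes a trained session `sess`).\n",
"z = sess.run(l_z, feed_dict={x_pl: x_train})\n",
"\n",
"# Project the latent codes to 2D with PCA; swap in TSNE(n_components=2) to compare.\n",
"z_2d = PCA(n_components=2).fit_transform(z)\n",
"\n",
"plt.scatter(z_2d[:, 0], z_2d[:, 1], c=targets_train, s=5, cmap='tab10')\n",
"plt.colorbar()\n",
"plt.title('Latent space projected with PCA')\n",
"plt.show()"
]
},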
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"### Exercise 2 - Adding classification (for the ambitious)\n",
"The above training has been performed unsupervised. Now let us assume that we only have a fraction of labeled data points from each class (implemented below). As we know, semi-supervised learning can be utilized by combining unsupervised and supervised learning. Now you must analyze whether a trained AE from the above exercise can aid a classifier.\n",
"\n",
"1. Build a simple classifier (like the ones from week1) where you:\n",
" - *Train on the labeled dataset and evaluate the results.*\n",
"2. Build a second classifier and train on the latent output $z$ of the AE.\n",
"3. Build a third classifier and train on the reconstructions of the AE.\n",
"4. Evaluate the classifiers against each other and implement a model that improves the classification by combining the input, latent output and reconstruction."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Generate a subset of labeled data points\n",
"\n",
"num_labeled = 10 # You decide on the size of the fraction...\n",