{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "ename": "ImportError",
     "evalue": "No module named sklearn.datasets",
     "output_type": "error",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mImportError\u001b[0m                               Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-1-06426484224b>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m()\u001b[0m\n\u001b[0;32m      7\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mnumpy\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mnp\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      8\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mmatplotlib\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mpyplot\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mplt\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 9\u001b[1;33m \u001b[1;32mimport\u001b[0m \u001b[0msklearn\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdatasets\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     10\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mtensorflow\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0mtf\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     11\u001b[0m \u001b[1;32mfrom\u001b[0m \u001b[0mtensorflow\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mpython\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mframework\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mops\u001b[0m \u001b[1;32mimport\u001b[0m \u001b[0mreset_default_graph\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mImportError\u001b[0m: No module named sklearn.datasets"
     ]
    }
   ],
   "source": [
    "from __future__ import division, print_function\n",
    "import matplotlib\n",
    "import matplotlib.pyplot as plt\n",
    "from IPython.display import Image, display, clear_output\n",
    "%matplotlib nbagg\n",
    "%matplotlib inline \n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import sklearn.datasets\n",
    "import tensorflow as tf\n",
    "from tensorflow.python.framework.ops import reset_default_graph"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Auto-encoders 101\n",
    "In this notebook you will implement a simple auto-encoder (AE). We assume that you are already familiar with the basics of neural networks. We'll start by defining an AE similar to the one used for the finetuning step by [Geoffrey Hinton and Ruslan Salakhutdinov](https://www.cs.toronto.edu/~hinton/science.pdf). We'll experiment with the AE setup and try to run it on the MNIST dataset. There has been a wide variety of research into the field of auto-encoders and the technique that you're about to learn is very simple compared to recent advances (e.g. [the Ladder network](https://arxiv.org/abs/1507.02672) and [VAEs](https://arxiv.org/abs/1312.6114)). However, the basic idea stays the same.\n",
    "\n",
    "AEs are used within unsupervised learning, in which you do not have a target $y$. Instead it *encodes* an input $x$ into a latent state $z$ and decodes $z$ into a reconstruction $\\hat{x}$. This way the parameters of the network can be optimized w.r.t. the difference between $x$ and $\\hat{x}$. Depending on the input distribution, the difference can be measured in various ways, e.g. mean squared error (MSE). In many applications the auto-encoder will find an internal state of each data point corresponding to a feature. So if we are to model the MNIST dataset, one could expect that the internal state would correspond to a digit-class and/or the shape.\n",
    "\n",
    "*The exercises are found at the bottom of the notebook*"
   ]
  },
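  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Written out (a standard formulation, not specific to this notebook; $f$ and $g$ denote the encoder and decoder networks and $D$ is the input dimensionality):\n",
    "\n",
    "$$z = f(x), \\qquad \\hat{x} = g(z), \\qquad \\mathcal{L}_{\\text{MSE}}(x, \\hat{x}) = \\frac{1}{D} \\sum_{d=1}^{D} (x_d - \\hat{x}_d)^2$$"
   ]
  },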
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## MNIST\n",
    "First let us load the MNIST dataset and plot a few examples. We only load a limited amount of classes to speed up training."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from sklearn.utils import shuffle\n",
    "\n",
    "# To speed up training we'll only work on a subset of the data containing only the numbers 0, 1.\n",
    "data = np.load('../lab1_FFN/mnist.npz')\n",
    "num_classes = 2\n",
    "idxs_train = []\n",
    "idxs_valid = []\n",
    "idxs_test = []\n",
    "for i in range(num_classes):\n",
    "    idxs_train += np.where(data['y_train'] == i)[0].tolist()\n",
    "    idxs_valid += np.where(data['y_valid'] == i)[0].tolist()\n",
    "    idxs_test += np.where(data['y_test'] == i)[0].tolist()\n",
    "\n",
    "x_train = data['X_train'][idxs_train].astype('float32')\n",
    "# Since this is unsupervised, the targets are only used for validation.\n",
    "targets_train = data['y_train'][idxs_train].astype('int32')\n",
    "x_train, targets_train = shuffle(x_train, targets_train, random_state=1234)\n",
    "\n",
    "\n",
    "x_valid = data['X_valid'][idxs_valid].astype('float32')\n",
    "targets_valid = data['y_valid'][idxs_valid].astype('int32')\n",
    "\n",
    "x_test = data['X_test'][idxs_test].astype('float32')\n",
    "targets_test = data['y_test'][idxs_test].astype('int32')\n",
    "\n",
    "print(\"training set dim(%i, %i).\" % x_train.shape)\n",
    "print(\"validation set dim(%i, %i).\" % x_valid.shape)\n",
    "print(\"test set dim(%i, %i).\" % x_test.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "#plot a few MNIST examples\n",
    "idx = 0\n",
    "canvas = np.zeros((28*10, 10*28))\n",
    "for i in range(10):\n",
    "    for j in range(10):\n",
    "        canvas[i*28:(i+1)*28, j*28:(j+1)*28] = x_train[idx].reshape((28, 28))\n",
    "        idx += 1\n",
    "plt.figure(figsize=(7, 7))\n",
    "plt.axis('off')\n",
    "plt.imshow(canvas, cmap='gray')\n",
    "plt.title('MNIST handwritten digits')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Building the model\n",
    "When defining the model the latent layer $z$ must act as a bottleneck of information. We initialize the AE with 1 hidden layer in the encoder and decoder using relu units as non-linearities. The latent layer has a dimensionality of 2 in order to make it easy to visualise. Since $x$ are pixel intensities that are normalized between 0 and 1, we use the sigmoid non-linearity to model the reconstruction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from tensorflow.contrib.layers import fully_connected\n",
    "from tensorflow.python.ops.nn import relu, sigmoid"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# define in/output size\n",
    "num_features = x_train.shape[1]\n",
    "\n",
    "# reset graph\n",
    "reset_default_graph()\n",
    "\n",
    "# define the model\n",
    "x_pl = tf.placeholder(tf.float32, [None, num_features], 'x_pl')\n",
    "l_enc = fully_connected(inputs=x_pl, num_outputs=128, activation_fn=relu, scope='l_enc')\n",
    "l_z = fully_connected(inputs=l_enc, num_outputs=2, activation_fn=None, scope='l_z') # None indicates a linear output.\n",
    "l_dec = fully_connected(inputs=l_z, num_outputs=128, activation_fn=relu, scope='l_dec')\n",
    "l_out = fully_connected(inputs=l_dec, num_outputs=num_features, activation_fn=sigmoid) # iid pixel intensities between 0 and 1."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Following we define the TensorFlow functions for training and evaluation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# calculate loss\n",
    "loss_per_pixel = tf.square(tf.sub(l_out, x_pl))\n",
    "loss = tf.reduce_mean(loss_per_pixel, name=\"mean_square_error\")\n",
    "# if you want regularization\n",
    "#reg_scale = 0.0005\n",
    "#regularize = tf.contrib.layers.l2_regularizer(reg_scale)\n",
    "#params = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES)\n",
    "#reg_term = sum([regularize(param) for param in params])\n",
    "#loss += reg_term\n",
    "\n",
    "# define our optimizer\n",
    "optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.25)\n",
    "\n",
    "# make training op for applying the gradients\n",
    "train_op = optimizer.minimize(loss)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# test the forward pass\n",
    "_x_test = np.zeros(shape=(32, num_features))\n",
    "# initialize the Session\n",
    "sess = tf.Session()\n",
    "# test the forward pass\n",
    "sess.run(tf.initialize_all_variables())\n",
    "feed_dict = {x_pl: _x_test}\n",
    "res_forward_pass = sess.run(fetches=[l_out], feed_dict=feed_dict)\n",
    "print(\"l_out\", res_forward_pass[0].shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the training loop we sample each batch and evaluate the error, latent space and reconstructions every epoch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "batch_size = 100\n",
    "num_epochs = 100\n",
    "num_samples_train = x_train.shape[0]\n",
    "num_batches_train = num_samples_train // batch_size\n",
    "num_samples_valid = x_valid.shape[0]\n",
    "num_batches_valid = num_samples_valid // batch_size\n",
    "updates = []\n",
    "\n",
    "train_loss = []\n",
    "valid_loss = []\n",
    "cur_loss = 0\n",
    "plt.figure(figsize=(12, 24))\n",
    "\n",
    "try:\n",
    "    for epoch in range(num_epochs):\n",
    "        #Forward->Backprob->Update params\n",
    "        cur_loss = []\n",
    "        for i in range(num_batches_train):\n",
    "            idxs = np.random.choice(range(x_train.shape[0]), size=(batch_size), replace=False)    \n",
    "            x_batch = x_train[idxs]\n",
    "            # setup what to fetch, notice l\n",
    "            fetches_train = [train_op, loss, l_out, l_z]\n",
    "            feed_dict_train = {x_pl: x_batch}\n",
    "            # do the complete backprob pass\n",
    "            res_train = sess.run(fetches_train, feed_dict_train)\n",
    "            _, batch_loss, train_out, train_z = tuple(res_train)\n",
    "            cur_loss += [batch_loss]\n",
    "        train_loss += [np.mean(cur_loss)]\n",
    "        updates += [batch_size*num_batches_train*(epoch+1)]\n",
    "\n",
    "        # evaluate\n",
    "        fetches_eval = [loss, l_out, l_z]\n",
    "        feed_dict_eval = {x_pl: x_valid}\n",
    "        res_valid = sess.run(fetches_eval, feed_dict_eval)\n",
    "        eval_loss, eval_out, eval_z = tuple(res_valid)\n",
    "        valid_loss += [eval_loss]\n",
    "\n",
    "        if epoch == 0:\n",
    "            continue\n",
    "\n",
    "        # Plotting\n",
    "        plt.subplot(num_classes+1,2,1)\n",
    "        plt.title('Error')\n",
    "        plt.legend(['Train Error', 'Valid Error'])\n",
    "        plt.xlabel('Updates'), plt.ylabel('Error')\n",
    "        plt.plot(updates, train_loss, color=\"black\")\n",
    "        plt.plot(updates, valid_loss, color=\"grey\")\n",
    "        plt.ticklabel_format(style='sci', axis='x', scilimits=(0,0))\n",
    "        plt.grid('on')\n",
    "\n",
    "        plt.subplot(num_classes+1,2,2)\n",
    "        plt.cla()\n",
    "        plt.title('Latent space')\n",
    "        plt.xlabel('z0'), plt.ylabel('z1')\n",
    "        color = iter(plt.get_cmap('brg')(np.linspace(0, 1.0, num_classes)))\n",
    "        for i in range(num_classes):\n",
    "            clr = next(color)\n",
    "            plt.scatter(eval_z[targets_valid==i, 0], eval_z[targets_valid==i, 1], c=clr, s=5., lw=0, marker='o', )\n",
    "        plt.grid('on')\n",
    "        \n",
    "        c=0\n",
    "        for k in range(3, 3 + num_classes*2, 2):\n",
    "            plt.subplot(num_classes+1,2,k)\n",
    "            plt.cla()\n",
    "            plt.title('Inputs for %i' % c)\n",
    "            plt.axis('off')\n",
    "            idx = 0\n",
    "            canvas = np.zeros((28*10, 10*28))\n",
    "            for i in range(10):\n",
    "                for j in range(10):\n",
    "                    canvas[i*28:(i+1)*28, j*28:(j+1)*28] = x_valid[targets_valid==c][idx].reshape((28, 28))\n",
    "                    idx += 1\n",
    "            plt.imshow(canvas, cmap='gray')\n",
    "            \n",
    "            plt.subplot(num_classes+1,2,k+1)\n",
    "            plt.cla()\n",
    "            plt.title('Reconstructions for %i' % c)\n",
    "            plt.axis('off')\n",
    "            idx = 0\n",
    "            canvas = np.zeros((28*10, 10*28))\n",
    "            for i in range(10):\n",
    "                for j in range(10):\n",
    "                    canvas[i*28:(i+1)*28, j*28:(j+1)*28] = eval_out[targets_valid==c][idx].reshape((28, 28))\n",
    "                    idx += 1\n",
    "            plt.imshow(canvas, cmap='gray')\n",
    "            c+=1\n",
    "      \n",
    "        \n",
    "        plt.savefig(\"out51.png\")\n",
    "        display(Image(filename=\"out51.png\"))\n",
    "        clear_output(wait=True)\n",
    "        \n",
    "except KeyboardInterrupt:\n",
    "    pass\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "### Exercise 1 - Analyzing the AE\n",
    "1. The above implementation of an AE is very simple.\n",
    "    - *Experiment with the number of layers and non-linearities in order to improve the reconstructions.*\n",
    "    - *What happens with the network when we change the non-linearities in the latent layer (e.g. sigmoid)?*\n",
    "    - *Try to increase the number of digit classes in the training set and analyze the results.*\n",
    "    - *Test different optimization algorithms and decide whether you should use regularizers*.\n",
    "       \n",
    "2. Currently we optimize w.r.t. mean squared error. \n",
    "    - *Find another error function that could fit this problem better.* \n",
    "    - *Evaluate whether the error function is a better choice and explain your findings.*\n",
    "\n",
    "3. Complexity of the bottleneck.\n",
    "    - *Increase the number of units in the latent layer and train.*\n",
    "    - *Visualize by using [PCA](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) or [t-SNE](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.TSNE.html).*"
   ]
  },
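  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One candidate answer for exercise 1.2 is binary cross-entropy, which treats each pixel as an independent Bernoulli variable and often suits intensities in $[0, 1]$ better than MSE. The cell below is a minimal sketch of ours, not part of the original lab; the epsilon clip is purely a numerical-stability choice."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# hedged sketch: binary cross-entropy as an alternative reconstruction loss\n",
    "# clip the sigmoid outputs away from 0 and 1 so the logs stay finite\n",
    "eps = 1e-8\n",
    "l_out_clipped = tf.clip_by_value(l_out, eps, 1.0 - eps)\n",
    "bce_per_pixel = -(x_pl * tf.log(l_out_clipped) + (1.0 - x_pl) * tf.log(1.0 - l_out_clipped))\n",
    "loss_bce = tf.reduce_mean(bce_per_pixel, name=\"binary_cross_entropy\")\n",
    "# to train with it, rebuild the training op, e.g.:\n",
    "# train_op = optimizer.minimize(loss_bce)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For exercise 1.3, the sketch below visualises a higher-dimensional latent space with t-SNE. It assumes you have re-trained the AE with more latent units and that `sess`, `x_pl`, `l_z` and the validation arrays are still alive."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# hedged sketch for exercise 1.3: project the latent space to 2-D with t-SNE\n",
    "from sklearn.manifold import TSNE\n",
    "\n",
    "# encode the validation set with the trained encoder\n",
    "z_valid = sess.run(l_z, feed_dict={x_pl: x_valid})\n",
    "z_2d = TSNE(n_components=2, random_state=0).fit_transform(z_valid)\n",
    "\n",
    "plt.figure(figsize=(6, 6))\n",
    "for i in range(num_classes):\n",
    "    plt.scatter(z_2d[targets_valid==i, 0], z_2d[targets_valid==i, 1], s=5., lw=0, label='digit %i' % i)\n",
    "plt.legend()\n",
    "plt.title('t-SNE of the latent space')"
   ]
  },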
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "### Exercise 2 - Adding classification (for the ambitious)\n",
    "The above training has been performed unsupervised. Now let us assume that we only have a fraction of labeled data points from each class (implemented below). As we know, semi-supervised learning can be utilized by combining unsupervised and supervised learning. Now you must analyze whether a trained AE from the above exercise can aid a classifier.\n",
    "\n",
    "1. Build a simple classifier (like the ones from week1) where you:\n",
    "    - *Train on the labeled dataset and evaluate the results.*\n",
    "2. Build a second classifier and train on the latent output $z$ of the AE.\n",
    "3. Build a third classifier and train on the reconstructions of the AE.\n",
    "4. Evaluate the classifiers against each other and implement a model that improves the classification by combining the input, latent output and reconstruction."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Generate a subset of labeled data points\n",
    "\n",
    "num_labeled = 10 # You decide on the size of the fraction...\n",
    "\n",
    "def onehot(t, num_classes):\n",
    "    out = np.zeros((t.shape[0], num_classes))\n",
    "    for row, col in enumerate(t):\n",
    "        out[row, col] = 1\n",
    "    return out\n",
    "\n",
    "idxs_train_l = []\n",
    "for i in range(num_classes):\n",
    "    idxs = np.where(targets_train == i)[0]\n",
    "    idxs_train_l += np.random.choice(idxs, size=num_labeled).tolist()\n",
    "\n",
    "x_train_l = x_train[idxs_train_l]\n",
    "targets_train_l = targets_train[idxs_train_l]\n",
    "print(\"labeled training set dim(%i, %i).\" % x_train_l.shape)\n",
    "\n",
    "plt.figure(figsize=(12, 7))\n",
    "for i in range(num_classes*num_labeled):\n",
    "    im = x_train_l[i].reshape((28, 28))\n",
    "    plt.subplot(1, num_classes*num_labeled, i + 1)\n",
    "    plt.imshow(im, cmap='gray')\n",
    "    plt.axis('off')"
   ]
  },
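  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is a minimal sketch for step 2 of exercise 2, using scikit-learn's `LogisticRegression` (our choice, not prescribed by the exercise) on the latent codes of the labeled subset. It assumes the AE has been trained above and that `sess`, `x_pl` and `l_z` are still alive; steps 1 and 3 follow the same pattern with `x_train_l` or the reconstructions from `l_out` as inputs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# hedged sketch for exercise 2.2: classify from the latent codes z.\n",
    "# assumes the trained session `sess` and the tensors `x_pl`, `l_z`\n",
    "# from the cells above are still alive.\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "# encode the labeled training subset and the test set\n",
    "z_train_l = sess.run(l_z, feed_dict={x_pl: x_train_l})\n",
    "z_test = sess.run(l_z, feed_dict={x_pl: x_test})\n",
    "\n",
    "clf = LogisticRegression()\n",
    "clf.fit(z_train_l, targets_train_l)\n",
    "print(\"test accuracy on latent codes: %.3f\" % clf.score(z_test, targets_test))"
   ]
  }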
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}