\documentclass[a4paper]{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{siunitx}
\usepackage{color}
\usepackage{url}
%\usepackage{filecontents}
%\usepackage[style=authoryear]{biblatex}
%\newcommand{\comment}[1]{{\color{red} Comment: #1}}
\title{Deep Learning - Study Circle \\ Homework Summary}
\author{Martin Karlsson}
\begin{document}
\maketitle
This summary briefly describes what the author did in the homework assignments of the Deep Learning - Study Circle course. See the Python code and plots in the respective homework folders for more details.
\subsection*{HW 2 - Autoencoders}
An autoencoding neural network was applied to the MNIST dataset. The purpose of an autoencoder is to find a less complex representation of the data that still retains enough information for the intended machine learning task. In order to achieve this for the MNIST data, the architecture was varied. As expected, fewer neurons in the intermediate layers gave lower resolution in the reconstruction (output layer). A reasonable trade-off between complexity and reconstruction quality was obtained by using one intermediate layer with 64 neurons, with one layer with 128 neurons on each side. Since the number of features in the original dataset was $28 \cdot 28 = 784$, the architecture can be described as
\begin{center}
(input layer) $784 \Rightarrow 128 \Rightarrow$ (middle layer) $64 \Rightarrow 128 \Rightarrow 784$ (output layer)
\end{center}
Hence, this autoencoder would allow a decreased input dimension, from the original 784 to 64.
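The following is a minimal sketch of such an autoencoder, written here with \texttt{tf.keras}; the framework, activation functions, and training settings are assumptions and may differ from the homework code.
\begin{verbatim}
# Minimal autoencoder sketch (tf.keras assumed; details may differ
# from the homework code).
import tensorflow as tf

# Encoder: 784 -> 128 -> 64, decoder: 64 -> 128 -> 784.
autoencoder = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),      # 64-dim middle layer
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),  # reconstruction
])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# MNIST images flattened to 784-dimensional vectors in [0, 1];
# the network is trained to reproduce its own input.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))
\end{verbatim}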
\subsection*{HW 3 - Convolutional Neural Networks}
A convolutional neural network was applied to the MNIST dataset for classification. The network consisted of an input layer, two convolutional layers followed by a fully connected layer, and finally an output layer. After training, it achieved a validation accuracy of \SI{99}{\percent}.
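As an illustration, a network of this type could be defined as follows (\texttt{tf.keras} assumed; the filter counts and layer sizes below are placeholders, not necessarily those used in the homework):
\begin{verbatim}
# Sketch of a small CNN classifier for MNIST (tf.keras assumed;
# layer sizes are illustrative only).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),    # fully connected layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}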
\subsection*{HW 6 - Recurrent Neural Network}
In this homework, two types of RNNs were applied.
First, an RNN was used for classification of the MNIST dataset. Each data point, which was an image with $28 \cdot 28$ pixels, was handled as a sequence of length 28, with 28 channels. The RNN read the entire sequence and produced a single output at the end, which corresponded to the class of the image. After training, the RNN achieved \SI{98}{\percent} accuracy on validation data.
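A sketch of such a sequence classifier is shown below (\texttt{tf.keras} with an LSTM cell assumed; the cell type and size used in the homework may differ):
\begin{verbatim}
# Sketch of an RNN that reads each MNIST image as a sequence of
# 28 rows with 28 channels and outputs a class at the final step
# (tf.keras assumed; details may differ from the homework code).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),                   # 28 steps, 28 channels
    tf.keras.layers.LSTM(128),                        # output after last step
    tf.keras.layers.Dense(10, activation="softmax"),  # class of the image
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
\end{verbatim}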
Secondly, an RNN was used to map sequences of letters, forming the Swedish names of the digits 0--9, to the corresponding digit strings, \emph{e.g.}, \\
noll ett nio fyra $\Rightarrow$ 0194 \\
This RNN accepted sequences of any length. After training, it always gave a correct mapping from letters to digits.
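The model and data pipeline for this task are not reproduced here; purely as an illustration, the sketch below (plain Python, diacritics omitted from the Swedish names) shows one way such letter/digit training pairs could be generated:
\begin{verbatim}
# Illustrative generation of training pairs for the letters-to-digits
# task; the homework's actual data handling is not known.
import random

# Swedish digit names, 0-9 (diacritics omitted for simplicity).
SWEDISH_DIGITS = ["noll", "ett", "tva", "tre", "fyra",
                  "fem", "sex", "sju", "atta", "nio"]

def random_pair(max_words=6):
    """Return e.g. ('noll ett nio fyra', '0194')."""
    digits = [random.randrange(10)
              for _ in range(random.randint(1, max_words))]
    letters = " ".join(SWEDISH_DIGITS[d] for d in digits)
    target = "".join(str(d) for d in digits)
    return letters, target

print(random_pair())
\end{verbatim}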
\subsection*{HW 9 - Deep Learning using Graphics Processing Units}
I read the suggested paper: \\
TensorFlow: A system for large-scale machine learning \\
\url{https://arxiv.org/abs/1605.08695}
\subsection*{HW 11 - Natural Language Processing}
A vector representation of words was formed using an implementation in TensorFlow. The learned vectors were visualized by projecting them down to 2D. It was found that words which seem intuitively similar or related tended to appear close to each other in the vector representation. Examples are walked/walking and cat/dog.
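The projection step could, for example, be done with t-SNE as sketched below (scikit-learn and matplotlib assumed; the random vectors here stand in for the embeddings actually learned by the TensorFlow model, and the homework may have used a different projection method):
\begin{verbatim}
# Sketch of projecting learned word vectors down to 2D and plotting them
# (scikit-learn/matplotlib assumed; the embeddings here are random
# placeholders for the vectors learned in the homework).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

embeddings = np.random.randn(500, 128)          # placeholder word vectors
words = ["word%d" % i for i in range(500)]      # placeholder vocabulary

coords = TSNE(n_components=2, init="pca",
              random_state=0).fit_transform(embeddings)
plt.figure(figsize=(10, 10))
plt.scatter(coords[:, 0], coords[:, 1], s=5)
for (x, y), w in zip(coords[:200], words[:200]):
    plt.annotate(w, (x, y), fontsize=6)
plt.savefig("embeddings_2d.png")
\end{verbatim}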
\end{document}