make lab1_FFN more userfriendly
alrojo committed Sep 23, 2016
1 parent 0bf6811 commit 1244746
Showing 1 changed file with 36 additions and 10 deletions.
46 changes: 36 additions & 10 deletions lab1_FFN/lab1_FFN.ipynb
@@ -24,6 +24,7 @@
"import tensorflow as tf\n",
"from tensorflow.python.framework.ops import reset_default_graph\n",
"\n",
"# Do not worry about the code below for now, it is used for plotting later\n",
"def plot_decision_boundary(pred_func, X, y):\n",
" #from https://github.com/dennybritz/nn-from-scratch/blob/master/nn-from-scratch.ipynb\n",
" # Set min and max values and give it some padding\n",
@@ -124,7 +125,18 @@
"where $x$ is the input tensor, $y$ is the output tensor and $\\{W, b\\}$ are the weights (variable tensors). The weights are initialized with an initializer of our choice (check [tensorflow's API](https://www.tensorflow.org/versions/r0.10/api_docs/index.html) for more.\n",
"x has shape ```[batchsize, num_features]```. ```W``` has shape ```[num_features, num_units]``` and b has ```[num_units]```. y has then ```[batch_size, num_units]```.\n",
"\n",
"NOTE: to make building neural networks easier, TensorFlow's [contrib](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#layers-contrib) wraps TensorFlow functionality to support various operations such as; [convolutions](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#convolution2d), [batch_norm](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#batch_norm), [fully_connected](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#fully_connected)."
"NOTE: to make building neural networks easier, TensorFlow's [contrib](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#layers-contrib) wraps TensorFlow functionality to support various operations such as; [convolutions](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#convolution2d), [batch_norm](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#batch_norm), [fully_connected](https://www.tensorflow.org/versions/r0.10/api_docs/python/contrib.layers.html#fully_connected).\n",
"\n",
"In this first exercise we will use basic TensorFlow functions so that you can learn how to build it from scratch. This will help you later if you want to build your own custom operations."
]
},
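To make the shape bookkeeping concrete, here is a minimal NumPy sketch of the dense layer $y = xW + b$ (not part of the notebook; the sizes are illustrative):

```python
import numpy as np

batch_size, num_features, num_units = 32, 2, 2
x = np.random.randn(batch_size, num_features)  # [batch_size, num_features]
W = np.random.randn(num_features, num_units)   # [num_features, num_units]
b = np.zeros(num_units)                        # [num_units]

y = x.dot(W) + b  # b broadcasts across the batch dimension
print(y.shape)    # (32, 2), i.e. [batch_size, num_units]
```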
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## TensorFlow Playerground\n",
"\n",
"If you are new to Neural Networks, start by using the [TensorFlow playground](http://playground.tensorflow.org/) to familiarize yourself with hidden layers, hidden units, activations, learning rate, etc."
]
},
{
@@ -135,30 +147,44 @@
},
"outputs": [],
"source": [
"# resets the graph, needed when initializing weights multiple times, like in this notebook\n",
"reset_default_graph()\n",
"\n",
"\n",
"# Setting up placeholder, this is where your data enters the graph!\n",
"x_pl = tf.placeholder(tf.float32, [None, num_features])\n",
"\n",
"# Setting up variables, these variables are weights in your network that can be update while running our graph.\n",
"\n",
"# Notice, to make a hidden layer, the weights needs to have the following dimensionality\n",
"# W[number_of_units_going_in, number_of_units_going_out]\n",
"# b[number_of_units_going_out]\n",
"# in the example below we have 2 input units (num_features) and 2 output units (num_output)\n",
"# so our weights become W[2, 2], b[2]\n",
"# if we want to make a hidden layer with 100 units, we need to define the shape of the\n",
"# first weight to W[2, 100], b[2] and the shape of the second weight to W[100, 2], b[2]\n",
"\n",
"# defining our initializer for our weigths from a normal distribution (mean=0, std=0.1)\n",
"weight_initializer = tf.truncated_normal_initializer(stddev=0.1)\n",
"with tf.variable_scope('l_1'): # if you run it more than once, reuse has to be True\n",
" W_1 = tf.get_variable('W', [num_features, num_output],\n",
" W_1 = tf.get_variable('W', [num_features, num_output], # change num_output to 100 for mlp\n",
" initializer=weight_initializer)\n",
" b_1 = tf.get_variable('b', [num_output],\n",
" b_1 = tf.get_variable('b', [num_output], # change num_output to 100 for mlp\n",
" initializer=tf.constant_initializer(0.0))\n",
"# with tf. variable_scope('l_2'):\n",
"# ...\n",
"# W_2 = tf.get_variable('W', [100, num_output],\n",
"# initializer=weight_initializer)\n",
"# b_2 = tf.get_variable('b', [num_output],\n",
"# initializer=tf.constant_initializer(0.0))\n",
"\n",
"# Setting up ops, these ops will define edges along our computational graph\n",
"# The below ops will compute a logistic regression, but can be modified to compute\n",
"# a neural network\n",
"\n",
"l_1 = tf.matmul(x_pl, W_1) + b_1\n",
"# l_1_nonlinear = tf.nn.relu(l1)\n",
"# l_2 = ...\n",
"y = tf.nn.softmax(l_1)"
"# to make a hidden layer we need a nonlinearity\n",
"# l_1_nonlinear = tf.nn.relu(l_1)\n",
"# the layer before the softmax should not have a nonlinearity\n",
"# l_2 = tf.matmul(l_1_nonlinear, W_2) + b_2\n",
"y = tf.nn.softmax(l_1) # change to l_2 for MLP"
]
},
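For reference, assembling the commented pieces above into the 100-unit MLP the comments describe might look like this (a sketch, not the notebook's own solution; `x_pl`, `num_features`, `num_output`, and `weight_initializer` are as defined in the cell above):

```python
num_hidden = 100  # hidden layer size suggested in the comments

with tf.variable_scope('l_1'):
    W_1 = tf.get_variable('W', [num_features, num_hidden],
                          initializer=weight_initializer)
    b_1 = tf.get_variable('b', [num_hidden],
                          initializer=tf.constant_initializer(0.0))
with tf.variable_scope('l_2'):
    W_2 = tf.get_variable('W', [num_hidden, num_output],
                          initializer=weight_initializer)
    b_2 = tf.get_variable('b', [num_output],
                          initializer=tf.constant_initializer(0.0))

l_1 = tf.matmul(x_pl, W_1) + b_1
l_1_nonlinear = tf.nn.relu(l_1)            # the nonlinearity is what makes it a hidden layer
l_2 = tf.matmul(l_1_nonlinear, W_2) + b_2  # no nonlinearity before the softmax
y = tf.nn.softmax(l_2)
```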
{
@@ -266,7 +292,7 @@
},
"outputs": [],
"source": [
"# Defining our optimizer\n",
"# Defining our optimizer (try with different optimizers here!)\n",
"optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)\n",
"\n",
"# Computing our gradients\n",
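The remainder of this cell is not shown in the hunk. For orientation, in the TF 0.x API used here the gradient computation and update step are typically written as follows (a sketch, assuming the notebook defines a loss tensor, here called `cross_entropy`):

```python
# `cross_entropy` is assumed to be the loss tensor defined earlier in the notebook
grads_and_vars = optimizer.compute_gradients(cross_entropy)
train_op = optimizer.apply_gradients(grads_and_vars)

# the two steps can also be fused into one call:
# train_op = optimizer.minimize(cross_entropy)
```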
