
Commit

Tutorial 2: Finished with empty version
phlippe committed Oct 27, 2020
1 parent 17389d1 commit 06e5da7
Showing 2 changed files with 1,386 additions and 10 deletions.
23 changes: 13 additions & 10 deletions docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch.ipynb
@@ -6,14 +6,14 @@
"source": [
"# Tutorial 2: Introduction to PyTorch\n",
"\n",
"![Status](https://img.shields.io/static/v1.svg?label=Status&message=First%20version&color=yellow)\n",
"![Status](https://img.shields.io/static/v1.svg?label=Status&message=Finished&color=green)\n",
"\n",
"**Filled notebook:** \n",
"[![View on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch.ipynb)\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch.ipynb) \n",
"**Empty notebook:** \n",
"[![View on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch.ipynb)\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch.ipynb)"
"[![View on Github](https://img.shields.io/static/v1.svg?logo=github&label=Repo&message=View%20On%20Github&color=lightgrey)](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch_empty.ipynb)\n",
"[![Open In Collab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch_empty.ipynb)"
]
},
{
@@ -630,7 +630,7 @@
"\n",
"$$y = \\frac{1}{|x|}\\sum_i \\left[(x_i + 2)^2 + 3\\right]$$\n",
"\n",
"You could imagine that $x$ are our parameters, and we want to optimize (either maximize or minimize) the output $y$. For this, we want to obtain the gradients $\\partial y / \\partial \\mathbf{x}$. For our example, we'll use $\\mathbf{x}=[1,2,3]$ as our input."
"You could imagine that $x$ are our parameters, and we want to optimize (either maximize or minimize) the output $y$. For this, we want to obtain the gradients $\\partial y / \\partial \\mathbf{x}$. For our example, we'll use $\\mathbf{x}=[0,1,2]$ as our input."
]
},
{
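For reference, the computation described in the cell above can be set up along these lines (a minimal sketch with illustrative variable names, not necessarily the notebook's exact code):

```python
import torch

# Parameters x = [0, 1, 2] with gradient tracking enabled
x = torch.arange(3, dtype=torch.float32, requires_grad=True)

# y = (1/|x|) * sum_i [(x_i + 2)^2 + 3]
y = ((x + 2) ** 2 + 3).mean()
print(y)  # roughly 12.67, with a grad_fn attached because x requires gradients
```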
@@ -704,7 +704,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"`x.grad` will now contain the gradient $\\partial y/ \\partial \\mathcal{x}$, and this gradient indicates how a change in $\\mathbf{x}$ will affect output $y$ given the current input $\\mathbf{x}=[1,2,3]$:"
"`x.grad` will now contain the gradient $\\partial y/ \\partial \\mathcal{x}$, and this gradient indicates how a change in $\\mathbf{x}$ will affect output $y$ given the current input $\\mathbf{x}=[0,1,2]$:"
]
},
{
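Continuing the sketch above, the backward pass is what fills `x.grad` (assuming `x` and `y` were created as in the previous sketch):

```python
# Backpropagate from the scalar output y to the leaf tensor x
y.backward()
print(x.grad)  # tensor([1.3333, 2.0000, 2.6667]) = [4/3, 2, 8/3]
```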
@@ -741,7 +741,7 @@
"\\frac{\\partial y}{\\partial c_i} = \\frac{1}{3}\n",
"$$\n",
"\n",
"Hence, with the input being $\\mathbf{x}=[1,2,3]$, our gradients are $\\partial y/\\partial \\mathbf{x}=[4/3,2,8/3]$. The previous code cell should have printed the same result."
"Hence, with the input being $\\mathbf{x}=[0,1,2]$, our gradients are $\\partial y/\\partial \\mathbf{x}=[4/3,2,8/3]$. The previous code cell should have printed the same result."
]
},
{
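As a sanity check, the chain rule gives $\partial y/\partial x_i = 2(x_i + 2)/3$, which can be compared against the autograd result (again continuing the sketch above):

```python
# Analytical gradient 2 * (x_i + 2) / 3, detached so no graph is built
analytical_grad = 2 * (x.detach() + 2) / 3
print(torch.allclose(x.grad, analytical_grad))  # True
```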
@@ -978,7 +978,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"The forward function is where the computation of the module is taken place. In the init function, we usually create the parameters of the module, using `nn.Parameter`, or defining other modules that are used in the forward function. The backward calculation is done automatically, but could be overwritten as well if wanted.\n",
"The forward function is where the computation of the module is taken place, and is executed when you call the module (`nn = MyModule(); nn(x)`). In the init function, we usually create the parameters of the module, using `nn.Parameter`, or defining other modules that are used in the forward function. The backward calculation is done automatically, but could be overwritten as well if wanted.\n",
"\n",
"#### Simple classifier\n",
"We can now make use of the pre-defined modules in the `torch.nn` package, and define our own small neural network. We will use a minimal network with a input layer, one hidden layer with tanh as activation function, and a output layer. In other words, our networks should look something like this:\n",
@@ -2501,13 +2501,14 @@
" ## Step 3: Calculate the loss\n",
" loss = loss_module(preds, data_labels.float())\n",
" \n",
" ## Step 4+5: Perform backpropagation, and update parameters\n",
" ## Step 4: Perform backpropagation\n",
" # Before calculating the gradients, we need to ensure that they are all zero. \n",
" # The gradients would not be overwritten, but actually added to the existing ones.\n",
" optimizer.zero_grad() \n",
" # Perform backpropagation\n",
" loss.backward()\n",
" # Update the parameters\n",
" \n",
" ## Step 5: Update the parameters\n",
" optimizer.step()"
]
},
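The comment about zeroing gradients refers to the fact that PyTorch accumulates gradients across `backward()` calls rather than replacing them. A tiny stand-alone illustration (independent of the notebook's training loop):

```python
import torch

w = torch.tensor(1.0, requires_grad=True)
(2 * w).backward()
print(w.grad)  # tensor(2.)
(2 * w).backward()
print(w.grad)  # tensor(4.) -- the new gradient was added to the old one
w.grad.zero_()  # optimizer.zero_grad() does this for all registered parameters
```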
@@ -2622,9 +2623,11 @@
"source": [
"# Load state dict from the disk (make sure it is the same name as above)\n",
"state_dict = torch.load(\"our_model.tar\")\n",
"\n",
"# Create a new model and load the state\n",
"new_model = SimpleClassifier(num_inputs=2, num_hidden=4, num_outputs=1)\n",
"new_model.load_state_dict(state_dict)\n",
"\n",
"# Verify that the parameters are the same\n",
"print(\"Original model\\n\", model.state_dict())\n",
"print(\"\\nLoaded model\\n\", new_model.state_dict())"
@@ -3780,7 +3783,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.9"
"version": "3.7.3"
}
},
"nbformat": 4,
