Tutorial 2: Keep compatibility with torch 1.9
phlippe committed Oct 22, 2021
1 parent: f805de5 · commit: 6bc80e7
Showing 1 changed file with 4 additions and 8 deletions.
docs/tutorial_notebooks/tutorial2/Introduction_to_PyTorch.ipynb (12 changes: 4 additions & 8 deletions)
@@ -90,7 +90,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"At the time of writing this tutorial (mid of August 2021), the current stable version is 1.9. You should therefore see the output `Using torch 1.9.0`, eventually with some extension for the CUDA version on Colab. In case you use the `dl2020` environment, you should see `Using torch 1.6.0` since the environment was provided in October 2020. It is recommended to update the PyTorch version to the newest one. If you see a lower version number than 1.6, make sure you have installed the correct the environment, or ask one of your TAs. In case PyTorch 1.10 or newer will be published during the time of the course, don't worry. The interface between PyTorch versions doesn't change too much, and hence all code should also be runnable with newer versions.\n",
"At the time of writing this tutorial (mid of October 2021), the current stable version is 1.10. You should therefore see the output `Using torch 1.10.0` or `Using torch 1.9.0`, eventually with some extension for the CUDA version on Colab. In case you use the `dl2020` environment, you should see `Using torch 1.6.0` since the environment was provided in October 2020. It is recommended to update the PyTorch version to the newest one. If you see a lower version number than 1.6, make sure you have installed the correct the environment, or ask one of your TAs. In case PyTorch 1.11 or newer will be published during the time of the course, don't worry. The interface between PyTorch versions doesn't change too much, and hence all code should also be runnable with newer versions.\n",
"\n",
"As in every machine learning framework, PyTorch provides functions that are stochastic like generating random numbers. However, a very good practice is to setup your code to be reproducible with the exact same random numbers. This is why we set a seed below. "
]
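For reference, a minimal sketch of the version check and seeding this cell describes; the seed value 42 is illustrative and not taken from the notebook:

import torch

# Print the installed PyTorch version; on Colab with CUDA this may read e.g. "1.10.0+cu111"
print("Using torch", torch.__version__)

# Seed the RNGs so stochastic operations (weight init, dropout, ...) are reproducible
torch.manual_seed(42)                # seeds the CPU RNG
if torch.cuda.is_available():
    torch.cuda.manual_seed_all(42)   # seeds the RNG of every visible GPU

Note that full determinism on GPU can additionally require `torch.backends.cudnn.deterministic = True`, at some performance cost.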
@@ -934,10 +934,6 @@
"\n",
"## GPU version\n",
"x = x.to(device)\n",
"# The first operation on a CUDA device can be slow as it has to establish a CPU-GPU communication first. \n",
"# Hence, we run an arbitrary command first without timing it for a fair comparison.\n",
"if torch.cuda.is_available():\n",
" _ = torch.matmul(x*0.0, x)\n",
"# CUDA is asynchronous, so we need to use different timing functions\n",
"start = torch.cuda.Event(enable_timing=True)\n",
"end = torch.cuda.Event(enable_timing=True)\n",
@@ -3648,12 +3644,12 @@
" c1 = torch.Tensor(to_rgba(\"C1\")).to(device)\n",
" x1 = torch.arange(-0.5, 1.5, step=0.01, device=device)\n",
" x2 = torch.arange(-0.5, 1.5, step=0.01, device=device)\n",
" xx1, xx2 = torch.meshgrid(x1, x2, indexing='ij') # Meshgrid function as in numpy\n",
" xx1, xx2 = torch.meshgrid(x1, x2) # Meshgrid function as in numpy\n",
" model_inputs = torch.stack([xx1, xx2], dim=-1)\n",
" preds = model(model_inputs)\n",
" preds = torch.sigmoid(preds)\n",
" output_image = (1 - preds) * c0[None,None] + preds * c1[None,None] # Specifying \"None\" in a dimension creates a new one\n",
" output_image = output_image.cpu().numpy() # Convert to numpy array. This only works for tensors on CPU, hence first push to CPU\n",
" output_image = (1 - preds) * c0[None,None] + preds * c1[None,None] # Specifying \"None\" in a dimension creates a new one\n",
" output_image = output_image.cpu().numpy() # Convert to numpy array. This only works for tensors on CPU, hence first push to CPU\n",
" plt.imshow(output_image, origin='lower', extent=(-0.5, 1.5, -0.5, 1.5))\n",
" plt.grid(False)\n",
" return fig\n",
