Tutorial 8: Fixing typo
phlippe committed Feb 21, 2023
1 parent 03d8e98 commit 5f8a7c9
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/tutorial_notebooks/tutorial8/Deep_Energy_Models.ipynb
@@ -140,7 +140,7 @@
 "1. The probability distribution needs to assign any possible value of $\\mathbf{x}$ a non-negative value: $p(\\mathbf{x}) \\geq 0$.\n",
 "2. The probability density must sum/integrate to 1 over **all** possible inputs: $\\int_{\\mathbf{x}} p(\\mathbf{x}) d\\mathbf{x} = 1$. \n",
 "\n",
-"Luckily, there are actually many approaches for this, and one of them are energy-based models. The fundamental idea of energy-based models is that you can turn any function that predicts values larger than zero into a probability distribution by dviding by its volume. Imagine we have a neural network, which has as output a single neuron, like in regression. We can call this network $E_{\\theta}(\\mathbf{x})$, where $\\theta$ are our parameters of the network, and $\\mathbf{x}$ the input data (e.g. an image). The output of $E_{\\theta}$ is a scalar value between $-\\infty$ and $\\infty$. Now, we can use basic probability theory to *normalize* the scores of all possible inputs:\n",
+"Luckily, there are actually many approaches for this, and one of them are energy-based models. The fundamental idea of energy-based models is that you can turn any function that predicts values larger than zero into a probability distribution by dividing by its volume. Imagine we have a neural network, which has as output a single neuron, like in regression. We can call this network $E_{\\theta}(\\mathbf{x})$, where $\\theta$ are our parameters of the network, and $\\mathbf{x}$ the input data (e.g. an image). The output of $E_{\\theta}$ is a scalar value between $-\\infty$ and $\\infty$. Now, we can use basic probability theory to *normalize* the scores of all possible inputs:\n",
 "\n",
 "$$\n",
 "q_{\\theta}(\\mathbf{x}) = \\frac{\\exp\\left(-E_{\\theta}(\\mathbf{x})\\right)}{Z_{\\theta}} \\hspace{5mm}\\text{where}\\hspace{5mm}\n",
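As an aside for readers of the diff: the normalization step that the corrected sentence describes can be illustrated in a few lines of PyTorch, the framework this notebook uses. The sketch below is not code from the notebook; the toy network `ToyEnergyNet` and the 1D input grid are assumptions chosen so that $Z_{\theta}$ can actually be computed by numerical integration. In high-dimensional settings the integral is intractable, which is why energy-based models need approximate training methods.

```python
import torch
import torch.nn as nn

torch.manual_seed(42)  # reproducible toy example

# Hypothetical toy energy network: a 1D input is mapped to a single
# scalar output E_theta(x), which can be any real number.
class ToyEnergyNet(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # shape (N,), unconstrained energies

energy_net = ToyEnergyNet()

# exp(-E_theta(x)) is always positive, so it is a valid unnormalized density.
x = torch.linspace(-5.0, 5.0, 1001).unsqueeze(-1)  # 1D grid of inputs
unnorm = torch.exp(-energy_net(x))

# Z_theta = integral of exp(-E_theta(x)) dx, approximated on the grid with
# the trapezoidal rule. Feasible here only because x is one-dimensional.
Z = torch.trapezoid(unnorm, x.squeeze(-1))
q = unnorm / Z  # q_theta(x): non-negative and integrates to ~1

print(f"Z_theta ~ {Z.item():.4f}, "
      f"integral of q ~ {torch.trapezoid(q, x.squeeze(-1)).item():.4f}")
```

With random weights the shape of $q_{\theta}$ is arbitrary; the point is only that dividing $\exp(-E_{\theta}(\mathbf{x}))$ by $Z_{\theta}$ yields a function that is non-negative and integrates to one, satisfying both conditions listed in the changed cell.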
