Initial version of the workshop.
wilderrodrigues committed May 22, 2018
1 parent df8b41a commit 38b07ca
Showing 33 changed files with 24,475 additions and 5 deletions.
20 changes: 15 additions & 5 deletions .gitignore
@@ -8,6 +8,8 @@ __pycache__/

# Distribution / packaging
.Python

env/
build/
develop-eggs/
dist/
@@ -80,15 +82,13 @@ celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
# dotenv
.env

# virtualenv
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
@@ -102,3 +102,13 @@ venv.bak/

# mypy
.mypy_cache/

**/MNIST_data
**/logs

**/checkpoint
**/mnist_intermediate*
**/.DS_Store
**/17flowers
**/model_output
*.h5
15 changes: 15 additions & 0 deletions Dockerfile
@@ -0,0 +1,15 @@
FROM jupyter/scipy-notebook

MAINTAINER Wilder Rodrigues ([email protected])

USER $NB_USER

# Install TensorFlow
RUN conda install -c conda-forge tensorflow -y && \
conda install -c conda-forge numpy keras nltk gensim -y

# Install Reinforcement Learning packages:
RUN pip install gym==0.9.4

# Install Keras Contrib
RUN pip install git+https://www.github.com/keras-team/keras-contrib.git
17 changes: 17 additions & 0 deletions Dockerfile-gpu
@@ -0,0 +1,17 @@
FROM ekholabs/nvidia-cuda
FROM jupyter/scipy-notebook

MAINTAINER Wilder Rodrigues <[email protected]>

USER $NB_USER

RUN conda install -c conda-forge tensorflow-gpu -y && \
conda install -c conda-forge numpy keras nltk -y

# Install Reinforcement Learning packages:
RUN pip install gym==0.9.4

# Install Keras Contrib
RUN pip install git+https://www.github.com/keras-team/keras-contrib.git

EXPOSE 8888
37 changes: 37 additions & 0 deletions README.md
@@ -0,0 +1,37 @@
# Workshop

The examples offer a straightforward progression, from shallow networks to intermediate, deep, and convolutional (CNN) networks. They also show how a trained model can be tested and evaluated.

## Build Docker Image

After cloning this repository, please execute the command below to build the Docker image.

```
docker build -t ekholabs/deeplearning-stack .
```

## Run Docker Container

Once you have built the image, please execute the command below to run the container.

```
docker run -v [path_to_project]/workshop:/home/jovyan/work --rm -p 8888:8888 ekholabs/deeplearning-stack
```

* Remark: 'jovyan' is the default user in the Jupyter Docker images.
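If you built the GPU image from `Dockerfile-gpu` instead, a sketch of the equivalent build and run commands is below. The `:gpu` tag is an assumption for illustration, and running the container requires the NVIDIA container runtime (nvidia-docker) on the host.

```shell
# Build the GPU variant (the :gpu tag is illustrative, not from the repository)
docker build -f Dockerfile-gpu -t ekholabs/deeplearning-stack:gpu .

# Run with GPU access; requires the NVIDIA container runtime on the host
docker run --runtime=nvidia -v [path_to_project]/workshop:/home/jovyan/work --rm -p 8888:8888 ekholabs/deeplearning-stack:gpu
```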

## Jupyter Notebooks

After starting the Docker container, copy the Jupyter notebook URL and start working.

* Remark: if you run into resource problems (e.g. kernels dying), increase the memory allocated to your Docker Engine. I tested the notebooks on a MacBook Pro with 16 GB of RAM, with 5 GB dedicated to the Docker Engine.

## TensorBoard

If you want to visualise the loss and accuracy metrics, just run TensorBoard pointing at your logs directory:

```
tensorboard --logdir [path_to_project]/DLinK/notebooks/logs
```

* Remark: the 'logs' directory is not part of the repository. It has to be created under the 'notebooks' directory. All the Jupyter notebooks are already configured to use 'notebooks/logs' for the TensorBoard files.
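Assuming the repository root as the working directory, the missing directory can be created like this:

```shell
# Create the TensorBoard log directory expected by the notebooks
mkdir -p notebooks/logs
```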
197 changes: 197 additions & 0 deletions notebooks/keras/alexnet-in-keras.ipynb
@@ -0,0 +1,197 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## AlexNet"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Oxford Flowers classification"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Set the seed"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"np.random.seed(42)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Load dependencies"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import keras\n",
"from keras.models import Sequential\n",
"from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D\n",
"from keras.layers.normalization import BatchNormalization\n",
"from keras.callbacks import TensorBoard"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Load and Process the data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import tflearn.datasets.oxflower17 as oxflower17\n",
"X, Y = oxflower17.load_data(one_hot = True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Design Neural Network Architecture"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model = Sequential()\n",
"\n",
"# Kernel size is 3x3\n",
"# Default strides = (1, 1)\n",
"model.add(Conv2D(64, 3, activation = \"relu\", input_shape = (224, 224, 3)))\n",
"model.add(Conv2D(64, 3, activation = \"relu\"))\n",
"model.add(MaxPooling2D((2, 2)))\n",
"model.add(BatchNormalization())\n",
"\n",
"model.add(Conv2D(128, 3, activation = \"relu\"))\n",
"model.add(Conv2D(128, 3, activation = \"relu\"))\n",
"model.add(MaxPooling2D((2, 2)))\n",
"model.add(BatchNormalization())\n",
" \n",
"model.add(Conv2D(256, 3, activation = \"relu\"))\n",
"model.add(Conv2D(256, 3, activation = \"relu\"))\n",
"model.add(Conv2D(256, 3, activation = \"relu\"))\n",
"model.add(MaxPooling2D((2, 2)))\n",
"model.add(BatchNormalization())\n",
"\n",
"model.add(Conv2D(512, 3, activation = \"relu\"))\n",
"model.add(Conv2D(512, 3, activation = \"relu\"))\n",
"model.add(Conv2D(512, 3, activation = \"relu\"))\n",
"model.add(MaxPooling2D((2, 2)))\n",
"model.add(BatchNormalization())\n",
"\n",
"model.add(Conv2D(512, 3, activation = \"relu\"))\n",
"model.add(Conv2D(512, 3, activation = \"relu\"))\n",
"model.add(Conv2D(512, 3, activation = \"relu\"))\n",
"model.add(MaxPooling2D((2, 2)))\n",
"model.add(BatchNormalization())\n",
"\n",
"model.add(Flatten())\n",
"model.add(Dense(4096, activation = \"tanh\"))\n",
"model.add(Dropout(0.5))\n",
"model.add(Dense(4096, activation = \"tanh\"))\n",
"model.add(Dropout(0.5))\n",
"\n",
"model.add(Dense(17, activation = \"softmax\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.summary()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.compile(loss = \"categorical_crossentropy\", optimizer = \"adam\", metrics = [\"accuracy\"])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### TensorBoard"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"tensorboard = TensorBoard(\"logs/alexnet\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.fit(X, Y, batch_size = 64, epochs = 1, verbose = 1, validation_split = 0.1, shuffle = True)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.3"
}
},
"nbformat": 4,
"nbformat_minor": 2
}