[Fix] point cloud export broken in Dockerfile (#1148)
* Added Dockerfile to build a Docker image including all dependencies and nerfstudio; added a description of how to use Docker.

* Move docker description to Installation.md.

* Fix mask_filename indices after splitting train set.

* Rebase Dockerfile onto a newer base image for updated libraries.

* Update docker container documentation.

Co-authored-by: Nicolas Zunhammer <[email protected]>
Zunhammer and Nicolas Zunhammer committed Dec 20, 2022
1 parent af5953e commit 4a51382
Showing 2 changed files with 21 additions and 16 deletions.
26 changes: 14 additions & 12 deletions Dockerfile
@@ -1,5 +1,5 @@
# Define base image.
FROM nvidia/cudagl:11.3.1-devel
FROM nvidia/cuda:11.7.1-devel-ubuntu22.04

# Set environment variables.
## Set non-interactive to prevent asking for user inputs blocking image creation.
@@ -11,7 +11,7 @@ ENV TCNN_CUDA_ARCHITECTURES=86
## CUDA Home, required to find CUDA in some packages.
ENV CUDA_HOME="/usr/local/cuda"

# Install required apt packages.
# Install required apt packages and clear cache afterwards.
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential \
@@ -36,10 +36,11 @@ RUN apt-get update && \
libsuitesparse-dev \
nano \
protobuf-compiler \
python3.8-dev \
python3.10-dev \
python3-pip \
qtbase5-dev \
wget
wget && \
rm -rf /var/lib/apt/lists/*

# Install GLOG (required by ceres).
RUN git clone --branch v0.6.0 https://github.com/google/glog.git --single-branch && \
@@ -50,7 +51,7 @@ RUN git clone --branch v0.6.0 https://github.com/google/glog.git --single-branch
make -j && \
make install && \
cd ../.. && \
rm -r glog
rm -rf glog
# Add glog path to LD_LIBRARY_PATH.
ENV LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/lib"

@@ -64,18 +65,19 @@ RUN git clone --branch 2.1.0 https://ceres-solver.googlesource.com/ceres-solver.
make -j && \
make install && \
cd ../.. && \
rm -r ceres-solver
rm -rf ceres-solver

# Install colmap.
RUN git clone --branch 3.7 https://github.com/colmap/colmap.git --single-branch && \
cd colmap && \
mkdir build && \
cd build && \
cmake .. && \
cmake .. -DCUDA_ENABLED=ON \
-DCUDA_NVCC_FLAGS="--std c++14" && \
make -j && \
make install && \
cd ../.. && \
rm -r colmap
rm -rf colmap

# Create non root user and setup environment.
RUN useradd -m -d /home/user -u 1000 user
@@ -89,11 +91,11 @@ ENV PATH="${PATH}:/home/user/.local/bin"
SHELL ["/bin/bash", "-c"]

# Upgrade pip and install packages.
RUN python3.8 -m pip install --upgrade pip setuptools pathtools promise
RUN python3.10 -m pip install --upgrade pip setuptools pathtools promise
# Install pytorch and submodules.
RUN python3.8 -m pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu113
RUN python3.10 -m pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 --extra-index-url https://download.pytorch.org/whl/cu116
# Install tiny-cuda-nn.
RUN python3.8 -m pip install git+https://github.com/NVlabs/tiny-cuda-nn.git#subdirectory=bindings/torch
RUN python3.10 -m pip install git+https://github.com/NVlabs/tiny-cuda-nn.git#subdirectory=bindings/torch

# Copy nerfstudio folder and give ownership to user.
ADD . /home/user/nerfstudio
@@ -103,7 +105,7 @@ USER 1000:1000

# Install nerfstudio dependencies.
RUN cd nerfstudio && \
python3.8 -m pip install -e . && \
python3.10 -m pip install -e . && \
cd ..

# Change working directory
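For reference, an image built from the Dockerfile above can be produced with a plain `docker build`; a minimal sketch, assuming it is run from the repository root and that the tag `nerfstudio:local` is an arbitrary local choice, not something defined by this commit:

```bash
# Build the image defined by the Dockerfile above (run from the repository root).
docker build -t nerfstudio:local .
```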
11 changes: 7 additions & 4 deletions docs/quickstart/installation.md
@@ -66,7 +66,7 @@ pip install -e .[docs]
## Use docker image
Instead of installing and compiling prerequisites, setting up the environment, and installing dependencies, a ready-to-use Docker image is provided. \
### Prerequisites
Docker ([get docker](https://docs.docker.com/get-docker/)) and nvidia GPU drivers ([get nvidia drivers](https://www.nvidia.de/Download/index.aspx?lang=de)), capable of working with CUDA 11.3, must be installed.
Docker ([get docker](https://docs.docker.com/get-docker/)) and nvidia GPU drivers ([get nvidia drivers](https://www.nvidia.de/Download/index.aspx?lang=de)), capable of working with CUDA 11.7, must be installed.
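A quick way to confirm the driver side is `nvidia-smi` on the host; a hedged check (the reported version is only an example and depends on your installation):

```bash
# The "CUDA Version" shown in the nvidia-smi header is the highest CUDA version
# the installed driver supports; it must be at least 11.7 for this image.
nvidia-smi
```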
The docker image can then either be pulled from [here](https://hub.docker.com/r/dromni/nerfstudio/tags) (replace <version> with the actual version, e.g. 0.1.10)
```bash
docker pull dromni/nerfstudio:<version>
@@ -89,15 +89,18 @@ docker run --gpus all \ # Give the conta
### Call nerfstudio commands directly
Alternatively, the container can be used directly by appending the nerfstudio command to the end of the `docker run` call.
```bash
docker run --gpus all -v /folder/of/your/data:/workspace/ -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ -p 7007:7007 --rm # Parameters.
docker run --gpus all -v /folder/of/your/data:/workspace/ -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ -p 7007:7007 --rm -it # Parameters.
nerfstudio \ # Docker image name
ns-process-data video --data /workspace/video.mp4 # Sample nerfstudio command.
```
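A follow-up training call can be issued the same way; this is a sketch only, where the image name `dromni/nerfstudio:<version>` follows the pull example above, and the processed-data path `/workspace/processed` is an assumption rather than something this commit prescribes:

```bash
docker run --gpus all \
    -v /folder/of/your/data:/workspace/ \
    -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ \
    -p 7007:7007 --rm -it \
    dromni/nerfstudio:<version> \
    ns-train nerfacto --data /workspace/processed  # train on previously processed data
```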
### Note
- The container works on Linux and Windows; depending on your OS, some additional setup steps might be required to provide GPU access inside containers.
- Paths on Windows use a backslash '\\' while Unix-based systems use a forward slash '/'; backslashes might require an escape character depending on where they are used (e.g. C:\\\\folder1\\\\folder2...). Make sure to use the correct paths when mounting folders or passing paths as parameters.
- Everything inside the container that is not in a mounted folder (workspace in the above example) is permanently removed when the container is destroyed. Always keep your work and output folders inside the mounted workspace!
- The user inside the container is called user and is mapped to the local user with ID 1000 (usually the first non-root user on Linux systems).
- The container currently is based on nvidia:cudagl-11.3.1, consequently it comes with CUDA 11.3 which must be supported by the nvidia driver. No local CUDA installation is required or will be affected by using the docker image.
- The user inside the container is called 'user' and is mapped to the local user with ID 1000 (usually the first non-root user on Linux systems).
- The container is currently based on nvidia/cuda:11.7.1-devel-ubuntu22.04 and therefore ships CUDA 11.7, which must be supported by the NVIDIA driver. No local CUDA installation is required or affected by using the docker image.
- The docker image (based on Ubuntu 22.04) comes with Python 3.10; no older version of Python is installed.
- If you call the container with commands directly, you might still want to add the interactive terminal ('-it') flag to get live log output from the nerfstudio scripts. If the container is used in an automated environment, the flag should be omitted.
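To verify that GPU access is wired up correctly (the first note above), the container can simply be asked to run `nvidia-smi`; a minimal sketch, assuming the NVIDIA Container Toolkit is installed on the host and `<version>` is replaced as before:

```bash
# If the GPU is visible inside the container, nvidia-smi prints the usual device table.
docker run --gpus all --rm -it dromni/nerfstudio:<version> nvidia-smi
```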


## Installation FAQ
