Reorganize documentation (#631)
restructure_docs
tancik committed Oct 3, 2022
1 parent 5a76b95 commit d7c34ca
Showing 68 changed files with 87 additions and 148 deletions.
7 files renamed without changes.
@@ -195,17 +195,13 @@
"\n",
"```bash\n",
"python scripts/train.py --config-name vanilla_nerf\n",
"```\n",
"\n",
"And now you have a brand-new NeRF in training! For testing and visualizing, simply refer to steps 4-5 in the [quickstart guide](https://github.com/plenoptix/nerfstudio#quickstart).\n",
"\n",
"To help you get started, we also provide additional training tools such as profiling, logging, and debugging. Please refer to our [features guide](../../tooling/index.rst)."
"```"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.8.13 ('nerfactory')",
"display_name": "Python 3.8.12 ('nerfactory')",
"language": "python",
"name": "python3"
},
@@ -219,12 +215,12 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.13"
"version": "3.8.12"
},
"orig_nbformat": 4,
"vscode": {
"interpreter": {
"hash": "b7b86c17c0dfd6b313529972d424f2e075cc198a46cda005757126afe8133c7b"
"hash": "34c28001ff35fb390494047002768a8182dcf55b1b11415165e62ea61557ab83"
}
}
},
8 files renamed without changes.
15 changes: 0 additions & 15 deletions docs/guides/index.rst

This file was deleted.

24 changes: 15 additions & 9 deletions docs/index.rst
@@ -36,30 +36,36 @@ Contents
:hidden:
:caption: Getting Started

Quickstart<tutorials/quickstart_index>
tutorials/data/index
tutorials/pipelines/index
tutorials/viewer/index
quickstart/installation
quickstart/first_nerf
quickstart/custom_dataset
quickstart/viewer_quickstart
Contributing<reference/contributing.md>

.. toctree::
:hidden:
:caption: Guides
:caption: NeRFology

guides/index
nerfology/models/index
nerfology/model_components/index
nerfology/dataloader_components/index

.. toctree::
:hidden:
:caption: Tooling
:caption: Developer Guides

tooling/index
developer_guides/pipelines/index
developer_guides/viewer/index
developer_guides/config
developer_guides/logging_profiling
developer_guides/benchmarking

.. toctree::
:hidden:
:caption: Reference

reference/cli/index
reference/api/index
Contributing<reference/contributing.md>



9 files renamed without changes.
@@ -1,4 +1,4 @@
# Using custom data
# Using Custom Data

Training models on existing datasets is only so much fun. If you would like to train on self-captured data, you will need to process the data into an existing format. Specifically, we need to know the camera pose for each image. [COLMAP](https://github.com/colmap/colmap) is a standard tool for extracting poses. It is also possible to use other methods, like [SLAM](https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping) or hardware-recorded poses. We intend to add documentation for these other methods in the future.
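The COLMAP pose-extraction step mentioned above can be sketched with COLMAP's command-line pipeline. This is a minimal sketch, not nerfstudio's official workflow, and all paths (`data/images`, `data/colmap`) are illustrative assumptions; it assumes the `colmap` binary is installed and on your `PATH`:

```bash
# Minimal COLMAP sparse reconstruction to recover per-image camera poses.
# Paths are illustrative; point --image_path at your own captured images.
mkdir -p data/colmap/sparse
colmap feature_extractor --database_path data/colmap/database.db --image_path data/images
colmap exhaustive_matcher --database_path data/colmap/database.db
colmap mapper --database_path data/colmap/database.db --image_path data/images --output_path data/colmap/sparse
```

The resulting `sparse/0` directory holds the estimated intrinsics and per-image poses, which can then be converted into a supported dataset format.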

75 changes: 10 additions & 65 deletions docs/tutorials/quickstart.md → docs/quickstart/first_nerf.md
@@ -1,63 +1,8 @@
# Installation
# Training First Model

### Create environment
## Downloading data

We recommend using conda to manage dependencies. Make sure to install [Conda](https://docs.conda.io/en/latest/miniconda.html) before proceeding.

```bash
conda create --name nerfstudio -y python=3.8.13
conda activate nerfstudio
python -m pip install --upgrade pip

```

### Dependencies

Install PyTorch with CUDA (this repo has been tested with CUDA 11.3) and [tiny-cuda-nn](https://github.com/NVlabs/tiny-cuda-nn):

```bash
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

```

### Installing nerfstudio

Easy option:

```bash
pip install nerfstudio
```

If you want the latest and greatest:

```bash
git clone [email protected]:plenoptix/nerfstudio.git
cd nerfstudio
pip install -e .

```

### Optional Installs

#### Tab completion (bash & zsh)

This needs to be rerun when the CLI changes, for example if nerfstudio is updated.

```bash
ns-install-cli
```

### Development packages

```bash
pip install -e .[dev]
pip install -e .[docs]
```

# Downloading data

Download the original NeRF Blender dataset. We support the major datasets and allow users to create their own dataset, described in detail [here](https://docs.nerf.studio/en/latest/tutorials/data/index.html).
Download the original NeRF Blender dataset. We support the major datasets and allow users to create their own dataset, described in detail [here TODO].

```
ns-download-data --dataset=blender
@@ -78,7 +23,7 @@ Use `--help` to view all currently available datasets. The resulting script shou
...
```

# Training a model
## Training a model

To run with all the defaults, e.g. the vanilla NeRF method with the Blender lego images:

@@ -100,27 +45,27 @@ Run with nerfstudio data. You may have to change the ports, and be sure to fo
ns-train nerfacto --vis viewer --viewer.zmq-port 8001 --viewer.websocket-port 8002 nerfstudio-data --pipeline.datamanager.dataparser.data-directory data/nerfstudio/poster --pipeline.datamanager.dataparser.downscale-factor 4
```

# Visualizing training runs
## Visualizing training runs

If you are using a fast NeRF variant (i.e., Instant-NGP), we recommend using our viewer. See our [viewer docs](../tutorials/viewer/viewer_quickstart.md) for more details. The viewer allows interactive visualization of training in real time.
If you are using a fast NeRF variant (i.e., Instant-NGP), we recommend using our viewer. See our [viewer docs](viewer_quickstart.md) for more details. The viewer allows interactive visualization of training in real time.

Additionally, if you run everything with the default configuration, we use [TensorBoard](https://www.tensorflow.org/tensorboard) to log all training curves, test images, and other stats. Once the job is launched, you can track training by launching TensorBoard on the log directory `outputs/blender_lego/vanilla_nerf/<timestamp>/<events.tfevents>`.

```bash
tensorboard --logdir outputs
```

# Rendering a Trajectory
## Rendering a Trajectory

To evaluate the trained NeRF, we provide an evaluation script that allows you to do benchmarking (see our [benchmarking workflow](../tooling/benchmarking.md)) or to render out the scene with a custom trajectory and save the output to a video.
To evaluate the trained NeRF, we provide an evaluation script that allows you to do benchmarking (see our [benchmarking workflow](../developer_guides/benchmarking.md)) or to render out the scene with a custom trajectory and save the output to a video.

```bash
ns-eval render-trajectory --load-config=outputs/blender_lego/instant_ngp/2022-07-07_230905/config.yml --traj=spiral --output-path=output.mp4
```

Please note, this quickstart allows you to perform everything in a headless manner. We also provide a web-based viewer that allows you to easily monitor training or render out trajectories. See our [viewer docs](../tutorials/viewer/viewer_quickstart.md) for more.
Please note, this quickstart allows you to perform everything in a headless manner. We also provide a web-based viewer that allows you to easily monitor training or render out trajectories. See our [viewer docs](viewer_quickstart.md) for more.

# FAQ
## FAQ

- [TinyCUDA installation errors out with cuda mismatch](tiny-cuda-error)

56 changes: 56 additions & 0 deletions docs/quickstart/installation.md
@@ -0,0 +1,56 @@
# Installation

### Create environment

We recommend using conda to manage dependencies. Make sure to install [Conda](https://docs.conda.io/en/latest/miniconda.html) before proceeding.

```bash
conda create --name nerfstudio -y python=3.8.13
conda activate nerfstudio
python -m pip install --upgrade pip

```

### Dependencies

Install PyTorch with CUDA (this repo has been tested with CUDA 11.3) and [tiny-cuda-nn](https://github.com/NVlabs/tiny-cuda-nn):

```bash
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 -f https://download.pytorch.org/whl/torch_stable.html
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

```

### Installing nerfstudio

Easy option:

```bash
pip install nerfstudio
```

If you want the latest and greatest:

```bash
git clone [email protected]:plenoptix/nerfstudio.git
cd nerfstudio
pip install -e .

```

### Optional Installs

#### Tab completion (bash & zsh)

This needs to be rerun when the CLI changes, for example if nerfstudio is updated.

```bash
ns-install-cli
```

### Development packages

```bash
pip install -e .[dev]
pip install -e .[docs]
```
1 change: 1 addition & 0 deletions docs/quickstart/viewer_quickstart.md
@@ -0,0 +1 @@
# Setting up Viewer
8 changes: 0 additions & 8 deletions docs/tooling/index.rst

This file was deleted.

7 changes: 0 additions & 7 deletions docs/tutorials/data/index.rst

This file was deleted.

Diff not rendered.
35 changes: 0 additions & 35 deletions docs/tutorials/quickstart_index.rst

This file was deleted.
