
Implement gradient scaling #1901

Merged: jkulhanek merged 9 commits into main from jkulhanek/gradient-scaling on May 12, 2023
Conversation

@jkulhanek (Contributor) commented on May 10, 2023:

This PR implements gradient scaling from the paper *Radiance Field Gradient Scaling for Unbiased Near-Camera Training*.

GradientScaling is added to the nerfacto model and is enabled with the flag `--pipeline.model.use_gradient_scaling True`.

@jkulhanek (Contributor, author) commented:
I am not sure it helps:
[image]

Here are the full runs, please take a look:
https://wandb.ai/kulhanek/nerfstudio-baselines?workspace=user-kulhanek

@tancik (Contributor) commented on May 10, 2023:

Black was updated; use `pip install -e .[dev]` to get the latest version.

Review thread on the gradient-scaling code:

```python
Scale gradients by the ray distance to the pixel
as suggested in the `Radiance Field Gradient Scaling for Unbiased Near-Camera Training` paper.

Note, the scaling is applied on the interval [0, 1] along the ray!
```
Review comment (Contributor):
Add an example of how to call it in the docstring.

```python
"""

@staticmethod
def forward(ctx, field_outputs, ray_samples):  # pylint: disable=arguments-differ
```
Review comment (Contributor):
Can we add typing and docstrings? Or does it break since we are overriding?

@jkulhanek (Contributor, author) replied:
Typing and docstrings are not visible anywhere there. I have added a wrapper function to provide the docstrings and typing support.
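
For readers following the thread, here is a minimal, self-contained sketch of the wrapper-plus-`autograd.Function` pattern being discussed: an identity forward pass whose backward pass scales gradients by `clamp(d**2, 0, 1)`, matching the note above that the scaling is applied on the [0, 1] interval. All names, signatures, and shapes below are illustrative assumptions, not the merged code.

```python
# Illustrative sketch only -- names and signatures are assumptions, not the PR's code.
from typing import Dict

import torch


class _GradientScaler(torch.autograd.Function):
    """Identity in the forward pass; scales incoming gradients in the backward pass."""

    @staticmethod
    def forward(ctx, value, scaling):  # pylint: disable=arguments-differ
        ctx.save_for_backward(scaling)
        return value, scaling

    @staticmethod
    def backward(ctx, grad_value, grad_scaling):
        (scaling,) = ctx.saved_tensors
        # Down-weight gradients of samples close to the camera; far samples pass through.
        return grad_value * scaling, grad_scaling


def scale_gradients_by_distance_squared(
    field_outputs: Dict[str, torch.Tensor], distances: torch.Tensor
) -> Dict[str, torch.Tensor]:
    """Scale the gradient of each field output by clamp(distance**2, 0, 1).

    Example:
        scaled_outputs = scale_gradients_by_distance_squared(field_outputs, sample_distances)
    """
    scaling = torch.square(distances).clamp(0, 1)
    return {key: _GradientScaler.apply(value, scaling)[0] for key, value in field_outputs.items()}
```

The wrapper carries the typing, docstring, and call example while the `autograd.Function` itself stays minimal, which is the split described in the reply above.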

@salykova (Contributor) commented:
@jkulhanek @tancik

In the paper, they set `near_plane` to 0 in all experiments. Does it make sense to do the same here?

@tancik (Contributor) commented on May 10, 2023:

> @jkulhanek @tancik
>
> In the paper, they set `near_plane` to 0 in all experiments. Does it make sense to do the same here?

The near plane can be set with `--pipeline.model.near-plane`.
If we turn gradient scaling on by default, then it makes sense to set the near plane to 0 by default. I'm fine with making this the default, but we should test it on a few more scenes first to make sure it is better than or equal to not using it.

@ichsan2895 commented:
Can't wait for this to be implemented in the next version of Nerfstudio 💯

@ichsan2895 commented:
> I am not sure it helps: [image]
>
> Here are the full runs, please take a look: https://wandb.ai/kulhanek/nerfstudio-baselines?workspace=user-kulhanek

As stated in the paper, the PSNR only changes slightly, but the results are qualitatively better because there are fewer floaters.

@tancik (Contributor) left a review:

LGTM. We should experiment with it over the next few days. If we see better results on average, let's enable it by default.

@ichsan2895 commented on May 12, 2023:

Sorry, why can't I use it?

[image]

[image]

Another attempt:
[image]

@ichsan2895 commented on May 12, 2023:

> GradientScaling is added to the nerfacto model and enabled using the flag `--use-gradient-scaling=True`

This seems to work with `--pipeline.model.use_gradient_scaling True` instead of `--use-gradient-scaling=True`:
[image]

Here are the results. I decreased the value of `num-rays-per-chunk` and other parameters to avoid CUDA OOM.

Without gradient scaling, checkpoint 48K, `num-rays-per-chunk` 16384, `num-rays-per-batch` 2048:

[image: without grad scaling, frame 7]
[image: without grad scaling, frame 0]

With gradient scaling, checkpoint 48K, `num-rays-per-chunk` 16384, `num-rays-per-batch` 2048:

[image: with grad scaling, frame 7]
[image: with grad scaling, frame 0]

@jkulhanek (Contributor, author) commented:
Thanks for testing it on your scene! I guess we can optimize it to fit into memory.

@jkulhanek (Contributor, author) commented:
Does anyone know if gradients in `autograd.Function` can be modified in-place?
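
For context on the in-place question: as far as I know, the conventional `torch.autograd.Function` pattern is to return fresh gradient tensors from `backward` rather than mutate the incoming ones, since those tensors may still be needed elsewhere in the backward pass. A tiny illustrative sketch (not the PR's code):

```python
import torch


class _HalveGradient(torch.autograd.Function):
    """Toy example: identity forward, halved gradient backward."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_out):
        # Return a new tensor; grad_out itself is left untouched.
        return grad_out * 0.5
```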

@jkulhanek jkulhanek merged commit 02f449b into main May 12, 2023
@jkulhanek jkulhanek deleted the jkulhanek/gradient-scaling branch May 12, 2023 05:23
@ichsan2895 commented on May 12, 2023:

> Thanks for testing it on your scene! I guess we can optimize it to fit into memory.

Both tests hit CUDA OOM, so I think the OOM problem is not caused by grad scaling. The CUDA OOM comes from my own potato GPU, which has low VRAM.

@jkulhanek (Contributor, author) commented:
Ok, I guess I will keep the current implementation then.

@ichsan2895 commented:
Here is another test...

Without gradient scaling:

[video: without.GradScal.mp4]

With gradient scaling:

[video: with.GradScal.mp4]
