
# Depth Estimation Example

Depth estimation models are typically used to approximate the relative distance of every pixel in an image from the camera, also known as depth.

## Depth Estimation Models

X-AnyLabeling offers a range of depth estimation models, including Depth Anything V1 and Depth Anything V2 (a minimal inference sketch follows the list below):

- **Depth Anything V1** is a highly practical solution for robust monocular depth estimation, trained on a combination of 1.5M labeled images and 62M+ unlabeled images.
- **Depth Anything V2** significantly outperforms V1 in fine-grained detail and robustness. Compared with SD-based models, V2 offers faster inference, fewer parameters, and higher depth accuracy.
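For readers who want to see what these models produce outside the GUI, here is a minimal, hedged inference sketch using `onnxruntime`. The model file name, the 518×518 input size, and the ImageNet-style normalization follow Depth Anything's published defaults rather than this repository's code, so treat them as assumptions; X-AnyLabeling handles all of this internally when you load the model through the interface.

```python
# Minimal sketch: run a Depth Anything ONNX model on one image.
# Assumptions (not from this repo): model/image file names, 518x518 input size,
# ImageNet mean/std normalization, and a (1, H, W) relative-depth output.
import cv2
import numpy as np
import onnxruntime as ort

MODEL_PATH = "depth_anything_v2_vits.onnx"  # hypothetical file name
IMAGE_PATH = "painting.jpg"                 # hypothetical file name

session = ort.InferenceSession(MODEL_PATH)
input_name = session.get_inputs()[0].name

# Preprocess: BGR -> RGB, resize, scale to [0, 1], normalize, NCHW layout.
bgr = cv2.imread(IMAGE_PATH)
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
resized = cv2.resize(rgb, (518, 518)).astype(np.float32) / 255.0
mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
blob = ((resized - mean) / std).transpose(2, 0, 1)[np.newaxis, ...]

# The output is relative (not metric) depth for every pixel.
depth = session.run(None, {input_name: blob})[0].squeeze()

# Min-max normalize to 0-255 and resize back for a grayscale visualization.
depth = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
gray = (depth * 255.0).astype(np.uint8)
gray = cv2.resize(gray, (bgr.shape[1], bgr.shape[0]))
cv2.imwrite("painting_depth_gray.png", gray)
```

Note that whether larger values mean "closer" or "farther" depends on the particular export (many Depth Anything exports produce inverse depth), so verify the convention before interpreting the raw values.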

*(Demo video: `girl.mov`)*

## Usage

1. Import your image (`Ctrl+I`) or video (`Ctrl+O`) file into X-AnyLabeling.
2. Select and load a Depth-Anything model, or choose another available depth estimation model.
3. Start processing by clicking `Run (i)`. Once you have verified that everything is set up correctly, use the keyboard shortcut `Ctrl+M` to process all images in one go.

Once processing completes, the output is automatically stored in a `depth` subdirectory within the same folder as your original image.
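As a quick sanity check after a batch run, the sketch below pairs each source image with its generated depth map. The output filenames and the image format are assumptions here (the tool writes them automatically), so adjust the extensions to match what actually appears in your `depth` folder.

```python
# Minimal sketch: pair source images with the depth maps written by the tool.
# Assumption (not from this repo): outputs are PNGs named after the source image.
from pathlib import Path

image_dir = Path("my_images")      # hypothetical folder of source images
depth_dir = image_dir / "depth"    # output folder created alongside the images

for image_path in sorted(image_dir.glob("*.jpg")):
    depth_path = depth_dir / f"{image_path.stem}.png"
    status = "ok" if depth_path.exists() else "missing"
    print(f"{image_path.name:<30} -> {depth_path.name:<30} [{status}]")
```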

*(Example results: Source image ("painting") | Depth Anything V1 (Gray) | Depth Anything V2 (Color))*

> [!TIP]
> Two output modes are supported: grayscale and color. You can switch between them by modifying the `render_mode` parameter in the corresponding model configuration file.
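As an illustration, here is a small sketch of how that setting could be toggled programmatically. Only the `render_mode` parameter name comes from the tip above; the configuration file path and the exact accepted value strings (`"gray"` vs. `"color"`) are assumptions, so check your model's actual configuration file before relying on them.

```python
# Minimal sketch: toggle render_mode in a model configuration file.
# Assumptions (not from this repo): the config file name and the value strings.
import yaml

CONFIG_PATH = "depth_anything_v2_vits.yaml"  # hypothetical config file

with open(CONFIG_PATH, "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

# Switch between the two documented output modes.
config["render_mode"] = "gray" if config.get("render_mode") == "color" else "color"

with open(CONFIG_PATH, "w", encoding="utf-8") as f:
    yaml.safe_dump(config, f, sort_keys=False)

print("render_mode is now:", config["render_mode"])
```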