
Code Style: Google

Demo · Doc · Discussion

nutsh_teaser_v3.mp4

nutsh is a platform designed for visual learning through human feedback. With a user-friendly interface and API, it supports a range of visual modalities, diverse human input methods, and learning mechanisms based on human feedback.

The project is currently in its early stages and under active development. Your feedback is highly appreciated.

Main features

  • ✅ Intuitive Interface
    • ✅ Modern frontend design
    • ✅ Easy custom model integration
  • ✅ Multiple Visual Modalities
    • ✅ Images
    • ✅ Videos
    • ⬜ ... (coming)
  • ✅ Diverse Annotations
    • ✅ Masks
    • ✅ Polygons and polylines
    • ✅ Bounding boxes
  • ✅ Versatile Scenarios
    • ✅ Object tracking
    • ✅ Object parts linking
  • ✅ Model-assisted Segmentation
    • ✅ Integration with Segment Anything Model
    • ✅ Positive and negative point prompts
    • ✅ Focus on specific local areas
  • ✅ Learning from Human Feedback
    • ✅ Model fine-tuning based on user feedback

Install

eval "$(curl -sSL nutsh.ai/install)"

Refer to the quick start documentation for a more comprehensive guide.

Quick Overview

welcome.mp4

Diverse Annotations and Versatile Scenarios

Drawing Polygons

In addition to standard polygon operations, Bézier curves are supported, enabling precise annotations of curved objects, such as tires.

polygon_intro.mp4

Drawing masks

Create masks with pixel-level accuracy, ensuring a perfect fit with no gaps or overlaps.

mask_intro.mp4

Tracking Objects

Especially useful when annotating videos, the platform offers an effortless way to track objects across frames.

track_objects.mp4

Linking Object Parts

Objects occluded by others and thus split into fragments can easily be linked to form a single cohesive annotation.

link_parts.mp4

Convenient Shortcuts

The platform incorporates various shortcuts, streamlining the annotation process for enhanced speed and accuracy.

Cloning Slice

For polygons sharing a common edge segment, the shared segment can be cloned from one polygon to the other, ensuring a seamless fit without gaps or overlaps.

slice_clone.mp4

Interpolation

Given two annotations of the same type on both a start and an end frame, heuristic interpolation can automatically generate all intermediate annotations.

polygon_interpolation.mp4
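
In the simplest case, this reduces to linearly blending matching vertices between the two keyframes. The sketch below illustrates that idea only; nutsh's built-in heuristic is more involved, and the function name is ours, not part of the platform.

from typing import List, Tuple

Point = Tuple[float, float]

def interpolate_polygons(start: List[Point], end: List[Point], num_frames: int) -> List[List[Point]]:
    """Return one intermediate polygon per frame between two keyframes."""
    assert len(start) == len(end), "this simple scheme needs matching vertex counts"
    polygons = []
    for i in range(1, num_frames + 1):
        t = i / (num_frames + 1)  # interpolation factor in (0, 1)
        polygons.append([
            ((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
            for (x0, y0), (x1, y1) in zip(start, end)
        ])
    return polygons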

Model-Assisted Segmentation

We offer an API that seamlessly connects the human labeling interface with deep learning models. Our server handles model inference and fine-tuning based on user input, and computation can run on either CPU or GPU.

For instance, our SAM module taps into the Segment Anything Model from Meta AI to speed up segmentation. While the SAM models themselves are openly accessible to the public, their official labeling interface is not. We also offer advanced features, such as local prediction, to ensure top-notch segmentation results. For a detailed guide, please refer to our documentation.
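
To illustrate what happens under the hood, the snippet below drives SAM with positive and negative point prompts through Meta's open-source segment_anything package. nutsh wraps equivalent calls behind its server API, so this is a conceptual sketch rather than the module's actual code; `image` is assumed to be an already-loaded RGB array.

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint and prepare a predictor (ViT-H shown; smaller variants also work).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)
predictor.set_image(image)  # `image`: HxWx3 uint8 RGB array of the frame being annotated

point_coords = np.array([[520, 340], [610, 290]])  # clicked pixel locations
point_labels = np.array([1, 0])                    # 1 = positive prompt, 0 = negative prompt
masks, scores, _ = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,  # return several candidate masks with confidence scores
)
best_mask = masks[np.argmax(scores)]  # pick the highest-scoring candidate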

Global Smart Segmentation

Leverage deep learning models to perform segmentation across entire images, aiding the annotation process.

global_smart_segmentation.mp4

Local Smart Segmentation

Direct the model's attention to a specific region of an image and request segmentation predictions for just that area. Such localized predictions often yield more detailed segmentations.

local_smart_segmentation.mp4
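
Conceptually, local prediction amounts to running the model on a cropped region and pasting the result back into the full frame. The sketch below reuses the predictor from the previous snippet and only illustrates the idea; nutsh performs this on the server.

import numpy as np

x0, y0, x1, y1 = 400, 200, 800, 600  # region of interest in full-image coordinates
crop = image[y0:y1, x0:x1]           # `image` and `predictor` as in the previous snippet

predictor.set_image(crop)
masks, _, _ = predictor.predict(
    point_coords=np.array([[250, 180]]),  # prompt expressed in crop coordinates
    point_labels=np.array([1]),
    multimask_output=False,
)

full_mask = np.zeros(image.shape[:2], dtype=bool)
full_mask[y0:y1, x0:x1] = masks[0]   # place the detailed local mask back into the full frame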

Learning from Human Feedback

In addition to using prompts to refine predictions, users can further adjust the predicted results. By collecting these corrections, you can fine-tune a model to your specific needs. Our SAM module comes equipped with features that make fine-tuning the SAM decoder seamless.

finetune_sam.mp4
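
The core idea is to keep SAM's heavy image encoder frozen and only update the lightweight mask decoder on pairs of prompts and user-corrected masks. The rough PyTorch sketch below shows that pattern; the dataset variable is a placeholder, and nutsh's SAM module handles the actual data collection and training loop for you.

import torch
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

# Freeze the encoders; only the mask decoder will be updated.
for p in sam.image_encoder.parameters():
    p.requires_grad = False
for p in sam.prompt_encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(sam.mask_decoder.parameters(), lr=1e-4)
loss_fn = torch.nn.BCEWithLogitsLoss()

# `corrected_mask_dataset` is a placeholder yielding precomputed image embeddings,
# prompt embeddings, and user-corrected masks resized to the decoder's output resolution.
for image_embedding, sparse_emb, dense_emb, corrected_mask in corrected_mask_dataset:
    low_res_logits, _ = sam.mask_decoder(
        image_embeddings=image_embedding,
        image_pe=sam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse_emb,
        dense_prompt_embeddings=dense_emb,
        multimask_output=False,
    )
    loss = loss_fn(low_res_logits, corrected_mask)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()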

Custom Model Integration

nutsh offers a Python SDK, enabling quick and easy integration of your custom Python models into the platform. Our gRPC interfaces are also available for a lower-level but more flexible approach. As an example, check the auto-tracking feature for videos to see how integrated tracking models can enhance your annotation workflows.

nutsh_track-.mp4
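
As a rough picture of what such an integration looks like, a custom tracker boils down to a function that takes video frames plus a mask on the first frame and returns one mask per frame. The names below are hypothetical and not the SDK's actual interface, which is described in the documentation.

from typing import List
import numpy as np

def track_mask(frames: List[np.ndarray], first_mask: np.ndarray) -> List[np.ndarray]:
    """Hypothetical tracking callback: given RGB frames and a boolean mask on the
    first frame, return one mask per frame. A real integration would call your
    tracking model here; this placeholder simply propagates the first mask."""
    return [first_mask.copy() for _ in frames]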

Next Steps

Consult the documentation for further information, including:

and more!

Citation

If you find the platform useful for your research projects, please cite:

@misc{nutsh,
  title = {nutsh: A Platform for Visual Learning from Human Feedback},
  author = {Xu Han and Fisher Yu},
  howpublished = {\url{https://github.com/SysCV/nutsh}},
  year = {2023}
}