
# Auto-GPT-HF-Model-Plugin

An attempt to replace the chat completion in Auto-GPT with a model hosted on Hugging Face.

Currently very experimental.

In this case I'm playing with stablelm-tuned-alpha-3b, but in theory you can pick any model on Hugging Face.

You might need to change the prompt separators based on the model.
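
For example, the stablelm-tuned-alpha models are trained with special `<|SYSTEM|>`, `<|USER|>` and `<|ASSISTANT|>` tokens as separators. A minimal sketch of what adapting the separators might look like (the `build_prompt` helper here is illustrative, not part of the plugin):

```python
# Illustrative helper: wrap an exchange in the separator tokens that
# stablelm-tuned-alpha models expect. Other models use different separators
# (e.g. "### Instruction:" / "### Response:" for Alpaca-style models).
def build_prompt(system: str, user: str) -> str:
    return f"<|SYSTEM|>{system}<|USER|>{user}<|ASSISTANT|>"
```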

Same as with Auto-GPT itself: copy `.env.template` to `.env` (and duplicate the settings in your actual Auto-GPT `.env`), then run `pip install -r requirements.txt`.
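
That is roughly:

```shell
cp .env.template .env   # then copy the same settings into your Auto-GPT .env
pip install -r requirements.txt
```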

There are three classes included (you have to change which one is used in the code at the top of `__init__.py`). It's currently set up to use the hosted inference one (option 2); a rough sketch of all three approaches follows the list.

1. one using the (free) Inference API on Hugging Face: https://huggingface.co/stabilityai/stablelm-base-alpha-3b

2. one using a hosted Inference Endpoint on Hugging Face (you need to spin up an instance yourself): https://ui.endpoints.huggingface.co/endpoints

3. one using the model locally (downloads the weights and runs them with CUDA; this is just their sample code and doesn't work on my machine yet due to a CUDA issue)
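
A rough sketch of what the three approaches boil down to (the function names and the `HUGGINGFACE_API_TOKEN` environment variable are illustrative, not the plugin's actual code):

```python
import os

import requests

HEADERS = {"Authorization": f"Bearer {os.environ['HUGGINGFACE_API_TOKEN']}"}  # illustrative name
MODEL = "stabilityai/stablelm-tuned-alpha-3b"

# 1. Free hosted Inference API: POST the prompt to the public model URL.
def query_free_api(prompt: str) -> str:
    url = f"https://api-inference.huggingface.co/models/{MODEL}"
    resp = requests.post(url, headers=HEADERS, json={"inputs": prompt})
    resp.raise_for_status()
    return resp.json()[0]["generated_text"]

# 2. Dedicated Inference Endpoint: same request shape, but against the URL
#    of an instance spun up at https://ui.endpoints.huggingface.co/endpoints
def query_endpoint(prompt: str, endpoint_url: str) -> str:
    resp = requests.post(endpoint_url, headers=HEADERS, json={"inputs": prompt})
    resp.raise_for_status()
    return resp.json()[0]["generated_text"]

# 3. Local model: download the weights with transformers and generate on CUDA.
def query_local(prompt: str) -> str:
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL).to("cuda")
    inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```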

You can also run `pytest` to see the limited testing I've done so far.

