This repo will teach you how to:
- Use an LLM (local or via API) through Ollama, and again through LangChain
- Use the Llama 3 8B model
- Build a UI with Gradio
- Use case: "Summarize a YouTube video using Llama 3"
Assuming you have the right Python environment and the other required tools, you can simply run `python main.py`.
This app uses:
- Ollama, to run a local LLM API
- Llama 3 from Meta, to use as the AI brain
- Gradio, to build the UI
- pytube, a Python library for working with YouTube
- LangChain, as the framework for the LLM app
- tiktoken, a library to estimate token counts
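The pieces above can be sketched as a minimal pipeline: fetch a video's caption track with pytube, estimate its size in tokens, and ask a local Llama 3 (served by Ollama, wrapped by LangChain) to summarize it. The function names `summarize_video` and `approx_token_count` are our own illustrations, not necessarily how `main.py` is organized, and the model tag `llama3` assumes you have pulled that model into Ollama.

```python
def approx_token_count(text: str) -> int:
    """Rough token estimate (~4 characters per token).

    A stand-in for tiktoken when an exact count is not needed.
    """
    return max(1, len(text) // 4)


def summarize_video(url: str) -> str:
    """Fetch a video's English captions and summarize them with local Llama 3."""
    # Third-party imports are deferred so the pure helper above stays
    # importable even without pytube/langchain-community installed.
    from pytube import YouTube
    from langchain_community.llms import Ollama

    yt = YouTube(url)
    # Prefer the English caption track; fall back to the title if the
    # video has no captions at all.
    caption = yt.captions.get_by_language_code("en")
    text = caption.generate_srt_captions() if caption else yt.title

    print(f"~{approx_token_count(text)} tokens to summarize")
    llm = Ollama(model="llama3")  # assumes `ollama pull llama3` was run
    return llm.invoke(f"Summarize this YouTube video transcript:\n\n{text}")
```

Call it as `summarize_video("https://www.youtube.com/watch?v=VIDEO_ID")` with a real video URL; the lazy imports keep the token helper usable on its own.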
- To use this notebook smoothly, we recommend creating a Python environment based on the provided `requirements.txt`. This can be done with `pip install -r requirements.txt`.
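The full setup, including pulling the Llama 3 8B weights into Ollama, might look like the following sketch (the virtual-environment step and the `llama3:8b` tag are our assumptions; adjust to your setup):

```shell
# Create and activate a virtual environment (optional but recommended)
python -m venv .venv
source .venv/bin/activate

# Install the Python dependencies
pip install -r requirements.txt

# Pull the Llama 3 8B model so Ollama can serve it locally
ollama pull llama3:8b

# Launch the app
python main.py
```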