Stars
🦜🔗 Build context-aware reasoning applications
TensorFlow Tutorial and Examples for Beginners (support TF v1 & v2)
CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image
A guidance language for controlling large language models.
A collection of various deep learning architectures, models, and tips
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
A high performance implementation of HDBSCAN clustering.
Making protein folding accessible to all!
ICCV 2021: Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
ProtTrans provides state-of-the-art pretrained language models for proteins. ProtTrans was trained on thousands of GPUs from Summit and hundreds of Google TPUs using Transformer models.
This repository holds all the code for the site http://www.adventuresinmachinelearning.com
OpenAI CLIP text encoders for multiple languages!
BioWordVec & BioSentVec: pre-trained embeddings for biomedical words and sentences
Area-weighted Venn diagrams for Python/matplotlib
Various CV tools, such as labeling tools, data augmentation, label conversion, etc.
PyTorch implementation of "All Tokens Matter: Token Labeling for Training Better Vision Transformers"
Precision Medicine Knowledge Graph (PrimeKG)
OLMoE: Open Mixture-of-Experts Language Models
Solutions to exercises in Reinforcement Learning: An Introduction (2nd Edition).
JupyterLab distribution with a retro look and feel 🌅
Local LLM ReAct Agent with Guidance
Handwritten text recognition using transformers.
A small repo showing how to easily use BERT (or other transformers) for inference