flashinfer-ai / flashinfer
FlashInfer: Kernel Library for LLM Serving
LLM training in simple, raw C/CUDA
[ARCHIVED] Cooperative primitives for CUDA C++. See https://github.com/NVIDIA/cccl
CUDA Kernel Benchmarking Library
WholeGraph - large scale Graph Neural Networks
RAFT contains fundamental, widely used algorithms and primitives for machine learning and information retrieval. The algorithms are CUDA-accelerated and serve as building blocks for writing high-performance applications.
cuGraph - RAPIDS Graph Analytics Library
NCCL Tests
Tile primitives for speedy kernels
A differentiable rasterizer used in the project "2D Gaussian Splatting"
🎉 Modern CUDA Learn Notes with PyTorch: fp32, fp16, bf16, fp8/int8, flash_attn, sgemm, sgemv, warp/block reduce, dot, elementwise, softmax, layernorm, rmsnorm.
Instant neural graphics primitives: lightning fast NeRF and more