TensorFlow Lite Micro for Espressif Chipsets
LiteRT is the new name for TensorFlow Lite (TFLite). While the name is new, it's still the same trusted, high-performance runtime for on-device AI, now with an expanded vision.
🔥🔥 「InterviewGuide」 is 阿秀's record of years of self-taught computer science from school to the workplace, together with a compilation of campus-recruitment experience write-ups from fellow students, covering (but not limited to) C/C++, Golang, JavaScript, Vue, operating systems, data structures, computer networks, MySQL, Redis, and more. Keep learning and keep growing!
The officially designated global GitHub of 润学 (the study of emigrating from China), organizing its purpose, principles, theory, and real-world examples of emigration; addressing the three key questions of why to leave, where to go, and how to get there; and aiming to become the core creed and core belief of the new Chinese people.
Classification of Human Movement using mmWave FMCW Radar Micro-Doppler Signature
Using mmWave radars for some computer vision tasks.
A catkin workspace in ROS which uses DBSCAN to identify which points in a point cloud belong to the same object.
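The core idea of that workspace can be sketched outside ROS: DBSCAN groups nearby points into per-object clusters and labels stray returns as noise. This is a minimal illustration using scikit-learn's `DBSCAN` as a stand-in for the ROS pipeline; the synthetic cloud and the `eps`/`min_samples` values are assumptions for demonstration, not the workspace's actual parameters.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two synthetic "objects" as dense point clusters, plus one stray noise point.
rng = np.random.default_rng(0)
cloud = np.vstack([
    rng.normal(loc=[0.0, 0.0, 0.0], scale=0.05, size=(30, 3)),  # object A
    rng.normal(loc=[2.0, 0.0, 0.0], scale=0.05, size=(30, 3)),  # object B
    [[10.0, 10.0, 10.0]],                                       # isolated noise point
])

# eps (neighborhood radius) and min_samples are illustrative; real radar
# point clouds are sparse and need these tuned to the sensor's resolution.
labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(cloud)

n_objects = len(set(labels) - {-1})  # DBSCAN marks noise with label -1
print(n_objects)  # two distinct objects recovered
```

Points sharing a label belong to the same object; the noise label (-1) filters out spurious radar returns before any downstream tracking or classification step.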
A point cloud dataset for mmWave radar human activity recognition.
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
A curated list of neural network pruning resources.
OTOv1-v3, NeurIPS, ICLR, TMLR, DNN Training, Compression, Structured Pruning, Erasing Operators, CNN, Diffusion, LLM
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning; [NeurIPS 2022] MCUNetV3: On-Device Training Under 2…
[NeurIPS 2020] MCUNet: Tiny Deep Learning on IoT Devices; [NeurIPS 2021] MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
Sparsity-aware deep learning inference runtime for CPUs
Awesome LLM compression research papers and tools.
EE-LLM is a framework for large-scale training and inference of early-exit (EE) large language models (LLMs).
For releasing code related to compression methods for transformers, accompanying our publications
Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models
Unofficial implementations of block/layer-wise pruning methods for LLMs.
[ICLR 2024] Jaiswal, A., Gan, Z., Du, X., Zhang, B., Wang, Z., & Yang, Y. Compressing LLMs: The Truth Is Rarely Pure and Never Simple.