BUPT - Beijing - (UTC +08:00) - in/yihangwang1020
Stars
Repository for the Paper (AAAI 2024, Oral) --- Visual Adversarial Examples Jailbreak Large Language Models
A curated list of safety-related papers, articles, and resources focused on Large Language Models (LLMs). This repository aims to provide researchers, practitioners, and enthusiasts with insights i…
This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicitation and Distillation Algorithms, and explore the Skill & V…
📰 Must-read papers and blogs on LLM-based Long Context Modeling 🔥
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
Acceptance rates for the major AI conferences
Extract WeChat chat history and export it to HTML, Word, or Excel documents for permanent storage; analyze the records to generate an annual chat report; and train a personal AI chat assistant on your chat data.
GoMate: a RAG framework with reliable input and trusted output.
General technology for enabling AI capabilities w/ LLMs and MLLMs
Official Repo of paper "QUITO: Accelerating Long-Context Reasoning through Query-Guided Context Compression".
Multilingual/multidomain question generation datasets, models, and python library for question generation.
CVPR 2024: Language Guided Generation of 3D Embodied AI Environments.
Official repository of Evolutionary Optimization of Model Merging Recipes
Open reproduction of MUSE for fast text2image generation.
An Autonomous LLM Agent for Complex Task Solving
The largest pre-trained medical image segmentation model (1.4B parameters) based on the largest public dataset (>100k annotations), up until April 2023.
Painter & SegGPT Series: Vision Foundation Models from BAAI
Use PEFT or Full-parameter to finetune 350+ LLMs or 90+ MLLMs. (LLM: Qwen2.5, Llama3.2, GLM4, Internlm2.5, Yi1.5, Mistral, Baichuan2, DeepSeek, Gemma2, ...; MLLM: Qwen2-VL, Qwen2-Audio, Llama3.2-Vi…
GoGPT: Chinese-English enhanced large models trained on Llama/Llama 2 | Chinese-Llama2
[ICLR'24 spotlight] Chinese and English bilingual multimodal large model series (Chat and Paint), based on the CPM foundation models.