Insights: huggingface/peft
September 19, 2024 – September 26, 2024
Overview
1 Release published by 1 person
- v0.13.0: LoRA+, VB-LoRA, and more (published Sep 25, 2024)
12 Pull requests merged by 6 people
- Fix Inconsistent Missing Keys Warning for Adapter Weights in PEFT (#2084, merged Sep 25, 2024)
- Support Conv3d layer in LoRA and IA3 (#2082, merged Sep 25, 2024)
- Bump version to 0.13.1.dev0 (#2094, merged Sep 25, 2024)
- Release v0.13.0 (#2093, merged Sep 25, 2024)
- ENH: Better DoRA check in mixed adapter batch inference (#2089, merged Sep 24, 2024)
- Fix func docstring (#2087, merged Sep 23, 2024)
- FIX: Bug in find_minimal_target_modules (#2083, merged Sep 23, 2024)
- ENH: Add default target layers for gemma2 architecture (#2078, merged Sep 23, 2024)
- ENH: Allow empty initialization of adapter weight (#1961, merged Sep 23, 2024)
- Update setup.py to update contact info (#2086, merged Sep 23, 2024)
- Expose bias to ModulesToSaveWrapper (#2081, merged Sep 20, 2024)
- ENH: PiSSA/OLoRA: Preserve original config on save (#2077, merged Sep 20, 2024)
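Several of the merged PRs above extend LoRA itself (Conv3d support in #2082, a DoRA check in #2089). As background, the low-rank update at the heart of LoRA can be sketched in a few lines of NumPy; the names `lora_A`, `lora_B`, `r`, and `alpha` follow common LoRA terminology, not PEFT's internal API:

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4
W = rng.standard_normal((d_out, d_in))   # frozen base weight
lora_A = rng.standard_normal((r, d_in))  # trainable down-projection
lora_B = np.zeros((d_out, r))            # trainable up-projection, zero-initialized

# Effective weight after merging the adapter, scaled by alpha / r.
# delta has rank at most r, which is what keeps the adapter cheap.
delta = (alpha / r) * (lora_B @ lora_A)
W_merged = W + delta
```

Because `lora_B` starts at zero, the merged weight initially equals the base weight, so attaching an untrained adapter does not change the model's outputs.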
6 Pull requests opened by 5 people
- FIX: Raise an error when performing mixed adapter inference and passing non-existing adapter names (#2090, opened Sep 23, 2024)
- [WIP] Fix prefix tuning to fit transformers (#2096, opened Sep 25, 2024)
- Add new feature: Safe LoRA (#2098, opened Sep 25, 2024)
- Adaptation for MoE models (#2101, opened Sep 26, 2024)
- FEAT: Add exclude_modules param (#2044) (#2102, opened Sep 26, 2024)
- FIX: Transpose weight matrix based on fan_in_fan_out condition in PiSSA initialization (#2103) (#2104, opened Sep 26, 2024)
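PR #2104 concerns SVD-based (PiSSA-style) initialization on GPT-2-style layers, which store weights in (fan_in, fan_out) layout and therefore need a transpose before the decomposition. A hedged NumPy sketch of that logic; `pissa_like_init` is a hypothetical helper for illustration, not PEFT's actual function:

```python
import numpy as np

def pissa_like_init(weight, r, fan_in_fan_out=False):
    # GPT-2-style Conv1D stores weights as (fan_in, fan_out); transpose so we
    # always work in the conventional (out, in) layout.
    W = weight.T if fan_in_fan_out else weight
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    # The principal rank-r part becomes the adapter factors...
    B = U[:, :r] * np.sqrt(S[:r])          # (out, r)
    A = np.sqrt(S[:r])[:, None] * Vt[:r]   # (r, in)
    # ...and the residual replaces the base weight, restored to stored layout.
    residual = W - B @ A
    if fan_in_fan_out:
        residual = residual.T
    return A, B, residual

rng = np.random.default_rng(0)
W_stored = rng.standard_normal((16, 8))  # (fan_in, fan_out) layout
A, B, res = pissa_like_init(W_stored, r=4, fan_in_fan_out=True)
```

Without the transpose, the SVD would be taken over the wrong axis ordering and the residual plus adapter would no longer reconstruct the original weight, which is the class of bug the PR addresses.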
10 Issues closed by 7 people
- `load_adapter` seems to require the base model to be identical (#1932, closed Sep 25, 2024)
- Why is the original layer weight saved for the LoRA adapter? (#2092, closed Sep 25, 2024)
- Incorporating contrastive prefixes with PrefixTuning (#2012, closed Sep 24, 2024)
- Missing PEFT model (#2009, closed Sep 23, 2024)
- Missing modules in prompt-based PEFT when re-loading model (#2043, closed Sep 23, 2024)
- PEFT implementations of adapters are outdated and languishing (#1931, closed Sep 20, 2024)
- Improving generalization of LoRA with WiSE-FT (#1940, closed Sep 20, 2024)
- lora_r is doubled when converting OLoRA to LoRA (#2075, closed Sep 20, 2024)
- Make RMSNorm or other small parameters trainable with LoRA (#2080, closed Sep 20, 2024)
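One of the closed issues above (#2075) reports that `lora_r` doubles when converting an OLoRA adapter to a plain LoRA adapter. The doubling falls out of basic linear algebra: expressing the converted delta relative to the original base weights sums two rank-r updates, and a sum of two rank-r matrices can have rank up to 2r. A minimal NumPy illustration of that fact (not PEFT's conversion code):

```python
import numpy as np

rng = np.random.default_rng(0)
r, d = 2, 10

# Two independent rank-r updates, e.g. the trained adapter delta and the
# shift applied to the base weights at initialization.
delta1 = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))
delta2 = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))

# Their sum generically has rank 2r, so a single LoRA adapter that
# represents it needs lora_r = 2r.
total = delta1 + delta2
rank_total = np.linalg.matrix_rank(total)
```

So the observed doubling is expected behavior of the conversion, not a bug.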
7 Issues opened by 7 people
- merge_and_unload docs do not clarify behaviour for quantized base models (#2105, opened Sep 26, 2024)
- LoRA PiSSA init: gpt2 not supported (#2103, opened Sep 26, 2024)
- Questions about original_module and modules_to_save.default (#2100, opened Sep 26, 2024)
- Using modules_to_save to save parameters initialized via nn.Parameter doesn't work (#2099, opened Sep 26, 2024)
- Abnormal performance of training LLaMA3.1-70 via LoRA (#2091, opened Sep 24, 2024)
- Prompt tuning for text-to-image diffusion models (#2085, opened Sep 22, 2024)
12 Unresolved conversations
Sometimes conversations happen on old items that aren’t yet closed. Here is a list of all the Issues and Pull Requests with unresolved conversations.
- Update OFT to fix merge bugs (#1996, commented on Sep 26, 2024 • 14 new comments)
- FEAT: Support quantization for VeRA using bitsandbytes (#2070) (#2076, commented on Sep 26, 2024 • 2 new comments)
- Support Conv3d layer (#2079, commented on Sep 20, 2024 • 0 new comments)
- Question about training time (#2063, commented on Sep 23, 2024 • 0 new comments)
- Support optimum-quanto (#1997, commented on Sep 23, 2024 • 0 new comments)
- Cannot use prefix tuning on quantized Codellama (#2035, commented on Sep 24, 2024 • 0 new comments)
- exclude_modules to keep specific layers or other quirky components out of a target_modules selection (#2044, commented on Sep 25, 2024 • 0 new comments)
- Loading LoRA weights for the FLUX pipeline is extremely slow (#2055, commented on Sep 26, 2024 • 0 new comments)
- Low performance on the mps backend (#2041, commented on Sep 26, 2024 • 0 new comments)
- [Call for contributions] Help us improve LoKr, LoHa, and other LyCORIS methods (#1935, commented on Sep 26, 2024 • 0 new comments)
- Update layer.py (#2029, commented on Sep 21, 2024 • 0 new comments)
- ENH: Make PEFT configs forward compatible (#2038, commented on Sep 26, 2024 • 0 new comments)
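Issue #2044 above (and the corresponding PR #2102) proposes an `exclude_modules` parameter to keep specific layers out of a `target_modules` selection. A minimal sketch of the selection logic such an option could implement, assuming suffix-based name matching; `select_modules` and `matches` are hypothetical helpers, not PEFT's API:

```python
def matches(name, patterns):
    # A pattern matches if it equals the full dotted name or its last component,
    # mirroring how target_modules matching is commonly described.
    return any(name == p or name.endswith("." + p) for p in patterns)

def select_modules(module_names, target_modules, exclude_modules=()):
    # Keep modules matched by target_modules unless they also match an
    # exclude pattern.
    return [
        n for n in module_names
        if matches(n, target_modules) and not matches(n, exclude_modules)
    ]

names = [
    "encoder.layer.0.q_proj",
    "encoder.layer.0.v_proj",
    "lm_head.q_proj",
]
selected = select_modules(names, ["q_proj", "v_proj"],
                          exclude_modules=["lm_head.q_proj"])
# selected → ["encoder.layer.0.q_proj", "encoder.layer.0.v_proj"]
```

The exclude list wins over the target list, so a broad suffix like `q_proj` can still be targeted everywhere except a handful of named layers.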