Author | Commit | Message | Date
Alexander Pozharskii | 0185095402 | Use adapter_name for get_gptq_peft_model with train_mode=True | 2023-09-24 17:11:19 +04:00
qwopqwop200 | b1a8cc28e8 | remove raise | 2023-05-31 00:03:51 +09:00
PanQiWei | 6c64b0b361 | raise NotImplementedError when model with fused attention injected try to use ADAPTION_PROMPT peft type | 2023-05-28 22:35:34 +08:00
PanQiWei | def084bf0e | reset value of AdaptionPromptConfig.adapter_layers to number of model's hidden layers when exceeds | 2023-05-28 22:11:02 +08:00
PanQiWei | ad10c13d40 | support AdaLora | 2023-05-28 21:30:45 +08:00
PanQiWei | 3ee2daa73c | make GPTQLoraModel to inherit from LoraModel to simplify code | 2023-05-28 17:36:18 +08:00
PanQiWei | 22d1d8dcaa | add 'auto_find_all_linears' argument to get_gptq_peft_model function | 2023-05-28 17:04:38 +08:00
PanQiWei | 83132a663a | add warning to guide users interact with lora properly | 2023-05-28 16:57:31 +08:00
PanQiWei | 5bc5325920 | add find_all_linear_names help function, make customized lora module more general | 2023-05-27 07:49:17 +08:00
PanQiWei | 8bf21a7e4c | set xavier_uniform_ as lora_A's init function | 2023-05-26 14:06:53 +08:00
PanQiWei | cfd27e8caa | refactor file structure of qlinears | 2023-05-26 07:18:16 +08:00
PanQiWei | f6a34137e9 | lora compatibility | 2023-05-25 19:44:53 +08:00
PanQiWei | d293bf3a04 | first upload peft_utils.py | 2023-05-25 15:11:11 +08:00
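
The commits above introduce get_gptq_peft_model and the adapter_name, auto_find_all_linears, and train_mode arguments it accepts. A minimal sketch of how these pieces fit together is shown below; it assumes the auto_gptq.utils.peft_utils API described by these commit messages, and the model path, device, and LoRA hyperparameters are placeholders, not values from this log.

```python
# Sketch only: paths and hyperparameters are illustrative placeholders.
from peft import TaskType
from auto_gptq import AutoGPTQForCausalLM
from auto_gptq.utils.peft_utils import GPTQLoraConfig, get_gptq_peft_model

# Load an already-quantized GPTQ model. use_triton=True is an assumption;
# LoRA training in AutoGPTQ is typically run against the Triton kernels.
model = AutoGPTQForCausalLM.from_quantized(
    "path/to/gptq-quantized-model",  # placeholder path
    device="cuda:0",
    use_triton=True,
)

# Illustrative LoRA settings only.
peft_config = GPTQLoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    task_type=TaskType.CAUSAL_LM,
)

# Wrap the quantized model for LoRA fine-tuning:
# - adapter_name names the adapter (commit 0185095402),
# - auto_find_all_linears targets all linear layers via find_all_linear_names
#   (commits 22d1d8dcaa and 5bc5325920),
# - train_mode=True prepares the wrapped model for training.
peft_model = get_gptq_peft_model(
    model,
    peft_config=peft_config,
    adapter_name="default",
    auto_find_all_linears=True,
    train_mode=True,
)
peft_model.print_trainable_parameters()
```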