Commit graph

196 commits

Author SHA1 Message Date
PanQiWei
0a04d3fb2a explicitly set "base" value 2023-08-13 16:14:01 +08:00
PanQiWei
b1c64d9269 add baichuan model attention fusion logic 2023-08-11 19:12:43 +08:00
PanQiWei
8fedbbf82d using transformers gptj rope implementation 2023-08-11 18:26:23 +08:00
PanQiWei
fdb8c4500a extend to support qlinear_exllama's fusion 2023-08-11 14:52:26 +08:00
PanQiWei
43b9a5cd0a fix passing in wrong argument 2023-08-10 15:49:52 +08:00
PanQiWei
3d09cf36d7 fix syntax error 2023-08-10 15:36:21 +08:00
潘其威(William)
beab695c5b Merge branch 'main' into xformers_integration 2023-08-10 15:27:11 +08:00
Felix Marty
4af7ea619d patch for transformers compatibility 2023-08-09 14:23:59 +00:00
PanQiWei
44c7a1a184 make exllama_kernels compilation optional 2023-08-09 17:42:22 +08:00
PanQiWei
172deae049 expose disable_exllama argument 2023-08-09 12:03:31 +08:00
PanQiWei
edc5b72da4 using pytorch backend rope 2023-08-09 10:20:58 +08:00
qwopqwop200
fe244503e0 add "," 2023-08-08 19:57:23 +09:00
qwopqwop200
d22f89c524 support qwen 2023-08-08 19:27:43 +09:00
qwopqwop200
dc5541e78a static groups default value change 2023-08-08 14:11:39 +09:00
PanQiWei
26dc6852fe support inheriting one of the three fused attention classes and customizing attn_bias building logic 2023-08-07 18:59:04 +08:00
qwopqwop200
25972d65bf support static_groups and fix bug 2023-08-07 16:27:48 +09:00
PanQiWei
e5f874e5af add fused attention injection logic to llama 2023-08-07 13:45:37 +08:00
PanQiWei
2092a80b81 keep attn_op as what it is when passed in 2023-08-06 18:38:25 +08:00
PanQiWei
9155ef3038 fix using wrong attribute 2023-08-06 15:37:11 +08:00
PanQiWei
df24da5797 mark fused ops injection as experimental features 2023-08-06 15:05:28 +08:00
PanQiWei
ab6faa6496 implement gptj attention and mlp fused ops injection logic 2023-08-06 14:55:06 +08:00
PanQiWei
bacac399d3 abandon changing trainable mode after model is loaded; support specifying a customized AttentionOp 2023-08-06 14:15:43 +08:00
PanQiWei
c71f5cdf12 add '_fuse_attention' and '_fuse_mlp' abstract static methods 2023-08-06 12:45:08 +08:00
qwopqwop200
a1fd81c72d disable exllama if training 2023-08-01 12:29:58 +09:00
Felix Marty
1f99b94ae2 fix revision 2023-07-31 15:03:33 +00:00
Felix Marty
5660b22f28 fix bug in quantization config loading 2023-07-31 14:28:37 +00:00
Felix Marty
38447262c0 fix fused attn 2023-07-31 13:46:32 +00:00
Felix Marty
760667dccc cleaning 2023-07-31 11:58:10 +00:00
Felix Marty
179776bd1d exllama kernel 2023-07-31 11:50:45 +00:00
PanQiWei
ff1f100ded remove argument 'save_dir' in method from_quantized 2023-07-26 17:58:04 +08:00
潘其威(William)
bbc4a7c455 Merge pull request #208 from TheBloke/TB_Add_SafeTensors_Metadata: Add Safetensors metadata saving, with some values saved to each .safetensor file 2023-07-26 11:54:47 +08:00
TheBloke
2647c92743 safetensors_metadata: add conversion to str() for input metadata to avoid errors from save_safe. Warn if this results in keys being overwritten. 2023-07-25 21:14:21 +00:00
TheBloke
ee7d80945b Add version to metadata using new value 2023-07-25 14:25:24 +00:00
TheBloke
eeaf5ebc53 Extend huggingface_hub features to AutoGPTQForCausalLM.from_pretrained() so models can be quantised from the hub including using a private token and revision/branch etc 2023-07-25 13:26:37 +00:00
TheBloke
c9124e3fc7 Fix revision and other huggingface_hub args for .from_quantized(), which were not being passed through 2023-07-25 12:48:33 +00:00
TheBloke
3f359fc778 Add support for Safetensors metadata 2023-07-25 11:30:39 +00:00
tc
e28e8ee809 Add support for InternLM 2023-07-07 09:25:40 -07:00
LaaZa
03577a7698 Rename the class to match reference capitalisation 2023-06-18 21:01:07 +03:00
LaaZa
9fd558f2ba Add support for Baichuan 2023-06-18 20:13:29 +03:00
潘其威(William)
b4fdd8d264 Merge branch 'main' into peft_integration 2023-06-02 19:11:59 +08:00
PanQiWei
ec6603d0ab support older Python versions 2023-05-31 22:11:16 +08:00
qwopqwop200
c381958a5f add warning 2023-05-30 23:53:33 +09:00
潘其威(William)
defc96ff04 Merge pull request #91 from TheBloke/TheBloke_support-HF-download: Add support for HF Hub download, and `push_to_hub` 2023-05-30 07:37:15 +08:00
潘其威(William)
2245fad095 Update auto.py: fix None type error 2023-05-30 07:35:15 +08:00
潘其威(William)
15db2cdc44 Update _base.py: fix problem of recursively adding file extension to model_base_name 2023-05-30 07:26:42 +08:00
潘其威(William)
cfa7271617 Update _base.py: fix variable not found error 2023-05-30 07:22:10 +08:00
潘其威(William)
e5771fb206 Update _base.py: fix key mismatch 2023-05-30 06:44:45 +08:00
潘其威(William)
61a4ea035f Update auto.py: add back save_dir for backward compatibility 2023-05-30 06:43:00 +08:00
潘其威(William)
ea74e15199 Update _base.py: add model_name_or_path and model_file_base_name to BaseQuantizeConfig for better model file management; add back save_dir to .from_quantized() for backward compatibility 2023-05-30 06:40:31 +08:00
PanQiWei
86f060c74b Merge branch 'main' into peft_integration 2023-05-28 16:23:38 +08:00