Commit graph

96 commits

Author SHA1 Message Date
oobabooga
86c7021285 Look for .pt files 2023-05-15 00:00:05 -03:00
PanQiWei
d5429441ef add GPTJ fused attention module 2023-05-14 16:17:21 +08:00
PanQiWei
5445c67190 add library version comparison help functions 2023-05-14 16:16:06 +08:00
PanQiWei
de33d26d67 fix bugs 2023-05-14 13:07:18 +08:00
PanQiWei
2273f9ef39 refactor file structure for triton kernels 2023-05-14 11:49:10 +08:00
PanQiWei
fef1a4fe4b make code clean and extendable 2023-05-12 20:11:55 +08:00
PanQiWei
d718d63e9c add import_utils.py for commonly used module importation 2023-05-12 19:58:48 +08:00
PanQiWei
c5ff195764 skip fused module injection instead of raising error if it's not supported yet. 2023-05-12 19:36:00 +08:00
PanQiWei
f159aeabb6 refactor .from_quantized api and improve model loading strategy 2023-05-12 18:09:50 +08:00
PanQiWei
4bb10fda49 groupsize -> group_size 2023-05-12 13:37:52 +08:00
qwopqwop200
3ff6ab18cb Merge branch 'main' into faster-llama 2023-05-06 00:20:29 +09:00
TheBloke
1b3329b399 Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand 2023-05-05 14:44:16 +01:00
TheBloke
f61ce12271 Change 'groupsize' to 'group_size' everywhere. Turns out this is easier than 'groupsize' due to dependencies in other files. 2023-05-05 13:36:00 +01:00
TheBloke
f64c71e779 Change references to 'group_size' to 'groupsize' to match rest of this file 2023-05-05 13:21:13 +01:00
PanQiWei
1c6bb69fae fix attribute name error 2023-05-04 22:10:33 +08:00
潘其威(William)
771b650a7c Merge pull request #38 from PanQiWei/faster-cuda-no-actorder (Faster cuda no actorder) 2023-05-04 21:47:19 +08:00
qwopqwop200
b19c59541b fix bug 2023-05-04 13:17:10 +09:00
qwopqwop200
b14d42e68a bug fix 2023-05-04 13:03:38 +09:00
qwopqwop200
208d660920 fix bug 2023-05-04 10:04:00 +09:00
qwopqwop200
f51a92ed79 support faster and model load strict 2023-05-04 09:53:28 +09:00
qwopqwop200
cc992c21bd Merge branch 'faster-cuda-no-actorder' into faster-llama 2023-05-04 09:09:09 +09:00
qwopqwop200
d49281bc5d support faster and model load strict 2023-05-04 09:07:34 +09:00
qwopqwop200
c8504f0660 support faster and model load strict 2023-05-04 09:06:52 +09:00
qwopqwop200
afe1323b3f support faster and model load strict 2023-05-04 09:03:36 +09:00
qwopqwop200
24251d1397 check kwargs 2023-05-02 22:32:54 +09:00
qwopqwop200
694f2954a3 add auto model parameter 2023-05-02 22:16:23 +09:00
qwopqwop200
ccd87e5800 add Auto model parameter 2023-05-02 22:15:56 +09:00
qwopqwop200
d8707f92a9 support fused_attn 2023-05-02 21:54:15 +09:00
qwopqwop200
f47322f073 fix bug 2023-05-02 21:14:27 +09:00
qwopqwop200
41f2379850 bug fix 2023-05-02 20:38:17 +09:00
qwopqwop200
d2f48e5311 bug fix 2023-05-02 20:36:53 +09:00
qwopqwop200
709bd7594f Merge pull request #44 from PanQiWei/fix-bug-cuda (Fix bug cuda) 2023-05-02 19:50:59 +09:00
qwopqwop200
a6d4f5c091 fix bug 2023-05-02 19:19:04 +09:00
qwopqwop200
1388acac94 fix bug 2023-05-02 19:13:13 +09:00
qwopqwop200
f51f763fde fused attn, fused mlp apply 2023-05-02 18:51:04 +09:00
潘其威(William)
144bd80436 Merge pull request #39 from TheBloke/TheBloke_check_model_exists (Check that model_save_name exists before trying to load it, to avoid confusing checkpoint error) 2023-05-01 19:55:24 +08:00
TheBloke
593a0b28bb Fix typo: 'hole' -> 'whole' 2023-05-01 10:25:18 +01:00
TheBloke
60195ca5f2 Check that model_save_name exists before trying inference, to avoid confusing checkpoint error 2023-05-01 10:15:13 +01:00
qwopqwop200
95e633a597 add old cuda 2023-05-01 13:05:14 +09:00
qwopqwop200
5a69e22a93 add qlinear_old 2023-05-01 13:04:47 +09:00
潘其威(William)
5fa803334d Merge branch 'main' into change-save-name 2023-04-29 20:36:45 +08:00
qwopqwop200
787909084f fix bug 2023-04-29 19:08:34 +09:00
qwopqwop200
a2ef4b98db change save the name 2023-04-29 18:20:46 +09:00
qwopqwop200
1792cd1111 change save the name 2023-04-29 18:16:48 +09:00
ZXED
24a371d14a use the same Optional style as in other params 2023-04-29 09:52:11 +03:00
ZXED
c22770188d allow user to set trust_remote_code flag manually 2023-04-29 09:52:11 +03:00
ZXED
b3f19a7ba7 support custom model name when loading the model 2023-04-29 09:52:11 +03:00
ZXED
ea8ab73343 support custom quantize_config when loading the model 2023-04-29 09:51:50 +03:00
PanQiWei
16d8dd200f remove non-parameters module from MOSSGPTQForCausalLM.outside_layer_modules 2023-04-29 10:58:29 +08:00
PanQiWei
b490ab004e remove override of _resize_attention_mask for llama and opt 2023-04-28 23:08:42 +08:00
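
Several commits above (4bb10fda49, 1b3329b399, f61ce12271) rename the quantization parameter `groupsize` to `group_size` across the code base. As an illustration only — the helper below, `normalize_quantize_kwargs`, is hypothetical and is not part of AutoGPTQ — a backward-compatibility shim for such a keyword rename might look like:

```python
# Hypothetical shim illustrating the `groupsize` -> `group_size` rename
# recorded in the commit history above; not actual AutoGPTQ code.
def normalize_quantize_kwargs(**kwargs):
    """Map the legacy `groupsize` keyword to the renamed `group_size`."""
    if "groupsize" in kwargs:
        if "group_size" in kwargs:
            raise TypeError("pass only one of `groupsize` / `group_size`")
        # Preserve the caller's value under the new canonical name.
        kwargs["group_size"] = kwargs.pop("groupsize")
    return kwargs

print(normalize_quantize_kwargs(bits=4, groupsize=128))
# → {'bits': 4, 'group_size': 128}
```

A shim like this lets callers written against the old keyword keep working while the new spelling becomes canonical; the history above shows the rename was instead applied directly everywhere, including dependent files.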