Author | Commit | Message | Date
PanQiWei | fef1a4fe4b | make code clean and extendable | 2023-05-12 20:11:55 +08:00
PanQiWei | d718d63e9c | add import_utils.py for commonly used module importation | 2023-05-12 19:58:48 +08:00
PanQiWei | c5ff195764 | skip fused module injection instead of raising error if it's not supported yet. | 2023-05-12 19:36:00 +08:00
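Commit c5ff195764 above changes unsupported fused-module injection from a hard error into a skip. A minimal sketch of that pattern, using hypothetical names rather than the repository's exact code:

```python
import logging

logger = logging.getLogger(__name__)

def maybe_inject_fused_attention(model, model_type_supported: bool):
    # Hypothetical helper illustrating the behaviour change in c5ff195764:
    # unsupported model types are skipped with a warning instead of raising.
    if not model_type_supported:
        logger.warning(
            "Fused attention injection is not supported for this model type yet; skipping."
        )
        return model
    # ... actual fused-module replacement would happen here ...
    return model
```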
PanQiWei | f159aeabb6 | refactor .from_quantized api and improve model loading strategy | 2023-05-12 18:09:50 +08:00
PanQiWei | 69610329d2 | add _fused_base.py | 2023-05-12 18:09:23 +08:00
PanQiWei | 4bb10fda49 | groupsize -> group_size | 2023-05-12 13:37:52 +08:00
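Commits f159aeabb6 and 4bb10fda49 above concern the public loading path: `.from_quantized` was refactored and the quantization argument `groupsize` was renamed to `group_size`. A minimal sketch of how that API is typically used, assuming the auto_gptq interface of this period (argument values and paths are illustrative):

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# `group_size` is the parameter renamed from `groupsize`; values are illustrative.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

# Quantize a full-precision model ...
model = AutoGPTQForCausalLM.from_pretrained("facebook/opt-125m", quantize_config)

# ... then, after model.quantize(...) and model.save_quantized(save_dir),
# reload the quantized checkpoint. The exact keyword set of .from_quantized
# changed in the refactor above, so treat these arguments as assumptions.
quantized_model = AutoGPTQForCausalLM.from_quantized(
    "path/to/quantized-model",  # hypothetical save directory
    device="cuda:0",
)
```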
LaaZa | b8187ff05a | Add support for CodeGen/2 | 2023-05-08 17:34:00 +03:00
LaaZa | 63247a0669 | Add support for GPTBigCode | 2023-05-08 12:28:29 +03:00
lszxb | 174ef81995 | fix incorrect pack while using cuda, desc_act and grouping | 2023-05-07 20:44:47 +08:00
qwopqwop200 | 3ff6ab18cb | Merge branch 'main' into faster-llama | 2023-05-06 00:20:29 +09:00
TheBloke | 1b3329b399 | Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand | 2023-05-05 14:44:16 +01:00
TheBloke | f61ce12271 | Change 'groupsize' to 'group_size' everywhere. Turns out this is easier than 'groupsize' due to dependencies in other files. | 2023-05-05 13:36:00 +01:00
TheBloke | f64c71e779 | Change referenes to 'group_size' to 'groupsize' to match rest of this file | 2023-05-05 13:21:13 +01:00
PanQiWei | 6cba6e7123 | reformat code | 2023-05-04 22:16:08 +08:00
PanQiWei | 1c6bb69fae | fix attribute name error | 2023-05-04 22:10:33 +08:00
潘其威(William) | 771b650a7c | Merge pull request #38 from PanQiWei/faster-cuda-no-actorder (Faster cuda no actorder) | 2023-05-04 21:47:19 +08:00
qwopqwop200 | b19c59541b | fix bug | 2023-05-04 13:17:10 +09:00
qwopqwop200 | 908248114e | fix bug | 2023-05-04 13:15:52 +09:00
qwopqwop200 | b14d42e68a | bug fix | 2023-05-04 13:03:38 +09:00
qwopqwop200 | b0bc0b0358 | bug fix | 2023-05-04 13:03:11 +09:00
qwopqwop200 | 208d660920 | fix bug | 2023-05-04 10:04:00 +09:00
qwopqwop200 | f51a92ed79 | support faster and model load strict | 2023-05-04 09:53:28 +09:00
qwopqwop200 | cc992c21bd | Merge branch 'faster-cuda-no-actorder' into faster-llama | 2023-05-04 09:09:09 +09:00
qwopqwop200 | d49281bc5d | support faster and model load strict | 2023-05-04 09:07:34 +09:00
qwopqwop200 | c8504f0660 | support faster and model load strict | 2023-05-04 09:06:52 +09:00
qwopqwop200 | 34201dbff9 | support faster and model load strict | 2023-05-04 09:05:07 +09:00
qwopqwop200 | c359f672a8 | support faster and model load strict | 2023-05-04 09:04:07 +09:00
qwopqwop200 | afe1323b3f | support faster and model load strict | 2023-05-04 09:03:36 +09:00
qwopqwop200 | a88cd16d65 | fix bug | 2023-05-03 22:36:14 +09:00
qwopqwop200 | 24251d1397 | check kwargs | 2023-05-02 22:32:54 +09:00
qwopqwop200 | 26581b6946 | remove LlamaGPTQForCausalLM | 2023-05-02 22:18:17 +09:00
qwopqwop200 | 694f2954a3 | add auto model parameter | 2023-05-02 22:16:23 +09:00
qwopqwop200 | ccd87e5800 | add Auto model parameter | 2023-05-02 22:15:56 +09:00
qwopqwop200 | d8707f92a9 | support fused_attn | 2023-05-02 21:54:15 +09:00
qwopqwop200 | 61c6f6a5d2 | typo fix | 2023-05-02 21:53:39 +09:00
qwopqwop200 | a11d59f6c4 | support fused_attn | 2023-05-02 21:53:13 +09:00
qwopqwop200 | f47322f073 | fix bug | 2023-05-02 21:14:27 +09:00
qwopqwop200 | 41f2379850 | bug fix | 2023-05-02 20:38:17 +09:00
qwopqwop200 | d2f48e5311 | bug fix | 2023-05-02 20:36:53 +09:00
qwopqwop200 | 709bd7594f | Merge pull request #44 from PanQiWei/fix-bug-cuda (Fix bug cuda) | 2023-05-02 19:50:59 +09:00
qwopqwop200 | 9490a98444 | add LlamaGPTQForCausalLM | 2023-05-02 19:32:18 +09:00
qwopqwop200 | a6d4f5c091 | fix bug | 2023-05-02 19:19:04 +09:00
qwopqwop200 | 2ba84fbb48 | fix bug | 2023-05-02 19:13:40 +09:00
qwopqwop200 | 1388acac94 | fix bug | 2023-05-02 19:13:13 +09:00
qwopqwop200 | 6c23e5b3a5 | add fused mlp ,fused attn | 2023-05-02 18:55:44 +09:00
qwopqwop200 | f51f763fde | fused attn ,fused mlp apply | 2023-05-02 18:51:04 +09:00
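The fused attention / fused MLP commits above (6c23e5b3a5, f51f763fde, d8707f92a9, a11d59f6c4) add optional fused modules that are injected into the loaded LLaMa model for faster inference. In later auto_gptq releases this is exposed as flags on `.from_quantized`; the flag names below follow those releases and are an assumption for the branch shown here:

```python
from auto_gptq import AutoGPTQForCausalLM

# Assumed flag names (taken from later auto_gptq releases); the faster-llama
# branch in these commits may have exposed fused attention/MLP differently.
model = AutoGPTQForCausalLM.from_quantized(
    "path/to/quantized-llama",    # hypothetical save directory
    device="cuda:0",
    inject_fused_attention=True,  # enable fused attention modules
    inject_fused_mlp=True,        # enable fused MLP modules
)
```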
qwopqwop200 | 50c0fd13c5 | Multi-GPU, allocate output tensor | 2023-05-02 17:51:41 +09:00
潘其威(William) | 144bd80436 | Merge pull request #39 from TheBloke/TheBloke_check_model_exists (Check that model_save_name exists before trying to load it, to avoid confusing checkpoint error) | 2023-05-01 19:55:24 +08:00
TheBloke | 593a0b28bb | Fix typo: 'hole' -> 'whole' | 2023-05-01 10:25:18 +01:00
TheBloke | 60195ca5f2 | Check that model_save_name exists before trying inference, to avoid confusing checkpoint error | 2023-05-01 10:15:13 +01:00
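TheBloke's commits 60195ca5f2 and 144bd80436 (PR #39) add a guard so that a missing checkpoint fails with a clear message rather than a confusing loader error. A minimal sketch of that kind of check, with an illustrative file name rather than the repository's exact code:

```python
import os

# Illustrative guard: fail early with a clear message if the quantized
# checkpoint is missing, instead of surfacing a confusing error from the loader.
model_save_name = "path/to/model/gptq_model-4bit-128g.bin"  # hypothetical path
if not os.path.isfile(model_save_name):
    raise FileNotFoundError(
        f"Quantized model checkpoint not found at {model_save_name}."
    )
```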