Author | Commit | Message | Date
TheBloke | 898f1ef62d | Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations. | 2023-05-20 09:33:51 +01:00
qwopqwop200 | 3ff6ab18cb | Merge branch 'main' into faster-llama | 2023-05-06 00:20:29 +09:00
TheBloke | 1b3329b399 | Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand | 2023-05-05 14:44:16 +01:00
PanQiWei | 6cba6e7123 | reformat code | 2023-05-04 22:16:08 +08:00
qwopqwop200 | a88cd16d65 | fix bug | 2023-05-03 22:36:14 +09:00
qwopqwop200 | 61c6f6a5d2 | typo fix | 2023-05-02 21:53:39 +09:00
qwopqwop200 | a11d59f6c4 | support fused_attn | 2023-05-02 21:53:13 +09:00
qwopqwop200 | ae8b1a22a3 | change global to local | 2023-04-28 23:18:39 +09:00
qwopqwop200 | e914b9b1bd | update support 256 not div | 2023-04-28 22:48:23 +09:00
PanQiWei | a69a73a22c | fix device mismatch when directly using model to inference after quantization | 2023-04-28 16:41:46 +08:00
qwopqwop200 | bb9afe8b61 | support conv1d,conv2d | 2023-04-28 09:15:13 +09:00
qwopqwop200 | 9c38393e31 | fix bug about wf meta device | 2023-04-28 08:26:11 +09:00
PanQiWei | 73cb1dbf09 | optimize import and format code | 2023-04-26 13:08:47 +08:00
PanQiWei | 7915278e5f | bug fix | 2023-04-25 20:43:40 +08:00
PanQiWei | 832dc4a7a1 | refactor file structure | 2023-04-25 18:58:20 +08:00