qwopqwop200
b03f53294f
support 64dim cuda
2023-06-02 19:53:50 +09:00
qwopqwop200
0891ea4036
support 32dim triton
2023-06-02 19:05:55 +09:00
qwopqwop200
b3654a68c3
support 32dim triton kernel
2023-06-02 19:04:12 +09:00
qwopqwop200
0f2841cb13
remove log
2023-05-30 23:51:55 +09:00
qwopqwop200
33809a8e59
remove log
2023-05-30 23:51:39 +09:00
qwopqwop200
dfd9dc0e6b
change backend to pytorch if trainable
2023-05-30 23:43:55 +09:00
qwopqwop200
5274313067
change backend to pytorch if trainable
2023-05-30 23:40:58 +09:00
PanQiWei
eb9c0b140f
update FusedLlamaMLPForQuantizedModel for general-purpose usage
2023-05-27 07:47:20 +08:00
PanQiWei
2b532f9453
add trainable mode
2023-05-26 13:11:30 +08:00
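A minimal usage sketch of the trainable mode this commit introduces, assuming a `trainable` keyword on `AutoGPTQForCausalLM.from_quantized`; the flag name and model path are illustrative, not confirmed against this revision:

    from auto_gptq import AutoGPTQForCausalLM

    # Assumed flag: enable trainable (pytorch-backend) mode so the
    # quantized linear layers can participate in gradient computation,
    # e.g. for PEFT fine-tuning.
    model = AutoGPTQForCausalLM.from_quantized(
        "path/to/quantized-model",  # hypothetical path
        trainable=True,
    )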
PanQiWei
fe5f5d12ed
Merge branch 'main' into peft_integration
2023-05-26 09:48:06 +08:00
PanQiWei
69609c4bc7
support faster vecquant4matmul cuda kernel
2023-05-26 08:55:05 +08:00
PanQiWei
cfd27e8caa
refactor file structure of qlinears
2023-05-26 07:18:16 +08:00
qwopqwop200
503f85255d
Update kernels.py
2023-05-25 23:15:33 +09:00
PanQiWei
8e034b28bc
remove duplicate code
2023-05-23 23:48:15 +08:00
PanQiWei
4373d6b29c
Merge branch 'main' into improve_cpu_offload
2023-05-23 23:47:33 +08:00
PanQiWei
db63c0876a
cast output to half
2023-05-23 16:08:28 +08:00
Lex Song
f2ab4fab46
Fix CUDA out of memory error in qlinear_old.py
Add a missing line from qlinear.py to qlinear_old.py to convert the output tensor.
This resolves a CUDA out of memory error that occurred without this line.
2023-05-20 21:10:11 +08:00
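A hedged sketch of the kind of one-line conversion this commit body describes, with assumed variable names (`out`, `x`, `out_shape`); the point is casting the accumulated output back to the input dtype so an oversized fp32 intermediate does not linger:

    import torch

    def finalize_output(out: torch.Tensor, x: torch.Tensor, out_shape) -> torch.Tensor:
        # Assumed shape of the fix: cast the fp32 accumulator back to the
        # input dtype (typically fp16) before reshaping, as qlinear.py does.
        return out.to(x.dtype).reshape(out_shape)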
潘其威(William)
d4011d29c6
Merge pull request #92 from PanQiWei/fix_triton_integration_bugs
fix ImportError when triton is not installed
2023-05-20 17:01:14 +08:00
TheBloke
898f1ef62d
Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations.
2023-05-20 09:33:51 +01:00
PanQiWei
73b5952f5e
fix missing early return when triton is not installed
2023-05-20 16:21:52 +08:00
PanQiWei
86b3b52c63
fix ImportError when triton is not installed
2023-05-20 16:15:20 +08:00
PanQiWei
d5429441ef
add GPTJ fused attention module
2023-05-14 16:17:21 +08:00
PanQiWei
e1c564ac0e
compatible with older pytorch versions
2023-05-14 16:17:03 +08:00
PanQiWei
de33d26d67
fix bugs
2023-05-14 13:07:18 +08:00
PanQiWei
2273f9ef39
refactor file structure for triton kernels
2023-05-14 11:49:10 +08:00
PanQiWei
fef1a4fe4b
make code clean and extendable
2023-05-12 20:11:55 +08:00
PanQiWei
69610329d2
add _fused_base.py
2023-05-12 18:09:23 +08:00
PanQiWei
4bb10fda49
groupsize -> group_size
2023-05-12 13:37:52 +08:00
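After this rename, the quantize config spells the parameter `group_size`; a sketch assuming the `BaseQuantizeConfig` API:

    from auto_gptq import BaseQuantizeConfig

    quantize_config = BaseQuantizeConfig(
        bits=4,          # quantization bit width
        group_size=128,  # formerly `groupsize`
    )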
qwopqwop200
3ff6ab18cb
Merge branch 'main' into faster-llama
2023-05-06 00:20:29 +09:00
TheBloke
1b3329b399
Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand
2023-05-05 14:44:16 +01:00
PanQiWei
6cba6e7123
reformat code
2023-05-04 22:16:08 +08:00
qwopqwop200
908248114e
fix bug
2023-05-04 13:15:52 +09:00
qwopqwop200
b0bc0b0358
bug fix
2023-05-04 13:03:11 +09:00
qwopqwop200
34201dbff9
support faster kernel and strict model loading
2023-05-04 09:05:07 +09:00
qwopqwop200
c359f672a8
support faster kernel and strict model loading
2023-05-04 09:04:07 +09:00
qwopqwop200
a88cd16d65
fix bug
2023-05-03 22:36:14 +09:00
qwopqwop200
61c6f6a5d2
typo fix
2023-05-02 21:53:39 +09:00
qwopqwop200
a11d59f6c4
support fused_attn
2023-05-02 21:53:13 +09:00
qwopqwop200
2ba84fbb48
fix bug
2023-05-02 19:13:40 +09:00
qwopqwop200
6c23e5b3a5
add fused mlp, fused attn
2023-05-02 18:55:44 +09:00
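A usage sketch for the fused modules added here, assuming the `inject_fused_attention` / `inject_fused_mlp` keywords that later AutoGPTQ releases expose on `from_quantized`; verify against the signature of your version:

    from auto_gptq import AutoGPTQForCausalLM

    model = AutoGPTQForCausalLM.from_quantized(
        "path/to/quantized-llama",     # hypothetical path
        inject_fused_attention=True,   # assumed kwarg: fused attention
        inject_fused_mlp=True,         # assumed kwarg: fused MLP
    )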
qwopqwop200
50c0fd13c5
Multi-GPU, allocate output tensor
2023-05-02 17:51:41 +09:00
qwopqwop200
f0f37c1fe7
fix bug
2023-05-01 18:09:39 +09:00
qwopqwop200
9dfcac8e26
add qlinear_old
2023-05-01 13:03:57 +09:00
qwopqwop200
ae8b1a22a3
change global to local
2023-04-28 23:18:39 +09:00
qwopqwop200
e914b9b1bd
support sizes not divisible by 256
2023-04-28 22:48:23 +09:00
qwopqwop200
c9215a1b5b
change divisor
2023-04-28 22:42:29 +09:00
qwopqwop200
19f167e58b
add exception raising
2023-04-28 22:24:44 +09:00
PanQiWei
a69a73a22c
fix device mismatch when directly using model to inference after quantization
2023-04-28 16:41:46 +08:00
qwopqwop200
329a64ed40
support conv1d, conv2d
2023-04-28 09:15:42 +09:00
qwopqwop200
bb9afe8b61
support conv1d, conv2d
2023-04-28 09:15:13 +09:00