Commit graph

38 commits

Author SHA1 Message Date
PanQiWei     d5429441ef  add GPTJ fused attention module  2023-05-14 16:17:21 +08:00
PanQiWei     e1c564ac0e  compatible with older pytorch version  2023-05-14 16:17:03 +08:00
PanQiWei     de33d26d67  fix bugs  2023-05-14 13:07:18 +08:00
PanQiWei     2273f9ef39  refactor file structure for triton kernels  2023-05-14 11:49:10 +08:00
PanQiWei     fef1a4fe4b  make code clean and extendable  2023-05-12 20:11:55 +08:00
PanQiWei     69610329d2  add _fused_base.py  2023-05-12 18:09:23 +08:00
PanQiWei     4bb10fda49  groupsize -> group_size  2023-05-12 13:37:52 +08:00
qwopqwop200  3ff6ab18cb  Merge branch 'main' into faster-llama  2023-05-06 00:20:29 +09:00
TheBloke     1b3329b399  Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand  2023-05-05 14:44:16 +01:00
PanQiWei     6cba6e7123  reformat code  2023-05-04 22:16:08 +08:00
qwopqwop200  908248114e  fix bug  2023-05-04 13:15:52 +09:00
qwopqwop200  b0bc0b0358  bug fix  2023-05-04 13:03:11 +09:00
qwopqwop200  34201dbff9  support faster and model load strict  2023-05-04 09:05:07 +09:00
qwopqwop200  c359f672a8  support faster and model load strict  2023-05-04 09:04:07 +09:00
qwopqwop200  a88cd16d65  fix bug  2023-05-03 22:36:14 +09:00
qwopqwop200  61c6f6a5d2  typo fix  2023-05-02 21:53:39 +09:00
qwopqwop200  a11d59f6c4  support fused_attn  2023-05-02 21:53:13 +09:00
qwopqwop200  2ba84fbb48  fix bug  2023-05-02 19:13:40 +09:00
qwopqwop200  6c23e5b3a5  add fused mlp ,fused attn  2023-05-02 18:55:44 +09:00
qwopqwop200  50c0fd13c5  Multi-GPU, allocate output tensor  2023-05-02 17:51:41 +09:00
qwopqwop200  f0f37c1fe7  fix bug  2023-05-01 18:09:39 +09:00
qwopqwop200  9dfcac8e26  add qlinear_old  2023-05-01 13:03:57 +09:00
qwopqwop200  ae8b1a22a3  change global to local  2023-04-28 23:18:39 +09:00
qwopqwop200  e914b9b1bd  update support 256 not div  2023-04-28 22:48:23 +09:00
qwopqwop200  c9215a1b5b  change div num  2023-04-28 22:42:29 +09:00
qwopqwop200  19f167e58b  add raise-exception  2023-04-28 22:24:44 +09:00
PanQiWei     a69a73a22c  fix device mismatch when directly using model to inference after quantization  2023-04-28 16:41:46 +08:00
qwopqwop200  329a64ed40  support conv1d,conv2d  2023-04-28 09:15:42 +09:00
qwopqwop200  bb9afe8b61  support conv1d,conv2d  2023-04-28 09:15:13 +09:00
qwopqwop200  9c38393e31  fix bug about wf meta device  2023-04-28 08:26:11 +09:00
PanQiWei     bf2ae6768d  bug fix  2023-04-26 13:33:56 +08:00
PanQiWei     73cb1dbf09  optimize import and format code  2023-04-26 13:08:47 +08:00
PanQiWei     c35dce525e  format code  2023-04-25 22:58:52 +08:00
PanQiWei     9f7f44146f  format code  2023-04-25 22:45:27 +08:00
PanQiWei     b71211b4c3  format code  2023-04-25 22:36:28 +08:00
PanQiWei     7915278e5f  bug fix  2023-04-25 20:43:40 +08:00
PanQiWei     9c405b1628  add triton support  2023-04-25 20:05:22 +08:00
PanQiWei     832dc4a7a1  refactor file structure  2023-04-25 18:58:20 +08:00