Forkoz | eaac7a7b76 | Update example script to include desc_act; this helps prevent people from unwittingly making incompatible models. | 2023-05-15 11:26:22 +00:00
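As a hedged illustration of why the example script spells out desc_act, here is a minimal sketch of such a quantization script, assuming the AutoGPTQ Python API of that period (BaseQuantizeConfig, AutoGPTQForCausalLM); the model name and calibration text are placeholders, not taken from the commit itself:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained = "facebook/opt-125m"  # placeholder model; any causal LM works

# Spelling out desc_act (and group_size) makes the quantized model's layout explicit,
# so people copying the example don't unwittingly produce incompatible checkpoints.
quantize_config = BaseQuantizeConfig(
    bits=4,          # quantize weights to 4 bits
    group_size=128,  # per-group quantization granularity
    desc_act=False,  # activation-order quantization; loaders must use the same setting
)

tokenizer = AutoTokenizer.from_pretrained(pretrained)
examples = [tokenizer("auto_gptq is an easy-to-use quantization package.", return_tensors="pt")]

model = AutoGPTQForCausalLM.from_pretrained(pretrained, quantize_config)
model.quantize(examples)               # run GPTQ calibration on the example batch
model.save_quantized("opt-125m-4bit")  # quantize_config (including desc_act) is saved with the weights
```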
潘其威(William) | 570867c109 | Merge pull request #79 from oobabooga/main: support loading quantized model with .pt file extension | 2023-05-15 16:08:44 +08:00
oobabooga | 86c7021285 | Look for .pt files | 2023-05-15 00:00:05 -03:00
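For context, loading a quantized checkpoint goes through from_quantized, which scans the model directory for a supported weights file; with this change a *.pt file is picked up in addition to *.bin and *.safetensors. A minimal sketch, assuming the AutoGPTQ API of that period; the directory path is a placeholder:

```python
from auto_gptq import AutoGPTQForCausalLM

# The directory holds quantize_config.json plus the quantized weights; after this
# change the weights file may be named e.g. model.pt instead of model.bin.
model = AutoGPTQForCausalLM.from_quantized("path/to/quantized-model-dir", device="cuda:0")
```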
潘其威(William) | 262669112b | Merge pull request #76 from PanQiWei/gptj_fused_attention: Gptj fused attention | 2023-05-14 16:21:27 +08:00
PanQiWei | d5429441ef | add GPTJ fused attention module | 2023-05-14 16:17:21 +08:00
PanQiWei | e1c564ac0e | compatible with older pytorch version | 2023-05-14 16:17:03 +08:00
PanQiWei | 4586b3f31f | update setup.py | 2023-05-14 16:16:20 +08:00
PanQiWei | 5445c67190 | add library version comparison help functions | 2023-05-14 16:16:06 +08:00
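The version-comparison helpers themselves are not reproduced here; as a hedged sketch of the general technique (the function name and behavior are hypothetical, not the repository's actual code), such a helper typically wraps packaging.version so callers can branch on the installed PyTorch version:

```python
from packaging import version
import torch

def torch_version_at_least(minimum: str) -> bool:
    """Return True if the installed torch is at least `minimum` (hypothetical helper)."""
    return version.parse(torch.__version__) >= version.parse(minimum)

# Example: only take a code path that requires a newer PyTorch when it is available.
if torch_version_at_least("2.0.0"):
    pass  # use newer APIs here
```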
潘其威(William) | 7c248cebf6 | Merge pull request #43 from PanQiWei/faster-llama: Faster llama | 2023-05-14 13:09:10 +08:00
PanQiWei | e83c9fc8dd | update setup.py | 2023-05-14 13:08:26 +08:00
PanQiWei | de33d26d67 | fix bugs | 2023-05-14 13:07:18 +08:00
PanQiWei | 2273f9ef39 | refactor file structure for triton kernels | 2023-05-14 11:49:10 +08:00
PanQiWei | fef1a4fe4b | make code clean and extendable | 2023-05-12 20:11:55 +08:00
PanQiWei | d718d63e9c | add import_utils.py for commonly used module importation | 2023-05-12 19:58:48 +08:00
潘其威(William) | 6f887f666a | Update 02-Advanced-Model-Loading-and-Best-Practice.md | 2023-05-12 19:47:05 +08:00
PanQiWei | c5ff195764 | skip fused module injection instead of raising error if it's not supported yet | 2023-05-12 19:36:00 +08:00
PanQiWei | f159aeabb6 | refactor .from_quantized api and improve model loading strategy | 2023-05-12 18:09:50 +08:00
PanQiWei | 69610329d2 | add _fused_base.py | 2023-05-12 18:09:23 +08:00
潘其威(William) | 393a2fbac2 | Update README.md | 2023-05-12 13:47:30 +08:00
潘其威(William) | e5c267e289 | Update README.md | 2023-05-12 13:46:41 +08:00
潘其威(William) | d6d099a1d1 | Merge branch 'main' into faster-llama | 2023-05-12 13:39:24 +08:00
PanQiWei | 4bb10fda49 | groupsize -> group_size | 2023-05-12 13:37:52 +08:00
潘其威(William) | 560cf92d7d | Merge pull request #62 from lszxb/fix_incorrect_pack: fix incorrect pack while using cuda, desc_act and grouping | 2023-05-08 10:35:30 +08:00
潘其威(William) | 8b67f7de2f | Merge pull request #59 from Sciumo/setup_conda: Setup conda | 2023-05-08 10:34:10 +08:00
lszxb | 174ef81995 | fix incorrect pack while using cuda, desc_act and grouping | 2023-05-07 20:44:47 +08:00
Sciumo | ee4ca934aa | add conda cuda include directory if found | 2023-05-05 14:28:04 -04:00
Sciumo | 81f3dfe39c | add conda cuda include directory if found | 2023-05-05 14:27:11 -04:00
qwopqwop200 | 3ff6ab18cb | Merge branch 'main' into faster-llama | 2023-05-06 00:20:29 +09:00
潘其威(William) | 7c33fa2fa4 | Merge pull request #58 from TheBloke/TheBloke_faster-llama_groupsize_fix: Fix bug caused by 'groupsize' vs 'group_size' and change all code to use 'group_size' consistently | 2023-05-05 23:00:59 +08:00
TheBloke | 1b3329b399 | Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand | 2023-05-05 14:44:16 +01:00
TheBloke | f61ce12271 | Change 'groupsize' to 'group_size' everywhere. Turns out this is easier than 'groupsize' due to dependencies in other files. | 2023-05-05 13:36:00 +01:00
TheBloke | f64c71e779 | Change references to 'group_size' to 'groupsize' to match rest of this file | 2023-05-05 13:21:13 +01:00
PanQiWei | 374ce21066 | release v0.1.0 | 2023-05-05 00:18:50 +08:00
PanQiWei | 753c261388 | update README.md | 2023-05-05 00:15:33 +08:00
PanQiWei | d79aec7bd0 | update README.md | 2023-05-04 23:06:32 +08:00
PanQiWei | fe3456100c | bug fix from commit 3c108d4232 | 2023-05-04 22:34:16 +08:00
PanQiWei | e4d476be16 | update README.md | 2023-05-04 22:17:38 +08:00
PanQiWei | 6cba6e7123 | reformat code | 2023-05-04 22:16:08 +08:00
PanQiWei | 1c6bb69fae | fix attribute name error | 2023-05-04 22:10:33 +08:00
潘其威(William) | 771b650a7c | Merge pull request #38 from PanQiWei/faster-cuda-no-actorder: Faster cuda no actorder | 2023-05-04 21:47:19 +08:00
qwopqwop200 | b19c59541b | fix bug | 2023-05-04 13:17:10 +09:00
qwopqwop200 | 908248114e | fix bug | 2023-05-04 13:15:52 +09:00
qwopqwop200 | b14d42e68a | bug fix | 2023-05-04 13:03:38 +09:00
qwopqwop200 | b0bc0b0358 | bug fix | 2023-05-04 13:03:11 +09:00
qwopqwop200 | 208d660920 | fix bug | 2023-05-04 10:04:00 +09:00
qwopqwop200 | f51a92ed79 | support faster and model load strict | 2023-05-04 09:53:28 +09:00
qwopqwop200 | cc992c21bd | Merge branch 'faster-cuda-no-actorder' into faster-llama | 2023-05-04 09:09:09 +09:00
qwopqwop200 | d49281bc5d | support faster and model load strict | 2023-05-04 09:07:34 +09:00
qwopqwop200 | c8504f0660 | support faster and model load strict | 2023-05-04 09:06:52 +09:00
qwopqwop200 | 34201dbff9 | support faster and model load strict | 2023-05-04 09:05:07 +09:00