潘其威(William) | a85d65e915 | Update issue templates | 2023-05-23 15:53:07 +08:00
潘其威(William) | d4011d29c6 | Merge pull request #92 from PanQiWei/fix_triton_integration_bugs (fix ImportError when triton is not installed) | 2023-05-20 17:01:14 +08:00
潘其威(William) | 809efa6fcb | Update README_zh.md | 2023-05-20 16:53:27 +08:00
潘其威(William) | 082e76713e | Update README.md | 2023-05-20 16:52:43 +08:00
潘其威(William) | 0ca1752a9b | Merge pull request #93 from TheBloke/TheBloke_rename-quant_cuda2 (Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations.) | 2023-05-20 16:44:02 +08:00
PanQiWei | b803369719 | update quant_with_alpaca.py | 2023-05-20 16:43:21 +08:00
PanQiWei | f78f074409 | update quant_with_alpaca.py | 2023-05-20 16:42:34 +08:00
TheBloke | 898f1ef62d | Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations. | 2023-05-20 09:33:51 +01:00
PanQiWei | 73b5952f5e | fix not return directly when triton is not installed | 2023-05-20 16:21:52 +08:00
PanQiWei | 86b3b52c63 | fix ImportError when triton is not installed | 2023-05-20 16:15:20 +08:00
潘其威(William) | 13defe253a | Merge pull request #84 from TheBloke/TheBloke_forward-positional-args (Forward position args to allow `model(tokens)` syntax) | 2023-05-20 15:04:27 +08:00
潘其威(William) | d0b7908a2c | Merge pull request #82 from Ph0rk0z/patch-1 (Update example script to include desc_act) | 2023-05-20 15:03:18 +08:00
潘其威(William) | 1ef0af824a | Merge pull request #80 from PanQiWei/user_customized_device_map (Support users customize `device_map`) | 2023-05-20 15:00:05 +08:00
Forkoz | cc835640a9 | Update some help | 2023-05-17 07:31:09 -05:00
Forkoz | 6b0b84bc9b | Update basic_usage_gpt_xl.py | 2023-05-17 07:28:53 -05:00
Forkoz | 2d0aaa423f | update another example | 2023-05-17 07:27:49 -05:00
Forkoz | 922ec02998 | Fix another example | 2023-05-17 07:26:24 -05:00
TheBloke | 7f165337ed | Forward position args to allow `model(tokens)` syntax | 2023-05-16 12:19:52 +01:00
Forkoz | eaac7a7b76 | Update example script to include desc_act (It will help with people unwittingly making incompatible models.) | 2023-05-15 11:26:22 +00:00
潘其威(William) | 570867c109 | Merge pull request #79 from oobabooga/main (support loading quantized model with .pt file extension) | 2023-05-15 16:08:44 +08:00
PanQiWei | 759d6953d4 | support user customize device_map | 2023-05-15 13:26:38 +08:00
PanQiWei | 07e06fa08c | make compatible with older transformers version | 2023-05-15 13:26:18 +08:00
oobabooga | 86c7021285 | Look for .pt files | 2023-05-15 00:00:05 -03:00
潘其威(William) | 262669112b | Merge pull request #76 from PanQiWei/gptj_fused_attention (Gptj fused attention) | 2023-05-14 16:21:27 +08:00
PanQiWei | d5429441ef | add GPTJ fused attention module | 2023-05-14 16:17:21 +08:00
PanQiWei | e1c564ac0e | compatible with older pytorch version | 2023-05-14 16:17:03 +08:00
PanQiWei | 4586b3f31f | update setup.py | 2023-05-14 16:16:20 +08:00
PanQiWei | 5445c67190 | add library version comparison help functions | 2023-05-14 16:16:06 +08:00
潘其威(William) | 7c248cebf6 | Merge pull request #43 from PanQiWei/faster-llama (Faster llama) | 2023-05-14 13:09:10 +08:00
PanQiWei | e83c9fc8dd | update setup.py | 2023-05-14 13:08:26 +08:00
PanQiWei | de33d26d67 | fix bugs | 2023-05-14 13:07:18 +08:00
PanQiWei | 2273f9ef39 | refactor file structure for triton kernels | 2023-05-14 11:49:10 +08:00
PanQiWei | fef1a4fe4b | make code clean and extendable | 2023-05-12 20:11:55 +08:00
PanQiWei | d718d63e9c | add import_utils.py for commonly used module importation | 2023-05-12 19:58:48 +08:00
潘其威(William) | 6f887f666a | Update 02-Advanced-Model-Loading-and-Best-Practice.md | 2023-05-12 19:47:05 +08:00
PanQiWei | c5ff195764 | skip fused module injection instead of raising error if it's not supported yet. | 2023-05-12 19:36:00 +08:00
PanQiWei | f159aeabb6 | refactor .from_quantized api and improve model loading strategy | 2023-05-12 18:09:50 +08:00
PanQiWei | 69610329d2 | add _fused_base.py | 2023-05-12 18:09:23 +08:00
潘其威(William) | 393a2fbac2 | Update README.md | 2023-05-12 13:47:30 +08:00
潘其威(William) | e5c267e289 | Update README.md | 2023-05-12 13:46:41 +08:00
潘其威(William) | d6d099a1d1 | Merge branch 'main' into faster-llama | 2023-05-12 13:39:24 +08:00
PanQiWei | 4bb10fda49 | groupsize -> group_size | 2023-05-12 13:37:52 +08:00
潘其威(William) | 560cf92d7d | Merge pull request #62 from lszxb/fix_incorrect_pack (fix incorrect pack while using cuda, desc_act and grouping) | 2023-05-08 10:35:30 +08:00
潘其威(William) | 8b67f7de2f | Merge pull request #59 from Sciumo/setup_conda (Setup conda) | 2023-05-08 10:34:10 +08:00
lszxb | 174ef81995 | fix incorrect pack while using cuda, desc_act and grouping | 2023-05-07 20:44:47 +08:00
Sciumo | ee4ca934aa | add conda cuda include directory if found | 2023-05-05 14:28:04 -04:00
Sciumo | 81f3dfe39c | add conda cuda include directory if found | 2023-05-05 14:27:11 -04:00
qwopqwop200 | 3ff6ab18cb | Merge branch 'main' into faster-llama | 2023-05-06 00:20:29 +09:00
潘其威(William) | 7c33fa2fa4 | Merge pull request #58 from TheBloke/TheBloke_faster-llama_groupsize_fix (Fix bug caused by 'groupsize' vs 'group_size' and change all code to use 'group_size' consistently) | 2023-05-05 23:00:59 +08:00
TheBloke | 1b3329b399 | Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand | 2023-05-05 14:44:16 +01:00