Lex Song
f2ab4fab46
Fix CUDA out of memory error in qlinear_old.py
...
Add a missing line from qlinear.py to qlinear_old.py to convert the output tensor.
This resolves a CUDA out of memory error that occurred without this line.
2023-05-20 21:10:11 +08:00
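A minimal sketch of the kind of fix the commit above describes, using numpy as a stand-in for the repository's torch code (the function and variable names here are illustrative, not the actual qlinear_old.py code): the quantized matmul accumulates in float32, and the restored line casts the output back to the input's dtype so the larger intermediate is not what gets returned and retained.

```python
import numpy as np

def quant_matmul_sketch(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    # Accumulate in float32 for precision...
    out = x.astype(np.float32) @ w.astype(np.float32)
    # ...then convert the output tensor back to the input dtype -- the
    # kind of "missing line" the commit restores (assumed behavior).
    return out.astype(x.dtype)

x = np.ones((2, 4), dtype=np.float16)
w = np.ones((4, 3), dtype=np.float16)
out = quant_matmul_sketch(x, w)
assert out.dtype == np.float16
```

Without the final cast, every layer would hand a float32 tensor to the next, roughly doubling activation memory in a half-precision model, which is consistent with the reported CUDA out-of-memory error.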
TheBloke
bf633c298e
Clean up some unused params
2023-05-20 10:32:27 +01:00
潘其威(William)
d4011d29c6
Merge pull request #92 from PanQiWei/fix_triton_integration_bugs
...
fix ImportError when triton is not installed
2023-05-20 17:01:14 +08:00
潘其威(William)
0ca1752a9b
Merge pull request #93 from TheBloke/TheBloke_rename-quant_cuda2
...
Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations.
2023-05-20 16:44:02 +08:00
TheBloke
898f1ef62d
Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations.
2023-05-20 09:33:51 +01:00
PanQiWei
73b5952f5e
fix not returning directly when triton is not installed
2023-05-20 16:21:52 +08:00
PanQiWei
86b3b52c63
fix ImportError when triton is not installed
2023-05-20 16:15:20 +08:00
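The two triton fixes above follow a common pattern for optional dependencies; here is a hedged sketch of that pattern (function names are illustrative, not the repository's actual API): guard the import at module load, and fall back silently instead of raising when a caller requests the triton path on a machine without it.

```python
# Guard the optional dependency so importing the package never fails.
try:
    import triton  # optional; only needed for the triton kernel path
    TRITON_AVAILABLE = True
except ImportError:
    TRITON_AVAILABLE = False

def select_backend(use_triton: bool) -> str:
    """Pick a kernel backend, falling back instead of raising."""
    if use_triton and not TRITON_AVAILABLE:
        # return directly to the fallback path rather than raising,
        # as the "fix not returning directly" commit describes
        use_triton = False
    return "triton" if use_triton else "cuda"

assert select_backend(False) == "cuda"
```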
潘其威(William)
13defe253a
Merge pull request #84 from TheBloke/TheBloke_forward-positional-args
...
Forward position args to allow `model(tokens)` syntax
2023-05-20 15:04:27 +08:00
潘其威(William)
1ef0af824a
Merge pull request #80 from PanQiWei/user_customized_device_map
...
Support user-customized `device_map`
2023-05-20 15:00:05 +08:00
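A sketch of what user-customized `device_map` support typically means, in the style of the accelerate/transformers convention of mapping module names to devices (the helper below is hypothetical, not the repository's implementation): accept either a bare device string or an explicit per-module mapping, and normalize both to one form.

```python
def resolve_device_map(user_map, default: str = "cpu") -> dict:
    # A bare string like "cuda:0" means "put the whole model there";
    # the empty-string key conventionally denotes the model root.
    if isinstance(user_map, str):
        return {"": user_map}
    # An explicit dict maps module names to devices, e.g.
    # {"model.embed_tokens": "cuda:0", "lm_head": "cpu"}.
    if user_map:
        return dict(user_map)
    return {"": default}

assert resolve_device_map("cuda:0") == {"": "cuda:0"}
```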
TheBloke
e5c8479100
Remove debugging print line
2023-05-19 17:50:48 +01:00
TheBloke
735f7df4cc
Add push_to_hub for HF hub uploading
2023-05-19 17:10:57 +01:00
TheBloke
908b338436
Initial support for model loading from HF hub
2023-05-19 15:57:05 +01:00
TheBloke
a397f00cc3
Implement HF cached download for quantize_config
2023-05-19 15:15:43 +01:00
TheBloke
7f165337ed
Forward position args to allow `model(tokens)` syntax
2023-05-16 12:19:52 +01:00
PanQiWei
759d6953d4
support user-customized device_map
2023-05-15 13:26:38 +08:00
PanQiWei
07e06fa08c
make compatible with older transformers version
2023-05-15 13:26:18 +08:00
oobabooga
86c7021285
Look for .pt files
2023-05-15 00:00:05 -03:00
PanQiWei
d5429441ef
add GPTJ fused attention module
2023-05-14 16:17:21 +08:00
PanQiWei
e1c564ac0e
compatible with older pytorch version
2023-05-14 16:17:03 +08:00
PanQiWei
5445c67190
add library version comparison help functions
2023-05-14 16:16:06 +08:00
潘其威(William)
bdb08c16fc
Merge branch 'main' into Codegen
2023-05-14 13:10:52 +08:00
潘其威(William)
e24c5122db
Merge branch 'main' into GPTBigCode
2023-05-14 13:10:10 +08:00
PanQiWei
de33d26d67
fix bugs
2023-05-14 13:07:18 +08:00
PanQiWei
2273f9ef39
refactor file structure for triton kernels
2023-05-14 11:49:10 +08:00
PanQiWei
fef1a4fe4b
make code clean and extendable
2023-05-12 20:11:55 +08:00
PanQiWei
d718d63e9c
add import_utils.py for commonly used module importation
2023-05-12 19:58:48 +08:00
PanQiWei
c5ff195764
skip fused module injection instead of raising error if it's not supported yet.
2023-05-12 19:36:00 +08:00
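The skip-instead-of-raise change above is a small but useful loading-robustness pattern; a hedged sketch (illustrative names, not the repository's actual injection code): when a fused module is not supported for the model type, emit a warning and continue with the plain modules rather than aborting the whole load.

```python
import warnings

def inject_fused_attention(supported: bool) -> bool:
    """Try to inject a fused attention module; skip if unsupported."""
    if not supported:
        # Previously this raised; now loading proceeds on the plain path.
        warnings.warn("fused attention not supported for this model; skipping")
        return False
    # ...perform the actual injection here...
    return True

assert inject_fused_attention(True) is True
```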
PanQiWei
f159aeabb6
refactor .from_quantized api and improve model loading strategy
2023-05-12 18:09:50 +08:00
PanQiWei
69610329d2
add _fused_base.py
2023-05-12 18:09:23 +08:00
PanQiWei
4bb10fda49
groupsize -> group_size
2023-05-12 13:37:52 +08:00
LaaZa
b8187ff05a
Add support for CodeGen/2
2023-05-08 17:34:00 +03:00
LaaZa
63247a0669
Add support for GPTBigCode
2023-05-08 12:28:29 +03:00
lszxb
174ef81995
fix incorrect pack while using cuda, desc_act and grouping
2023-05-07 20:44:47 +08:00
qwopqwop200
3ff6ab18cb
Merge branch 'main' into faster-llama
2023-05-06 00:20:29 +09:00
TheBloke
1b3329b399
Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand
2023-05-05 14:44:16 +01:00
TheBloke
f61ce12271
Change 'groupsize' to 'group_size' everywhere. Turns out this is easier than 'groupsize' due to dependencies in other files.
2023-05-05 13:36:00 +01:00
TheBloke
f64c71e779
Change references to 'group_size' to 'groupsize' to match rest of this file
2023-05-05 13:21:13 +01:00
PanQiWei
6cba6e7123
reformat code
2023-05-04 22:16:08 +08:00
PanQiWei
1c6bb69fae
fix attribute name error
2023-05-04 22:10:33 +08:00
潘其威(William)
771b650a7c
Merge pull request #38 from PanQiWei/faster-cuda-no-actorder
...
Faster cuda no actorder
2023-05-04 21:47:19 +08:00
qwopqwop200
b19c59541b
fix bug
2023-05-04 13:17:10 +09:00
qwopqwop200
908248114e
fix bug
2023-05-04 13:15:52 +09:00
qwopqwop200
b14d42e68a
bug fix
2023-05-04 13:03:38 +09:00
qwopqwop200
b0bc0b0358
bug fix
2023-05-04 13:03:11 +09:00
qwopqwop200
208d660920
fix bug
2023-05-04 10:04:00 +09:00
qwopqwop200
f51a92ed79
support faster and model load strict
2023-05-04 09:53:28 +09:00
qwopqwop200
cc992c21bd
Merge branch 'faster-cuda-no-actorder' into faster-llama
2023-05-04 09:09:09 +09:00
qwopqwop200
d49281bc5d
support faster and model load strict
2023-05-04 09:07:34 +09:00
qwopqwop200
c8504f0660
support faster and model load strict
2023-05-04 09:06:52 +09:00
qwopqwop200
34201dbff9
support faster and model load strict
2023-05-04 09:05:07 +09:00