PanQiWei
c31b370228
call make_sure_not_tensor_in_meta_device before loading checkpoint
2023-05-24 11:32:45 +08:00
PanQiWei
63f1b4e073
remove comment
2023-05-24 11:23:07 +08:00
PanQiWei
057c39e3f2
fix meta device bug when using low_cpu_mem_usage
2023-05-24 11:19:59 +08:00
PanQiWei
e2e7809a1f
always enable QuantLinear bias to stay compatible with models quantized by other frameworks
2023-05-24 10:56:31 +08:00
PanQiWei
8e034b28bc
remove duplicate code
2023-05-23 23:48:15 +08:00
PanQiWei
4373d6b29c
Merge branch 'main' into improve_cpu_offload
2023-05-23 23:47:33 +08:00
PanQiWei
191da8141e
fix device mismatch
2023-05-23 23:22:52 +08:00
PanQiWei
e4e90e8b0a
add warmup_triton method
2023-05-23 23:18:46 +08:00
PanQiWei
ed14d3a786
fix failure to save quantized model when pretrained model was loaded using CPU offload
2023-05-23 23:17:11 +08:00
潘其威(William)
7820322089
Merge pull request #66 from LexSong/main
...
Fix CUDA out of memory error in qlinear_old.py
2023-05-23 23:04:45 +08:00
PanQiWei
6476ee4235
add options: 'low_cpu_mem_usage' and 'full_cpu_offload'
2023-05-23 22:51:00 +08:00
PanQiWei
c63959365a
update setup.py
2023-05-23 19:30:47 +08:00
PanQiWei
1b2159bd4c
add more helper functions
2023-05-23 19:30:28 +08:00
PanQiWei
db63c0876a
half out
2023-05-23 16:08:28 +08:00
潘其威(William)
1bb7be3dd3
Update issue templates
2023-05-23 15:55:48 +08:00
潘其威(William)
a85d65e915
Update issue templates
2023-05-23 15:53:07 +08:00
Lex Song
f2ab4fab46
Fix CUDA out of memory error in qlinear_old.py
...
Add a missing line from qlinear.py to qlinear_old.py to convert the output tensor.
This resolves a CUDA out of memory error that occurred without this line.
2023-05-20 21:10:11 +08:00
潘其威(William)
d4011d29c6
Merge pull request #92 from PanQiWei/fix_triton_integration_bugs
...
fix ImportError when triton is not installed
2023-05-20 17:01:14 +08:00
潘其威(William)
809efa6fcb
Update README_zh.md
2023-05-20 16:53:27 +08:00
潘其威(William)
082e76713e
Update README.md
2023-05-20 16:52:43 +08:00
潘其威(William)
0ca1752a9b
Merge pull request #93 from TheBloke/TheBloke_rename-quant_cuda2
...
Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations.
2023-05-20 16:44:02 +08:00
PanQiWei
b803369719
update quant_with_alpaca.py
2023-05-20 16:43:21 +08:00
PanQiWei
f78f074409
update quant_with_alpaca.py
2023-05-20 16:42:34 +08:00
TheBloke
898f1ef62d
Rename 'quant_cuda' to 'autogptq_cuda' to avoid conflicts with existing GPTQ-for-LLaMa installations.
2023-05-20 09:33:51 +01:00
PanQiWei
73b5952f5e
fix missing early return when triton is not installed
2023-05-20 16:21:52 +08:00
PanQiWei
86b3b52c63
fix ImportError when triton is not installed
2023-05-20 16:15:20 +08:00
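The two triton fixes above describe the same guard: probe for triton once at import time, then return early from triton-only code paths instead of raising ImportError. A minimal sketch of that pattern (names like `warmup_triton_kernels` are illustrative, not the project's actual API):

```python
# Probe the optional dependency once; failure is expected and harmless.
try:
    import triton  # noqa: F401
    TRITON_AVAILABLE = True
except ImportError:
    TRITON_AVAILABLE = False


def warmup_triton_kernels(model=None):
    """Return early when triton is missing instead of raising.

    Returns True if warmup ran, False if triton is unavailable.
    """
    if not TRITON_AVAILABLE:
        return False
    # ... triton-specific kernel warmup would go here ...
    return True
```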
潘其威(William)
13defe253a
Merge pull request #84 from TheBloke/TheBloke_forward-positional-args
...
Forward position args to allow `model(tokens)` syntax
2023-05-20 15:04:27 +08:00
潘其威(William)
d0b7908a2c
Merge pull request #82 from Ph0rk0z/patch-1
...
Update example script to include desc_act
2023-05-20 15:03:18 +08:00
潘其威(William)
1ef0af824a
Merge pull request #80 from PanQiWei/user_customized_device_map
...
Support user-customized `device_map`
2023-05-20 15:00:05 +08:00
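The `device_map` support merged above lets users decide which modules land on which device. A minimal sketch of how a user-supplied map might be resolved against a model's module names (function and fallback device are illustrative, not the project's actual implementation):

```python
def normalize_device_map(device_map, module_names):
    """Resolve a user device_map against module names.

    A bare string applies to every module; a dict maps module-name
    prefixes to devices. Unmatched modules fall back to "cpu".
    (Illustrative sketch only.)
    """
    if isinstance(device_map, str):
        return {name: device_map for name in module_names}
    resolved = {}
    for name in module_names:
        for prefix, device in device_map.items():
            if name == prefix or name.startswith(prefix + "."):
                resolved[name] = device
                break
        else:
            resolved[name] = "cpu"
    return resolved
```

For example, `normalize_device_map({"layers": "cuda:0"}, ["layers.0", "lm_head"])` would place `layers.0` on `cuda:0` and leave `lm_head` on CPU.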
Forkoz
cc835640a9
Update some help
2023-05-17 07:31:09 -05:00
Forkoz
6b0b84bc9b
Update basic_usage_gpt_xl.py
2023-05-17 07:28:53 -05:00
Forkoz
2d0aaa423f
update another example
2023-05-17 07:27:49 -05:00
Forkoz
922ec02998
Fix another example
2023-05-17 07:26:24 -05:00
TheBloke
7f165337ed
Forward position args to allow `model(tokens)` syntax
2023-05-16 12:19:52 +01:00
Forkoz
eaac7a7b76
Update example script to include desc_act
...
It will help keep people from unwittingly making incompatible models.
2023-05-15 11:26:22 +00:00
潘其威(William)
570867c109
Merge pull request #79 from oobabooga/main
...
support loading quantized model with .pt file extension
2023-05-15 16:08:44 +08:00
PanQiWei
759d6953d4
support user-customized device_map
2023-05-15 13:26:38 +08:00
PanQiWei
07e06fa08c
make compatible with older transformers versions
2023-05-15 13:26:18 +08:00
oobabooga
86c7021285
Look for .pt files
2023-05-15 00:00:05 -03:00
潘其威(William)
262669112b
Merge pull request #76 from PanQiWei/gptj_fused_attention
...
Gptj fused attention
2023-05-14 16:21:27 +08:00
PanQiWei
d5429441ef
add GPTJ fused attention module
2023-05-14 16:17:21 +08:00
PanQiWei
e1c564ac0e
make compatible with older pytorch versions
2023-05-14 16:17:03 +08:00
PanQiWei
4586b3f31f
update setup.py
2023-05-14 16:16:20 +08:00
PanQiWei
5445c67190
add library version comparison helper functions
2023-05-14 16:16:06 +08:00
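The version-comparison helpers added above let the code branch on installed library versions (e.g. for older transformers or pytorch compatibility). A minimal sketch, assuming simple `X.Y.Z` version strings; the project's real helpers may instead rely on `packaging.version`:

```python
def parse_version(version):
    """Parse an 'X.Y.Z'-style version string into a comparable tuple.

    Local-version suffixes like '+cu117' are dropped; non-numeric
    components are skipped. (Illustrative helper only.)
    """
    core = version.split("+")[0]
    return tuple(int(part) for part in core.split(".") if part.isdigit())


def version_at_least(installed, required):
    """True when the installed version meets the required minimum."""
    return parse_version(installed) >= parse_version(required)
```

For example, `version_at_least("1.13.1+cu117", "1.12.0")` is True, so a caller could enable a newer code path only on sufficiently recent pytorch.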
潘其威(William)
7c248cebf6
Merge pull request #43 from PanQiWei/faster-llama
...
Faster llama
2023-05-14 13:09:10 +08:00
PanQiWei
e83c9fc8dd
update setup.py
2023-05-14 13:08:26 +08:00
PanQiWei
de33d26d67
fix bugs
2023-05-14 13:07:18 +08:00
PanQiWei
2273f9ef39
refactor file structure for triton kernels
2023-05-14 11:49:10 +08:00
PanQiWei
fef1a4fe4b
make code clean and extendable
2023-05-12 20:11:55 +08:00
PanQiWei
d718d63e9c
add import_utils.py for commonly used module imports
2023-05-12 19:58:48 +08:00