PanQiWei | 10347fdd7b | remove full_cpu_offload argument and unify model dispatch strategy | 2023-05-24 17:41:04 +08:00
PanQiWei | 379f24c2a5 | remove add_align_logits_hook_to_model | 2023-05-24 17:01:57 +08:00
PanQiWei | 749dba1a7e | disable add_align_logits_hook_to_model for now | 2023-05-24 13:42:06 +08:00
PanQiWei | 58c1b509f0 | support add_align_logits_hook_to_model | 2023-05-24 12:50:30 +08:00
PanQiWei | 21ab7c435a | make comments more readable | 2023-05-24 11:38:29 +08:00
PanQiWei | c31b370228 | make_sure_not_tensor_in_meta_device before load checkpoint | 2023-05-24 11:32:45 +08:00
PanQiWei | 63f1b4e073 | remove comment | 2023-05-24 11:23:07 +08:00
PanQiWei | 057c39e3f2 | fix meta device bug when use low_cpu_mem_usage | 2023-05-24 11:19:59 +08:00
PanQiWei | e2e7809a1f | always to enable QuantLinear bias to make compatible with model quantized from other frameworks | 2023-05-24 10:56:31 +08:00
PanQiWei | 191da8141e | fix device mismatch | 2023-05-23 23:22:52 +08:00
PanQiWei | e4e90e8b0a | add warmup_triton method | 2023-05-23 23:18:46 +08:00
PanQiWei | ed14d3a786 | fix save quantized model failed when load pretrained model using CPU offload | 2023-05-23 23:17:11 +08:00
PanQiWei | 6476ee4235 | add options: 'low_cpu_mem_usage' and 'full_cpu_offload' | 2023-05-23 22:51:00 +08:00
PanQiWei | 1b2159bd4c | add more help functions | 2023-05-23 19:30:28 +08:00
PanQiWei | 86b3b52c63 | fix ImportError when triton is not installed | 2023-05-20 16:15:20 +08:00
潘其威(William) | 13defe253a | Merge pull request #84 from TheBloke/TheBloke_forward-positional-args: Forward position args to allow `model(tokens)` syntax | 2023-05-20 15:04:27 +08:00
潘其威(William) | 1ef0af824a | Merge pull request #80 from PanQiWei/user_customized_device_map: Support users customize `device_map` | 2023-05-20 15:00:05 +08:00
TheBloke | 7f165337ed | Forward position args to allow syntax | 2023-05-16 12:19:52 +01:00
PanQiWei | 759d6953d4 | support user customize device_map | 2023-05-15 13:26:38 +08:00
PanQiWei | 07e06fa08c | make compatible with older transformers version | 2023-05-15 13:26:18 +08:00
oobabooga | 86c7021285 | Look for .pt files | 2023-05-15 00:00:05 -03:00
PanQiWei | d5429441ef | add GPTJ fused attention module | 2023-05-14 16:17:21 +08:00
PanQiWei | 5445c67190 | add library version comparison help functions | 2023-05-14 16:16:06 +08:00
PanQiWei | de33d26d67 | fix bugs | 2023-05-14 13:07:18 +08:00
PanQiWei | 2273f9ef39 | refactor file structure for triton kernels | 2023-05-14 11:49:10 +08:00
PanQiWei | fef1a4fe4b | make code clean and extendable | 2023-05-12 20:11:55 +08:00
PanQiWei | d718d63e9c | add import_utils.py for commonly used module importation | 2023-05-12 19:58:48 +08:00
PanQiWei | c5ff195764 | skip fused module injection instead of raising error if it's not supported yet. | 2023-05-12 19:36:00 +08:00
PanQiWei | f159aeabb6 | refactor .from_quantized api and improve model loading strategy | 2023-05-12 18:09:50 +08:00
PanQiWei | 4bb10fda49 | groupsize -> group_size | 2023-05-12 13:37:52 +08:00
qwopqwop200 | 3ff6ab18cb | Merge branch 'main' into faster-llama | 2023-05-06 00:20:29 +09:00
TheBloke | 1b3329b399 | Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand | 2023-05-05 14:44:16 +01:00
TheBloke | f61ce12271 | Change 'groupsize' to 'group_size' everywhere. Turns out this is easier than 'groupsize' due to dependencies in other files. | 2023-05-05 13:36:00 +01:00
TheBloke | f64c71e779 | Change referenes to 'group_size' to 'groupsize' to match rest of this file | 2023-05-05 13:21:13 +01:00
PanQiWei | 1c6bb69fae | fix attribute name error | 2023-05-04 22:10:33 +08:00
潘其威(William) | 771b650a7c | Merge pull request #38 from PanQiWei/faster-cuda-no-actorder: Faster cuda no actorder | 2023-05-04 21:47:19 +08:00
qwopqwop200 | b19c59541b | fix bug | 2023-05-04 13:17:10 +09:00
qwopqwop200 | b14d42e68a | bug fix | 2023-05-04 13:03:38 +09:00
qwopqwop200 | 208d660920 | fix bug | 2023-05-04 10:04:00 +09:00
qwopqwop200 | f51a92ed79 | support faster and model load strict | 2023-05-04 09:53:28 +09:00
qwopqwop200 | cc992c21bd | Merge branch 'faster-cuda-no-actorder' into faster-llama | 2023-05-04 09:09:09 +09:00
qwopqwop200 | d49281bc5d | support faster and model load strict | 2023-05-04 09:07:34 +09:00
qwopqwop200 | c8504f0660 | support faster and model load strict | 2023-05-04 09:06:52 +09:00
qwopqwop200 | afe1323b3f | support faster and model load strict | 2023-05-04 09:03:36 +09:00
qwopqwop200 | 24251d1397 | check kwargs | 2023-05-02 22:32:54 +09:00
qwopqwop200 | 694f2954a3 | add auto model parameter | 2023-05-02 22:16:23 +09:00
qwopqwop200 | ccd87e5800 | add Auto model parameter | 2023-05-02 22:15:56 +09:00
qwopqwop200 | d8707f92a9 | support fused_attn | 2023-05-02 21:54:15 +09:00
qwopqwop200 | f47322f073 | fix bug | 2023-05-02 21:14:27 +09:00
qwopqwop200 | 41f2379850 | bug fix | 2023-05-02 20:38:17 +09:00