qwopqwop200 | f0f37c1fe7 | fix bug | 2023-05-01 18:09:39 +09:00
qwopqwop200 | 95e633a597 | add old cuda | 2023-05-01 13:05:14 +09:00
qwopqwop200 | 5a69e22a93 | add qlinear_old | 2023-05-01 13:04:47 +09:00
qwopqwop200 | 9dfcac8e26 | add qlinear_old | 2023-05-01 13:03:57 +09:00
潘其威(William) | 5fa803334d | Merge branch 'main' into change-save-name | 2023-04-29 20:36:45 +08:00
qwopqwop200 | 787909084f | fix bug | 2023-04-29 19:08:34 +09:00
qwopqwop200 | a2ef4b98db | change save the name | 2023-04-29 18:20:46 +09:00
qwopqwop200 | 1792cd1111 | change save the name | 2023-04-29 18:16:48 +09:00
ZXED | 24a371d14a | use the same Optional style as in other params | 2023-04-29 09:52:11 +03:00
ZXED | c22770188d | allow user to set trust_remote_code flag manually | 2023-04-29 09:52:11 +03:00
ZXED | b3f19a7ba7 | support custom model name when loading the model | 2023-04-29 09:52:11 +03:00
ZXED | ea8ab73343 | support custom quantize_config when loading the model | 2023-04-29 09:51:50 +03:00
PanQiWei | 16d8dd200f | remove non-parameters module from MOSSGPTQForCausalLM.outside_layer_modules | 2023-04-29 10:58:29 +08:00
PanQiWei | b490ab004e | remove override of _resize_attention_mask for llama and opt | 2023-04-28 23:08:42 +08:00
qwopqwop200 | ae8b1a22a3 | change global to local | 2023-04-28 23:18:39 +09:00
qwopqwop200 | e914b9b1bd | update support 256 not div | 2023-04-28 22:48:23 +09:00
qwopqwop200 | c9215a1b5b | change div num | 2023-04-28 22:42:29 +09:00
qwopqwop200 | 19f167e58b | add raise-exception | 2023-04-28 22:24:44 +09:00
潘其威(William) | 1e353a8dc5 | Merge pull request #24 from PanQiWei/speedup_quantization (Offloading and Multiple devices quantization/inference) | 2023-04-28 18:50:12 +08:00
PanQiWei | bdb713b5a3 | add batch_size to model.quant() api | 2023-04-28 18:26:07 +08:00
PanQiWei | 41564a48db | make data_utils.py as global utils | 2023-04-28 18:08:58 +08:00
PanQiWei | 3dfc87bec3 | return module in .to function | 2023-04-28 17:20:46 +08:00
PanQiWei | a69a73a22c | fix device mismatch when directly using model to inference after quantization | 2023-04-28 16:41:46 +08:00
qwopqwop200 | 329a64ed40 | support conv1d,conv2d | 2023-04-28 09:15:42 +09:00
qwopqwop200 | bb9afe8b61 | support conv1d,conv2d | 2023-04-28 09:15:13 +09:00
qwopqwop200 | c1b7c7647d | support conv1d | 2023-04-28 09:14:44 +09:00
qwopqwop200 | ac41f68532 | add gpt2 | 2023-04-28 09:14:05 +09:00
qwopqwop200 | dad249990c | add gpt2 | 2023-04-28 09:13:22 +09:00
qwopqwop200 | 435eebee4b | support conv1d,conv2d | 2023-04-28 09:13:00 +09:00
qwopqwop200 | cc0f71a568 | add gpt2 | 2023-04-28 09:11:50 +09:00
qwopqwop200 | 3f90a22632 | fix bug | 2023-04-28 08:26:58 +09:00
qwopqwop200 | 9c38393e31 | fix bug about wf meta device | 2023-04-28 08:26:11 +09:00
PanQiWei | d0cd5af5d3 | make code more robust | 2023-04-28 01:29:12 +08:00
PanQiWei | 51d2e53130 | add support to cpu offloading and multi gpus inference on quantized model | 2023-04-28 00:53:57 +08:00
PanQiWei | b14dca9207 | disk offload assertion | 2023-04-27 21:31:53 +08:00
PanQiWei | 7a3397e7ba | add cpu offload when doing quantization | 2023-04-27 21:25:24 +08:00
PanQiWei | ac3f7054e0 | big fix | 2023-04-27 19:33:25 +08:00
PanQiWei | 498de923f2 | support multi gpus quantization | 2023-04-27 18:48:43 +08:00
qwopqwop200 | 8b6ee04aee | add option | 2023-04-27 17:29:36 +09:00
PanQiWei | c9bb427546 | align 'from_pretrained' api | 2023-04-27 02:29:32 +08:00
PanQiWei | a2abff983e | support dispatch layers to different devices when loading pretrained model before quantization | 2023-04-27 02:24:08 +08:00
PanQiWei | 950f203260 | add 'n_positions' to sequence length search list | 2023-04-27 01:09:10 +08:00
PanQiWei | 893c3264cb | make layer ignorance more robust | 2023-04-26 19:35:19 +08:00
PanQiWei | f2359f56cb | add support to use push_to_hub to upload and share quantized model | 2023-04-26 16:55:01 +08:00
PanQiWei | bf2ae6768d | bug fix | 2023-04-26 13:33:56 +08:00
PanQiWei | 73cb1dbf09 | optimize import and format code | 2023-04-26 13:08:47 +08:00
PanQiWei | 975f100d0f | init Quantizer() at GPTQ() init stage | 2023-04-25 23:13:09 +08:00
PanQiWei | c35dce525e | format code | 2023-04-25 22:58:52 +08:00
PanQiWei | 9f7f44146f | format code | 2023-04-25 22:45:27 +08:00
PanQiWei | b71211b4c3 | format code | 2023-04-25 22:36:28 +08:00