潘其威(William) | 13defe253a | Merge pull request #84 from TheBloke/TheBloke_forward-positional-args: Forward position args to allow `model(tokens)` syntax | 2023-05-20 15:04:27 +08:00
潘其威(William) | 1ef0af824a | Merge pull request #80 from PanQiWei/user_customized_device_map: Support user-customized `device_map` | 2023-05-20 15:00:05 +08:00
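PR #80 above lets callers hand their own accelerate-style `device_map` to `from_quantized`. A minimal sketch of the idea; the model path and module names below are placeholders for a LLaMA-style layout, not values from the PR:

```python
from auto_gptq import AutoGPTQForCausalLM

# Accelerate-style map: module name -> device. The names follow a
# LLaMA-style layout and are placeholders; adjust to the actual model.
device_map = {
    "model.embed_tokens": 0,  # embeddings on GPU 0
    "model.layers": 0,        # transformer blocks on GPU 0
    "model.norm": "cpu",      # offload the final norm to CPU
    "lm_head": "cpu",         # offload the LM head to CPU
}

model = AutoGPTQForCausalLM.from_quantized(
    "path/to/quantized-model",  # placeholder path
    device_map=device_map,
)
```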
TheBloke | 7f165337ed | Forward position args to allow `model(tokens)` syntax | 2023-05-16 12:19:52 +01:00
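The commit above makes the wrapper's forward pass hand positional arguments through to the wrapped model, so `model(tokens)` works as well as `model(input_ids=tokens)`. A minimal sketch of the pattern; the class and attribute names are illustrative, not AutoGPTQ's internals:

```python
import torch.nn as nn

class WrappedCausalLM(nn.Module):
    """Illustrative wrapper that forwards *args as well as **kwargs."""

    def __init__(self, model: nn.Module):
        super().__init__()
        self.model = model

    def forward(self, *args, **kwargs):
        # Pass positional and keyword arguments through unchanged, so
        # callers are not forced into keyword-only invocation.
        return self.model(*args, **kwargs)
```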
PanQiWei | 759d6953d4 | support user-customized `device_map` | 2023-05-15 13:26:38 +08:00
oobabooga | 86c7021285 | Look for .pt files | 2023-05-15 00:00:05 -03:00
PanQiWei | de33d26d67 | fix bugs | 2023-05-14 13:07:18 +08:00
PanQiWei | 2273f9ef39 | refactor file structure for triton kernels | 2023-05-14 11:49:10 +08:00
PanQiWei | fef1a4fe4b | make code clean and extendable | 2023-05-12 20:11:55 +08:00
PanQiWei | c5ff195764 | skip fused module injection instead of raising an error if it's not supported yet | 2023-05-12 19:36:00 +08:00
PanQiWei | f159aeabb6 | refactor .from_quantized API and improve model loading strategy | 2023-05-12 18:09:50 +08:00
TheBloke | 1b3329b399 | Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand | 2023-05-05 14:44:16 +01:00
qwopqwop200 | afe1323b3f | support faster kernel and strict model loading | 2023-05-04 09:03:36 +09:00
qwopqwop200 | 694f2954a3 | add auto model parameter | 2023-05-02 22:16:23 +09:00
qwopqwop200 | 709bd7594f | Merge pull request #44 from PanQiWei/fix-bug-cuda: Fix CUDA bug | 2023-05-02 19:50:59 +09:00
潘其威(William) | 144bd80436 | Merge pull request #39 from TheBloke/TheBloke_check_model_exists: Check that model_save_name exists before trying to load it, to avoid a confusing checkpoint error | 2023-05-01 19:55:24 +08:00
TheBloke | 593a0b28bb | Fix typo: 'hole' -> 'whole' | 2023-05-01 10:25:18 +01:00
TheBloke | 60195ca5f2 | Check that model_save_name exists before trying inference, to avoid a confusing checkpoint error | 2023-05-01 10:15:13 +01:00
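The guard TheBloke describes is straightforward: verify the serialized model file exists before loading it, so the user sees a clear message instead of an opaque checkpoint error. A sketch with an illustrative helper name:

```python
import os

def ensure_model_file_exists(model_save_name: str) -> None:
    """Illustrative helper: fail fast with a readable error."""
    if not os.path.isfile(model_save_name):
        raise FileNotFoundError(
            f"Could not find model checkpoint at {model_save_name}; "
            "check the model directory and file basename."
        )
```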
qwopqwop200 | 95e633a597 | add old CUDA | 2023-05-01 13:05:14 +09:00
潘其威(William) | 5fa803334d | Merge branch 'main' into change-save-name | 2023-04-29 20:36:45 +08:00
qwopqwop200 | 787909084f | fix bug | 2023-04-29 19:08:34 +09:00
qwopqwop200 | a2ef4b98db | change the save name | 2023-04-29 18:20:46 +09:00
qwopqwop200 | 1792cd1111 | change the save name | 2023-04-29 18:16:48 +09:00
ZXED | 24a371d14a | use the same Optional style as in other params | 2023-04-29 09:52:11 +03:00
ZXED | c22770188d | allow user to set trust_remote_code flag manually | 2023-04-29 09:52:11 +03:00
ZXED | b3f19a7ba7 | support custom model name when loading the model | 2023-04-29 09:52:11 +03:00
ZXED | ea8ab73343 | support custom quantize_config when loading the model | 2023-04-29 09:51:50 +03:00
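The four ZXED commits above all extend model loading. A combined sketch of how those options fit together; the path and basename are placeholders, and `model_basename` is my assumed parameter name for the custom-model-name feature:

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# User-supplied quantize_config instead of reading quantize_config.json.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128)

model = AutoGPTQForCausalLM.from_quantized(
    "path/to/quantized-model",              # placeholder path
    model_basename="gptq_model-4bit-128g",  # assumed weight-file name, minus extension
    quantize_config=quantize_config,
    trust_remote_code=False,                # now an explicit opt-in, not always on
)
```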
潘其威(William) | 1e353a8dc5 | Merge pull request #24 from PanQiWei/speedup_quantization: Offloading and multi-device quantization/inference | 2023-04-28 18:50:12 +08:00
PanQiWei | bdb713b5a3 | add batch_size to model.quant() API | 2023-04-28 18:26:07 +08:00
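The `batch_size` knob lets calibration examples be fed through the model in batches rather than one at a time. A sketch against the released API, where the method is spelled `quantize`; the model id and example text are placeholders:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained = "facebook/opt-125m"  # small placeholder model
tokenizer = AutoTokenizer.from_pretrained(pretrained)
examples = [
    tokenizer("auto-gptq is an easy-to-use quantization package.", return_tensors="pt")
]

model = AutoGPTQForCausalLM.from_pretrained(
    pretrained, BaseQuantizeConfig(bits=4, group_size=128)
)
model.quantize(examples, batch_size=1)  # batch_size added by the commit above
```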
PanQiWei | 3dfc87bec3 | return module from the .to() method | 2023-04-28 17:20:46 +08:00
PanQiWei | a69a73a22c | fix device mismatch when using the model for inference directly after quantization | 2023-04-28 16:41:46 +08:00
qwopqwop200 | 3f90a22632 | fix bug | 2023-04-28 08:26:58 +09:00
PanQiWei | d0cd5af5d3 | make code more robust | 2023-04-28 01:29:12 +08:00
PanQiWei | 51d2e53130 | add support for CPU offloading and multi-GPU inference on quantized models | 2023-04-28 00:53:57 +08:00
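For offloaded or multi-GPU inference, the usual route is to let accelerate place layers under a memory budget. A hedged sketch; the budgets and path are placeholders, and `max_memory` is the accelerate-style argument I assume `from_quantized` forwards:

```python
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "path/to/quantized-model",                          # placeholder path
    device_map="auto",                                  # let accelerate plan placement
    max_memory={0: "8GiB", 1: "8GiB", "cpu": "30GiB"},  # placeholder budgets
)
```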
PanQiWei | b14dca9207 | disk offload assertion | 2023-04-27 21:31:53 +08:00
PanQiWei | 7a3397e7ba | add CPU offload when doing quantization | 2023-04-27 21:25:24 +08:00
PanQiWei | 498de923f2 | support multi-GPU quantization | 2023-04-27 18:48:43 +08:00
qwopqwop200 | 8b6ee04aee | add option | 2023-04-27 17:29:36 +09:00
PanQiWei | a2abff983e | support dispatching layers to different devices when loading a pretrained model before quantization | 2023-04-27 02:24:08 +08:00
PanQiWei | 950f203260 | add 'n_positions' to sequence length search list | 2023-04-27 01:09:10 +08:00
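The commit above extends the list of config attributes probed to discover a model's maximum sequence length ('n_positions' is the GPT-2-style name). A sketch of that search; the function name and attribute order are illustrative:

```python
from transformers import PretrainedConfig

def find_seqlen(config: PretrainedConfig) -> int:
    # Try the attribute names different architectures use for maximum
    # sequence length; 'n_positions' is the newly added entry.
    for attr in ("max_position_embeddings", "seq_length", "n_positions"):
        if hasattr(config, attr):
            return getattr(config, attr)
    raise ValueError("could not infer the model's maximum sequence length")
```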
PanQiWei | 893c3264cb | make ignored-layer handling more robust | 2023-04-26 19:35:19 +08:00
PanQiWei | f2359f56cb | add support for push_to_hub to upload and share quantized models | 2023-04-26 16:55:01 +08:00
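The push_to_hub signature has shifted between AutoGPTQ versions, so this sketch sticks to the underlying huggingface_hub calls: save the quantized weights locally, then upload the folder. The repo id and directory are placeholders, and `model` is a quantized model as in the earlier sketches:

```python
from huggingface_hub import HfApi

save_dir = "opt-125m-4bit"  # placeholder local directory
model.save_quantized(save_dir)  # `model` from the quantization sketch above

api = HfApi()
api.create_repo("your-username/opt-125m-4bit", exist_ok=True)
api.upload_folder(folder_path=save_dir, repo_id="your-username/opt-125m-4bit")
```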
PanQiWei | 975f100d0f | init Quantizer() at GPTQ() init stage | 2023-04-25 23:13:09 +08:00
PanQiWei | 062b34f31a | add inference_mode and autocast context managers to the generate function | 2023-04-25 20:47:33 +08:00
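Wrapping generation this way keeps autograd state out of inference and runs eligible CUDA ops in reduced precision. An illustrative standalone version of the pattern, not AutoGPTQ's exact code:

```python
import torch

def generate(model, **generate_kwargs):
    # inference_mode disables autograd tracking entirely; autocast runs
    # eligible CUDA ops in reduced precision.
    with torch.inference_mode(), torch.cuda.amp.autocast():
        return model.generate(**generate_kwargs)
```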
PanQiWei | 31d683f85b | add option to choose whether to run autotune warmup after quantization | 2023-04-25 20:29:05 +08:00
PanQiWei | 9c405b1628 | add Triton support | 2023-04-25 20:05:22 +08:00
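Triton kernels are opt-in at load time via the `use_triton` switch. A sketch; the path is a placeholder:

```python
from auto_gptq import AutoGPTQForCausalLM

model = AutoGPTQForCausalLM.from_quantized(
    "path/to/quantized-model",  # placeholder path
    device="cuda:0",
    use_triton=True,  # select the Triton kernels; False falls back to CUDA
)
```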
PanQiWei | 832dc4a7a1 | refactor file structure | 2023-04-25 18:58:20 +08:00
PanQiWei | 419160b733 | always trust remote code | 2023-04-25 12:52:49 +08:00
PanQiWei | f748dad2e1 | always trust remote code | 2023-04-25 12:13:46 +08:00
PanQiWei | 7d3a625cee | fix mismatch of GPTNeoXForCausalLM's lm_head | 2023-04-24 20:51:56 +08:00
PanQiWei | 1a8c460262 | fix problem where some models require extra positional arguments in the transformer layer's forward function | 2023-04-24 14:46:21 +08:00
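The final fix concerns input capture during quantization: when a hooked transformer layer receives extra positional arguments, they must be cached and replayed rather than dropped. A hypothetical sketch of the pattern; the class name and control flow are illustrative, not AutoGPTQ's exact internals:

```python
import torch.nn as nn

class LayerInputCatcher(nn.Module):
    """Hypothetical hook module that records every argument a layer gets."""

    def __init__(self, layer: nn.Module):
        super().__init__()
        self.layer = layer
        self.cached_args = []
        self.cached_kwargs = []

    def forward(self, *args, **kwargs):
        # Record all positional args, not just hidden_states, so models whose
        # layers take extra positional inputs can be replayed faithfully.
        self.cached_args.append(args)
        self.cached_kwargs.append(kwargs)
        raise ValueError  # abort the forward pass once inputs are captured
```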