Commit graph

123 commits

Author SHA1 Message Date
PanQiWei
057c39e3f2 fix meta device bug when using low_cpu_mem_usage 2023-05-24 11:19:59 +08:00
PanQiWei
e2e7809a1f always enable QuantLinear bias to stay compatible with models quantized by other frameworks 2023-05-24 10:56:31 +08:00
PanQiWei
191da8141e fix device mismatch 2023-05-23 23:22:52 +08:00
PanQiWei
e4e90e8b0a add warmup_triton method 2023-05-23 23:18:46 +08:00
PanQiWei
ed14d3a786 fix saving quantized model failing when the pretrained model was loaded using CPU offload 2023-05-23 23:17:11 +08:00
PanQiWei
6476ee4235 add options: 'low_cpu_mem_usage' and 'full_cpu_offload' 2023-05-23 22:51:00 +08:00
TheBloke
bf633c298e Clean up some unused params 2023-05-20 10:32:27 +01:00
PanQiWei
86b3b52c63 fix ImportError when triton is not installed 2023-05-20 16:15:20 +08:00
潘其威(William)
13defe253a Merge pull request #84 from TheBloke/TheBloke_forward-positional-args: Forward position args to allow `model(tokens)` syntax 2023-05-20 15:04:27 +08:00
潘其威(William)
1ef0af824a Merge pull request #80 from PanQiWei/user_customized_device_map: Support user-customized `device_map` 2023-05-20 15:00:05 +08:00
TheBloke
e5c8479100 Remove debugging print line 2023-05-19 17:50:48 +01:00
TheBloke
735f7df4cc Add push_to_hub for HF hub uploading 2023-05-19 17:10:57 +01:00
TheBloke
908b338436 Initial support for model loading from HF hub 2023-05-19 15:57:05 +01:00
TheBloke
a397f00cc3 Implement HF cached download for quantize_config 2023-05-19 15:15:43 +01:00
TheBloke
7f165337ed Forward position args to allow `model(tokens)` syntax 2023-05-16 12:19:52 +01:00
PanQiWei
759d6953d4 support user-customized device_map 2023-05-15 13:26:38 +08:00
oobabooga
86c7021285 Look for .pt files 2023-05-15 00:00:05 -03:00
PanQiWei
de33d26d67 fix bugs 2023-05-14 13:07:18 +08:00
PanQiWei
2273f9ef39 refactor file structure for triton kernels 2023-05-14 11:49:10 +08:00
PanQiWei
fef1a4fe4b make code clean and extendable 2023-05-12 20:11:55 +08:00
PanQiWei
c5ff195764 skip fused module injection instead of raising an error if it's not supported yet. 2023-05-12 19:36:00 +08:00
PanQiWei
f159aeabb6 refactor .from_quantized api and improve model loading strategy 2023-05-12 18:09:50 +08:00
TheBloke
1b3329b399 Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand 2023-05-05 14:44:16 +01:00
qwopqwop200
afe1323b3f support faster kernel and strict model loading 2023-05-04 09:03:36 +09:00
qwopqwop200
694f2954a3 add auto model parameter 2023-05-02 22:16:23 +09:00
qwopqwop200
709bd7594f Merge pull request #44 from PanQiWei/fix-bug-cuda: Fix CUDA bug 2023-05-02 19:50:59 +09:00
潘其威(William)
144bd80436 Merge pull request #39 from TheBloke/TheBloke_check_model_exists: Check that model_save_name exists before trying to load it, to avoid a confusing checkpoint error 2023-05-01 19:55:24 +08:00
TheBloke
593a0b28bb Fix typo: 'hole' -> 'whole' 2023-05-01 10:25:18 +01:00
TheBloke
60195ca5f2 Check that model_save_name exists before trying inference, to avoid a confusing checkpoint error 2023-05-01 10:15:13 +01:00
qwopqwop200
95e633a597 add old CUDA kernel 2023-05-01 13:05:14 +09:00
潘其威(William)
5fa803334d Merge branch 'main' into change-save-name 2023-04-29 20:36:45 +08:00
qwopqwop200
787909084f fix bug 2023-04-29 19:08:34 +09:00
qwopqwop200
a2ef4b98db change the save name 2023-04-29 18:20:46 +09:00
qwopqwop200
1792cd1111 change the save name 2023-04-29 18:16:48 +09:00
ZXED
24a371d14a use the same Optional style as in other params 2023-04-29 09:52:11 +03:00
ZXED
c22770188d allow user to set trust_remote_code flag manually 2023-04-29 09:52:11 +03:00
ZXED
b3f19a7ba7 support custom model name when loading the model 2023-04-29 09:52:11 +03:00
ZXED
ea8ab73343 support custom quantize_config when loading the model 2023-04-29 09:51:50 +03:00
潘其威(William)
1e353a8dc5 Merge pull request #24 from PanQiWei/speedup_quantization: Offloading and multi-device quantization/inference 2023-04-28 18:50:12 +08:00
PanQiWei
bdb713b5a3 add batch_size to model.quant() api 2023-04-28 18:26:07 +08:00
PanQiWei
3dfc87bec3 return module in .to function 2023-04-28 17:20:46 +08:00
PanQiWei
a69a73a22c fix device mismatch when directly using the model for inference after quantization 2023-04-28 16:41:46 +08:00
qwopqwop200
3f90a22632 fix bug 2023-04-28 08:26:58 +09:00
PanQiWei
d0cd5af5d3 make code more robust 2023-04-28 01:29:12 +08:00
PanQiWei
51d2e53130 add support for CPU offloading and multi-GPU inference on quantized models 2023-04-28 00:53:57 +08:00
PanQiWei
b14dca9207 disk offload assertion 2023-04-27 21:31:53 +08:00
PanQiWei
7a3397e7ba add CPU offload when doing quantization 2023-04-27 21:25:24 +08:00
PanQiWei
498de923f2 support multi-GPU quantization 2023-04-27 18:48:43 +08:00
qwopqwop200
8b6ee04aee add option 2023-04-27 17:29:36 +09:00
PanQiWei
a2abff983e support dispatching layers to different devices when loading the pretrained model before quantization 2023-04-27 02:24:08 +08:00