Commit graph

130 commits

Author SHA1 Message Date
潘其威(William)
15db2cdc44
Update _base.py
fix problem of recursively appending the file extension to model_base_name
2023-05-30 07:26:42 +08:00
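The fix above stops a file extension from being appended repeatedly to `model_base_name`. A minimal sketch of that guard pattern (names are illustrative, not the repository's actual code):

```python
def with_extension(model_base_name: str, ext: str = ".bin") -> str:
    """Append ext only if it is not already present, so repeated
    calls do not stack extensions recursively."""
    if model_base_name.endswith(ext):
        return model_base_name
    return model_base_name + ext

# Repeated application is idempotent:
name = with_extension("gptq_model-4bit-128g")  # "gptq_model-4bit-128g.bin"
name = with_extension(name)                    # still "gptq_model-4bit-128g.bin"
```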
潘其威(William)
cfa7271617
Update _base.py
fix variable not found error
2023-05-30 07:22:10 +08:00
潘其威(William)
e5771fb206
Update _base.py
fix key mismatch
2023-05-30 06:44:45 +08:00
潘其威(William)
61a4ea035f
Update auto.py
add back save_dir for backward compatibility
2023-05-30 06:43:00 +08:00
潘其威(William)
ea74e15199
Update _base.py
add model_name_or_path and model_file_base_name to BaseQuantizeConfig for better model file management; add back save_dir to .from_quantized() for backward compatibility
2023-05-30 06:40:31 +08:00
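Keeping a removed keyword like `save_dir` working usually means accepting it, mapping it to the new parameter, and emitting a deprecation warning. A hypothetical sketch of such a backward-compatibility shim (the mapping to `model_name_or_path` and the function body are illustrative, not AutoGPTQ's actual implementation):

```python
import warnings


def from_quantized(model_name_or_path=None, save_dir=None, **kwargs):
    """Illustrative shim: accept the legacy save_dir argument and
    forward it to model_name_or_path with a deprecation warning."""
    if save_dir is not None:
        warnings.warn(
            "save_dir is deprecated, use model_name_or_path instead",
            DeprecationWarning,
        )
        if model_name_or_path is None:
            model_name_or_path = save_dir
    if model_name_or_path is None:
        raise ValueError("model_name_or_path must be provided")
    return model_name_or_path  # stand-in for the real loading logic
```

Old call sites such as `from_quantized(save_dir="quantized_model")` keep working, while new code passes `model_name_or_path` directly.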
TheBloke
b7bb50b4d5 Fix bug added after merge 2023-05-25 07:05:51 +01:00
Tom Jobbins
492255b400
Merge branch 'main' into TheBloke_support-HF-download 2023-05-25 07:02:13 +01:00
PanQiWei
94ef4d5ada update basic usage example code 2023-05-24 17:56:46 +08:00
PanQiWei
c89bb6450c correct typo in function name 2023-05-24 17:43:38 +08:00
PanQiWei
10347fdd7b remove full_cpu_offload argument and unify model dispatch strategy 2023-05-24 17:41:04 +08:00
PanQiWei
379f24c2a5 remove add_align_logits_hook_to_model 2023-05-24 17:01:57 +08:00
PanQiWei
749dba1a7e disable add_align_logits_hook_to_model for now 2023-05-24 13:42:06 +08:00
PanQiWei
58c1b509f0 support add_align_logits_hook_to_model 2023-05-24 12:50:30 +08:00
PanQiWei
21ab7c435a make comments more readable 2023-05-24 11:38:29 +08:00
PanQiWei
c31b370228 call make_sure_not_tensor_in_meta_device before loading checkpoint 2023-05-24 11:32:45 +08:00
PanQiWei
63f1b4e073 remove comment 2023-05-24 11:23:07 +08:00
PanQiWei
057c39e3f2 fix meta device bug when using low_cpu_mem_usage 2023-05-24 11:19:59 +08:00
PanQiWei
e2e7809a1f always enable QuantLinear bias to stay compatible with models quantized by other frameworks 2023-05-24 10:56:31 +08:00
PanQiWei
191da8141e fix device mismatch 2023-05-23 23:22:52 +08:00
PanQiWei
e4e90e8b0a add warmup_triton method 2023-05-23 23:18:46 +08:00
PanQiWei
ed14d3a786 fix failure to save quantized model when the pretrained model was loaded with CPU offload 2023-05-23 23:17:11 +08:00
PanQiWei
6476ee4235 add options: 'low_cpu_mem_usage' and 'full_cpu_offload' 2023-05-23 22:51:00 +08:00
PanQiWei
1b2159bd4c add more helper functions 2023-05-23 19:30:28 +08:00
TheBloke
bf633c298e Clean up some unused params 2023-05-20 10:32:27 +01:00
PanQiWei
86b3b52c63 fix ImportError when triton is not installed 2023-05-20 16:15:20 +08:00
潘其威(William)
13defe253a
Merge pull request #84 from TheBloke/TheBloke_forward-positional-args
Forward position args to allow `model(tokens)` syntax
2023-05-20 15:04:27 +08:00
潘其威(William)
1ef0af824a
Merge pull request #80 from PanQiWei/user_customized_device_map
Support user-customized `device_map`
2023-05-20 15:00:05 +08:00
TheBloke
e5c8479100 Remove debugging print line 2023-05-19 17:50:48 +01:00
TheBloke
735f7df4cc Add push_to_hub for HF hub uploading 2023-05-19 17:10:57 +01:00
TheBloke
908b338436 Initial support for model loading from HF hub 2023-05-19 15:57:05 +01:00
TheBloke
a397f00cc3 Implement HF cached download for quantize_config 2023-05-19 15:15:43 +01:00
TheBloke
7f165337ed Forward positional args to allow `model(tokens)` syntax 2023-05-16 12:19:52 +01:00
PanQiWei
759d6953d4 support user-customized device_map 2023-05-15 13:26:38 +08:00
PanQiWei
07e06fa08c make compatible with older transformers version 2023-05-15 13:26:18 +08:00
oobabooga
86c7021285
Look for .pt files 2023-05-15 00:00:05 -03:00
PanQiWei
d5429441ef add GPTJ fused attention module 2023-05-14 16:17:21 +08:00
PanQiWei
5445c67190 add library version comparison helper functions 2023-05-14 16:16:06 +08:00
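Version-comparison helpers of this kind typically parse version strings into comparable tuples. A stdlib-only sketch of the idea (simplified, ignoring pre-release tags; not the repository's actual implementation):

```python
def parse_version(version: str) -> tuple:
    """Turn a dotted version string like '4.29.1' into (4, 29, 1)
    so versions compare numerically rather than lexically."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())


def meets_minimum(installed: str, required: str) -> bool:
    """Return True when the installed version is at least the required one."""
    return parse_version(installed) >= parse_version(required)


meets_minimum("4.29.1", "4.28.0")  # True
meets_minimum("4.9.0", "4.28.0")   # False (lexical comparison would say True)
```

Tuple comparison handles differing lengths sensibly as well: `(4, 29) < (4, 29, 1)`, matching the intuition that 4.29 precedes 4.29.1.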
PanQiWei
de33d26d67 fix bugs 2023-05-14 13:07:18 +08:00
PanQiWei
2273f9ef39 refactor file structure for triton kernels 2023-05-14 11:49:10 +08:00
PanQiWei
fef1a4fe4b make code clean and extendable 2023-05-12 20:11:55 +08:00
PanQiWei
d718d63e9c add import_utils.py for commonly used module importation 2023-05-12 19:58:48 +08:00
PanQiWei
c5ff195764 skip fused module injection instead of raising an error if it's not supported yet 2023-05-12 19:36:00 +08:00
PanQiWei
f159aeabb6 refactor .from_quantized api and improve model loading strategy 2023-05-12 18:09:50 +08:00
PanQiWei
4bb10fda49 groupsize -> group_size 2023-05-12 13:37:52 +08:00
qwopqwop200
3ff6ab18cb
Merge branch 'main' into faster-llama 2023-05-06 00:20:29 +09:00
TheBloke
1b3329b399 Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand 2023-05-05 14:44:16 +01:00
TheBloke
f61ce12271 Change 'groupsize' to 'group_size' everywhere. Turns out this is easier than 'groupsize' due to dependencies in other files. 2023-05-05 13:36:00 +01:00
TheBloke
f64c71e779 Change references to 'group_size' to 'groupsize' to match rest of this file 2023-05-05 13:21:13 +01:00
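A rename like `groupsize` → `group_size` often keeps previously saved configs loadable by normalizing the legacy key on read. A hypothetical sketch of that normalization step (the function and dict layout are illustrative, not the project's actual config loader):

```python
def normalize_quantize_config(config: dict) -> dict:
    """Accept configs saved under the old 'groupsize' key and
    rewrite them to the new 'group_size' spelling."""
    config = dict(config)  # don't mutate the caller's dict
    if "groupsize" in config and "group_size" not in config:
        config["group_size"] = config.pop("groupsize")
    return config


normalize_quantize_config({"bits": 4, "groupsize": 128})
# {'bits': 4, 'group_size': 128}
```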
PanQiWei
1c6bb69fae fix attribute name error 2023-05-04 22:10:33 +08:00
潘其威(William)
771b650a7c
Merge pull request #38 from PanQiWei/faster-cuda-no-actorder
Faster cuda no actorder
2023-05-04 21:47:19 +08:00