潘其威(William)
e5771fb206
Update _base.py
fix key mismatch
2023-05-30 06:44:45 +08:00
潘其威(William)
ea74e15199
Update _base.py
add model_name_or_path and model_file_base_name to BaseQuantizeConfig for better model file management; add back save_dir to .from_quantized() for backward compatibility
2023-05-30 06:40:31 +08:00
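The two new BaseQuantizeConfig fields and the restored save_dir argument presumably fit together roughly as below; the field names come from the commit message, while the defaults and the exact call shape are assumptions rather than documented API:

```python
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

# Assumption: both new fields are optional and recorded in quantize_config.json,
# so the config can say where the quantized weights live and how the file is named.
quantize_config = BaseQuantizeConfig(
    bits=4,
    group_size=128,
    model_name_or_path="opt-125m-4bit-128g",      # local dir or hub id of the quantized model
    model_file_base_name="gptq_model-4bit-128g",  # weight file name without extension
)

# save_dir is accepted again for backward compatibility with older scripts;
# newer code can rely on the path recorded in quantize_config instead.
model = AutoGPTQForCausalLM.from_quantized(save_dir="opt-125m-4bit-128g", device="cuda:0")
```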
TheBloke
b7bb50b4d5
Fix bug added after merge
2023-05-25 07:05:51 +01:00
Tom Jobbins
492255b400
Merge branch 'main' into TheBloke_support-HF-download
2023-05-25 07:02:13 +01:00
PanQiWei
94ef4d5ada
update basic usage example code
2023-05-24 17:56:46 +08:00
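For reference, the basic usage flow that example code exercises looks roughly like this; model names and the calibration text are illustrative, so treat it as a sketch of the quantize/save/load cycle rather than the exact example shipped in the repo:

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "facebook/opt-125m"
quantized_model_dir = "opt-125m-4bit-128g"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir, use_fast=True)
# A real run would use many calibration samples; one is enough to show the shape.
examples = [tokenizer("auto-gptq is an easy-to-use model quantization library.")]

quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(pretrained_model_dir, quantize_config)
model.quantize(examples)                   # run GPTQ quantization on the calibration data
model.save_quantized(quantized_model_dir)  # writes weights plus quantize_config.json

model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0")
inputs = tokenizer("auto_gptq is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```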
PanQiWei
c89bb6450c
correct typo in function name
2023-05-24 17:43:38 +08:00
PanQiWei
10347fdd7b
remove full_cpu_offload argument and unify model dispatch strategy
2023-05-24 17:41:04 +08:00
PanQiWei
379f24c2a5
remove add_align_logits_hook_to_model
2023-05-24 17:01:57 +08:00
PanQiWei
749dba1a7e
disable add_align_logits_hook_to_model for now
2023-05-24 13:42:06 +08:00
PanQiWei
58c1b509f0
support add_align_logits_hook_to_model
2023-05-24 12:50:30 +08:00
PanQiWei
21ab7c435a
make comments more readable
2023-05-24 11:38:29 +08:00
PanQiWei
c31b370228
call make_sure_not_tensor_in_meta_device before loading checkpoint
2023-05-24 11:32:45 +08:00
PanQiWei
63f1b4e073
remove comment
2023-05-24 11:23:07 +08:00
PanQiWei
057c39e3f2
fix meta device bug when using low_cpu_mem_usage
2023-05-24 11:19:59 +08:00
PanQiWei
e2e7809a1f
always enable QuantLinear bias for compatibility with models quantized by other frameworks
2023-05-24 10:56:31 +08:00
PanQiWei
191da8141e
fix device mismatch
2023-05-23 23:22:52 +08:00
PanQiWei
e4e90e8b0a
add warmup_triton method
2023-05-23 23:18:46 +08:00
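A sketch of how the new method is presumably called after loading with the triton backend; the exact signature is an assumption based on the method name:

```python
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device="cuda:0", use_triton=True)
# Pre-compile / autotune the triton kernels once, so the first real generate()
# call doesn't pay the warm-up cost (assumed to be a no-op when triton is disabled).
model.warmup_triton()
```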
PanQiWei
ed14d3a786
fix failure to save quantized model when the pretrained model was loaded with CPU offload
2023-05-23 23:17:11 +08:00
PanQiWei
6476ee4235
add options: 'low_cpu_mem_usage' and 'full_cpu_offload'
2023-05-23 22:51:00 +08:00
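Of the two options, only low_cpu_mem_usage survives (full_cpu_offload is removed again further up this log in favour of a unified dispatch strategy), so a current-style call would look roughly like this sketch:

```python
# Assumption: low_cpu_mem_usage mirrors the transformers flag of the same name,
# building the model on the meta device first and materializing weights while
# the checkpoint is loaded.
model = AutoGPTQForCausalLM.from_quantized(
    quantized_model_dir,
    device="cuda:0",
    low_cpu_mem_usage=True,
)
```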
TheBloke
bf633c298e
Clean up some unused params
2023-05-20 10:32:27 +01:00
PanQiWei
86b3b52c63
fix ImportError when triton is not installed
2023-05-20 16:15:20 +08:00
潘其威(William)
13defe253a
Merge pull request #84 from TheBloke/TheBloke_forward-positional-args
Forward positional args to allow `model(tokens)` syntax
2023-05-20 15:04:27 +08:00
潘其威(William)
1ef0af824a
Merge pull request #80 from PanQiWei/user_customized_device_map
Support user-customized `device_map`
2023-05-20 15:00:05 +08:00
TheBloke
e5c8479100
Remove debugging print line
2023-05-19 17:50:48 +01:00
TheBloke
735f7df4cc
Add push_to_hub for HF hub uploading
2023-05-19 17:10:57 +01:00
TheBloke
908b338436
Initial support for model loading from HF hub
2023-05-19 15:57:05 +01:00
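With hub loading in place, from_quantized presumably accepts a Hugging Face Hub repo id in place of a local directory; the repo id below is purely illustrative:

```python
# Weights and quantize_config.json are fetched via the HF cache instead of a local path.
model = AutoGPTQForCausalLM.from_quantized(
    "TheBloke/some-model-GPTQ",  # hypothetical hub repo id
    use_safetensors=True,
    device="cuda:0",
)
```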
TheBloke
a397f00cc3
Implement HF cached download for quantize_config
2023-05-19 15:15:43 +01:00
TheBloke
7f165337ed
Forward positional args to allow `model(tokens)` syntax
2023-05-16 12:19:52 +01:00
PanQiWei
759d6953d4
support user-customized device_map
2023-05-15 13:26:38 +08:00
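A sketch of what the user-supplied mapping is presumably allowed to look like; the concrete module names depend on the architecture and are only an illustration:

```python
# Let accelerate infer a mapping across the available devices ...
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device_map="auto")

# ... or hand one over explicitly (module names below are made up for an OPT-like model).
custom_map = {"model.decoder.embed_tokens": 0, "model.decoder.layers": 0, "lm_head": "cpu"}
model = AutoGPTQForCausalLM.from_quantized(quantized_model_dir, device_map=custom_map)
```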
oobabooga
86c7021285
Look for .pt files
2023-05-15 00:00:05 -03:00
PanQiWei
de33d26d67
fix bugs
2023-05-14 13:07:18 +08:00
PanQiWei
2273f9ef39
refactor file structure for triton kernels
2023-05-14 11:49:10 +08:00
PanQiWei
fef1a4fe4b
make code clean and extendable
2023-05-12 20:11:55 +08:00
PanQiWei
c5ff195764
skip fused module injection instead of raising an error if it's not supported yet
2023-05-12 19:36:00 +08:00
PanQiWei
f159aeabb6
refactor .from_quantized api and improve model loading strategy
2023-05-12 18:09:50 +08:00
TheBloke
1b3329b399
Fix 'groupsize' -> 'group_size' in all other .py files. I haven't touched any CUDA kernels in case there's any complexity there I don't understand
2023-05-05 14:44:16 +01:00
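After the rename, the user-facing keyword is group_size everywhere outside the CUDA kernels, e.g.:

```python
quantize_config = BaseQuantizeConfig(bits=4, group_size=128)  # previously: groupsize=128
```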
qwopqwop200
afe1323b3f
support faster kernel and strict model load
2023-05-04 09:03:36 +09:00
qwopqwop200
694f2954a3
add auto model parameter
2023-05-02 22:16:23 +09:00
qwopqwop200
709bd7594f
Merge pull request #44 from PanQiWei/fix-bug-cuda
Fix CUDA bug
2023-05-02 19:50:59 +09:00
潘其威(William)
144bd80436
Merge pull request #39 from TheBloke/TheBloke_check_model_exists
Check that model_save_name exists before trying to load it, to avoid a confusing checkpoint error
2023-05-01 19:55:24 +08:00
TheBloke
593a0b28bb
Fix typo: 'hole' -> 'whole'
2023-05-01 10:25:18 +01:00
TheBloke
60195ca5f2
Check that model_save_name exists before trying inference, to avoid a confusing checkpoint error
2023-05-01 10:15:13 +01:00
qwopqwop200
95e633a597
add old cuda
2023-05-01 13:05:14 +09:00
潘其威(William)
5fa803334d
Merge branch 'main' into change-save-name
2023-04-29 20:36:45 +08:00
qwopqwop200
787909084f
fix bug
2023-04-29 19:08:34 +09:00
qwopqwop200
a2ef4b98db
change the save name
2023-04-29 18:20:46 +09:00
qwopqwop200
1792cd1111
change the save name
2023-04-29 18:16:48 +09:00
ZXED
24a371d14a
use the same Optional style as in other params
2023-04-29 09:52:11 +03:00
ZXED
c22770188d
allow user to set trust_remote_code flag manually
2023-04-29 09:52:11 +03:00
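The flag is presumably just forwarded to transformers when fetching the config and weights, so loading a model whose repo ships custom modeling code would look roughly like this (the repo id is hypothetical):

```python
model = AutoGPTQForCausalLM.from_pretrained(
    "some-org/custom-arch-model",                          # hypothetical repo with remote code
    quantize_config=BaseQuantizeConfig(bits=4, group_size=128),
    trust_remote_code=True,                                # opt in to executing the repo's code
)
```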
ZXED
b3f19a7ba7
support custom model name when loading the model
2023-04-29 09:52:11 +03:00