Author | Commit | Message | Date
oobabooga | 92adceb7b5 | UI: Fix the model downloader progress bar | 2025-06-01 19:22:21 -07:00
oobabooga | 5d00574a56 | Minor UI fixes | 2025-05-20 16:20:49 -07:00
oobabooga | 9ec46b8c44 | Remove the HQQ loader (HQQ models can be loaded through Transformers) | 2025-05-19 09:23:24 -07:00
oobabooga | 2faaf18f1f | Add back the "Common values" to the ctx-size slider | 2025-05-18 09:06:20 -07:00
oobabooga | 1c549d176b | Fix GPU layers slider: honor saved settings and show true maximum | 2025-05-16 17:26:13 -07:00
oobabooga | adb975a380 | Prevent fractional gpu-layers in the UI | 2025-05-16 12:52:43 -07:00
oobabooga | fc483650b5 | Set the maximum gpu_layers value automatically when the model is loaded with --model | 2025-05-16 11:58:17 -07:00
oobabooga | 9ec9b1bf83 | Auto-adjust GPU layers after model unload to utilize freed VRAM | 2025-05-16 09:56:23 -07:00
oobabooga | 4925c307cf | Auto-adjust GPU layers on context size and cache type changes + many fixes | 2025-05-16 09:07:38 -07:00
oobabooga | cbf4daf1c8 | Hide the LoRA menu in portable mode | 2025-05-15 21:21:54 -07:00
oobabooga | 5534d01da0 | Estimate the VRAM for GGUF models + autoset gpu-layers (#6980) | 2025-05-16 00:07:37 -03:00
oobabooga | c4a715fd1e | UI: Move the LoRA menu under "Other options" | 2025-05-13 20:14:09 -07:00
oobabooga | 3fa1a899ae | UI: Fix gpu-layers being ignored (closes #6973) | 2025-05-13 12:07:59 -07:00
oobabooga | 512bc2d0e0 | UI: Update some labels | 2025-05-08 23:43:55 -07:00
oobabooga | f8ef6e09af | UI: Make ctx-size a slider | 2025-05-08 18:19:04 -07:00
oobabooga | a2ab42d390 | UI: Remove the exllamav2 info message | 2025-05-08 08:00:38 -07:00
oobabooga | 348d4860c2 | UI: Create a "Main options" section in the Model tab | 2025-05-08 07:58:59 -07:00
oobabooga | d2bae7694c | UI: Change the ctx-size description | 2025-05-08 07:26:23 -07:00
oobabooga | b817bb33fd | Minor fix after df7bb0db1f | 2025-05-05 05:00:20 -07:00
oobabooga | df7bb0db1f | Rename --n-gpu-layers to --gpu-layers | 2025-05-04 20:03:55 -07:00
oobabooga | ea60f14674 | UI: Show the list of files if the user tries to download a GGUF repository | 2025-05-03 06:06:50 -07:00
oobabooga | 4cea720da8 | UI: Remove the "Autoload the model" feature | 2025-05-02 16:38:28 -07:00
oobabooga | 905afced1c | Add a --portable flag to hide things in portable mode | 2025-05-02 16:34:29 -07:00
oobabooga | 307d13b540 | UI: Minor label change | 2025-04-30 18:58:14 -07:00
oobabooga | 15a29e99f8 | Lint | 2025-04-27 21:41:34 -07:00
oobabooga | be13f5199b | UI: Add an info message about how to use Speculative Decoding | 2025-04-27 21:40:38 -07:00
oobabooga | 7b80acd524 | Fix parsing --extra-flags | 2025-04-26 18:40:03 -07:00
oobabooga | 4ff91b6588 | Better default settings for Speculative Decoding | 2025-04-26 17:24:40 -07:00
oobabooga | d9de14d1f7 | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00
oobabooga | d4017fbb6d | ExLlamaV3: Add kv cache quantization (#6903) | 2025-04-25 21:32:00 -03:00
oobabooga | d4b1e31c49 | Use --ctx-size to specify the context size for all loaders. Old flags are still recognized as alternatives. | 2025-04-25 16:59:03 -07:00
oobabooga | 877cf44c08 | llama.cpp: Add StreamingLLM (--streaming-llm) | 2025-04-25 16:21:41 -07:00
oobabooga | 98f4c694b9 | llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server | 2025-04-25 07:32:51 -07:00
oobabooga | 93fd4ad25d | llama.cpp: Document the --device-draft syntax | 2025-04-24 09:20:11 -07:00
oobabooga | e99c20bcb0 | llama.cpp: Add speculative decoding (#6891) | 2025-04-23 20:10:16 -03:00
oobabooga | ae02ffc605 | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00
oobabooga | d68f0fbdf7 | Remove obsolete references to llamacpp_HF | 2025-04-18 07:46:04 -07:00
oobabooga | 8144e1031e | Remove deprecated command-line flags | 2025-04-18 06:02:28 -07:00
oobabooga | ae54d8faaa | New llama.cpp loader (#6846) | 2025-04-18 09:59:37 -03:00
oobabooga | 2c2d453c8c | Revert "Use ExLlamaV2 (instead of the HF one) for EXL2 models for now". This reverts commit 0ef1b8f8b4. | 2025-04-17 21:31:32 -07:00
oobabooga | 0ef1b8f8b4 | Use ExLlamaV2 (instead of the HF one) for EXL2 models for now. It doesn't seem to have the "OverflowError" bug. | 2025-04-17 05:47:40 -07:00
oobabooga | bf48ec8c44 | Remove an unnecessary UI message | 2025-04-07 17:43:41 -07:00
oobabooga | a5855c345c | Set context lengths to at most 8192 by default (to prevent out of memory errors) (#6835) | 2025-04-07 21:42:33 -03:00
oobabooga | 75ff3f3815 | UI: Mention common context length values | 2025-01-25 08:22:23 -08:00
oobabooga | 3020f2e5ec | UI: Improve the info message about --tensorcores | 2025-01-09 12:44:03 -08:00
oobabooga | 7157257c3f | Remove the AutoGPTQ loader (#6641) | 2025-01-08 19:28:56 -03:00
oobabooga | c0f600c887 | Add a --torch-compile flag for transformers | 2025-01-05 05:47:00 -08:00
oobabooga | 39a5c9a49c | UI organization (#6618) | 2024-12-29 11:16:17 -03:00
oobabooga | ddccc0d657 | UI: Minor change to log messages | 2024-12-17 19:39:00 -08:00
oobabooga | 3030c79e8c | UI: Show progress while loading a model | 2024-12-17 19:37:43 -08:00