oobabooga | d08acb4af9 | UI: Rename enable_thinking -> Enable thinking | 2025-05-02 20:50:52 -07:00
oobabooga | 3526b7923c | Remove extensions with requirements from portable builds | 2025-05-02 17:40:53 -07:00
oobabooga | 4cea720da8 | UI: Remove the "Autoload the model" feature | 2025-05-02 16:38:28 -07:00
oobabooga | 905afced1c | Add a --portable flag to hide things in portable mode | 2025-05-02 16:34:29 -07:00
oobabooga | 3f26b0408b | Fix after 9e3867dc83 | 2025-05-02 16:17:22 -07:00
oobabooga | 9e3867dc83 | llama.cpp: Fix manual random seeds | 2025-05-02 09:36:15 -07:00
oobabooga | d5c407cf35 | Use Vulkan instead of ROCm for llama.cpp on AMD | 2025-05-01 20:05:36 -07:00
oobabooga | f8aaf3c23a | Use ROCm 6.2.4 on AMD | 2025-05-01 19:50:46 -07:00
oobabooga | c12a53c998 | Use turboderp's exllamav2 wheels | 2025-05-01 19:46:56 -07:00
oobabooga | 89090d9a61 | Update README | 2025-05-01 08:22:54 -07:00
oobabooga | b950a0c6db | Lint | 2025-04-30 20:02:10 -07:00
oobabooga | 307d13b540 | UI: Minor label change | 2025-04-30 18:58:14 -07:00
oobabooga | 55283bb8f1 | Fix CFG with ExLlamaV2_HF (closes #6937) | 2025-04-30 18:43:45 -07:00
oobabooga | ec2e641749 | Update settings-template.yaml | 2025-04-30 15:25:26 -07:00
oobabooga | a6c3ec2299 | llama.cpp: Explicitly send cache_prompt = True | 2025-04-30 15:24:07 -07:00
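A minimal sketch of what explicitly sending cache_prompt to a running llama-server completion endpoint could look like. The URL, port, and surrounding payload fields are illustrative assumptions, not the project's actual request code.

```python
# Hedged sketch: ask llama-server to reuse the cached prompt prefix between
# requests. The URL, port, and payload shape below are illustrative assumptions.
import requests

payload = {
    "prompt": "Once upon a time",
    "n_predict": 64,
    # Explicitly request prompt caching so repeated requests sharing a prefix
    # skip re-processing that prefix.
    "cache_prompt": True,
}

response = requests.post("http://127.0.0.1:8080/completion", json=payload, timeout=60)
print(response.json().get("content", ""))
```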
oobabooga | 195a45c6e1 | UI: Make thinking blocks closed by default | 2025-04-30 15:12:46 -07:00
oobabooga | cd5c32dc19 | UI: Fix max_updates_second not working | 2025-04-30 14:54:05 -07:00
oobabooga | b46ca01340 | UI: Set max_updates_second to 12 by default | 2025-04-30 14:53:15 -07:00
    When the tokens/second is at ~50 and the model is a thinking model, the markdown rendering for the streaming message becomes a CPU bottleneck.
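A rough sketch of the kind of redraw throttling the commit above describes: cap how often the streamed message is re-rendered so markdown rendering stops being a CPU bottleneck at high tokens/second. The render_markdown callback and stream_with_throttle helper are hypothetical names for illustration, not functions from the project.

```python
# Hedged sketch: throttle streamed-message redraws to a fixed rate.
# `render_markdown` is a hypothetical callback, not a project function.
import time

def stream_with_throttle(token_iterator, render_markdown, max_updates_second=12):
    min_interval = 1.0 / max_updates_second
    last_render = 0.0
    text = ""
    for token in token_iterator:
        text += token
        now = time.monotonic()
        if now - last_render >= min_interval:
            render_markdown(text)
            last_render = now
    render_markdown(text)  # always render the final, complete message
    return text
```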
oobabooga | a4bf339724 | Bump llama.cpp | 2025-04-30 11:13:14 -07:00
oobabooga | e9569c3984 | Fixes after c5fe92d152 | 2025-04-30 06:57:23 -07:00
oobabooga | 771d3d8ed6 | Fix getting the llama.cpp logprobs for Qwen3-30B-A3B | 2025-04-30 06:48:32 -07:00
oobabooga | 7f49e3c3ce | Bump ExLlamaV3 | 2025-04-30 05:25:09 -07:00
oobabooga | c5fe92d152 | Bump llama.cpp | 2025-04-30 05:24:58 -07:00
oobabooga | 1dd4aedbe1 | Fix the streaming_llm UI checkbox not being interactive | 2025-04-29 05:28:46 -07:00
oobabooga | c5fb51e5d1 | Update README | 2025-04-28 22:40:26 -07:00
oobabooga | d10bded7f8 | UI: Add an enable_thinking option to enable/disable Qwen3 thinking | 2025-04-28 22:37:01 -07:00
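For context on the enable_thinking option above: Qwen3's chat template accepts an enable_thinking flag, and extra keyword arguments to apply_chat_template are forwarded to the template. The sketch below shows that mechanism in isolation; the model name and prompt are illustrative, and this is not the webui's own template-handling code.

```python
# Hedged sketch: toggle Qwen3 thinking via the chat template's enable_thinking flag.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")  # illustrative model choice
messages = [{"role": "user", "content": "Give me a short introduction to LLMs."}]

prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # disable the <think> block for this prompt
)
print(prompt)
```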
oobabooga | 1ee0acc852 | llama.cpp: Make --verbose print the llama-server command | 2025-04-28 15:56:25 -07:00
oobabooga | 15a29e99f8 | Lint | 2025-04-27 21:41:34 -07:00
oobabooga | be13f5199b | UI: Add an info message about how to use Speculative Decoding | 2025-04-27 21:40:38 -07:00
oobabooga | c6c2855c80 | llama.cpp: Remove the timeout while loading models (closes #6907) | 2025-04-27 21:22:21 -07:00
oobabooga | bbcaec75b4 | API: Find a new port if the default one is taken (closes #6918) | 2025-04-27 21:13:16 -07:00
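One common way to fall back to a free port when the default is taken, sketched with the standard library; the default port, retry count, and function name here are assumptions, and the API extension's actual implementation may differ.

```python
# Hedged sketch: probe ports starting from the preferred one and return the
# first that can be bound. Defaults are illustrative assumptions.
import socket

def find_available_port(preferred=5000, attempts=100):
    for port in range(preferred, preferred + attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind(("0.0.0.0", port))
            except OSError:
                continue  # port already taken, try the next one
            return port
    raise RuntimeError("No free port found in the probed range")

port = find_available_port(5000)
print(f"API listening on port {port}")
```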
oobabooga | ee0592473c | Fix ExLlamaV3_HF leaking memory (attempt) | 2025-04-27 21:04:02 -07:00
oobabooga | 965ca7948f | Update README | 2025-04-27 07:33:08 -07:00
oobabooga | f5b59d2b0b | Fix the vulkan workflow | 2025-04-26 20:11:24 -07:00
oobabooga | 765fea5e36 | UI: minor style change | 2025-04-26 19:33:46 -07:00
oobabooga | 70952553c7 | Lint | 2025-04-26 19:29:08 -07:00
oobabooga | 363b632a0d | Lint | 2025-04-26 19:22:36 -07:00
oobabooga | fa861de05b | Fix portable builds with Python 3.12 | 2025-04-26 18:52:44 -07:00
oobabooga | 7b80acd524 | Fix parsing --extra-flags | 2025-04-26 18:40:03 -07:00
oobabooga | 943451284f | Fix the Notebook tab not loading its default prompt | 2025-04-26 18:25:06 -07:00
oobabooga | 511eb6aa94 | Fix saving settings to settings.yaml | 2025-04-26 18:20:00 -07:00
oobabooga | 8b83e6f843 | Prevent Gradio from saying 'Thank you for being a Gradio user!' | 2025-04-26 18:14:57 -07:00
oobabooga | 4a32e1f80c | UI: show draft_max for ExLlamaV2 | 2025-04-26 18:01:44 -07:00
oobabooga | 0fe3b033d0 | Fix parsing of --n_ctx and --max_seq_len (2nd attempt) | 2025-04-26 17:52:21 -07:00
oobabooga | c4afc0421d | Fix parsing of --n_ctx and --max_seq_len | 2025-04-26 17:43:53 -07:00
oobabooga | 234aba1c50 | llama.cpp: Simplify the prompt processing progress indicator | 2025-04-26 17:33:47 -07:00
    The progress bar was unreliable.
oobabooga | 4ff91b6588 | Better default settings for Speculative Decoding | 2025-04-26 17:24:40 -07:00
oobabooga | bf2aa19b21 | Bump llama.cpp | 2025-04-26 16:39:22 -07:00
oobabooga | 029aab6404 | Revert "Add -noavx2 portable builds" | 2025-04-26 16:38:13 -07:00
    This reverts commit 0dd71e78c9.
oobabooga | 35717a088c | API: Add an /v1/internal/health endpoint | 2025-04-26 15:42:27 -07:00
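A minimal way to poll the new /v1/internal/health endpoint, assuming the API listens on its default local port and answers with HTTP 200 when healthy; both the port and the response expectation are assumptions for illustration.

```python
# Hedged sketch: check whether the API is up via /v1/internal/health.
# The base URL and the HTTP 200 expectation are assumptions.
import requests

def api_is_healthy(base_url="http://127.0.0.1:5000"):
    try:
        resp = requests.get(f"{base_url}/v1/internal/health", timeout=5)
        return resp.status_code == 200
    except requests.RequestException:
        return False

print("healthy" if api_is_healthy() else "not responding")
```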