f5b59d2b0b  oobabooga  2025-04-26 20:11:24 -07:00
    Fix the Vulkan workflow

9bb9ce079e  oobabooga  2025-04-27 00:03:16 -03:00
    Merge pull request #6912 from oobabooga/dev
    Merge dev branch

765fea5e36  oobabooga  2025-04-26 19:33:46 -07:00
    UI: minor style change

70952553c7  oobabooga  2025-04-26 19:29:08 -07:00
    Lint

363b632a0d  oobabooga  2025-04-26 19:22:36 -07:00
    Lint

fa861de05b  oobabooga  2025-04-26 18:52:44 -07:00
    Fix portable builds with Python 3.12

7b80acd524  oobabooga  2025-04-26 18:40:03 -07:00
    Fix parsing of --extra-flags

943451284f  oobabooga  2025-04-26 18:25:06 -07:00
    Fix the Notebook tab not loading its default prompt

511eb6aa94  oobabooga  2025-04-26 18:20:00 -07:00
    Fix saving settings to settings.yaml

8b83e6f843  oobabooga  2025-04-26 18:14:57 -07:00
    Prevent Gradio from saying 'Thank you for being a Gradio user!'

4a32e1f80c  oobabooga  2025-04-26 18:01:44 -07:00
    UI: show draft_max for ExLlamaV2

0fe3b033d0  oobabooga  2025-04-26 17:52:21 -07:00
    Fix parsing of --n_ctx and --max_seq_len (2nd attempt)

c4afc0421d  oobabooga  2025-04-26 17:43:53 -07:00
    Fix parsing of --n_ctx and --max_seq_len

234aba1c50  oobabooga  2025-04-26 17:33:47 -07:00
    llama.cpp: Simplify the prompt processing progress indicator
    The progress bar was unreliable.

4ff91b6588  oobabooga  2025-04-26 17:24:40 -07:00
    Better default settings for Speculative Decoding

bf2aa19b21  oobabooga  2025-04-26 16:39:22 -07:00
    Bump llama.cpp

029aab6404  oobabooga  2025-04-26 16:38:13 -07:00
    Revert "Add -noavx2 portable builds"
    This reverts commit 0dd71e78c9.

35717a088c  oobabooga  2025-04-26 15:42:27 -07:00
    API: Add a /v1/internal/health endpoint
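The health endpoint added in 35717a088c gives scripts a cheap way to tell when the server is ready. A minimal polling sketch, assuming the API listens on its default port 5000 and answers this endpoint with HTTP 200 once it is up (both are assumptions, not restated in the commit):

    # Poll /v1/internal/health until the server answers, or give up.
    # The base URL and the 200-means-healthy convention are assumptions.
    import time
    import requests

    def wait_for_server(base_url="http://127.0.0.1:5000", timeout=60):
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                r = requests.get(f"{base_url}/v1/internal/health", timeout=5)
                if r.status_code == 200:
                    return True
            except requests.RequestException:
                pass  # not listening yet; keep polling
            time.sleep(1)
        return False

    print("healthy" if wait_for_server() else "timed out")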
bc55feaf3e  oobabooga  2025-04-26 15:42:17 -07:00
    Improve host header validation in local mode
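Host header validation in local mode is the usual defense against DNS rebinding: a malicious page can point its own hostname at 127.0.0.1, so a local server should only answer requests whose Host header names the local machine. A generic sketch of the check, not the project's actual implementation:

    # Accept only Host headers that refer to this machine.
    # The allowed set is illustrative; IPv6 literals would need extra care.
    ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

    def is_valid_local_host(host_header: str) -> bool:
        if not host_header:
            return False
        host = host_header.split(":", 1)[0].lower()  # drop any :port suffix
        return host in ALLOWED_HOSTS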
a317450dfa  oobabooga  2025-04-26 14:59:29 -07:00
    Update README

d1e7d9c5d5  oobabooga  2025-04-26 09:00:56 -07:00
    Update CMD_FLAGS.txt

3a207e7a57  oobabooga  2025-04-26 07:31:04 -07:00
    Improve the --help formatting a bit

6acb0e1bee  oobabooga  2025-04-26 05:13:08 -07:00
    Change a UI description

cbd4d967cc  oobabooga  2025-04-26 05:09:52 -07:00
    Update a --help message

19c8dced67  oobabooga  2025-04-26 05:03:23 -07:00
    Move settings-template.yaml into user_data

b976112539  oobabooga  2025-04-26 05:02:17 -07:00
    Remove the WSL installation scripts
    They were useful in 2023, but now everything runs natively on Windows.

763a7011c0  oobabooga  2025-04-26 04:59:05 -07:00
    Remove an ancient/obsolete migration check

d9de14d1f7  oobabooga  2025-04-26 08:56:54 -03:00
    Restructure the repository (#6904)

d4017fbb6d  oobabooga  2025-04-25 21:32:00 -03:00
    ExLlamaV3: Add kv cache quantization (#6903)

d4b1e31c49  oobabooga  2025-04-25 16:59:03 -07:00
    Use --ctx-size to specify the context size for all loaders
    Old flags are still recognized as alternatives.
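Keeping the old flags working while standardizing on --ctx-size is straightforward with argparse, which accepts several option strings for one argument. A minimal sketch of the pattern, using the flag names from the commits above (the project's real parser has many more options):

    import argparse

    parser = argparse.ArgumentParser()
    # One destination, three spellings: --n_ctx and --max_seq_len keep
    # working as alternatives to the canonical --ctx-size.
    parser.add_argument("--ctx-size", "--n_ctx", "--max_seq_len",
                        dest="ctx_size", type=int, default=None)

    args = parser.parse_args(["--n_ctx", "4096"])
    print(args.ctx_size)  # 4096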
faababc4ea  oobabooga  2025-04-25 16:42:30 -07:00
    llama.cpp: Add a prompt processing progress bar

877cf44c08  oobabooga  2025-04-25 16:21:41 -07:00
    llama.cpp: Add StreamingLLM (--streaming-llm)

d35818f4e1  oobabooga  2025-04-25 18:02:02 -03:00
    UI: Add a collapsible thinking block to messages with <think> steps (#6902)
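A collapsible thinking block starts with splitting the <think>...</think> span out of the message text. A sketch of that split, assuming at most one well-formed block per message (the actual UI rendering is not reproduced here):

    import re

    THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

    def split_thinking(message: str):
        """Return (thinking, visible_reply); thinking is None if absent."""
        m = THINK_RE.search(message)
        if not m:
            return None, message
        thinking = m.group(1).strip()
        reply = (message[:m.start()] + message[m.end():]).strip()
        return thinking, reply

    thinking, reply = split_thinking("<think>Check the units.</think>It is 42 km.")
    # thinking == "Check the units.", reply == "It is 42 km."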
0dd71e78c9  oobabooga  2025-04-25 09:07:14 -07:00
    Add -noavx2 portable builds

98f4c694b9  oobabooga  2025-04-25 07:32:51 -07:00
    llama.cpp: Add an --extra-flags parameter for passing additional flags to llama-server
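Forwarding arbitrary llama-server flags through a single string means splitting it back into argv tokens. A sketch assuming a comma-separated list where each item is either a bare flag or key=value; the exact syntax --extra-flags accepts is not restated in these commits, so treat the format here as an assumption:

    def parse_extra_flags(extra_flags: str) -> list[str]:
        """E.g. "flash-attn,ubatch-size=512" becomes
        ["--flash-attn", "--ubatch-size", "512"] (format assumed)."""
        args = []
        for item in filter(None, (s.strip() for s in extra_flags.split(","))):
            key, sep, value = item.partition("=")
            args.append(f"--{key}")
            if sep:
                args.append(value)
        return args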
b6fffbd216  oobabooga  2025-04-25 05:37:44 -07:00
    UI: minor style change

2c7ff86015  oobabooga  2025-04-25 05:28:22 -07:00
    Bump exllamav3 to de83084184

5993ebeb1b  oobabooga  2025-04-25 05:27:59 -07:00
    Bump exllamav2 to 0.2.9

23399aff3c  oobabooga  2025-04-24 20:39:00 -07:00
    UI: minor style change

5861013e68  oobabooga  2025-04-24 20:36:20 -07:00
    Merge remote-tracking branch 'refs/remotes/origin/dev' into dev

a90df27ff5  oobabooga  2025-04-24 20:33:40 -07:00
    UI: Add a greeting when the chat history is empty

ae1fe87365  oobabooga  2025-04-25 00:11:04 -03:00
    ExLlamaV2: Add speculative decoding (#6899)

8f2493cc60  Matthew Jenkins  2025-04-24 23:38:57 -03:00
    Prevent llama.cpp defaults from locking up consumer hardware (#6870)

370fe7b7cf  oobabooga  2025-04-24 09:33:17 -07:00
    Merge remote-tracking branch 'refs/remotes/origin/dev' into dev

8ebe868916  oobabooga  2025-04-24 09:32:17 -07:00
    Fix typos in b313adf653

93fd4ad25d  oobabooga  2025-04-24 09:20:11 -07:00
    llama.cpp: Document the --device-draft syntax

f1b64df8dd  oobabooga  2025-04-24 09:03:49 -07:00
    EXL2: Add another torch.cuda.synchronize() call to prevent errors

60ac495d59  Ziya  2025-04-24 12:42:05 -03:00
    extensions/superboogav2: Fix the existing-embedding check (#6898)

b313adf653  oobabooga  2025-04-24 08:26:12 -07:00
    Bump llama.cpp; make the wheels work with any Python >= 3.7

c71a2af5ab  oobabooga  2025-04-24 08:21:06 -07:00
    Handle CMD_FLAGS.txt in the main code (closes #6896)
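Handling CMD_FLAGS.txt in the main code means the launcher scripts no longer have to splice the file into the command line themselves. A minimal sketch of that idea: read the file, skip comments, and extend sys.argv before parsing. The path and the comment convention are assumptions, not the project's exact rules:

    import shlex
    import sys
    from pathlib import Path

    def load_cmd_flags(path="user_data/CMD_FLAGS.txt"):
        """Append flags from CMD_FLAGS.txt to sys.argv (path assumed)."""
        flags_file = Path(path)
        if not flags_file.exists():
            return
        for line in flags_file.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                sys.argv.extend(shlex.split(line))  # honor quoted values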