Commit graph

334 commits

Author  SHA1  Message  Date
oobabooga  d4017fbb6d  ExLlamaV3: Add kv cache quantization (#6903)  2025-04-25 21:32:00 -03:00
oobabooga  d4b1e31c49  Use --ctx-size to specify the context size for all loaders  2025-04-25 16:59:03 -07:00
    Old flags are still recognized as alternatives.
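
For illustration, a hypothetical launch command using the unified flag (the GGUF filename is a placeholder):

    python server.py --model my-model.gguf --ctx-size 8192
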
oobabooga  877cf44c08  llama.cpp: Add StreamingLLM (--streaming-llm)  2025-04-25 16:21:41 -07:00
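
A hedged usage sketch, assuming the flag is simply appended at launch (model filename is a placeholder):

    python server.py --model my-model.gguf --streaming-llm
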
oobabooga  d35818f4e1  UI: Add a collapsible thinking block to messages with <think> steps (#6902)  2025-04-25 18:02:02 -03:00
oobabooga  98f4c694b9  llama.cpp: Add --extra-flags parameter for passing additional flags to llama-server  2025-04-25 07:32:51 -07:00
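
A plausible invocation of the pass-through parameter; the value syntax shown here (comma-separated llama-server flag names, with an optional =value) is an assumption, and the specific flags are stand-ins:

    python server.py --model my-model.gguf --extra-flags "no-mmap,rope-scaling=linear"
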
Matthew Jenkins  8f2493cc60  Prevent llamacpp defaults from locking up consumer hardware (#6870)  2025-04-24 23:38:57 -03:00
oobabooga  93fd4ad25d  llama.cpp: Document the --device-draft syntax  2025-04-24 09:20:11 -07:00
oobabooga  c71a2af5ab  Handle CMD_FLAGS.txt in the main code (closes #6896)  2025-04-24 08:21:06 -07:00
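
Following this change, launch flags placed in a CMD_FLAGS.txt file in the project root are read by the main code rather than only by the one-click installer. Hypothetical file contents (any valid launch flags would do):

    --listen --api
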
oobabooga  bfbde73409  Make 'instruct' the default chat mode  2025-04-24 07:08:49 -07:00
oobabooga  e99c20bcb0  llama.cpp: Add speculative decoding (#6891)  2025-04-23 20:10:16 -03:00
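
A hedged sketch of a speculative-decoding launch: --device-draft is documented in the commit above, while --model-draft and --draft-max are assumed here to mirror llama-server's flag names; both model filenames and the device name are placeholders:

    python server.py --model big-model.gguf --model-draft small-draft.gguf --draft-max 8 --device-draft CUDA0
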
oobabooga  8cfd7f976b  Revert "Remove the old --model-menu flag"  2025-04-20 13:35:42 -07:00
    This reverts commit 109de34e3b.
oobabooga  ae02ffc605  Refactor the transformers loader (#6859)  2025-04-20 13:33:47 -03:00
oobabooga  d68f0fbdf7  Remove obsolete references to llamacpp_HF  2025-04-18 07:46:04 -07:00
oobabooga  c6901aba9f  Remove deprecation warning code  2025-04-18 06:05:47 -07:00
oobabooga  8144e1031e  Remove deprecated command-line flags  2025-04-18 06:02:28 -07:00
oobabooga  ae54d8faaa  New llama.cpp loader (#6846)  2025-04-18 09:59:37 -03:00
oobabooga  4ed0da74a8  Remove the obsolete 'multimodal' extension  2025-04-09 20:09:48 -07:00
oobabooga  8b8d39ec4e  Add ExLlamaV3 support (#6832)  2025-04-09 00:07:08 -03:00
oobabooga  a5855c345c  Set context lengths to at most 8192 by default (to prevent out of memory errors) (#6835)  2025-04-07 21:42:33 -03:00
oobabooga  109de34e3b  Remove the old --model-menu flag  2025-03-31 09:24:03 -07:00
oobabooga  0360f54ae8  UI: add a "Show after" parameter (to use with DeepSeek </think>)  2025-02-02 15:30:09 -08:00
oobabooga  c832953ff7  UI: Activate auto_max_new_tokens by default  2025-01-14 05:59:55 -08:00
oobabooga  d2f6c0f65f  Update README  2025-01-10 13:25:40 -08:00
oobabooga  c393f7650d  Update settings-template.yaml, organize modules/shared.py  2025-01-10 13:22:18 -08:00
oobabooga  83c426e96b  Organize internals (#6646)  2025-01-10 18:04:32 -03:00
oobabooga  7fe46764fb  Improve the --help message about --tensorcores as well  2025-01-10 07:07:41 -08:00
oobabooga  da6d868f58  Remove old deprecated flags (~6 months or more)  2025-01-09 16:11:46 -08:00
BPplays  619265b32c  add ipv6 support to the API (#6559)  2025-01-09 10:23:44 -03:00
oobabooga  91a8a87887  Remove obsolete code  2025-01-08 15:07:21 -08:00
oobabooga  7157257c3f  Remove the AutoGPTQ loader (#6641)  2025-01-08 19:28:56 -03:00
oobabooga  c0f600c887  Add a --torch-compile flag for transformers  2025-01-05 05:47:00 -08:00
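
A minimal sketch combining the new flag with the transformers loader; the model name is a placeholder and the loader spelling is an assumption:

    python server.py --model org/some-hf-model --loader Transformers --torch-compile
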
oobabooga  11af199aff  Add a "Static KV cache" option for transformers  2025-01-04 17:52:57 -08:00
oobabooga  60c93e0c66  UI: Set cache_type to fp16 by default  2024-12-17 19:44:20 -08:00
Diner Burger  addad3c63e  Allow more granular KV cache settings (#6561)  2024-12-17 17:43:48 -03:00
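
A hedged example of the more granular settings; it assumes the UI's cache_type option (referenced in the commit above) has a matching --cache_type launch flag, and the quantized value q4_0 is likewise an assumption:

    python server.py --model my-model.gguf --cache_type q4_0
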
oobabooga  d769618591  Improved UI (#6575)  2024-12-17 00:47:41 -03:00
RandoInternetPreson  46996f6519  ExllamaV2 tensor parallelism to increase multi gpu inference speeds (#6356)  2024-09-28 00:26:03 -03:00
oobabooga  e926c03b3d  Add a --tokenizer-dir command-line flag for llamacpp_HF  2024-08-06 19:41:18 -07:00
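
A usage sketch; the paths are placeholders, and the intent implied by the flag name is to point llamacpp_HF at a directory containing a Hugging Face tokenizer:

    python server.py --model my-model.gguf --loader llamacpp_HF --tokenizer-dir /path/to/hf-tokenizer
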
oobabooga  9dcff21da9  Remove unnecessary shared.previous_model_name variable  2024-07-28 18:35:11 -07:00
oobabooga  7050bb880e  UI: make n_ctx/max_seq_len/truncation_length numbers rather than sliders  2024-07-27 23:11:53 -07:00
oobabooga  e6181e834a  Remove AutoAWQ as a standalone loader  2024-07-23 15:31:17 -07:00
    (it works better through transformers)
oobabooga  f18c947a86  Update the tensorcores description  2024-07-22 18:06:41 -07:00
oobabooga  aa809e420e  Bump llama-cpp-python to 0.2.83, add back tensorcore wheels  2024-07-22 18:05:11 -07:00
    Also add back the progress bar patch
oobabooga  11bbf71aa5  Bump back llama-cpp-python (#6257)  2024-07-22 16:19:41 -03:00
oobabooga  0f53a736c1  Revert the llama-cpp-python update  2024-07-22 12:02:25 -07:00
oobabooga  a687f950ba  Remove the tensorcores llama.cpp wheels  2024-07-22 11:54:35 -07:00
    They are not faster than the default wheels anymore and they use a lot of space.
oobabooga  e9d4bff7d0  Update the --tensor_split description  2024-07-20 22:04:48 -07:00
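
For reference, a hedged example of the flag whose description was updated; the comma-separated per-GPU proportions are an assumption based on the flag's llama.cpp counterpart:

    python server.py --model my-model.gguf --tensor_split 60,40
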
Alberto Cano  a14c510afb  Customize the subpath for gradio, use with reverse proxy (#5106)  2024-07-20 19:10:39 -03:00
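
A hedged sketch: --subpath is assumed to be the flag added by this PR, and /textgen is a placeholder path that a reverse proxy such as nginx would forward to:

    python server.py --listen --subpath /textgen
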
oobabooga  aa7c14a463  Use chat-instruct mode by default  2024-07-19 21:43:52 -07:00
oobabooga  e436d69e2b  Add --no_xformers and --no_sdpa flags for ExllamaV2  2024-07-11 15:47:37 -07:00
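
A sketch of disabling both attention backends for ExllamaV2; the model name is a placeholder and the loader spelling is an assumption:

    python server.py --model my-exl2-model --loader ExLlamav2_HF --no_xformers --no_sdpa
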
GralchemOz  8a39f579d8  transformers: Add eager attention option to make Gemma-2 work properly (#6188)  2024-07-01 12:08:08 -03:00