Commit graph

1576 commits

Author  SHA1  Message  Date

oobabooga  e99c20bcb0  llama.cpp: Add speculative decoding (#6891)  2025-04-23 20:10:16 -03:00
oobabooga  9424ba17c8  UI: show only part 00001 of multipart GGUF models in the model menu  2025-04-22 19:56:42 -07:00
oobabooga  25cf3600aa  Lint  2025-04-22 08:04:02 -07:00
oobabooga  39cbb5fee0  Lint  2025-04-22 08:03:25 -07:00
oobabooga  008c6dd682  Lint  2025-04-22 08:02:37 -07:00
oobabooga  78aeabca89  Fix the transformers loader  2025-04-21 18:33:14 -07:00
oobabooga  8320190184  Fix the exllamav2_HF and exllamav3_HF loaders  2025-04-21 18:32:23 -07:00
oobabooga  15989c2ed8  Make llama.cpp the default loader  2025-04-21 16:36:35 -07:00
oobabooga  86c3ed3218  Small change to the unload_model() function  2025-04-20 20:00:56 -07:00
oobabooga  fe8e80e04a  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev  2025-04-20 19:09:27 -07:00
oobabooga  ff1c00bdd9  llama.cpp: set the random seed manually  2025-04-20 19:08:44 -07:00
Matthew Jenkins  d3e7c655e5  Add support for llama-cpp builds from https://github.com/ggml-org/llama.cpp (#6862)  2025-04-20 23:06:24 -03:00
oobabooga  e243424ba1  Fix an import  2025-04-20 17:51:28 -07:00
oobabooga  8cfd7f976b  Revert "Remove the old --model-menu flag"  2025-04-20 13:35:42 -07:00
           This reverts commit 109de34e3b.
oobabooga  b3bf7a885d  Fix ExLlamaV2_HF and ExLlamaV3_HF after ae02ffc605  2025-04-20 11:32:48 -07:00
oobabooga  ae02ffc605  Refactor the transformers loader (#6859)  2025-04-20 13:33:47 -03:00
oobabooga  6ba0164c70  Lint  2025-04-19 17:45:21 -07:00
oobabooga  5ab069786b  llama.cpp: add back the two encode calls (they are harmless now)  2025-04-19 17:38:36 -07:00
oobabooga  b9da5c7e3a  Use 127.0.0.1 instead of localhost for faster llama.cpp on Windows  2025-04-19 17:36:04 -07:00
oobabooga  9c9df2063f  llama.cpp: fix unicode decoding (closes #6856)  2025-04-19 16:38:15 -07:00
oobabooga  ba976d1390  llama.cpp: avoid two 'encode' calls  2025-04-19 16:35:01 -07:00
oobabooga  ed42154c78  Revert "llama.cpp: close the connection immediately on 'Stop'"  2025-04-19 05:32:36 -07:00
           This reverts commit 5fdebc554b.
oobabooga  5fdebc554b  llama.cpp: close the connection immediately on 'Stop'  2025-04-19 04:59:24 -07:00
oobabooga  6589ebeca8  Revert "llama.cpp: new optimization attempt"  2025-04-18 21:16:21 -07:00
           This reverts commit e2e73ed22f.
oobabooga  e2e73ed22f  llama.cpp: new optimization attempt  2025-04-18 21:05:08 -07:00
oobabooga  e2e90af6cd  llama.cpp: don't include --rope-freq-base in the launch command if null  2025-04-18 20:51:18 -07:00
oobabooga  9f07a1f5d7  llama.cpp: new attempt at optimizing the llama-server connection  2025-04-18 19:30:53 -07:00
oobabooga  f727b4a2cc  llama.cpp: close the connection properly when generation is cancelled  2025-04-18 19:01:39 -07:00
oobabooga  b3342b8dd8  llama.cpp: optimize the llama-server connection  2025-04-18 18:46:36 -07:00
oobabooga  2002590536  Revert "Attempt at making the llama-server streaming more efficient."  2025-04-18 18:13:54 -07:00
           This reverts commit 5ad080ff25.
oobabooga  71ae05e0a4  llama.cpp: Fix the sampler priority handling  2025-04-18 18:06:36 -07:00
oobabooga  5ad080ff25  Attempt at making the llama-server streaming more efficient.  2025-04-18 18:04:49 -07:00
oobabooga  4fabd729c9  Fix the API without streaming or without 'sampler_priority' (closes #6851)  2025-04-18 17:25:22 -07:00
oobabooga  5135523429  Fix the new llama.cpp loader failing to unload models  2025-04-18 17:10:26 -07:00
oobabooga  caa6afc88b  Only show 'GENERATE_PARAMS=...' in the logits endpoint if use_logits is True  2025-04-18 09:57:57 -07:00
oobabooga  d00d713ace  Rename get_max_context_length to get_vocabulary_size in the new llama.cpp loader  2025-04-18 08:14:15 -07:00
oobabooga  c1cc65e82e  Lint  2025-04-18 08:06:51 -07:00
oobabooga  d68f0fbdf7  Remove obsolete references to llamacpp_HF  2025-04-18 07:46:04 -07:00
oobabooga  a0abf93425  Connect --rope-freq-base to the new llama.cpp loader  2025-04-18 06:53:51 -07:00
oobabooga  ef9910c767  Fix a bug after c6901aba9f  2025-04-18 06:51:28 -07:00
oobabooga  1c4a2c9a71  Make exllamav3 safer as well  2025-04-18 06:17:58 -07:00
oobabooga  c6901aba9f  Remove deprecation warning code  2025-04-18 06:05:47 -07:00
oobabooga  8144e1031e  Remove deprecated command-line flags  2025-04-18 06:02:28 -07:00
oobabooga  ae54d8faaa  New llama.cpp loader (#6846)  2025-04-18 09:59:37 -03:00
oobabooga  5c2f8d828e  Fix exllamav2 generating eos randomly after previous fix  2025-04-18 05:42:38 -07:00
oobabooga  2fc58ad935  Consider files with .pt extension in the new model menu function  2025-04-17 23:10:43 -07:00
Googolplexed  d78abe480b  Allow for model subfolder organization for GGUF files (#6686)  2025-04-18 02:53:59 -03:00
           Co-authored-by: oobabooga <112222186+oobabooga@users.noreply.github.com>
oobabooga  ce9e2d94b1  Revert "Attempt at solving the ExLlamaV2 issue"  2025-04-17 22:03:21 -07:00
           This reverts commit c9b3c9dfbf.
oobabooga  5dfab7d363  New attempt at solving the exl2 issue  2025-04-17 22:03:11 -07:00
oobabooga  c9b3c9dfbf  Attempt at solving the ExLlamaV2 issue  2025-04-17 21:45:15 -07:00