Daniel Dengler | c25a381540 | Add a "Branch here" footer button to chat messages (#6967) | 2025-05-20 11:07:40 -03:00
oobabooga | 8e10f9894a | Add a metadata field to the chat history & add date/time to chat messages (#7003) | 2025-05-20 10:48:46 -03:00
oobabooga | 9ec46b8c44 | Remove the HQQ loader (HQQ models can be loaded through Transformers) | 2025-05-19 09:23:24 -07:00
oobabooga | 0c7237e4b7 | Update README | 2025-05-18 20:01:29 -07:00
oobabooga | bad1da99db | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-05-18 14:09:08 -07:00
oobabooga | 0c1bc6d1d0 | Bump llama.cpp | 2025-05-18 14:08:54 -07:00
Tiago Silva | 9cd6ea6c0b | Fix Dockerfile in AMD and Intel (#6995) | 2025-05-18 18:07:16 -03:00
oobabooga | 83bfd5c64b | Fix API issues | 2025-05-18 12:45:01 -07:00
oobabooga | 126b3a768f | Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now); reverts commit 8137eb8ef4 | 2025-05-18 12:38:36 -07:00
oobabooga | 9d7a36356d | Remove unnecessary js that was causing scrolling issues | 2025-05-18 10:56:16 -07:00
oobabooga | 2faaf18f1f | Add back the "Common values" to the ctx-size slider | 2025-05-18 09:06:20 -07:00
oobabooga | f1ec6c8662 | Minor label changes | 2025-05-18 09:04:51 -07:00
oobabooga | bd13a8f255 | UI: Light theme improvement | 2025-05-17 22:31:55 -07:00
oobabooga | 076aa67963 | Fix API issues | 2025-05-17 22:22:18 -07:00
oobabooga | 366de4b561 | UI: Fix the chat area height when "Show controls" is unchecked | 2025-05-17 17:11:38 -07:00
oobabooga | 61276f6a37 | Merge remote-tracking branch 'refs/remotes/origin/dev' into dev | 2025-05-17 07:22:51 -07:00
oobabooga | 4800d1d522 | More robust VRAM calculation | 2025-05-17 07:20:38 -07:00
mamei16 | 052c82b664 | Fix KeyError: 'gpu_layers' when loading existing model settings (#6991) | 2025-05-17 11:19:13 -03:00
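For 052c82b664 above: a KeyError of this kind typically comes from settings saved before a key existed. Below is a minimal sketch of the usual defaulting pattern; the dict contents and the fallback value are hypothetical stand-ins, not the project's actual code.

```python
# Hypothetical older settings dict saved before 'gpu_layers' was introduced.
model_settings = {"ctx_size": 8192}

# Direct indexing crashes on old files:
# gpu_layers = model_settings["gpu_layers"]  # KeyError: 'gpu_layers'

# Tolerant lookup: fall back to a default when the key is absent.
gpu_layers = model_settings.get("gpu_layers", 0)
print(gpu_layers)  # -> 0
```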
oobabooga | 0f77ff9670 | UI: Use total VRAM (not free) for layers calculation when a model is loaded | 2025-05-16 19:19:22 -07:00
oobabooga | 4bf763e1d9 | Multiple small CSS fixes | 2025-05-16 18:22:43 -07:00
oobabooga | c0e295dd1d | Remove the 'None' option from the model menu | 2025-05-16 17:53:20 -07:00
oobabooga | e3bba510d4 | UI: Only add a blank space to streaming messages in instruct mode | 2025-05-16 17:49:17 -07:00
oobabooga | 71fa046c17 | Minor changes after 1c549d176b | 2025-05-16 17:38:08 -07:00
oobabooga | d99fb0a22a | Add backward compatibility with saved n_gpu_layers values | 2025-05-16 17:29:18 -07:00
oobabooga | 1c549d176b | Fix GPU layers slider: honor saved settings and show true maximum | 2025-05-16 17:26:13 -07:00
oobabooga | e4d3f4449d | API: Fix a regression | 2025-05-16 13:02:27 -07:00
oobabooga | 470c822f44 | API: Hide the uvicorn access logs from the terminal | 2025-05-16 12:54:39 -07:00
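For 470c822f44 above: uvicorn offers two standard ways to suppress its per-request access lines. This is a generic sketch of those options, not the commit's actual diff; the FastAPI app is a placeholder.

```python
import logging

import uvicorn
from fastapi import FastAPI

app = FastAPI()  # placeholder app, not the project's actual API

# Option 1: drop only the per-request access records, keep error logs.
logging.getLogger("uvicorn.access").disabled = True

# Option 2: ask uvicorn itself not to emit access logs at all.
uvicorn.run(app, host="127.0.0.1", port=5000, access_log=False)
```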
oobabooga | adb975a380 | Prevent fractional gpu-layers in the UI | 2025-05-16 12:52:43 -07:00
oobabooga | fc483650b5 | Set the maximum gpu_layers value automatically when the model is loaded with --model | 2025-05-16 11:58:17 -07:00
oobabooga | 38c50087fe | Prevent a crash on systems without an NVIDIA GPU | 2025-05-16 11:55:30 -07:00
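For 38c50087fe above: the usual guard is to treat a failed NVIDIA probe as "no VRAM" rather than letting the exception escape. Using pynvml here is an assumption; the project may probe differently, but the pattern is the same.

```python
def get_free_vram_mb() -> int:
    try:
        import pynvml
        pynvml.nvmlInit()
    except Exception:
        # No NVIDIA driver/GPU (or pynvml missing): degrade gracefully.
        return 0
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        return pynvml.nvmlDeviceGetMemoryInfo(handle).free // (1024 * 1024)
    except Exception:
        return 0
    finally:
        pynvml.nvmlShutdown()
```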
oobabooga | 253e85a519 | Only compute VRAM/GPU layers for llama.cpp models | 2025-05-16 10:02:30 -07:00
oobabooga | 9ec9b1bf83 | Auto-adjust GPU layers after model unload to utilize freed VRAM | 2025-05-16 09:56:23 -07:00
oobabooga | ee7b3028ac | Always cache GGUF metadata calls | 2025-05-16 09:12:36 -07:00
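For ee7b3028ac above: caching metadata calls typically means memoizing by file path so the GGUF header is parsed once. A minimal sketch with `functools.lru_cache`; the parser below is a hypothetical stand-in for the project's real one.

```python
from functools import lru_cache

def read_gguf_metadata(model_path: str) -> dict:
    # Stand-in for the expensive GGUF header parse (hypothetical).
    print(f"parsing {model_path} ...")
    return {"n_layers": 32}

@lru_cache(maxsize=None)
def get_gguf_metadata(model_path: str) -> dict:
    # First call per path does the parse; repeat calls hit the cache.
    return read_gguf_metadata(model_path)

get_gguf_metadata("model.gguf")  # parses the file
get_gguf_metadata("model.gguf")  # served from the cache, no second parse
```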
oobabooga | 4925c307cf | Auto-adjust GPU layers on context size and cache type changes + many fixes | 2025-05-16 09:07:38 -07:00
oobabooga | 93e1850a2c | Only show the VRAM info for llama.cpp | 2025-05-15 21:42:15 -07:00
oobabooga | cbf4daf1c8 | Hide the LoRA menu in portable mode | 2025-05-15 21:21:54 -07:00
oobabooga | fd61297933 | Lint | 2025-05-15 21:19:19 -07:00
oobabooga | 8cb73b78e1 | Update ExLlamaV3 | 2025-05-15 20:10:34 -07:00
oobabooga | 041248cc9f | Update llama.cpp | 2025-05-15 20:10:02 -07:00
oobabooga | 5534d01da0 | Estimate the VRAM for GGUF models + autoset gpu-layers (#6980) | 2025-05-16 00:07:37 -03:00
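For 5534d01da0 above: the core idea of #6980 is to estimate the footprint of offloading N layers and pick the largest N that fits. This is a heavily simplified illustration; all names and numbers are hypothetical, and the real estimator also accounts for the KV cache, cache type, and context size.

```python
def autoset_gpu_layers(n_layers: int, model_bytes: int, free_vram_bytes: int,
                       overhead_bytes: int = 512 * 1024 ** 2) -> int:
    per_layer = model_bytes / n_layers          # rough per-layer weight size
    budget = free_vram_bytes - overhead_bytes   # leave headroom for buffers
    if budget <= 0:
        return 0
    return min(n_layers, int(budget // per_layer))

# e.g. an 8 GB model with 32 layers on a GPU with 6 GB free:
print(autoset_gpu_layers(32, 8 * 1024 ** 3, 6 * 1024 ** 3))  # -> 22
```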
oobabooga | c4a715fd1e | UI: Move the LoRA menu under "Other options" | 2025-05-13 20:14:09 -07:00
oobabooga | 035cd3e2a9 | UI: Hide the extension install menu in portable builds | 2025-05-13 20:09:22 -07:00
oobabooga | 2826c60044 | Use logger for "Output generated in ..." messages | 2025-05-13 14:45:46 -07:00
oobabooga | 3fa1a899ae | UI: Fix gpu-layers being ignored (closes #6973) | 2025-05-13 12:07:59 -07:00
oobabooga | c375b69413 | API: Fix llama.cpp generating after disconnect, improve disconnect detection, fix deadlock on simultaneous requests | 2025-05-13 11:23:33 -07:00
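For c375b69413 above: one common disconnect-detection pattern on a FastAPI/Starlette streaming endpoint is to poll `request.is_disconnected()` between chunks and stop generating once the client is gone. A sketch of that pattern only, not the commit's actual fix; `generate_tokens` and the route path are hypothetical.

```python
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse

app = FastAPI()

def generate_tokens(prompt: str):
    # Stand-in for the model's token stream (hypothetical).
    yield from prompt.split()

@app.post("/v1/stream")
async def stream(request: Request):
    body = await request.json()

    async def event_stream():
        for token in generate_tokens(body.get("prompt", "")):
            if await request.is_disconnected():
                break  # stop the backend instead of generating into the void
            yield token + "\n"

    return StreamingResponse(event_stream(), media_type="text/plain")
```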
oobabooga | 62c774bf24 | Revert "New attempt"; reverts commit e7ac06c169 | 2025-05-13 06:42:25 -07:00
oobabooga | e7ac06c169 | New attempt | 2025-05-10 19:20:04 -07:00
oobabooga | 0c5fa3728e | Revert "Fix API failing to cancel streams (attempt), closes #6966"; reverts commit 006a866079 | 2025-05-10 19:12:40 -07:00
oobabooga | 006a866079 | Fix API failing to cancel streams (attempt), closes #6966 | 2025-05-10 17:55:48 -07:00
oobabooga | 47d4758509 | Fix #6970 | 2025-05-10 17:46:00 -07:00