Commit graph

4473 commits

Author     SHA1        Date                        Message
oobabooga  83bfd5c64b  2025-05-18 12:45:01 -07:00  Fix API issues
oobabooga  126b3a768f  2025-05-18 12:38:36 -07:00  Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now)
                                                   This reverts commit 8137eb8ef4.
oobabooga  9d7a36356d  2025-05-18 10:56:16 -07:00  Remove unnecessary js that was causing scrolling issues
oobabooga  2faaf18f1f  2025-05-18 09:06:20 -07:00  Add back the "Common values" to the ctx-size slider
oobabooga  f1ec6c8662  2025-05-18 09:04:51 -07:00  Minor label changes
oobabooga  bd13a8f255  2025-05-17 22:31:55 -07:00  UI: Light theme improvement
oobabooga  076aa67963  2025-05-17 22:22:18 -07:00  Fix API issues
oobabooga  366de4b561  2025-05-17 17:11:38 -07:00  UI: Fix the chat area height when "Show controls" is unchecked
oobabooga  61276f6a37  2025-05-17 07:22:51 -07:00  Merge remote-tracking branch 'refs/remotes/origin/dev' into dev
oobabooga  4800d1d522  2025-05-17 07:20:38 -07:00  More robust VRAM calculation
mamei16    052c82b664  2025-05-17 11:19:13 -03:00  Fix KeyError: 'gpu_layers' when loading existing model settings (#6991)
oobabooga  0f77ff9670  2025-05-16 19:19:22 -07:00  UI: Use total VRAM (not free) for layers calculation when a model is loaded
oobabooga  4bf763e1d9  2025-05-16 18:22:43 -07:00  Multiple small CSS fixes
oobabooga  c0e295dd1d  2025-05-16 17:53:20 -07:00  Remove the 'None' option from the model menu
oobabooga  e3bba510d4  2025-05-16 17:49:17 -07:00  UI: Only add a blank space to streaming messages in instruct mode
oobabooga  71fa046c17  2025-05-16 17:38:08 -07:00  Minor changes after 1c549d176b
oobabooga  d99fb0a22a  2025-05-16 17:29:18 -07:00  Add backward compatibility with saved n_gpu_layers values
oobabooga  1c549d176b  2025-05-16 17:26:13 -07:00  Fix GPU layers slider: honor saved settings and show true maximum
oobabooga  e4d3f4449d  2025-05-16 13:02:27 -07:00  API: Fix a regression
oobabooga  470c822f44  2025-05-16 12:54:39 -07:00  API: Hide the uvicorn access logs from the terminal
oobabooga  adb975a380  2025-05-16 12:52:43 -07:00  Prevent fractional gpu-layers in the UI
oobabooga  fc483650b5  2025-05-16 11:58:17 -07:00  Set the maximum gpu_layers value automatically when the model is loaded with --model
oobabooga  38c50087fe  2025-05-16 11:55:30 -07:00  Prevent a crash on systems without an NVIDIA GPU
oobabooga  253e85a519  2025-05-16 10:02:30 -07:00  Only compute VRAM/GPU layers for llama.cpp models
oobabooga  9ec9b1bf83  2025-05-16 09:56:23 -07:00  Auto-adjust GPU layers after model unload to utilize freed VRAM
oobabooga  ee7b3028ac  2025-05-16 09:12:36 -07:00  Always cache GGUF metadata calls
oobabooga  4925c307cf  2025-05-16 09:07:38 -07:00  Auto-adjust GPU layers on context size and cache type changes + many fixes
oobabooga  93e1850a2c  2025-05-15 21:42:15 -07:00  Only show the VRAM info for llama.cpp
oobabooga  cbf4daf1c8  2025-05-15 21:21:54 -07:00  Hide the LoRA menu in portable mode
oobabooga  fd61297933  2025-05-15 21:19:19 -07:00  Lint
oobabooga  8cb73b78e1  2025-05-15 20:10:34 -07:00  Update ExLlamaV3
oobabooga  041248cc9f  2025-05-15 20:10:02 -07:00  Update llama.cpp
oobabooga  5534d01da0  2025-05-16 00:07:37 -03:00  Estimate the VRAM for GGUF models + autoset gpu-layers (#6980)
oobabooga  c4a715fd1e  2025-05-13 20:14:09 -07:00  UI: Move the LoRA menu under "Other options"
oobabooga  035cd3e2a9  2025-05-13 20:09:22 -07:00  UI: Hide the extension install menu in portable builds
oobabooga  2826c60044  2025-05-13 14:45:46 -07:00  Use logger for "Output generated in ..." messages
oobabooga  3fa1a899ae  2025-05-13 12:07:59 -07:00  UI: Fix gpu-layers being ignored (closes #6973)
oobabooga  c375b69413  2025-05-13 11:23:33 -07:00  API: Fix llama.cpp generating after disconnect, improve disconnect detection, fix deadlock on simultaneous requests
oobabooga  62c774bf24  2025-05-13 06:42:25 -07:00  Revert "New attempt"
                                                   This reverts commit e7ac06c169.
oobabooga  e7ac06c169  2025-05-10 19:20:04 -07:00  New attempt
oobabooga  0c5fa3728e  2025-05-10 19:12:40 -07:00  Revert "Fix API failing to cancel streams (attempt), closes #6966"
                                                   This reverts commit 006a866079.
oobabooga  006a866079  2025-05-10 17:55:48 -07:00  Fix API failing to cancel streams (attempt), closes #6966
oobabooga  47d4758509  2025-05-10 17:46:00 -07:00  Fix #6970
oobabooga  4920981b14  2025-05-09 20:35:38 -07:00  UI: Remove the typing cursor
oobabooga  8984e95c67  2025-05-09 07:21:05 -07:00  UI: More friendly message when no model is loaded
oobabooga  2bde625d57  2025-05-09 00:19:25 -07:00  Update README
oobabooga  512bc2d0e0  2025-05-08 23:43:55 -07:00  UI: Update some labels
oobabooga  f8ef6e09af  2025-05-08 18:19:04 -07:00  UI: Make ctx-size a slider
oobabooga  bf7e4a4597  2025-05-08 16:12:07 -07:00  Docs: Add a tool/function calling example (from https://github.com/oobabooga/text-generation-webui/pull/6827#issuecomment-2854716960)
oobabooga  9ea2a69210  2025-05-08 10:41:25 -07:00  llama.cpp: Add --no-webui to the llama-server command