text-generation-webui-mirror/modules (latest commit: 2025-06-06 22:38:20 -07:00)
grammar/ Let grammar escape backslashes (#5865) 2024-05-19 20:26:09 -03:00
block_requests.py Fix the Google Colab notebook 2025-01-16 05:21:18 -08:00
callbacks.py Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
chat.py Remove quotes from LLM-generated websearch query (closes #7045). 2025-06-05 06:57:59 -07:00
deepspeed_parameters.py Fix typo in deepspeed_parameters.py (#3222) 2023-07-24 11:17:28 -03:00
evaluate.py Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
exllamav2.py Lint 2025-04-26 19:29:08 -07:00
exllamav2_hf.py Fix CFG with ExLlamaV2_HF (closes #6937) 2025-04-30 18:43:45 -07:00
exllamav3_hf.py Fix exllamav3_hf models failing to unload (closes #7031) 2025-05-30 12:05:49 -07:00
extensions.py Move update_wizard_windows.sh to update_wizard_windows.bat (oops) 2024-03-04 19:26:24 -08:00
github.py Fix several typos in the codebase (#6151) 2024-06-22 21:40:25 -03:00
gradio_hijack.py Prevent Gradio from saying 'Thank you for being a Gradio user!' 2025-04-26 18:14:57 -07:00
html_generator.py Update only the last message during streaming + add back dynamic UI update speed (#7038) 2025-06-02 09:50:17 -03:00
llama_cpp_server.py Simplify the llama.cpp stderr filter code 2025-06-06 22:25:13 -07:00
loaders.py Remove the HQQ loader (HQQ models can be loaded through Transformers) 2025-05-19 09:23:24 -07:00
logging_colors.py Lint 2023-12-19 21:36:57 -08:00
logits.py UI: More friendly message when no model is loaded 2025-05-09 07:21:05 -07:00
LoRA.py Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
metadata_gguf.py llama.cpp: read instruction template from GGUF metadata (#4975) 2023-12-18 01:51:58 -03:00
models.py Fix after 219f0a7731 2025-06-01 19:27:14 -07:00
models_settings.py Fix loading Llama-3_3-Nemotron-Super-49B-v1 and similar models (closes #7012) 2025-05-25 17:19:26 -07:00
presets.py Set top_n_sigma before temperature by default 2025-05-06 08:27:21 -07:00
prompts.py Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
relative_imports.py Add ExLlama+LoRA support (#2756) 2023-06-19 12:31:24 -03:00
sampler_hijack.py Fix the exllamav2_HF and exllamav3_HF loaders 2025-04-21 18:32:23 -07:00
sane_markdown_lists.py Sane handling of markdown lists (#6626) 2025-01-04 15:41:31 -03:00
shared.py Update only the last message during streaming + add back dynamic UI update speed (#7038) 2025-06-02 09:50:17 -03:00
tensorrt_llm.py Lint 2025-05-15 21:19:19 -07:00
text_generation.py Update only the last message during streaming + add back dynamic UI update speed (#7038) 2025-06-02 09:50:17 -03:00
torch_utils.py Refactor the transformers loader (#6859) 2025-04-20 13:33:47 -03:00
training.py Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
transformers_loader.py Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
ui.py Fix the chat input reappearing when the page is reloaded 2025-06-06 22:38:20 -07:00
ui_chat.py Update only the last message during streaming + add back dynamic UI update speed (#7038) 2025-06-02 09:50:17 -03:00
ui_default.py Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
ui_file_saving.py Restructure the repository (#6904) 2025-04-26 08:56:54 -03:00
ui_model_menu.py UI: Fix the model downloader progress bar 2025-06-01 19:22:21 -07:00
ui_notebook.py Lint 2024-12-17 20:13:32 -08:00
ui_parameters.py Update only the last message during streaming + add back dynamic UI update speed (#7038) 2025-06-02 09:50:17 -03:00
ui_session.py UI: Hide the extension install menu in portable builds 2025-05-13 20:09:22 -07:00
utils.py Better detect when no model is available 2025-05-29 10:49:29 -07:00
web_search.py Filter out failed web search downloads from attachments 2025-06-06 22:32:07 -07:00