| Name | Last commit message | Last commit date |
| --- | --- | --- |
| grammar | Let grammar escape backslashes (#5865) | 2024-05-19 20:26:09 -03:00 |
| block_requests.py | Fix the Google Colab notebook | 2025-01-16 05:21:18 -08:00 |
| callbacks.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| chat.py | Fix 'Start reply with' (closes #7033) | 2025-05-30 11:17:47 -07:00 |
| deepspeed_parameters.py | Fix typo in deepspeed_parameters.py (#3222) | 2023-07-24 11:17:28 -03:00 |
| evaluate.py | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00 |
| exllamav2.py | Lint | 2025-04-26 19:29:08 -07:00 |
| exllamav2_hf.py | Fix CFG with ExLlamaV2_HF (closes #6937) | 2025-04-30 18:43:45 -07:00 |
| exllamav3_hf.py | Fix exllamav3_hf models failing to unload (closes #7031) | 2025-05-30 12:05:49 -07:00 |
| extensions.py | Move update_wizard_windows.sh to update_wizard_windows.bat (oops) | 2024-03-04 19:26:24 -08:00 |
| github.py | Fix several typos in the codebase (#6151) | 2024-06-22 21:40:25 -03:00 |
| gradio_hijack.py | Prevent Gradio from saying 'Thank you for being a Gradio user!' | 2025-04-26 18:14:57 -07:00 |
| html_generator.py | UI: Make message editing work the same for user and assistant messages | 2025-05-28 17:23:46 -07:00 |
| llama_cpp_server.py | API: Fix a regression | 2025-05-16 13:02:27 -07:00 |
| loaders.py | Remove the HQQ loader (HQQ models can be loaded through Transformers) | 2025-05-19 09:23:24 -07:00 |
| logging_colors.py | Lint | 2023-12-19 21:36:57 -08:00 |
| logits.py | UI: More friendly message when no model is loaded | 2025-05-09 07:21:05 -07:00 |
| LoRA.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| metadata_gguf.py | llama.cpp: read instruction template from GGUF metadata (#4975) | 2023-12-18 01:51:58 -03:00 |
| models.py | Fix exllamav3_hf models failing to unload (closes #7031) | 2025-05-30 12:05:49 -07:00 |
| models_settings.py | Fix loading Llama-3_3-Nemotron-Super-49B-v1 and similar models (closes #7012) | 2025-05-25 17:19:26 -07:00 |
| presets.py | Set top_n_sigma before temperature by default | 2025-05-06 08:27:21 -07:00 |
| prompts.py | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00 |
| relative_imports.py | Add ExLlama+LoRA support (#2756) | 2023-06-19 12:31:24 -03:00 |
| sampler_hijack.py | Fix the exllamav2_HF and exllamav3_HF loaders | 2025-04-21 18:32:23 -07:00 |
| sane_markdown_lists.py | Sane handling of markdown lists (#6626) | 2025-01-04 15:41:31 -03:00 |
| shared.py | Remove the HQQ loader (HQQ models can be loaded through Transformers) | 2025-05-19 09:23:24 -07:00 |
| tensorrt_llm.py | Lint | 2025-05-15 21:19:19 -07:00 |
| text_generation.py | Don't limit the number of prompt characters printed with --verbose | 2025-05-29 13:08:48 -07:00 |
| torch_utils.py | Refactor the transformers loader (#6859) | 2025-04-20 13:33:47 -03:00 |
| training.py | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00 |
| transformers_loader.py | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00 |
| ui.py | Multiple small style improvements | 2025-05-30 11:32:24 -07:00 |
| ui_chat.py | Clean up | 2025-05-29 14:11:21 -07:00 |
| ui_default.py | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00 |
| ui_file_saving.py | Restructure the repository (#6904) | 2025-04-26 08:56:54 -03:00 |
| ui_model_menu.py | Minor UI fixes | 2025-05-20 16:20:49 -07:00 |
| ui_notebook.py | Lint | 2024-12-17 20:13:32 -08:00 |
| ui_parameters.py | Revert "Dynamic Chat Message UI Update Speed (#6952)" (for now) | 2025-05-18 12:38:36 -07:00 |
| ui_session.py | UI: Hide the extension install menu in portable builds | 2025-05-13 20:09:22 -07:00 |
| utils.py | Better detect when no model is available | 2025-05-29 10:49:29 -07:00 |
| web_search.py | Download fetched web search results in parallel | 2025-05-28 20:36:24 -07:00 |