| File | Latest commit message | Commit date |
| --- | --- | --- |
| __init__.py | support qwen | 2023-08-08 19:27:43 +09:00 |
| _base.py | make exllama_kernels compilation as optional | 2023-08-09 17:42:22 +08:00 |
| _const.py | support qwen | 2023-08-08 19:27:43 +09:00 |
| _utils.py | patch for transformers compatibility | 2023-08-09 14:23:59 +00:00 |
| auto.py | expose disable_exllama argument | 2023-08-09 12:03:31 +08:00 |
| codegen.py | Add support for CodeGen/2 | 2023-05-08 17:34:00 +03:00 |
| gpt_bigcode.py | Add support for GPTBigCode | 2023-05-08 12:28:29 +03:00 |
| gptj.py | add GPTJ fused attention module | 2023-05-14 16:17:21 +08:00 |
| internlm.py | Add support for InternLM | 2023-07-07 09:25:40 -07:00 |
| llama.py | make compatible with older transformers version | 2023-05-15 13:26:18 +08:00 |
| qwen.py | support qwen | 2023-08-08 19:27:43 +09:00 |
| rw.py | support falcon | 2023-05-27 07:53:39 +09:00 |