update README

parent 22748dd2b7
commit 5d6862ee8d

3 changed files with 3 additions and 0 deletions
@@ -19,6 +19,7 @@

## News or Update
- 2023-07-26 - (Update) - An elegant [PPL benchmark script](examples/benchmark/perplexity.py) to get results that can be fairly compared with other libraries such as `llama.cpp`.
- 2023-06-05 - (Update) - Integrate with 🤗 peft to train adapters on gptq-quantized models; supports LoRA, AdaLoRA, AdaptionPrompt, etc.
- 2023-05-30 - (Update) - Support downloading/uploading quantized models from/to 🤗 Hub.
- 2023-05-27 - (Update) - Support quantization and inference for `gpt_bigcode`, `codegen` and `RefineWeb/RefineWebModel` (falcon) model types.
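Perplexity is the metric the benchmark script above reports. As a minimal, library-free sketch of how it is computed (the `perplexity` helper and toy inputs here are illustrative, not code from the repository):

```python
import math

def perplexity(token_nlls):
    # Perplexity is the exponential of the mean per-token
    # negative log-likelihood over the evaluation set.
    return math.exp(sum(token_nlls) / len(token_nlls))

# Toy check: if the model assigns every token probability 1/4,
# each NLL is ln(4) and the perplexity is 4.
nlls = [math.log(4)] * 10
print(perplexity(nlls))  # ≈ 4.0
```

Because perplexity depends on tokenization and the exact evaluation windowing, scripts that fix both (as the one above aims to) are what make cross-library comparisons fair.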
@@ -19,6 +19,7 @@

## News or Update
- 2023-07-26 - (Update) - An elegant [PPL benchmark script](examples/benchmark/perplexity.py) to get results that can be fairly compared with other libraries such as `llama.cpp`.
- 2023-06-05 - (Update) - Integrate with 🤗 peft to train adapters on gptq-quantized models; supports LoRA, AdaLoRA, AdaptionPrompt, etc.
- 2023-05-30 - (Update) - Support downloading/uploading quantized models from/to 🤗 Hub.
- 2023-05-27 - (Update) - Support quantization and inference for `gpt_bigcode`, `codegen` and `RefineWeb/RefineWebModel` (falcon) model types.
@@ -1,4 +1,5 @@

## <center>News or Update</center>
- 2023-07-26 - (Update) - An elegant [PPL benchmark script](examples/benchmark/perplexity.py) to get results that can be fairly compared with other libraries such as `llama.cpp`.
- 2023-06-05 - (Update) - Integrate with 🤗 peft to train adapters on gptq-quantized models; supports LoRA, AdaLoRA, AdaptionPrompt, etc.
- 2023-05-30 - (Update) - Support downloading/uploading quantized models from/to 🤗 Hub.
- 2023-05-27 - (Update) - Support quantization and inference for `gpt_bigcode`, `codegen` and `RefineWeb/RefineWebModel` (falcon) model types.
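The peft integration above trains low-rank adapters on a frozen quantized base. As a hedged, dependency-free sketch of the underlying LoRA idea (this is the general technique, not AutoGPTQ's actual implementation; all names here are illustrative):

```python
import random

# LoRA sketch for one frozen linear layer (e.g. a gptq-quantized weight):
# instead of updating the d_out x d_in weight W, train a rank-r pair
# B (d_out x r) and A (r x d_in) and add scale * (B @ A) to the base path.

def matvec(m, v):
    # Plain matrix-vector product over nested lists.
    return [sum(w * x for w, x in zip(row, v)) for row in m]

d_out, d_in, r = 4, 4, 1
rng = random.Random(0)
W = [[rng.uniform(-1, 1) for _ in range(d_in)] for _ in range(d_out)]  # frozen
A = [[rng.uniform(-1, 1) for _ in range(d_in)] for _ in range(r)]      # trainable
B = [[0.0] * r for _ in range(d_out)]  # zero init: adapter starts as a no-op

def lora_forward(x, scale=1.0):
    base = matvec(W, x)
    adapter = matvec(B, matvec(A, x))
    return [b + scale * a for b, a in zip(base, adapter)]

x = [1.0, 2.0, 3.0, 4.0]
# With B = 0 the adapted layer matches the frozen layer exactly.
assert lora_forward(x) == matvec(W, x)
```

Because only `A` and `B` (r × (d_in + d_out) parameters) receive gradients, the quantized weight `W` never needs to be de-quantized or updated, which is what makes adapter training on gptq models practical.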