Author | Commit | Message | Date
TheBloke | f61ce12271 | Change 'groupsize' to 'group_size' everywhere. Turns out this is easier than 'groupsize' due to dependencies in other files. | 2023-05-05 13:36:00 +01:00
TheBloke | f64c71e779 | Change referenes to 'group_size' to 'groupsize' to match rest of this file | 2023-05-05 13:21:13 +01:00
qwopqwop200 | b19c59541b | fix bug | 2023-05-04 13:17:10 +09:00
qwopqwop200 | b14d42e68a | bug fix | 2023-05-04 13:03:38 +09:00
qwopqwop200 | cc992c21bd | Merge branch 'faster-cuda-no-actorder' into faster-llama | 2023-05-04 09:09:09 +09:00
qwopqwop200 | c8504f0660 | support faster and model load strict | 2023-05-04 09:06:52 +09:00
qwopqwop200 | 41f2379850 | bug fix | 2023-05-02 20:38:17 +09:00
qwopqwop200 | d2f48e5311 | bug fix | 2023-05-02 20:36:53 +09:00
qwopqwop200 | 709bd7594f | Merge pull request #44 from PanQiWei/fix-bug-cuda: Fix bug cuda | 2023-05-02 19:50:59 +09:00
TheBloke | 593a0b28bb | Fix typo: 'hole' -> 'whole' | 2023-05-01 10:25:18 +01:00
qwopqwop200 | 5a69e22a93 | add qlinear_old | 2023-05-01 13:04:47 +09:00
qwopqwop200 | 435eebee4b | support conv1d,conv2d | 2023-04-28 09:13:00 +09:00
PanQiWei | ac3f7054e0 | big fix | 2023-04-27 19:33:25 +08:00
PanQiWei | 498de923f2 | support multi gpus quantization | 2023-04-27 18:48:43 +08:00
PanQiWei | a2abff983e | support dispatch layers to different devices when loading pretrained model before quantization | 2023-04-27 02:24:08 +08:00
PanQiWei | bf2ae6768d | bug fix | 2023-04-26 13:33:56 +08:00
PanQiWei | 9c405b1628 | add triton support | 2023-04-25 20:05:22 +08:00
PanQiWei | 832dc4a7a1 | refactor file structure | 2023-04-25 18:58:20 +08:00
PanQiWei | 6b6dd3e1e3 | always trust remote code | 2023-04-25 12:15:32 +08:00
PanQiWei | 7ba0edffe0 | refactor file structure of modeling module | 2023-04-23 17:33:09 +08:00
PanQiWei | 229b61e20e | first init | 2023-04-14 01:09:40 +08:00