transformers/docker
Latest commit 4fc708f98c: Exllama kernels support for AWQ models (#28634)
Author: Ilyas Moutawwakil
Date: 2024-03-05 03:22:48 +01:00

* added exllama kernels support for awq models
* doc
* style
* Update src/transformers/modeling_utils.py
  Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* refactor
* moved exllama post init to after device dispatching
* bump autoawq version
* added exllama test
* style
* configurable exllama kernels
* copy exllama_config from gptq
* moved exllama version check to post init
* moved to quantization dockerfile

Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
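The squashed commits above add and expose the exllama kernels for AWQ checkpoints ("configurable exllama kernels", "copy exllama_config from gptq"). A minimal usage sketch, assuming the backend is selected through `AwqConfig`'s `version` argument as this PR introduces; the checkpoint id and generation prompt below are illustrative, not taken from the repo:

```python
# Minimal sketch (not from this repo): load an AWQ checkpoint with the exllama backend.
# Assumes a transformers build that includes PR #28634 and an autoawq install that ships
# the exllama kernels; the model id below is only an example.
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig

model_id = "TheBloke/Mistral-7B-Instruct-v0.1-AWQ"  # example AWQ checkpoint

# Switch the AWQ linear layers from the default GEMM kernels to exllama.
quantization_config = AwqConfig(version="exllama")

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    device_map="auto",  # exllama post-init runs after device dispatching, per the commit above
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```

Per the "moved to quantization dockerfile" commit, the CI environment for this feature lives in the transformers-quantization-latest-gpu image listed below.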
Directory | Latest commit | Date
transformers-all-latest-gpu | Use torch 2.2 for daily CI (model tests) (#29208) | 2024-02-23 21:37:08 +08:00
transformers-doc-builder | Use python 3.10 for docbuild (#28399) | 2024-01-11 14:39:49 +01:00
transformers-gpu | TF: TF 2.10 unpin + related onnx test skips (#18995) | 2022-09-12 19:30:27 +01:00
transformers-past-gpu | Byebye pytorch 1.9 (#24080) | 2023-06-16 16:38:23 +02:00
transformers-pytorch-amd-gpu | Add deepspeed test to amd scheduled CI (#27633) | 2023-12-11 16:33:36 +01:00
transformers-pytorch-deepspeed-amd-gpu | Add deepspeed test to amd scheduled CI (#27633) | 2023-12-11 16:33:36 +01:00
transformers-pytorch-deepspeed-latest-gpu | Use torch 2.2 for deepspeed CI (#29246) | 2024-02-27 17:51:37 +08:00
transformers-pytorch-deepspeed-nightly-gpu | Update CUDA versions for DeepSpeed (#27853) | 2023-12-05 16:15:21 -05:00
transformers-pytorch-gpu | [SDPA] Make sure attn mask creation is always done on CPU (#28400) | 2024-01-09 11:05:19 +01:00
transformers-pytorch-tpu | Rename master to main for notebooks links and leftovers (#16397) | 2022-03-25 09:12:23 -04:00
transformers-quantization-latest-gpu | Exllama kernels support for AWQ models (#28634) | 2024-03-05 03:22:48 +01:00
transformers-tensorflow-gpu | Update TF pin in docker image (#25343) | 2023-08-07 12:32:34 +02:00