transformers/tests/quantization
Marc Sun 58a939c6b7
Fix quantization tests (#29914)
* revert back to torch 2.1.1

* run test

* switch to torch 2.2.1

* update dockerfile

* fix awq tests

* fix test

* run quanto tests

* update tests

* split quantization tests

* fix

* fix again

* final fix

* fix report artifact

* build docker again

* Revert "build docker again"

This reverts commit 399a5f9d93.

* debug

* revert

* style

* new notification system

* testing notification

* rebuild docker

* fix_prev_ci_results

* typo

* remove warning

* fix typo

* fix artifact name

* debug

* issue fixed

* debug again

* fix

* fix time

* test notif with failing test

* typo

* issues again

* final fix?

* run all quantization tests again

* remove name to clear space

* revert modification done on workflow

* fix

* build docker

* build only quant docker

* fix quantization ci

* fix

* fix report

* better quantization_matrix

* add print

* revert to the basic one
2024-04-09 17:10:29 +02:00
aqlm_integration Cleaner Cache `dtype` and `device` extraction for CUDA graph generation for quantizers compatibility (#29079) 2024-02-27 09:32:39 +01:00
autoawq Fix quantization tests (#29914) 2024-04-09 17:10:29 +02:00
bnb [bnb] Fix offload test (#30039) 2024-04-05 13:11:28 +02:00
gptq [GPTQ] Fix test (#28018) 2024-01-15 11:22:54 -05:00
quanto_integration [Quantization] Quanto quantizer (#29023) 2024-03-15 11:51:29 -04:00