transformers/tests/models/gemma
Yih-Dar 1b3dba9417
Make `Gemma` work with `torch.compile` (#30775)
* fix
* [run-slow] gemma
* add test
* add `test_compile_static_cache`
* fix
* style
* remove subprocess
* use attribute
* fix
* style
* update
* [run-slow] dbrx,gemma,jetmoe,phi3,recurrent_gemma

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-05-16 13:41:33 +02:00
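The commit above adds a `test_compile_static_cache` test. The idea it exercises can be sketched with a toy decode step: a static KV cache preallocates fixed-shape buffers, so `torch.compile` traces the step once and reuses the compiled graph instead of recompiling as the cache grows. Everything below (`make_static_cache`, `decode_step`, the `eager` backend choice) is an illustrative assumption, not code from the PR or from transformers' own `StaticCache`.

```python
import torch


def make_static_cache(max_len: int, dim: int):
    # Illustrative sketch, not transformers' StaticCache: preallocate
    # fixed-shape key/value buffers so tensor shapes never change while
    # decoding, letting torch.compile reuse one traced graph per step.
    return torch.zeros(max_len, dim), torch.zeros(max_len, dim)


# backend="eager" keeps this sketch runnable without a C++ toolchain;
# real use would rely on the default inductor backend. fullgraph=True
# asserts the step compiles without graph breaks.
@torch.compile(backend="eager", fullgraph=True)
def decode_step(q, keys, values, cache_len):
    # Mask out unfilled cache slots instead of slicing, so the input
    # shapes are identical at every decoding step.
    positions = torch.arange(keys.shape[0])
    scores = keys @ q
    scores = scores.masked_fill(positions >= cache_len, float("-inf"))
    weights = torch.softmax(scores, dim=0)
    return weights @ values


keys, values = make_static_cache(max_len=4, dim=2)
keys[0] = torch.tensor([1.0, 0.0])    # one cached entry so far
values[0] = torch.tensor([2.0, 3.0])
out = decode_step(torch.tensor([1.0, 0.0]), keys, values, torch.tensor(1))
print(out)  # attends only to position 0, returning its value vector
```

In practice the same shape-stability requirement is why generation with a compiled Gemma forward pass pairs `torch.compile` with a static cache rather than the default dynamically growing one.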
File                         Last commit message                                                            Date
__init__.py                  [`gemma`] Adds support for Gemma 💎 (#29167)                                   2024-02-21 14:21:28 +01:00
test_modeling_flax_gemma.py  FIX [`Gemma` / `CI`] Make sure our runners have access to the model (#29242)   2024-02-28 06:25:23 +01:00
test_modeling_gemma.py       Make `Gemma` work with `torch.compile` (#30775)                                2024-05-16 13:41:33 +02:00
test_tokenization_gemma.py   [`LlamaTokenizerFast`] Refactor default llama (#28881)                         2024-04-23 23:12:59 +02:00