transformers/tests/models/gemma
Arthur e34da3ee3c
[`LlamaTokenizerFast`] Refactor default llama (#28881)
* push legacy to fast as well
* super strange
* Update src/transformers/convert_slow_tokenizer.py
* make sure we are BC
* fix Llama test
* nit
* revert
* more test
* style
* update
* small update w.r.t tokenizers
* nit
* don't split
* lol
* add a test for `add_prefix_space=False`
* fix gemma tokenizer as well
* update
* fix gemma
* nicer failures
* fixup
* update
* fix the example for legacy = False
* use `huggyllama/llama-7b` for the PR doctest
* nit
* use from_slow
* fix llama
2024-04-23 23:12:59 +02:00
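A minimal sketch of what the flags named in the commit message combine to do, assuming a transformers build that includes #28881; `huggyllama/llama-7b` is the checkpoint the PR itself uses for its doctest:

```python
from transformers import AutoTokenizer

# legacy=False selects the fixed sentencepiece handling, from_slow=True
# forces re-conversion from the slow tokenizer so the flag takes effect,
# and add_prefix_space=False stops a space being prepended to the input.
tokenizer = AutoTokenizer.from_pretrained(
    "huggyllama/llama-7b",
    legacy=False,
    from_slow=True,
    add_prefix_space=False,
)

# With add_prefix_space=False the first token should come out without a
# leading "▁" piece.
print(tokenizer.tokenize("Hey how are you?"))
```

The same `add_prefix_space` handling was mirrored in the Gemma tokenizer, which is why `test_tokenization_gemma.py` below carries this commit.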
__init__.py [`gemma`] Adds support for Gemma 💎 (#29167) 2024-02-21 14:21:28 +01:00
test_modeling_flax_gemma.py FIX [`Gemma` / `CI`] Make sure our runners have access to the model (#29242) 2024-02-28 06:25:23 +01:00
test_modeling_gemma.py Fix slow tests for important models to be compatible with A10 runners (#29905) 2024-04-09 13:28:54 +02:00
test_tokenization_gemma.py [`LlamaTokenizerFast`] Refactor default llama (#28881) 2024-04-23 23:12:59 +02:00