transformers/tests/models/llama
Arthur 2f9a3edbb9
[`GemmaConverter`] use user_defined_symbols (#29473)
2024-03-19 15:13:56 +01:00
__init__.py LLaMA Implementation (#21955) 2023-03-16 09:00:53 -04:00
test_modeling_flax_llama.py Add Llama Flax Implementation (#24587) 2023-12-07 07:05:00 +01:00
test_modeling_llama.py Fix llama + gemma accelete tests (#29380) 2024-03-01 10:32:36 -05:00
test_tokenization_llama.py [`GemmaConverter`] use user_defined_symbols (#29473) 2024-03-19 15:13:56 +01:00