transformers/tests/models/llama
Latest commit: f26e407370 by Joao Gante, "Cache: models return input cache type (#30716)", 2024-05-08 18:26:34 +01:00
__init__.py                   LLaMA Implementation (#21955)                           2023-03-16 09:00:53 -04:00
test_modeling_flax_llama.py   Add Llama Flax Implementation (#24587)                  2023-12-07 07:05:00 +01:00
test_modeling_llama.py        Cache: models return input cache type (#30716)          2024-05-08 18:26:34 +01:00
test_tokenization_llama.py    [`LlamaTokenizerFast`] Refactor default llama (#28881)  2024-04-23 23:12:59 +02:00