transformers/tests
Latest commit: b647acdb53 by Younes Belkada
FIX [`CI`] `require_read_token` in the llama FA2 test (#29361) (Update test_modeling_llama.py)
2024-02-29 04:49:01 +01:00
benchmark [Test refactor 1/5] Per-folder tests reorganization (#15725) 2022-02-23 15:46:28 -05:00
bettertransformer Fixed malapropism error (#26660) 2023-10-09 11:04:57 +02:00
deepspeed fix failing trainer ds tests (#29057) 2024-02-16 17:18:45 +05:30
extended Device agnostic trainer testing (#27131) 2023-10-30 18:16:40 +00:00
fixtures [WIP] add SpeechT5 model (#18922) 2023-02-03 12:43:46 -05:00
fsdp Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
generation GenerationConfig validate both constraints and force_words_ids (#29163) 2024-02-27 01:43:52 +01:00
models FIX [`CI`] `require_read_token` in the llama FA2 test (#29361) 2024-02-29 04:49:01 +01:00
optimization Make schedulers picklable by making lr_lambda fns global (#21768) 2023-03-02 12:08:43 -05:00
peft_integration FIX [`CI`]: Fix failing tests for peft integration (#29330) 2024-02-29 03:56:16 +01:00
pipelines Token level timestamps for long-form generation in Whisper (#29148) 2024-02-27 18:15:26 +00:00
quantization Cleaner Cache `dtype` and `device` extraction for CUDA graph generation for quantizers compatibility (#29079) 2024-02-27 09:32:39 +01:00
repo_utils Allow `# Ignore copy` (#27328) 2023-12-07 10:00:08 +01:00
sagemaker Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
tokenization Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
tools Add support for for loops in python interpreter (#24429) 2023-06-26 09:58:14 -04:00
trainer FIX [`PEFT` / `Trainer` ] Handle better peft + quantized compiled models (#29055) 2024-02-20 12:45:08 +01:00
utils Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
__init__.py GPU text generation: Moved the encoded_prompt to correct device 2020-01-06 15:11:12 +01:00
test_backbone_common.py Align backbone stage selection with out_indices & out_features (#27606) 2023-12-20 18:33:17 +00:00
test_cache_utils.py Llama: fix batched generation (#29109) 2024-02-20 10:23:17 +00:00
test_configuration_common.py [`PretrainedConfig`] Improve messaging (#27438) 2023-11-15 14:10:39 +01:00
test_configuration_utils.py Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
test_feature_extraction_common.py Split common test from core tests (#24284) 2023-06-15 07:30:24 -04:00
test_feature_extraction_utils.py Remove-auth-token (#27060) 2023-11-13 14:20:54 +01:00
test_image_processing_common.py Raise unused kwargs image processor (#29063) 2024-02-20 16:20:20 +01:00
test_image_processing_utils.py Remove-auth-token (#27060) 2023-11-13 14:20:54 +01:00
test_image_transforms.py Normalize floating point cast (#27249) 2023-11-10 15:35:27 +00:00
test_modeling_common.py Adding SegGPT (#27735) 2024-02-26 18:17:19 +00:00
test_modeling_flax_common.py [Flax] Update no init test for Flax v0.7.1 (#28735) 2024-01-26 18:20:39 +00:00
test_modeling_flax_utils.py Enable safetensors conversion from PyTorch to other frameworks without the torch requirement (#27599) 2024-01-23 10:28:23 +01:00
test_modeling_tf_common.py Add tf_keras imports to prepare for Keras 3 (#28588) 2024-01-30 17:26:36 +00:00
test_modeling_tf_utils.py Add tf_keras imports to prepare for Keras 3 (#28588) 2024-01-30 17:26:36 +00:00
test_modeling_utils.py Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
test_pipeline_mixin.py Image Feature Extraction pipeline (#28216) 2024-02-05 14:50:07 +00:00
test_processing_common.py Don't save `processor_config.json` if a processor has no extra attribute (#28584) 2024-01-19 09:59:14 +00:00
test_sequence_feature_extraction_common.py Fix typo (#25966) 2023-09-05 10:12:25 +02:00
test_tokenization_common.py Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00
test_tokenization_utils.py Update all references to canonical models (#29001) 2024-02-16 08:16:58 +01:00