hoshi-hiyouga
a755745546
PaliGemma - fix processor with no input text ( #30916 )
Update processing_paligemma.py
2024-05-21 10:43:22 +01:00
dependabot[bot]
d502bd6475
Bump requests from 2.31.0 to 2.32.0 in /examples/research_projects/decision_transformer ( #30925 )
---
updated-dependencies:
- dependency-name: requests
dependency-type: direct:production
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-05-21 09:41:29 +01:00
Younes Belkada
8871b26150
FEAT / Trainer: LOMO optimizer support ( #30178 )
* add V1 - adalomo not working yet
* add todo docs + refactor from comments
* adjust LR
* add docs
* add more elaborated test
* Apply suggestions from code review
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
* fix
* push
* add accelerate check
* fix DDP case
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix
* init kwargs
* safely add attribute
* revert to enum logic
* Update src/transformers/trainer.py
---------
Co-authored-by: Zach Mueller <muellerzr@gmail.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-21 10:16:37 +02:00
Younes Belkada
c876d12127
FIX / TST: Fix expected results on Mistral slow test (A10) ( #30909 )
Update test_modeling_mistral.py
2024-05-21 09:14:14 +02:00
Aaron Jimenez
0df888ffb7
[docs] Spanish translation of model_memory_anatomy.md ( #30885 )
* add model_memory_anatomy to es/_toctree.yml
* copy model_memory_anatomy.md to es/
* translate first section
* translate doc
* change forward activations
* fix sentence and link to Trainer
* fix Trainer link
2024-05-20 16:48:52 -07:00
Longjie Zheng
616bb11d48
Add torch.compile for Mistral ( #30642 )
* first version
* fix sliding window
* fix style
* add sliding window cache
* fix style
* address comments
* fix test
* fix style
* move sliding window check inside cache init
* revert changes on irrelevant files & add comment on SlidingWindowCache
* address comments & fix style
fix style
* update causal mask
* [run-slow] mistral
* [run-slow] mistral
* [run-slow] mistral
* [run-slow] mistral
* [run-slow] mistral
* [run-slow] llama
* [run-slow] mistral
* [run-slow] mistral
* [run-slow] mistral
* revert CI from a10 to t4
* wrap up
2024-05-20 16:27:24 +02:00
Zach Mueller
92d1d97c05
Introduce configured_state arg for accelerator_config ( #29781 )
* Introduce configured_state
* Include note on tuning
* Allow for users to have defined a state already
* Include tests
* Add note on hparam tune
* Guard a bit better
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/training_args.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Finish rebase
* Finish rebase
* Guard carefully
* Fixup test
* Refactor
* Fin refactor
* Comment
* Update wrt feedback
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-20 09:21:40 -04:00
Arthur
bb48e92186
`tokenizer_class = "AutoTokenizer"` Llava Family ( #30912 )
propagate changes to more models
2024-05-20 13:56:11 +02:00
Anton Vlasjuk
76e05301c3
Fix a shape annotation and typos in `mamba` slow forward ( #30691 )
* fix typos and one shape comment
* fix `intermediade` typo in jamba
2024-05-20 13:55:57 +02:00
Yoach Lacombe
e6708709cb
Add AutoFeatureExtractor support to Wav2Vec2ProcessorWithLM ( #28706 )
* Add AutoFeatureExtractor support to Wav2Vec2ProcessorWithLM
* update with a type filter
* add raises error test
* fix added test
2024-05-20 13:40:42 +02:00
Hafedh
c11ac7857b
fix for custom pipeline configuration ( #29004 )
* fix for custom pipeline configuration
* fix for custom pipelines
* remove extra exception
* added test for custom pipelines extra tag
* format with ruff
* limit extra tag for first time only
* format with ruff
* improve tests for custom pipelines
2024-05-20 11:38:32 +02:00
Eric2i
7b4b456438
separate kwargs in processor (similar to #30193 ) ( #30905 )
* Fix similar bug in processor (related to #30193 )
* Reformat processing_git.py to comply with ruff formatting
2024-05-20 10:18:17 +01:00
Goncalo Paulo
1834916481
Fix num_hidden_layers in initialization of new model in Mamba ( #30403 )
Fix num_hidden_layers in initialization
Originally, the initialization was using config.num_layers instead of config.num_hidden_layers. This fixes that.
2024-05-20 11:18:09 +02:00
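The fix above boils down to reading `config.num_hidden_layers` instead of the unrelated `config.num_layers` when building the layer stack. A minimal sketch of the pattern (the class and function names here are hypothetical stand-ins, not the real Mamba code):

```python
from dataclasses import dataclass


@dataclass
class MambaLikeConfig:
    # hypothetical stand-in for the real config; only the two
    # attributes relevant to the bug are modeled here
    num_hidden_layers: int = 4  # attribute the model should read
    num_layers: int = 0         # unrelated attribute read before the fix


def build_layers(config: MambaLikeConfig) -> list:
    # after the fix: iterate over num_hidden_layers, not num_layers
    return [f"layer_{i}" for i in range(config.num_hidden_layers)]


layers = build_layers(MambaLikeConfig(num_hidden_layers=4))
```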
Kamil Akesbi
1c2bb3ac54
add return_token_timestamps to WhisperProcessor ( #30812 )
* compute num_frames in WhisperFeatureExtractor
* add return_num_frames in WhisperFeatureProcessor + adapt pipeline
* return_timestamps renaming + pipeline fix
* fix
* fix
* fix
* add tests
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* apply review changes
* fix
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* Update tests/models/whisper/test_modeling_whisper.py
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* apply review
* fix
* review changes
* Update src/transformers/models/whisper/feature_extraction_whisper.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* make style quality
* EXPECTED_OUTPUT in single line
* small numpy->torch fix
* fix
---------
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-20 09:53:58 +01:00
Donggeun Yu
66b0d9ee5d
DeformableDETR two stage support bfloat16 ( #30907 )
Update modeling_deformable_detr.py
2024-05-20 09:51:04 +01:00
Raushan Turganbay
5d0bf59b4d
LLaVa-Next: Update docs with batched inference ( #30857 )
* update docs with batch ex
* Update docs/source/en/model_doc/llava_next.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* accept nested list of img
---------
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
2024-05-20 13:45:56 +05:00
Benjamin Warner
cd6bd0af34
Add support for torch.compile dynamic shapes ( #30560 )
* add torch.compile dynamic support
* Add SDPA dynamic shapes compile test & improve SDPA comment
* comment consistency
2024-05-20 10:36:57 +02:00
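What dynamic-shape compilation buys can be sketched with plain PyTorch (assumption: `backend="eager"` is used only to keep the sketch runnable without a codegen backend; the PR itself targets the default compile path):

```python
import torch


def scaled_sum(x: torch.Tensor) -> torch.Tensor:
    return (x * 2).sum()


# dynamic=True asks the compiler to trace with symbolic shapes, so inputs
# of different lengths do not each trigger a shape-specialized recompile
compiled = torch.compile(scaled_sum, dynamic=True, backend="eager")

out_small = compiled(torch.ones(3))  # length-3 input
out_large = compiled(torch.ones(7))  # different length, same compiled graph
```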
Younes Belkada
fce78fd0e9
FIX / Quantization: Fix Dockerfile build ( #30890 )
* Update Dockerfile
* Update docker/transformers-quantization-latest-gpu/Dockerfile
2024-05-20 10:08:26 +02:00
Joseph Enguehard
07bf2dff78
Add TokenClassification for Mistral, Mixtral and Qwen2 ( #29878 )
* Add MistralForTokenClassification
* Add tests and docs
* Add token classification for Mixtral and Qwen2
* Save llama for token classification draft
* Add token classification support for Llama, Gemma, Persimmon, StableLm and StarCoder2
* Formatting
* Add token classification support for Qwen2Moe model
* Add dropout layer to each ForTokenClassification model
* Add copied from in tests
* Update src/transformers/models/llama/modeling_llama.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Propagate suggested changes
* Style
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2024-05-20 10:06:57 +02:00
Abhiroop Tejomay
481a957814
Enable dynamic resolution input for Swin Transformer and variants ( #30656 )
* add interpolation of positional encoding support to swin
* add style changes
* use default image processor and make size a dictionary
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* remove logits testing
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Refactor image size validation logic when interpolation is disabled
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* remove asserts in modeling
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* add dynamic resolution input support to swinv2
* change size to ensure interpolation encoding path is triggered
* set interpolate_pos_encoding default value to False
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* set interpolate_pos_encoding default value to False
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* set interpolate_pos_encoding default value to False
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* set interpolate_pos_encoding default value to False
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* set interpolate_pos_encoding default value to False
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* set interpolate_pos_encoding default value to False
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* set interpolate_pos_encoding default value to False
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* set interpolate_pos_encoding default value to False
* add dynamic resolution input to donut swin
* add dynamic resolution input to maskformer swin
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-17 18:38:46 +01:00
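The commit above follows the common `interpolate_pos_encoding` pattern: resample the learned position grid to the new token grid with bicubic interpolation. A generic sketch of the technique (shapes and function are illustrative, not the actual Swin implementation, which also handles non-square grids):

```python
import torch
import torch.nn.functional as F


def interpolate_pos_encoding(pos_embed: torch.Tensor, new_grid: int) -> torch.Tensor:
    """Resize a (1, N, dim) learned position embedding to new_grid**2 positions."""
    _, n, dim = pos_embed.shape
    old_grid = int(n ** 0.5)
    # (1, N, dim) -> (1, dim, H, W) so F.interpolate sees a spatial grid
    grid = pos_embed.reshape(1, old_grid, old_grid, dim).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=(new_grid, new_grid), mode="bicubic", align_corners=False)
    # back to (1, new_grid**2, dim)
    return grid.permute(0, 2, 3, 1).reshape(1, new_grid * new_grid, dim)


# e.g. a 7x7 grid trained at 224px, resized for a larger input resolution
resized = interpolate_pos_encoding(torch.zeros(1, 49, 8), new_grid=14)
```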
Arthur Zucker
b6eb708bf1
v4.42.dev.0
2024-05-17 17:30:41 +02:00
Pavel Iakubovskii
bf646fbf2d
Add fixed resize and pad strategy for object detection ( #30742 )
* Add resize and pad strategy
* Merge get_size functions
* Add pad_size + tests to object detection models
* Fixup
* Update docstrings
* Fixup
2024-05-17 16:21:26 +01:00
Arthur
e9a8041d1c
update release script ( #30880 )
* update release script
* update release script
2024-05-17 17:09:30 +02:00
Arthur
0a9300f474
Support arbitrary processor ( #30875 )
* Support arbitrary processor
* fix
* nit
* update
* nit
* nit
* fix and revert
* add a small test
* better check
* fixup
* bug so let's just use class for now
* oups
* .
2024-05-17 16:51:31 +02:00
Sanchit Gandhi
57edd84bdb
[whisper] fix multilingual fine-tuning ( #30865 )
* [whisper] fix multilingual fine-tuning
* config ids as well
2024-05-17 15:12:44 +01:00
Jacky Lee
977ce58a78
Fix dependencies for image classification example ( #30842 )
* fix: missing dependencies
* fix: image classification dependencies
2024-05-17 13:57:47 +01:00
Darshana S
3802e786ef
Enable device map ( #30870 )
* added _no_split_modules
* added LlavaNextVisionAttention to _no_split_modules
2024-05-17 12:50:24 +01:00
amyeroberts
57c965a8f1
Remove deprecated logic and warnings ( #30743 )
* Remove deprecated logic and warnings
* Add back some code that seems to be important...
* Let's just add all the nllb stuff back; removing it is a bit more involved
* Remove kwargs
* Remove more kwargs
2024-05-17 12:15:59 +01:00
Younes Belkada
3d7d3a87a0
TEST: Add llama logits tests ( #30835 )
* add llama logits test
* fix
* fix tests
"
"
* fix for a10
* format
* format
* fix
* [run-slow] remove fmt: skip
* Your commit message
* test commit
* Revert "test commit"
This reverts commit b66e01e55f.
* [run-slow]llama
* Update tests/models/llama/test_modeling_llama.py
* [run-slow]llama
* empty commit
2024-05-17 12:23:00 +02:00
amyeroberts
15c74a2829
Fix VideoLlava imports ( #30867 )
* Fix VideoLlava imports
* Update dummy objects
2024-05-16 17:06:21 +01:00
Younes Belkada
4e17e7dcf8
TST / Quantization: Reverting to torch==2.2.1 ( #30866 )
Reverting to 2.2.1
2024-05-16 17:30:02 +02:00
Joao Gante
f4014e75db
Docs: update example with assisted generation + sample ( #30853 )
2024-05-16 14:32:21 +01:00
Raushan Turganbay
95b3c3814d
Video-LLaVa: Fix docs ( #30855 )
fix model id in docs
2024-05-16 17:23:01 +05:00
Yih-Dar
1b3dba9417
Make `Gemma` work with `torch.compile` ( #30775 )
* fix
* [run-slow] gemma
* add test
* add `test_compile_static_cache`
* fix
* style
* remove subprocess
* use attribute
* fix
* style
* update
* [run-slow] dbrx,gemma,jetmoe,phi3,recurrent_gemma
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-05-16 13:41:33 +02:00
Mohit Sharma
0753134f4d
Disable the FA backend for SDPA on AMD GPUs ( #30850 )
* disable fa
* disable fa
* update warning
* update warning
2024-05-16 13:31:14 +02:00
Joao Gante
9d889f870e
Cache: add new flag to distinguish models that `Cache` but not static cache ( #30800 )
* jamba cache
* new flag
* generate exception
2024-05-16 12:08:35 +01:00
NielsRogge
17cc71e149
[Idefics2] Improve docs, add resources ( #30717 )
* Add resources
* Address comment
* Address comments
* Update docs/source/en/model_doc/idefics2.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update figure
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-16 12:22:13 +02:00
hyenal
1c21f48a50
add sdpa to ViT [follow up of #29325 ] ( #30555 )
remove blank line (+1 squashed commit)
Squashed commits:
[24ccd2061] [run-slow]vit_msn,vision_encoder_decoder (+24 squashed commits)
Squashed commits:
[08bd27e7a] [run-slow]vit_msn,vision_encoder_decoder
[ec96a8db3] [run-slow]vit_msn
[ead817eca] fix vit msn multi gpu
[d12cdc8fd] [run-slow]audio_spectrogram_transformer,deit,vision_encoder_decoder,vision_text_dual_encoder,vit,vit_hybrid,vit_mae,vit_msn,videomae,yolos
[3fdbfa88f] doc
[a3ff33e4a] finish implementation
[e20b7b7fb] Update test_modeling_common.py
[e290c5810] Update test_modeling_flax_common.py
[d3af86f46] comment
[ff7dd32d8] more comments
[59b137889] suggestion
[7e2ba6d67] attn_implementation as attribute of the class
[fe66ab71f] minor
[38642b568] Apply suggestions from code review
Accept comments
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
[22cde7d52] Update tests/test_modeling_common.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
[48e137cc6] Update tests/test_modeling_common.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
[99f4c679f] Update tests/test_modeling_common.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
[96cf20a6d] Update src/transformers/models/vit_msn/modeling_vit_msn.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
[c59377d23] Update src/transformers/models/vit_mae/modeling_vit_mae.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
[b70a47259] Update tests/models/vision_text_dual_encoder/test_modeling_vision_text_dual_encoder.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
[00c84d216] [run-slow]audio_spectrogram_transformer,deit,vision_encoder_decoder,vision_text_dual_encoder,vit,vit_hybrid,vit_mae,vit_msn,videomae,yolos
[61f00ebb0] all tests are passing locally
[e9e0b82b7] vision encoder/decoder
[4d5076b56] test-vision (+20 squashed commits)
Squashed commits:
[d1add8db9] yolo
[9fde65716] fix flax
[986566c28] minor
[ca2f21d1f] vit
[3333efd7a] easy models change
[ebfc21402] [run-slow]audio_spectrogram_transformer,deit,vision_encoder_decoder,vision_text_dual_encoder,vit,vit_hybrid,vit_mae,vit_msn,videomae,yolos
[b8b8603ed] [run-slow]vision_encoder_decoder,vision_text_dual_encoder,yolos
[48ecc7e26] all tests are passing locally
[bff7fc366] minor
[62f88306f] fix yolo and text_encoder tests
[121507555] [run-slow]audio_spectrogram_transformer,deit,vit,vit_hybrid,vit_mae,vit_msn,videomae
[1064cae0a] [run-slow]vision_encoder_decoder,vision_text_dual_encoder,yolos
[b7f52ff3a] [run-slow]audio_spectrogram_transformer,deit,vit,vit_hybrid,vit_mae,vit_msn,videomae
[cffaa10dd] fix-copies
[ef6c511c4] test vit hybrid
[7d4ba8644] vit hybrid
[66f919033] [run-slow]audio_spectrogram_transformer,deit,vit,vit_hybrid,vit_mae,vit_msn,videomae
[1fcc0a031] fixes
[cfde6eb21] fixup
[e77df1ed3] all except yolo end encoder decoder (+17 squashed commits)
Squashed commits:
[602913e22] vit + vit_mae are working
[547f6c4cc] RUN_SLOW=1 pytest tests/models/audio_spectrogram_transformer/ tests/models/deit/ tests/models/videomae/ passes
[61a97dfa9] it's the complete opposite...
[aefab37d4] fix more tests
[71802a1b9] fix all torch tests
[40b12eb58] encoder - decoder tests
[941552b69] slow decorator where appropriate
[14d055d80] has_attentions to yolo and msn
[3381fa19f] add correct name
[e261316a7] repo consistency
[31c6d0c08] fixup
[9d214276c] minor fix
[11ed2e1b7] chore
[eca6644c4] add sdpa to vit-based models
[cffbf390b] make fix-copies result
[6468319b0] fix style
[d324cd02a] add sdpa for vit
Co-authored-by: Liubov Yaronskaya <luba.yaronskaya@gmail.com>
2024-05-16 10:56:11 +01:00
NielsRogge
9fd606dbdb
[LLaVa-NeXT] Small fixes ( #30841 )
* First draft
* Update docstring
2024-05-16 08:19:15 +02:00
Edoardo Cetin
4b3eb19fa7
Fix llama model sdpa attention forward function masking bug when output_attentions=True ( #30652 )
* Fix llama model forward function with attention=True, same-length encoded sequence.
* Fix style
* propagate fix to modeling_cohere, gemma, dbrx, and olmo (which copy the same sdpa masking logic from llama)
* Fix style
* ignore unnecessary sdpa mask converter when output_attentions=True
* add tests checking sdpa and eager outputs match when output_attentions=True
* Split if statements in two lines
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Fix formatting
* Add fix to new jetmoe model
* Add missing output_attentions argument to jetmoe mask creation
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-05-15 19:48:19 +02:00
Yih-Dar
2d83324ecf
Use `torch 2.3` for CI ( #30837 )
2.3
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-05-15 19:31:52 +02:00
Younes Belkada
3f435823e0
FEAT / Bitsandbytes: Add `dequantize` API for bitsandbytes quantized models ( #30806 )
* add method
* change method name
* more comments
* Apply suggestions from code review
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fixup
* add docstrings and fix comment
* warn users on the de-quantized dtype
* Update src/transformers/quantizers/base.py
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
* Update src/transformers/integrations/bitsandbytes.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* final suggestion - use private method
---------
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-15 17:17:09 +02:00
amyeroberts
58faa7b824
Deprecate models script - correctly set the model name for the doc file ( #30785 )
* Correctly set the model name for the doc file
* Fix up
2024-05-15 15:14:11 +01:00
Xuan-Phi Nguyen
5ca085b882
Better llava next. ( #29850 )
* Better llava next.
- Batched forward with multiple image of different sizes (number of patches).
- Support training, for cases without any image.
- Support multi-image in same sequence. e.g: ["<image> <image> the first image is a dog while the second is a cat", "<image> <image> <image> <image> these 4 image are..."]
Current limitation:
- Haven't done testing
- Only support right padding (for training)
- left padding (batched generation) is not ready yet.
- PR not ready.
* fix bugs in batched generation
* add tests
* fix batch-gen bugs, left-padding positions and incorrect attention mask
* remove better modeling llava
* fix formatting
* fix test
* fix test
* fix testing
* fix test
* fix formatting
* Update src/transformers/models/llava_next/modeling_llava_next.py
add clarity
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update modeling_llava_next.py
remove assert
* fix bug modeling_llava_next.py
* update modeling
* fix bugs
* fix format
* fix error
* fix new_token_positions
* Update modeling_llava_next.py
* update formatting
* add args
* remove comments
* add slow tests for batched inference
* failing tf/flax tests
* this one is correct
* Update src/transformers/models/llava_next/modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* fix docs
* make fixup
* more fixup
* add test for batch equivalence
* Update tests/models/llava_next/test_modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/image_processing_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/image_processing_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/llava_next/modeling_llava_next.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* pr comments
* hardcode padding side for bs=1
* update
* [run-slow] llava_next
* [run-slow] llava_next
* make fix-copies
---------
Co-authored-by: NGUYEN, Xuan Phi <x.nguyen@alibaba-inc.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: raushan <raushan@huggingface.co>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
2024-05-15 19:02:56 +05:00
Sourab Mangrulkar
bdfefbadaf
Update ds_config_zero3.json ( #30829 )
2024-05-15 10:02:31 -04:00
xkszltl
92544cb8f3
Missing `Optional` in typing. ( #30821 )
The function checks for None in its first line.
2024-05-15 15:00:43 +01:00
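The point of the fix, sketched with a hypothetical function: a parameter that defaults to `None` and is checked for `None` must be annotated `Optional[...]`, not the bare type:

```python
from typing import Optional


def describe(name: Optional[str] = None) -> str:
    # the first line checks for None, so the annotation must allow None:
    # a bare `str` annotation would be wrong for this parameter
    if name is None:
        return "anonymous"
    return name
```

On Python 3.10+, `str | None` is an equivalent spelling of the same annotation.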
amyeroberts
64c06df325
Jamba - Skip 4d custom attention mask test ( #30826 )
* Jamba - Skip 4d custom attention mask test
* Skip assistant greedy test
2024-05-15 13:57:28 +01:00
Lysandre Debut
a42844955f
Loading GGUF files support ( #30391 )
* Adds support for loading GGUF files
Co-authored-by: Younes Belkada <younesbelkada@gmail.com>
Co-authored-by: 99991 <99991@users.noreply.github.com>
* add q2_k q3_k q5_k support from @99991
* fix tests
* Update doc
* Style
* Docs
* fix CI
* Update docs/source/en/gguf.md
* Update docs/source/en/gguf.md
* Compute merges
* change logic
* add comment for clarity
* add comment for clarity
* Update src/transformers/models/auto/tokenization_auto.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* change logic
* Update src/transformers/modeling_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* change
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/modeling_gguf_pytorch_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* put back comment
* add comment about mistral
* comments and added tests
* fix inconsistent type
* more
* fix tokenizer
* Update src/transformers/modeling_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* address comments about tests and tokenizer + add added_tokens
* from_gguf -> gguf_file
* replace on docs too
---------
Co-authored-by: Younes Belkada <younesbelkada@gmail.com>
Co-authored-by: 99991 <99991@users.noreply.github.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-15 14:28:20 +02:00
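Per the commit message, the API settled on a `gguf_file` keyword for `from_pretrained` (renamed from `from_gguf`). A sketch of how that would be called (not executed here: loading downloads a checkpoint, so the call is only wrapped in a function, and any repo id or filename is the caller's own):

```python
def load_gguf_checkpoint(repo_id: str, gguf_file: str):
    """Load a GGUF checkpoint into a transformers tokenizer and model.

    Sketch only: `gguf_file` is the keyword this PR settled on;
    calling this function requires network access.
    """
    # deferred import keeps the sketch importable without transformers installed
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(repo_id, gguf_file=gguf_file)
    model = AutoModelForCausalLM.from_pretrained(repo_id, gguf_file=gguf_file)
    return tokenizer, model
```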
Raushan Turganbay
bd9f4d7951
Add Video Llava ( #29733 )
* add model draft
* update docstring
* add tests
* support image and video as input
* update for better handling of mixed input and clean-up a bit
* bug when mixed inputs & add tests
* Update README.md
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Merge remote-tracking branch 'upstream/main' into video_llava
* link to abstract of paper in README
* fix test
* fix-copies
* make tests happy
* skip docstest for now
* do not run doctest for now
* Update src/transformers/models/video_llava/processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/image_processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/image_processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/image_processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/image_processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/video_llava/test_modeling_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/image_processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* address review comments
* failing tests
* Fix vocab_size in common tests for VLMs
* codestyle
* Update src/transformers/models/video_llava/configuration_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/configuration_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/modeling_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/modeling_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/video_llava.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/video_llava.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/image_processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update docs/source/en/model_doc/video_llava.md
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/processing_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/video_llava/test_modeling_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/video_llava/test_modeling_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/video_llava/test_modeling_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* PR suggestions
* fix-copies
* Update src/transformers/models/video_llava/configuration_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/video_llava/configuration_video_llava.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* add full example in docs
* clean-up with new model-id
* [run-slow] video_llava
* update docstring
* [run-slow] video_llava
* remove all archive maps
* fix some tests
* test was supposed to be skipped for llava :)
---------
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-05-15 16:42:29 +05:00
David
b8aee2e918
Remove unused module DETR based models ( #30823 )
* removing heads for classification from DETR models.
* quality fix
2024-05-15 11:19:43 +01:00