Guillaume LEGENDRE
1a65689cce
change to test runner to debug
2024-04-25 12:14:25 +02:00
Guillaume LEGENDRE
0ab2a2176f
test ci label
2024-04-25 12:00:34 +02:00
ydshieh
b564cf8d34
check llama
2024-04-16 14:00:36 +02:00
ydshieh
dad79c3719
check llama
2024-04-16 13:51:23 +02:00
ydshieh
cc287fcff5
check llama
2024-04-16 11:50:51 +02:00
ydshieh
4f0791f365
check llama
2024-04-16 11:33:53 +02:00
ydshieh
f1a37d9a3c
fix
2024-04-16 11:10:46 +02:00
ydshieh
217d9465d3
run all
2024-04-16 11:06:30 +02:00
ydshieh
3d8068a5bc
update
2024-04-16 11:01:01 +02:00
ydshieh
3c6f046b58
update
2024-04-16 10:53:47 +02:00
ydshieh
77401c8635
fix
2024-04-16 10:41:05 +02:00
Jungnerd
51bcadc10a
Update `ko/_toctree.yml` (#30062)
* fix: update `ko/_toctree.yml`
* fix: update ko/_toctree.yml
* Update docs/source/ko/_toctree.yml
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix: delete `perf_infer_gpu_many`
* fix: Replace untranslated docs with `in_translation`
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix: Replace untranslated docs with `in_translation`
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2024-04-15 10:42:46 -07:00
Matt
5be21302ad
Remove incorrect arg in codellama doctest (#30257)
Remove incorrect arg in codellama docstring
2024-04-15 18:31:23 +01:00
Sayak Paul
8127f39624
[Docs] Update recurrent_gemma.md for some minor nits (#30238)
Update recurrent_gemma.md
2024-04-15 18:30:59 +02:00
amyeroberts
6b78360e6d
Add Idefics2 (#30253)
* Initial add model additions
* Test
* All weights loading
* Can perform full forward pass
* Local and remote the same
* Matching local and remote
* Fixup
* Idefics2Model importable; fixup docstrings
* Don't skip by default
* Remove deprecated use_resampler arg
* Remove self.config
* DecoupledLinear takes config
* Tidy up
* Enable eager attention and tidy up
* Most tests passing
* Update for batch of processed images
* Add image processor
* Update doc pages
* Update conversion script
* Remove erroneous breakpoint
* Remove accidental spelling change
* Update to reflect changes on hub - make generate work
* Fix up
* Image processor tests
* Update tests
* Add a processor
* Add a processor
* Update convert script
* Update modeling file - remove fixmes
* Bug fix
* Add processing test
* Use processor
* Fix up
* Update src/transformers/models/idefics2/modeling_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Update src/transformers/models/idefics2/modeling_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Fix test
* Update config - PR comments and defaults align with checkpoint
* Reviewer comments
* Add copied froms for flash attention
* Update src/transformers/models/idefics2/modeling_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Remove qk_layer_norm and freeze_layers functionality
* Fix
* Remove freeze_layer options from config
* Sync with upstream main
* Fix attention shapes siglip
* Remove Llava-next refs - TO REBASE
* Use AutoModel for text model
* Add comment to explain vision embeddings
* Fix issue with tie_word_embeddings
* Address review comments
* Fix and fix up
* Chat templates for idefics
* Fix copies
* Fix
* Add layer norms to FA2
* Fix tests
* Apply suggestions from code review
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Fix
* Review comments
* Update src/transformers/models/idefics2/modeling_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Update inputs merger
* Merge weights in correct order
* Update convert script
* Update src/transformers/models/idefics2/processing_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Update template
* Model code examples (fix idefics too)
* More review comments
* Tidy up
* Update processing
* Fix attention mask preparation
* Update inputs_merger inputs
* Vectorize inputs_merger
* Update src/transformers/models/idefics2/__init__.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/idefics2/modeling_idefics2.py
* Review comments
* saying bye to the `qk_layer_norms`
* Simplify
* Update latents
* Remove erroneous readme changes
* Return images when applying chat template
* Fix bug - prompt images are for a single sample
* Update src/transformers/models/idefics2/modeling_idefics2.py
* image splitting
* fix test
* some more comment
* some comment
* Apply suggestions from code review
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/idefics2/image_processing_idefics2.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update processor
* Update model tests
* Update src/transformers/models/idefics2/processing_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Update src/transformers/models/idefics2/processing_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Don't add BOS in template
* Update src/transformers/models/idefics2/processing_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Remove index in examples
* Update tests to reflect #13
* Update src/transformers/models/idefics2/processing_idefics2.py
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* PR comment - consistent typing
* Update readme and model doc
* Update docs
* Update checkpoint references
* Update examples
* Fix and update tests
* Small addition
* Update tests - remove copied from as no ignore placement copy could be found
* Update example
* small fixes
* Update docs/source/en/model_doc/idefics2.md
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Update docs/source/en/model_doc/idefics2.md
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Update README.md
Co-authored-by: Victor SANH <victorsanh@gmail.com>
* Connector model as bridge
* Fix up
* Fix up
* Don't pass model inputs for generation kwargs update
* IDEFICS-2 -> Idefics2
* Remove config archive name
* IDEFICS-2 -> Idefics2
* Add back llava-next
* Update readmes
* Add requirements for processor tester
* Use custom convert_to_rgb to avoid possible BC
* Fix doc example
* Fix doc example
* Skip model doc tests - as model too large
* More doc example - account for image splitting
* Update src/transformers/image_transforms.py
* Fix config doctest
---------
Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
Co-authored-by: ArthurZucker <arthur.zucker@gmail.com>
Co-authored-by: Victor SANH <victorsanh@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-04-15 17:03:03 +01:00
Fanli Lin
667939a2d3
[tests] add the missing `require_torch_multi_gpu` flag (#30250)
add gpu flag
2024-04-15 16:30:52 +01:00
Yih-Dar
440bd3c3c0
update github actions packages' version to suppress warnings (#30249)
update
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-15 15:08:09 +02:00
LZR
766810153b
round epoch only in console (#30237)
2024-04-15 13:53:21 +01:00
Yih-Dar
fe2d20d275
Fix doctest more (for `docs/source/en`) (#30247)
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-15 14:10:59 +02:00
amyeroberts
ec344b560d
Separate out kwargs in processor (#30193)
* Separate out kwargs in processor
* Fix up
2024-04-15 12:36:50 +01:00
Sai-Suraj-27
fc8eda36c5
fix: Fixed `type annotation` for compatibility with python 3.8 (#30243)
* Fixed type annotation for compatibility with python 3.8
* Fixed unsorted imports.
2024-04-15 12:31:37 +01:00
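The commit above doesn't quote the offending annotation, but the usual Python 3.8 pitfall it alludes to is the 3.9+ built-in generic syntax (`list[str]`), which raises `TypeError` when the annotation is evaluated at function definition time on 3.8. A minimal illustrative sketch (function name and logic are invented for the example) of the 3.8-compatible `typing` spellings:

```python
from typing import List, Optional

# On Python 3.8, annotations like list[str] or "str | None" fail at
# runtime; typing.List[...] and typing.Optional[...] work on 3.8+.
def normalize(names: Optional[List[str]]) -> List[str]:
    """Lower-case a list of names, treating None as an empty list."""
    return [n.lower() for n in (names or [])]

print(normalize(["Alice", "BOB"]))  # ['alice', 'bob']
print(normalize(None))              # []
```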
Yih-Dar
b6b6daf2b7
Refactor doctest (#30210)
* fix
* update
* fix
* update
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-15 13:20:36 +02:00
Sai-Suraj-27
b3595cf02b
fix: Replaced deprecated `typing.Text` with `str` (#30230)
typing.Text is deprecated. Use str instead
2024-04-15 12:18:37 +01:00
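The replacement in this commit is purely cosmetic at runtime: `typing.Text` (deprecated since Python 3.11) is literally an alias of the built-in `str`, so annotations can use `str` directly. A small sketch (the `greet` function is illustrative, not from the commit):

```python
import typing

# typing.Text was a Python 2 compatibility alias; on Python 3 it is
# the built-in str itself.
assert typing.Text is str

def greet(name: str) -> str:  # instead of name: typing.Text
    return f"Hello, {name}!"

print(greet("world"))  # Hello, world!
```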
JINO ROHIT
f010786218
Set pad_token in run_glue_no_trainer.py #28534 (#30234)
2024-04-15 11:39:10 +01:00
Sai-Suraj-27
06b1192768
fix: Replace deprecated `assertEquals` with `assertEqual` (#30241)
Replace deprecated assertEquals with assertEqual.
2024-04-15 09:36:06 +01:00
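For context on the alias this commit removes: `assertEquals` (trailing "s") has been deprecated since Python 3.2 and was removed entirely in Python 3.12, so only `assertEqual` keeps tests working on current interpreters. A self-contained sketch (the test case is invented for illustration):

```python
import unittest

# assertEqual is the supported method; the assertEquals alias is gone
# as of Python 3.12.
class ExampleTest(unittest.TestCase):
    def test_sum(self):
        self.assertEqual(sum([1, 2, 3]), 6)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest)
)
print(result.wasSuccessful())  # True
```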
Xu Song
8fd2de933c
Add test for parse_json_file and change typing to os.PathLike (#30183)
* Add test for parse_json_file
* Change Path to PathLike
* Fix `Import block is un-sorted or un-formatted`
* revert parse_json_file
* Fix ruff format
* Add parse_json_file test
2024-04-15 09:34:36 +01:00
ulatekh
b109257f4f
Fixed config.json download to go to user-supplied cache directory (#30189)
* Fixed config.json download to go to user-supplied cache directory.
* Simplified implementation suggested by @amyeroberts
2024-04-12 18:03:49 +01:00
Yih-Dar
db7d155444
Fix/Update for doctest (#30216)
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-12 18:59:45 +02:00
Sergei Belousov
4f7b434acb
Update modeling_bark.py (#30221)
Change .view() to .reshape() to prevent errors on non-contiguous tensors
2024-04-12 17:03:38 +01:00
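The distinction this commit relies on: `Tensor.view()` requires contiguous memory and raises a `RuntimeError` on tensors produced by operations like transpose, while `Tensor.reshape()` returns a view when possible and silently copies otherwise. A minimal sketch (requires PyTorch; the tensors are illustrative, not Bark's actual activations):

```python
import torch

x = torch.arange(6).reshape(2, 3)
t = x.t()                      # transpose -> non-contiguous memory layout
assert not t.is_contiguous()

# .view() needs contiguous memory and fails here:
try:
    t.view(-1)
except RuntimeError as e:
    print("view failed:", e)

# .reshape() copies when it has to, so it always succeeds:
flat = t.reshape(-1)
print(flat.tolist())  # [0, 3, 1, 4, 2, 5]
```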
Yih-Dar
bf9a7ab932
Fix `RecurrentGemmaIntegrationTest.test_2b_sample` (#30222)
fix
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-12 17:53:25 +02:00
Pablo Montalvo
65657d5d8a
fix fuyu doctest (#30215)
* fix doctest
* fix example
* fix
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-12 17:45:15 +02:00
Michaël Defferrard
ac33aeeeee
fix typo (#30220)
2024-04-12 15:41:35 +01:00
Sai-Suraj-27
caa5c65db1
fix: Replaced deprecated `logger.warn` with `logger.warning` (#30197)
* Fixed deprecated logger.warn by using logger.warning
* Reformatted using ruff.
2024-04-12 10:21:24 +01:00
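Background for this rename: `Logger.warn` has been a deprecated alias since Python 3.3 and was removed in Python 3.13, so `Logger.warning` is the only future-proof spelling. A self-contained sketch (the capturing handler is invented for illustration):

```python
import logging

records = []

class Capture(logging.Handler):
    """Collect log records so the example can inspect them."""
    def emit(self, record):
        records.append(record)

logger = logging.getLogger("example")
logger.addHandler(Capture())

# logger.warning is the supported call; logger.warn is gone in 3.13.
logger.warning("disk space low: %d%% used", 91)

print(records[0].getMessage())  # disk space low: 91% used
```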
amyeroberts
c82b38a3e2
Fix pipeline logger.warning_once bug (#30195)
Fix warning bug
2024-04-12 09:34:45 +01:00
Younes Belkada
2c66600c3f
ENH: [`CI`] Add new workflow to run slow tests of important models on push main if they are modified (#29235)
* v1
* v1
* more changes
* more models
* add more markers
* switch to A10
* use cache
* Update .github/workflows/push-important-models.yml
* Update .github/workflows/push-important-models.yml
* Update modeling_llama.py
* test
* test
* another test
* test
* test
* attempt to fix
* fix
* try automatic tagging
* fix
* alternative approach for collecting
* fix
* fix
* fix
* test
* fix
* fix
* test
* revert some changes
* fix
* fix
* fix
* final push
* fix
* revert
* test new slack message
* oops
* Update send-slack.yml
* test
* test re-usable workflow in steps
* Update action.yml
* test
* another test
* test
* another test
* test
* another test
* another test (hopefully last one)
* attempt to fix
* allez
* removing comma
* test
* another test
* attempt
* test
* test
* test push
* test
* test
* another test
* test
* make it better
* fix commas
* valid json
* test
* another test
* test
* final push
* test
* final push
* more customizable messages
* test
* push
* oops
* another test
* another test
* missing indentation
* more tweaks
* more tweaks
* another test
* another test
* tests
* final push
* use global variables instead
* Update .github/workflows/push-important-models.yml
* Apply suggestions from code review
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* commit to test all models
* issue with arrays
* another test
* attempt to fix failing tests
* Update .github/workflows/push-important-models.yml
* add ssh
* Update .github/workflows/push-important-models.yml
* test
* test
* add install curl
* attempt to fix
* final fix
* test
* test
* test
* fix test
* another test
* add inherit secrets
* push
* revert unneeded changes
* revert
* add env variables
* add pip freeze
* revert change in gemma
* Update .github/workflows/push-important-models.yml
* fix mistral and mixtral
* add pdb
* fix mixtral test
* fix
* fix mistral ?
* add fix gemma
* fix mistral
* fix
* test
* another test
* fix
* fix
* fix mistral tests
* fix them again
* final fixes for mistral
* fix padding right
* fix whisper fa2
* fix
* fix
* fix gemma
* test
* fix llama
* fix
* fix
* fix llama gemma
* add class attribute
* fix CI
* clarify whisper
* compute_capability
* rename names in some comments
* Add # fmt: skip
* make style
* Update tests/models/mistral/test_modeling_mistral.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* update
* update
* change branch
* correct workflow
* modify file
* test
* works
* final test
* another fix
* install sudo
* final fix
* add `-y`
* set to `main`
* Update .github/actions/post-slack/action.yml
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* change title
* fixup
* add upload report
* fix
* revert to main
* add empty lines + add comment
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-12 10:01:28 +02:00
Steven Liu
0bd58f1ce0
Docs PR template (#30171)
remove maria :(
2024-04-11 09:23:55 -07:00
Sam Shleifer
edf0935dca
Falcon: make activation, ffn_hidden_size configurable (#30134)
* Falcon chg
* delta
* Docstring
* Fix import block
* doc
* fix and overwrite
2024-04-11 14:04:46 +01:00
NielsRogge
5569552cf8
Update output of SuperPointForKeypointDetection (#29809)
* Remove auto class
* Update ImagePointDescriptionOutput
* Update model outputs
* Rename output class
* Revert "Remove auto class"
This reverts commit ed4a8f549d.
* Address comments
2024-04-11 14:59:30 +02:00
NielsRogge
386ef34e7d
[Processor classes] Update docs (#29698)
Update docs
2024-04-11 14:24:38 +02:00
Sai-Suraj-27
e516d1b19d
fix: Fixed `ruff` configuration to avoid deprecated configuration warning (#30179)
* Fixed deprecated ruff configuration in pyproject.toml file
* reverted unnecessary changes.
* small fix.
2024-04-11 12:47:10 +01:00
hugehope
58b170cdb1
chore: remove repetitive words (#30174)
Signed-off-by: hugehope <cmm7@sina.cn>
2024-04-11 09:49:36 +01:00
Zach Mueller
e50be9a058
Guard XLA version imports (#30167)
2024-04-11 04:49:16 -04:00
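The guarded-import idea behind this commit can be sketched generically (illustrative only; the actual commit guards `torch_xla` imports inside transformers' utilities): probe for the optional dependency first and degrade gracefully when it is absent.

```python
import importlib.util

def is_available(pkg_name: str) -> bool:
    """Return True if the package can be imported, without importing it."""
    return importlib.util.find_spec(pkg_name) is not None

# Only touch the optional dependency when it is actually installed.
if is_available("torch_xla"):
    import torch_xla.core.xla_model as xm  # noqa: F401
    backend = "xla"
else:
    backend = "cpu"

print(backend)
```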
lewtun
fbdb978eb5
Fix Llava chat template examples (#30130)
2024-04-11 10:38:24 +02:00
Eduardo Pacheco
b752ad3019
Adding grounding dino (#26087)
* Fixed typo when converting weights to GroundingDINO vision backbone
* Final modifications on modeling
* Removed unnecessary class
* Fixed convert structure
* Added image processing
* make fixup partially completed
* Now text_backbone_config has its own class
* Modified convert script
* Removed unnecessary config attribute
* Added new function to generate sub sentence mask
* Renamed parameters with gamma in the name as it's currently not allowed
* Removed tokenization and image_processing scripts since we'll map from existing models
* Fixed some issues with configuration
* Just some modifications on conversion script
* Other modifications
* Copied deformable detr
* First commit
* Added bert to model
* Bert validated
* Created Text and Fusion layers for Encoder
* Adapted Encoder layer
* Fixed typos
* Adjusted Encoder
* Converted encoder to hf
* Modified Decoder Layer
* Modified main decoder class
* Removed copy comments
* Fixed forward from GroundingDINOModel and GroundingDINODecoder
* Added all necessary layers, configurations and forward logic up to GroundingDINOModel
* Added all layers to conversion
* Fixed outputs for GroundingDINOModel and GroundingDINOForObjectDetection
* Fixed mask input to encoders and fixed nn.MultiheadAttention batch first and attn output
* Fixed forward from GroundingDINOTextEnhancerLayer
* Fixed output bug with GroundingDINODeformableLayer
* Fixed bugs that prevent GroundingDINOForObjectDetection to run forward method
* Fixed attentions to be passed correctly
* Passing temperature arg when creating Sine position embedding
* Removed copy comments
* Added temperature argument for position embedding
* Fixed typo when converting weights to GroundingDINO vision backbone
* Final modifications on modeling
* Removed unnecessary class
* Fixed convert structure
* Added image processing
* make fixup partially completed
* Now text_backbone_config has its own class
* Modified convert script
* Removed unnecessary config attribute
* Added new function to generate sub sentence mask
* Renamed parameters with gamma in the name as it's currently not allowed
* Removed tokenization and image_processing scripts since we'll map from existing models
* Fixed some issues with configuration
* Just some modifications on conversion script
* Other modifications
* Fix style
* Improve fixup
* Improve conversion script
* Improve conversion script
* Add GroundingDINOProcessor
* More improvements
* Return token type ids
* something
* Fix more tests
* More improvements
* More cleanup
* More improvements
* Fixed tests, improved modeling and config
* More improvements and fixing tests
* Improved tests and modeling
* Improved tests and added image processor
* Improved tests inference
* More improvements
* More test improvements
* Fixed last test
* Improved docstrings and comments
* Fix style
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Better naming
* Better naming
* Added Copied statement
* Added Copied statement
* Moved param init from GroundingDINOBiMultiHeadAttention
* Better naming
* Fixing clamp style
* Better naming
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/configuration_grounding_dino.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Improving conversion script
* Improved config
* Improved naming
* Improved naming again
* Improved grounding-dino.md
* Moved grounding dino to multimodal
* Update src/transformers/models/grounding_dino/convert_grounding_dino_to_hf.py
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
* Fixed docstrings and style
* Fix docstrings
* Remove timm attributes
* Reorder imports
* More improvements
* Add Grounding DINO to pipeline
* Remove model from check_repo
* Added grounded post_process to GroundingDINOProcessor
* Fixed style
* Fixed GroundingDINOTextPrenetConfig docstrings
* Aligned inputs.keys() when both image and text are passed with model_input_names
* Added tests for GroundingDINOImageProcessor and GroundingDINOProcessor
* Testing post_process_grounded_object_detection from GroundingDINOProcessor at test_inference_object_detection_head
* Fixed order
* Marked test with require_torch
* Temporarily changed repo_id
* More improvements
* Fix style
* Final improvements
* Improve annotators
* Fix style
* Add is_torch_available
* Remove type hints
* vocab_tokens as one liner
* Removed print statements
* Renamed GroundingDINOTextPrenetConfig to GroundingDINOTextConfig
* remove unnecessary comments
* Removed unnecessary tests on conversion script
* Renamed GroundingDINO to camel case GroundingDino
* Fixed GroundingDinoProcessor docstrings
* loading MSDA kernels in the modeling file
* Fix copies
* Replace nn.multiheadattention
* Replace nn.multiheadattention
* Fixed inputs for GroundingDinoMultiheadAttention & order of modules
* Fixed processing to avoid messing with inputs
* Added more tips for GroundingDino
* Make style
* Changing name to align with SAM
* Replace final nn.multiheadattention
* Fix model tests
* Update year, remove GenerationTesterMixin
* Address comments
* Address more comments
* Rename TextPrenet to TextModel
* Rename hidden_states
* Address more comments
* Address more comments
* Address comment
* Address more comments
* Address merge
* Address comment
* Address comment
* Address comment
* Make style
* Added layer norm eps to layer norms
* Address more comments
* More fixes
* Fixed equivalence
* Make fixup
* Remove print statements
* Address comments
* Address comments
* Address comments
* Address comments
* Address comments
* Address comments
* Add comment
* Address comment
* Remove overwriting of test
* Fix bbox_embed
* Improve decoder_bbox_embed_share
* Simplify outputs
* Updated post_process_grounded_object_detection
* Renamed sources to feature_maps
* Improved tests for Grounding Dino ImageProcessor and Processor
* Fixed test requirements and imports
* Fixed image_processing
* Fixed processor tests
* Fixed imports for image processing tests
* Fix copies
* Updated modeling
* Fix style
* Moved functions to correct position
* Fixed copy issues
* Update src/transformers/models/deformable_detr/modeling_deformable_detr.py
Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
* Keeping consistency custom cuda kernels for MSDA
* Make GroundingDinoProcessor logic clearer
* Updated Grounding DINO checkpoints
* Changed tests to correct structure
* Updated gpu-cpu equivalence test
* fix copies
* Update src/transformers/models/grounding_dino/processing_grounding_dino.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/processing_grounding_dino.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/modeling_grounding_dino.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update src/transformers/models/grounding_dino/configuration_grounding_dino.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Fixed errors and style
* Fix copies
* Removed inheritance from PreTrainedModel from GroundingDinoTextModel
* Fixed GroundingDinoTextModel
* Fixed type of default backbone config
* Fixed missing methods for GroundingDinoTextModel and Added timm support for GroundingDinoConvEncoder
* Addressed comments
* Addressed batched image processing tests
* Addressed zero shot test comment
* Addressed tip comment
* Removed GroundingDinoTextModel from check_repo
* Removed inplace masking
* Addressed comments
* Addressed comments
* Addressed comments
* Fix copies
* Fixing timm test
* Fixed batching equivalence test
* Update docs/source/en/model_doc/grounding-dino.md
Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
* Update docs/source/en/model_doc/grounding-dino.md
Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
* Update docs/source/en/model_doc/grounding-dino.md
Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
* Addressed more comments
* Added a new comment
* Reduced image size
* Addressed more comments
* Nits
* Nits
* Changed the way text_config is initialized
* Update src/transformers/models/grounding_dino/processing_grounding_dino.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---------
Co-authored-by: Niels <niels.rogge1@gmail.com>
Co-authored-by: Rafael Padilla <31217453+rafaelpadilla@users.noreply.github.com>
Co-authored-by: NielsRogge <48327001+NielsRogge@users.noreply.github.com>
Co-authored-by: Eduardo Pacheco <eduardo.pacheco@limehome.com>
Co-authored-by: Sangbum Daniel Choi <34004152+SangbumChoi@users.noreply.github.com>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Tianqi Xu <40522713+dandansamax@users.noreply.github.com>
2024-04-11 08:32:16 +01:00
DamonGuzman
a5e5c92aea
Fixed typo in comments/documentation for Pipelines documentation (#30170)
Update feature_extraction.py - Fixed typo in comments/documentation
2024-04-10 14:52:51 -07:00
Matt
d71f5b3ea8
Update config class check in auto factory (#29854)
2024-04-10 17:24:32 +01:00
Younes Belkada
f569172fc2
FIX / bnb: fix torch compatibility issue with `itemize` (#30162)
* fix torch compatibility issues
* fix
* Update src/transformers/modeling_utils.py
2024-04-10 18:12:43 +02:00
Yih-Dar
4f7a9f9c5c
Fix natten install in docker (#30161)
* fix dinat in docker
* update
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-04-10 17:45:49 +02:00
Etienne.bfx
3280b13260
Fixing a bug when MLflow tries to log a torch.tensor (#29932)
* Update integration_utils.py
Add the case where a tensor with one element is logged with MLflow
* Update src/transformers/integrations/integration_utils.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update integration_utils.py add a whitespace
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-10 16:07:58 +01:00
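The idea behind this fix can be sketched without PyTorch (a hypothetical helper, not the actual `integration_utils` code): MLflow's `log_metric` accepts plain Python numbers, so a one-element tensor has to be unwrapped with `.item()` before logging.

```python
def to_loggable(value):
    """Unwrap a 1-element tensor-like object into a Python scalar."""
    if hasattr(value, "item") and callable(value.item):
        try:
            return value.item()  # 1-element tensor -> Python scalar
        except (ValueError, RuntimeError):
            return None          # multi-element tensors are not loggable
    return value

class FakeScalarTensor:
    """Stand-in for a one-element torch.Tensor, for illustration."""
    def __init__(self, v):
        self._v = v
    def item(self):
        return self._v

print(to_loggable(FakeScalarTensor(0.37)))  # 0.37
print(to_loggable(2.5))                     # 2.5
```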
Arthur
0fe44059ae
Add recurrent gemma (#30143)
* Fork.
* RecurrentGemma initial commit.
* Updating __init__.py.
* Minor modification to how we initialize the cache.
Changing how the config specifies the architecture.
* Reformat code to 4 spaces.
Fixed a few typos.
* Fixed the forward pass.
Still unclear on the cache?
* Fixed the RecurrentGemmaForCausalLM
* Minor comment that we might not need attention_mask and output_attention arguments.
* Now cache should work as well.
* Adding a temporary example to check whether the model generation works.
* Adding the tests and updating imports.
* Adding the example file missing in the previous commit.
* First working example.
* Removing .gitignore and reverting parts of __init__.
* Re-add .gitignore.
* Addressing comments for configuration.
* Move mask creation to `_prepare_inputs_for_generation`.
* First try at integration tests:
1. AttributeError: 'GriffinCausalLMOutput' object has no attribute 'attentions'.
2. `cache_position` not passed
* Transferring between machines.
* Running normal tests.
* Minor fix.
* More fixes.
* Addressing more comments.
* Minor fixes.
* first stab at cleanup
* more refactoring
* fix copies and else
* renaming and get init to work
* fix causal mask creation
* update
* nit
* fix a hell lot of things
* updates
* update conversion script
* make all keys importable
* nits
* add auto mappings
* properly convert ffw_up and down
* add scaling
* fix generations
* for recurrent dtype
* update
* fix going beyond window
* fixup
* add missing files
* current updates to remove last einops
* finish modeling refactor
* TADA
* fix compile
* fix most failing tests
* update tests
* refactor and update
* update
* nits, fixup and update tests
* more fixup
* nits
* fix imports
* test format
* fixups
* nits
* tuple typing
* fix code quality
* add model card
* fix doc
* skip most generation tests
* nits
* style
* doc fixes
* fix pr and check_copies?
* last nit
* oupsy
* Apply suggestions from code review
Co-authored-by: Lysandre Debut <hi@lysand.re>
* update
* Update src/transformers/models/recurrent_gemma/convert_recurrent_gemma_to_hf.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update tests/models/recurrent_gemma/test_modeling_recurrent_gemma.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* update based on review
* doc nit
* fix quality
* quality
* fix slow test model path
* update default dtype
* ignore attributes that can be safely ignored in check config attributes
* 0lallalala come on
* save nit
* style
* remove to dict update
* make sure we can also run in float16
* style
---------
Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
Co-authored-by: Aleksandar Botev <botev@google.com>
Co-authored-by: Leonard Berrada <lberrada@users.noreply.github.com>
Co-authored-by: anushanf <anushanf@google.com>
Co-authored-by: botev <botevmg@gmail.com>
Co-authored-by: Lysandre Debut <hi@lysand.re>
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
2024-04-10 16:59:13 +02:00