Commit Graph

13776 Commits

Author SHA1 Message Date
Sanchit Gandhi 77cb2ab792
⚠️ [CLAP] Fix dtype of logit scales in init (#25682)
[CLAP] Fix dtype of logit scales
2023-08-23 13:17:37 +01:00
Nora Belrose 2cf87e2bbb
Prevent Dynamo graph fragmentation in GPTNeoX with torch.baddbmm fix (#24941)
* Pass a Python scalar for alpha in torch.baddbmm

* fixup

---------

Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
2023-08-23 14:07:46 +02:00
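A minimal sketch of the fix above, with illustrative shapes and norm factor: Dynamo can keep the attention matmul in a single graph when `alpha` is a plain Python scalar instead of a tensor.

```python
import torch

bh, q_len, k_len, head_dim = 8, 16, 16, 64  # batch*heads and sequence sizes (illustrative)
query = torch.randn(bh, q_len, head_dim)
key = torch.randn(bh, k_len, head_dim)
norm_factor = head_dim ** 0.5

# A tensor-valued alpha (e.g. torch.tensor(1.0) / norm_factor) forces a
# Dynamo graph break; a Python float keeps the whole step in one graph.
attn_scores = torch.baddbmm(
    torch.zeros(bh, q_len, k_len),  # additive input term
    query,
    key.transpose(1, 2),
    beta=1.0,
    alpha=1.0 / norm_factor,        # Python scalar, not a tensor
)
```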
Yih-Dar b413e0610b
Remove `utils/documentation_tests.txt` (#25680)
* fix

* fix

* fix

* fix

* fix

* fix

* Apply suggestions from code review

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2023-08-23 11:14:45 +02:00
Yih-Dar 3d1edb6c5d
fix wrong path in some doc (#25658)
* update

* check

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-23 08:34:30 +02:00
Arthur db58722084
[`GPTNeo`] Add input_embeds functionality to gpt_neo Causal LM (#25664)
nit
2023-08-23 07:49:19 +02:00
Arthur 51794bf21e
[`SPM`] Patch `spm` Llama and T5 (#25656)
* hot fix

* only encode with string prefix if it starts with the prefix

* styling

* add a new test

* fixup
2023-08-23 07:16:43 +02:00
Wonhyeong Seo 57943630e2
Add Llama2 resources (#25531)
* docs: feat: model resources for llama2

Co-authored-by: Woojun Jung <hello_984@naver.com>

* fix: add description for dpo and rearrange posts

* docs: feat: add llama2 notebook resources

* style: one liners for each resource

Co-Authored-By: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-Authored-By: Kihoon Son <75935546+kihoon71@users.noreply.github.com>

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Fix typo

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

---------

Co-authored-by: Woojun Jung <hello_984@naver.com>
Co-authored-by: Woojun Jung <46880056+jungnerd@users.noreply.github.com>
Co-authored-by: Kihoon Son <75935546+kihoon71@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-22 17:14:54 -07:00
Yih-Dar 40a0cabd93
Update doc toctree (#25661)
* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-22 22:58:55 +02:00
Gabriel Asher 977b2f05d5
Add input_embeds functionality to gpt_neo Causal LM (#25659)
* Updated gpt_neo causalLM to support using input embeddings for generation

* added indentation

* Did make fixup
2023-08-22 20:28:38 +02:00
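A hedged usage sketch of the capability added here: generating from precomputed embeddings rather than token ids (the checkpoint name is just an example).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

input_ids = tokenizer("Hello, world", return_tensors="pt").input_ids
# Look up the embeddings manually and hand them to generate().
inputs_embeds = model.get_input_embeddings()(input_ids)
output_ids = model.generate(inputs_embeds=inputs_embeds, max_new_tokens=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```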
AleksanderWWW 908f853688
stringify config (#25637)
* stringify config

* apply code formatting
2023-08-22 17:21:01 +02:00
Alex McKinney 5eeaef921f
Adds `TRANSFORMERS_TEST_BACKEND` (#25655)
* Adds `TRANSFORMERS_TEST_BACKEND`
Allows specifying an arbitrary additional import to run after the first `import torch`.
This is useful for custom backends that require additional imports to trigger backend registration with upstream torch.
See https://github.com/pytorch/benchmark/pull/1805 for a similar change in `torchbench`.

* Update src/transformers/testing_utils.py

Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>

* Adds real backend example to documentation

---------

Co-authored-by: Yih-Dar <2521628+ydshieh@users.noreply.github.com>
2023-08-22 17:08:13 +02:00
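A minimal sketch of the mechanism, assuming the real logic lives in `src/transformers/testing_utils.py` as the PR states: the variable names a module imported after `torch`, so an out-of-tree backend can register itself.

```python
import importlib
import os

import torch  # custom backends hook into torch, so it must be imported first

backend = os.environ.get("TRANSFORMERS_TEST_BACKEND")
if backend is not None:
    # e.g. TRANSFORMERS_TEST_BACKEND=torch_npu imports the NPU backend module
    importlib.import_module(backend)
```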
Rafael Padilla fd56f7f081
removing unnecessary extra parameter (#25643) 2023-08-22 10:10:30 -04:00
Arthur e20fab0bbe
Fix bloom add prefix space (#25652)
* properly support Sequence of pretokenizers

* actual fix

* make sure the fix works. Tests are not working for sure!

* hacky way

* add TODO

* update

* add a todo

* nits

* rename test

* nits

* rename test
2023-08-22 14:50:12 +02:00
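For context, the `tokenizers` backend can nest pre-tokenizers inside a `Sequence`, which a naive `add_prefix_space` toggle does not reach; a sketch of such a composite pre-tokenizer (components chosen for illustration):

```python
from tokenizers import pre_tokenizers

# A Sequence wraps several pre-tokenizers; the ByteLevel step inside it is
# where add_prefix_space lives, so it must be found by traversing the Sequence.
composite = pre_tokenizers.Sequence([
    pre_tokenizers.ByteLevel(add_prefix_space=False),
    pre_tokenizers.Digits(individual_digits=True),
])
```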
Matt 62396cff46
TF 2.14 compatibility (#25630)
* Update the TF pin and see if anything breaks

* make fixup

* make fixup

* make fixup
2023-08-22 13:13:38 +01:00
Sylvain Gugger 3629190689
Put IDEFICS in the right section of the doc (#25650) 2023-08-22 10:39:10 +02:00
Sylvain Gugger edb28722c2
Pass the proper token to PEFT integration in auto classes (#25649) 2023-08-22 10:13:56 +02:00
Christopher Akiki 88e51ba306
[MINOR:TYPO] (#25646)
[MINOR:TYPO] Update tokenization_auto.py
2023-08-22 09:54:44 +02:00
Blake Wyatt 6a314ea7cd
[DOCS] MusicGen Docs Update (#25510)
* docs: note token limitations for MusicGen

* docs: note token limitations for MusicGen

* docs: fix token count with token limitations for MusicGen
2023-08-22 08:22:45 +02:00
Tanay Mehta 182b83749a
Add Number Normalisation for SpeechT5 (#25447)
* add: NumberNormalizer works for integers, floats, common currencies, negative numbers and percentages

* fix: renamed number normalizer class and added normalization to SpeechT5Processor

* fix: restyled with black and ruff, should pass code quality tests

* fix: moved normalization to tokenizer and other small changes to normalizer

* add: test for normalization and changed the existing full tokenizer test

* fix: tokenization tests now pass, made changes to existing tokenization where normalization is covered; added normalize arg to func signature

* fix: changed default normalize setting to False, modified the tests a bit

* fix: added support for comma separated numbers, tokenization on the fly with kwargs and normalizer getter setter funcs
2023-08-22 08:12:57 +02:00
Joe Mifsud 58c36bea74
Support specifying revision in push_to_hub (#25578)
Support revision in push_to_hub
2023-08-22 07:55:35 +02:00
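Usage sketch (the repo id is hypothetical): `revision` is forwarded to the Hub client so the upload can target a branch other than `main`.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
# Push the weights to an experimental branch instead of `main`.
model.push_to_hub("my-user/my-model", revision="experiment")
```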
Susnato Dhar 450a181d8b
Add Pop2Piano (#21785)
* init commit

* config updated also some modeling

* Processor and Model config combined

* extraction pipeline (up to before spectrogram & mel_conditioner) added but not properly tested

* model loading successful!

* feature extractor done!

* FE can now be called from HF

* postprocessing added in fe file

* same as prev commit

* Pop2PianoConfig doc done

* cfg docs slightly changed

* fe docs done

* batched

* batched working!

* temp

* v1

* checking

* trying to go with generate

* with generate and model tests passed

* before rebasing

* .

* tests done docs done remaining others & nits

* nits

* LogMelSpectrogram shifted to FeatureExtractor

* is_tf removed from pop2piano/init

* import solved

* tokenization tests added

* minor fixes regarding modeling_pop2piano

* tokenizer changed to only return midi_object and other changes

* Updated paper abstract(Camera-ready version) (#2)

* more comments and nits

* ruff changes

* code quality fix

* sg comments

* t5 change added and rebased

* comments except batching

* batching done

* comments

* small doc fix

* example removed from modeling

* ckpt

* forward is compatible with fe and generation done

* comments

* comments

* code-quality fix(maybe)

* ckpts changed

* doc file changed from mdx to md

* test fixes

* tokenizer test fix

* changes

* nits done main changes remaining

* code modified

* Pop2PianoProcessor added with tests

* other comments

* added Pop2PianoProcessor to dummy_objects

* added require_onnx to modeling file

* changes

* update .md file

* remove extra line in index.md

* back to the main index

* added pop2piano to index

* Added tokenizer.__call__ with valid args and batch_decode and aligned the processor part too

* changes

* added return types to 2 tokenizer methods

* the PR build test might work now

* added backends

* PR build fix

* vocab added

* comments

* refactored vocab into 1 file

* added conversion script

* comments

* essentia version changed in .md

* comments

* more tokenizer tests added

* minor fix

* tests extended for outputs acc check

* small fix

---------

Co-authored-by: Jongho Choi <sweetcocoa@snu.ac.kr>
2023-08-21 16:35:00 +01:00
mchau 6f041fcbb8
fix documentation for CustomTrainer (#25635)
fix doc
2023-08-21 17:23:17 +02:00
Rafael Padilla 8608bf2049
🚨🚨🚨 changing default threshold and applying threshold before the rescale (#25608)
changing position of score threshold and its default value
2023-08-21 10:20:05 -04:00
Yih-Dar 2df24228d6
Skip doctest for some recent files (#25631)
update

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-21 15:20:44 +02:00
Arthur 2582bbde2e
fix ACT_FN (#25627) 2023-08-21 14:33:43 +02:00
Yoach Lacombe 2c1bcbf5ed
correct TTS pipeline docstrings snippet (#25587)
* correct TTS pipeline docstrings snippet

* add text_to_audio.py pipelines to documentation tests
2023-08-21 13:40:04 +02:00
Pranith Pashikanti e769ca3d28
Added paper links in logitprocess.py (#25482) 2023-08-21 12:09:34 +01:00
Sylvain Gugger 5c67682b16
v4.33.0.dev0 2023-08-21 07:07:04 -04:00
Francisco Kurucz 2f8acfea1c
Fix test_modeling_mpt typo in model id (#25606)
Fix model id in get_large_model_config in the file test_modeling_mpt
2023-08-21 11:11:21 +02:00
Yih-Dar f09db47a71
Run doctest for new files (#25588)
fix

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-21 11:08:38 +02:00
Younes Belkada 9627c3da4a
Fix PEFT integration failures on nightly CI (#25624)
fix PEFT integration failures
2023-08-21 10:04:44 +02:00
Sylvain Gugger f92cc7034a
Ignore all exceptions from signal in dynamic code (#25623) 2023-08-21 09:01:11 +02:00
ydshieh 1982dd3b15 Hotfix 2023-08-19 11:15:38 +02:00
Marc Sun 6b82d936d4
reattach hooks when using `resize_token_embeddings` (#25596)
* reattach hooks

* fix style
2023-08-18 17:30:29 -04:00
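Context for the fix, as a sketch: `resize_token_embeddings` swaps out the embedding modules, so any `accelerate` hooks attached to the old modules have to be re-registered afterwards. A typical call site looks like this (checkpoint name illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

tokenizer.add_tokens(["<new_tok>"])
# Replaces the input (and tied output) embeddings with resized copies;
# the fix re-attaches the dispatch hooks to these new modules.
model.resize_token_embeddings(len(tokenizer))
```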
Stas Bekman 6c811a322f
new model: IDEFICS via HuggingFaceM4 (#24796)
* rename

* restore

* mappings

* unedited tests+docs

* docs

* fixes

* fix auto-sync breakage

* cleanup

* wip

* wip

* add fetch_images

* remove einops dependency

* update

* fix

* fix

* fix

* fix

* fix

* re-add

* add batching

* rework

* fix

* improve

* add Leo as I am extending his work

* cleanup

* fix

* cleanup

* slow-test

* fix

* fix

* fixes

* deal with warning

* rename modified llama classes

* rework fetch_images

* alternative implementation

* cleanup

* strict version

* cleanup

* [`IDEFICS`] Fix idefics ci (#25056)

* Fix IDEFICS CI

* fix test file

* fixup

* some changes to make tests pass

* fix

* fixup

* Update src/transformers/models/idefics/configuration_idefics.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* remove compat checks

* style

* explain that Idefics is not for training from scratch

* require pt>=2.0

* fix idefics vision config (#25092)

* fix idefics vision config

* fixup

* clean

* Update src/transformers/models/idefics/configuration_idefics.py

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* cleanup

* style

* cleanup

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* upcase

* sequence of images

* handle the case with no images

* Update src/transformers/image_processing_utils.py

Co-authored-by: Victor SANH <victorsanh@gmail.com>

* support pure lm take 2

* support tokenizer options

* parameterize num_channels

* fix upcase

* s|IdeficsForCausalLM|IdeficsForVisionText2Text|g

* manual to one line

* addressing review

* unbreak

* remove clip dependency

* fix test

* consistency

* PIL import

* Idefics prefix

* Idefics prefix

* hack to make tests work

* style

* fix

* fix

* revert

* try/finally

* cleanup

* clean up

* move

* [`IDEFICS`] Fix idefics config refactor (#25149)

* refactor config

* nuke init weights

* more refactor

* oops

* remove visual question answering pipeline support

* Update src/transformers/models/idefics/clip.py

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>

* Update src/transformers/models/idefics/modeling_idefics.py

* cleanup

* mv clip.py vision.py

* tidyup

---------

Co-authored-by: Stas Bekman <stas00@users.noreply.github.com>
Co-authored-by: Stas Bekman <stas@stason.org>

* fix

* license

* condition on pt

* fix

* style

* fix

* rm torchvision dependency, allow custom transforms

* address review

* rework device arg

* add_eos_token

* s/transforms/transform/

* fix top level imports

* fix return value

* cleanup

* cleanup

* fix

* style

* license

* license

* Update src/transformers/models/idefics/image_processing_idefics.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add a wrapper to freeze vision layers

* tidyup

* use the correct std/mean settings

* parameterize values from config

* add tests/models/idefics/test_image_processing_idefics.py

* add test_processor_idefics.py

* cleanup

* cleanups

* fix

* fix

* move to the right group

* style

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* add perceiver config

* reset

* missing arg docs

* Apply suggestions from code review

Co-authored-by: Leo Tronchon <leo.tronchon@gmail.com>

* address review comments

* inject automatic end of utterance tokens (#25218)

* inject automatic end of utterance tokens

* fix

* fix

* fix

* rework to not use the config

* not end_of_utterance_token at the end

* Update src/transformers/models/idefics/processing_idefics.py

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address review

* Apply suggestions from code review

Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>

* Update src/transformers/image_processing_utils.py

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>

* [`Idefics`] add image_embeddings option in generate-related methods (#25442)

* add image_embeddings option in generate-related methods

* style

* rename image_embeddings and allow perceiver embeddings precomputation

* compute embeddings within generate

* make is_encoder_decoder=True the default in config

* nested if else fix

* better triple check

* switch if elif order for pixel values / img embeds

* update model_kwargs perceiver only at the end

* use _prepare_model_inputs instead of encoder_decoder logic

* fix comment typo

* fix config default for is_encoder_decoder

* style

* add typehints

* precompute in forward

* doc builder

* style

* pop instead of get image hidden states

* Trigger CI

* Update src/transformers/models/idefics/modeling_idefics.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* Update src/transformers/models/idefics/modeling_idefics.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix * + indentation + style

* simplify a bit the use_resampler logic using comments

* update docstrings

* Trigger CI

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fix rebase changes

* unbreak #25237 - to be fixed in follow-up PRs

* is_composition = False

* no longer needed

---------

Co-authored-by: leot13 <leo.tronchon@gmail.com>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Victor SANH <victorsanh@gmail.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2023-08-18 14:12:28 -07:00
Hyeonseo Yun 4d64157ed3
🌐 [i18n-KO] Translated `perf_train_tpu_tf.md` to Korean (#25433)
* docs: ko: perf_train_tpu_tf.md

* feat: nmt and manual edit perf_train_tpu_tf.md

* fix: resolve suggestions

Co-authored-by: Sangam Lee <74291999+augustinLib@users.noreply.github.com>
Co-authored-by: Kim haewon <ehdvkf02@naver.com>
Co-authored-by: Kihoon Son <75935546+kihoon71@users.noreply.github.com>

---------

Co-authored-by: Sangam Lee <74291999+augustinLib@users.noreply.github.com>
Co-authored-by: Kim haewon <ehdvkf02@naver.com>
Co-authored-by: Kihoon Son <75935546+kihoon71@users.noreply.github.com>
2023-08-18 23:08:34 +02:00
Omar Sanseviero 6f4424bb08
Make TTS automodels importable (#25595)
* Add auto model for spectrogram/waveform

* Add doc and install

* Add dummy objects

* Did I miss anything?
2023-08-18 22:01:35 +02:00
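A usage sketch of the two auto classes made importable here (checkpoint names are examples, not prescriptions):

```python
from transformers import AutoModelForTextToSpectrogram, AutoModelForTextToWaveform

# SpeechT5 generates a spectrogram that a vocoder turns into audio...
spec_model = AutoModelForTextToSpectrogram.from_pretrained("microsoft/speecht5_tts")
# ...while Bark produces a waveform directly.
wave_model = AutoModelForTextToWaveform.from_pretrained("suno/bark-small")
```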
Younes Belkada faed2ca46f
[`PEFT`] Peft integration alternative design (#25077)
* a draft version

* v2 integration

* fix

* make it more generic and works for IA3

* add set adapter and multiple adapters support

* fixup

* adapt a bit

* oops

* oops

* oops

* adapt more

* fix

* add more refactor

* now works with model class

* change it to an instance method as it causes issues with `jit`.

* add CR

* change method name

* add `add_adapter` method

* clean up

* Update src/transformers/adapters/peft_mixin.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* add moe utils

* fixup

* Update src/transformers/adapters/peft_mixin.py

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>

* adapt

* oops

* fixup

* add is_peft_available

* remove `requires_backend`

* trainer compatibility

* fixup + docstring

* more details

* trigger CI

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* Update src/transformers/modeling_utils.py

* fixup + is_main_process

* added `save_peft_format` in save_pretrained

* up

* fix nits here and there

* nits here and there.

* docs

* revert `encoding="utf-8"`

* comment

* added slow tests before the PEFT release.

* fixup and nits

* let's be in the safe zone

* added more comments

* v1 docs

* add remaining docs

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* move to `lib_integrations`

* fixup

* this time fixup

* Apply suggestions from code review

Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>

* address final comments

* refactor to use `token`

* add PEFT to DockerFile for slow tests.

* added pipeline support.

---------

Co-authored-by: Patrick von Platen <patrick.v.platen@gmail.com>
Co-authored-by: Sylvain Gugger <35901082+sgugger@users.noreply.github.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-18 19:08:03 +02:00
Arthur ef1534252f
[`TokenizerFast`] Fix setting prefix space in __init__ (#25563)
* properly support Sequence of pretokenizers

* actual fix

* make sure the fix works. Tests are not working for sure!

* hacky way

* add TODO

* update

* add a todo
2023-08-18 18:09:50 +02:00
Sourab Mangrulkar 636acc75b0
fix z3 init when using accelerate launcher (#25589) 2023-08-18 19:27:17 +05:30
Kashif Rasul 8d2f953f4a
[Time series Informer] fix dtype of cumsum (#25431)
* fix dtype of cumsum

* add comment
2023-08-18 14:27:16 +02:00
Arthur bc3e20dcf0
[`Llama`] remove prompt and fix prefix finetuning (#25565)
* nit

* update

* make sure use_default_system_prompt is saved

* update checkpointing

* consistency

* use_default_system_prompt for test
2023-08-18 13:39:23 +02:00
Arthur 30b3c46ff5
[`split_special_tokens`] Add support for `split_special_tokens` argument to encode (#25081)
* draft changes

* update and add tests

* styling for now

* move test

* path to usable model

* update test

* small update

* update bertbased tokenizers

* don't use kwargs for _tokenize

* don't use kwargs for _tokenize

* fix copies

* update

* update test for special tokenizers

* fixup

* skip two tests

* remove pdb breakpoint()

* wowo

* rewrite custom tests

* nits

* revert chang in target keys

* fix markup lm

* update documentation of the argument
2023-08-18 13:26:27 +02:00
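A sketch of the new argument (the exact sub-tokens depend on the vocabulary): by default special tokens are kept whole, while `split_special_tokens=True` tokenizes them like ordinary text.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tok.tokenize("[CLS] hello"))  # ['[CLS]', 'hello']

tok_split = AutoTokenizer.from_pretrained(
    "bert-base-uncased", split_special_tokens=True
)
print(tok_split.tokenize("[CLS] hello"))  # e.g. ['[', 'cl', '##s', ']', 'hello']
```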
Alex McKinney 9d7afd2536
Replaces calls to `.cuda` with `.to(torch_device)` in tests (#25571)
* Replaces calls to `.cuda` with `.to(torch_device)` in tests
`torch.Tensor.cuda()` is a pre-0.4 solution to changing a tensor's device. It is recommended to prefer `.to(...)` for greater flexibility and error handling. Furthermore, this makes it more consistent with other tests (that tend to use `.to(torch_device)`) and ensures the correct device backend is used (if `torch_device` is neither `cpu` nor `cuda`).

* addressing review comments

* more formatting changes in Bloom test

* `make style`

* Update tests/models/bloom/test_modeling_bloom.py

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>

* fixes style failures

---------

Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2023-08-18 12:40:40 +02:00
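The pattern change in miniature (in the test suite `torch_device` comes from `transformers.testing_utils`; it is stubbed here):

```python
import torch

torch_device = "cuda" if torch.cuda.is_available() else "cpu"  # stand-in

x = torch.ones(2, 2)
# Before: hard-codes CUDA and fails on any other backend.
# x = x.cuda()
# After: honors whichever device the test run selected.
x = x.to(torch_device)
```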
Martin Malmsten c45aab7535
Added missing parenthesis in call to is_fsdp_enabled (#25585)
Call the function is_fsdp_enabled() instead of checking whether the function object is not None
2023-08-18 10:32:46 +02:00
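The bug class in miniature (the real `is_fsdp_enabled` lives in `transformers.modeling_utils`; the one below is a stand-in):

```python
def is_fsdp_enabled() -> bool:
    return False  # stand-in for the real environment check

# Buggy: a function object is always truthy, so this branch always runs.
if is_fsdp_enabled:
    print("always reached: parentheses missing")

# Fixed: call the function and branch on its return value.
if is_fsdp_enabled():
    print("reached only when FSDP is actually enabled")
```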
Younes Belkada 940d1a76b0
[`Docs` / `BetterTransformer` ] Added more details about flash attention + SDPA (#25265)
* added more details about flash attention

* correct and add more details

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* few modifs

* more details

* up

* Apply suggestions from code review

Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>

* adapt from suggestion

* Apply suggestions from code review

Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>

* trigger CI

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* fix nits and copies

* add new section

---------

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
Co-authored-by: fxmarty <9808326+fxmarty@users.noreply.github.com>
2023-08-18 10:32:28 +02:00
Kihoon Son 08e32519f8
Suggestions on Pipeline_webserver (#25570)
* Suggestions on Pipeline_webserver

docs: reorder the warning tip for pseudo-code

Co-Authored-By: Wonhyeong Seo <wonhseo@kakao.com>

* Apply suggestions from code review

Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>

* Update docs/source/ko/pipeline_webserver.md

Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>

---------

Co-authored-by: Wonhyeong Seo <wonhseo@kakao.com>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
2023-08-18 10:17:44 +02:00
Amélie T. Reymond 659ab0423e
Fix typo in example code (#25583)
`lang_code_to_id("en_XX")` => `lang_code_to_id["en_XX"]`

lang_code_to_id is a dict
2023-08-18 07:58:59 +02:00
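Why the brackets matter (the id value below is illustrative):

```python
lang_code_to_id = {"en_XX": 250004}  # a plain dict attribute on the tokenizer

bos_id = lang_code_to_id["en_XX"]    # correct: dict indexing
# lang_code_to_id("en_XX")           # TypeError: 'dict' object is not callable
```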
Marc Sun 4a27c13f1e
add warning for 8bit optimizers (#25575)
* add warning for 8bit optimizers

* protect import
2023-08-17 14:48:58 -04:00
Yih-Dar 427adc898a
Skip `test_contrastive_generate` for `TFXLNet` (#25574)
* fix

* fix

---------

Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2023-08-17 18:56:34 +02:00