Arthur
673440d073
update ruff version (#30932)
* update ruff version
* fix research projects
* Empty
* Fix errors
---------
Co-authored-by: Lysandre <lysandre@huggingface.co>
2024-05-22 06:40:15 +02:00
Mohit Sharma
7a4792e6b3
CI: AMD MI300 tests fix (#30797)
* add fix
* update import
* updated dicts and comments
* remove prints
* Update testing_utils.py
2024-05-21 12:46:07 +01:00
Joseph Enguehard
07bf2dff78
Add TokenClassification for Mistral, Mixtral and Qwen2 (#29878)
* Add MistralForTokenClassification
* Add tests and docs
* Add token classification for Mixtral and Qwen2
* Save llama for token classification draft
* Add token classification support for Llama, Gemma, Persimmon, StableLm and StarCoder2
* Formatting
* Add token classification support for Qwen2Moe model
* Add dropout layer to each ForTokenClassification model
* Add copied from in tests
* Update src/transformers/models/llama/modeling_llama.py
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
* Propagate suggested changes
* Style
---------
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
2024-05-20 10:06:57 +02:00
Yih-Dar
1b3dba9417
Make `Gemma` work with `torch.compile` (#30775)
* fix
* [run-slow] gemma
* add test
* add `test_compile_static_cache`
* fix
* style
* remove subprocess
* use attribute
* fix
* style
* update
* [run-slow] dbrx,gemma,jetmoe,phi3,recurrent_gemma
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
2024-05-16 13:41:33 +02:00
Joao Gante
f26e407370
Cache: models return input cache type (#30716)
2024-05-08 18:26:34 +01:00
Arthur
e34da3ee3c
[`LlamaTokenizerFast`] Refactor default llama (#28881)
* push legacy to fast as well
* super strange
* Update src/transformers/convert_slow_tokenizer.py
* make sure we are BC
* fix Llama test
* nit
* revert
* more test
* style
* update
* small update w.r.t tokenizers
* nit
* don't split
* lol
* add a test for `add_prefix_space=False`
* fix gemma tokenizer as well
* update
* fix gemma
* nicer failures
* fixup
* update
* fix the example for legacy = False
* use `huggyllama/llama-7b` for the PR doctest
* nit
* use from_slow
* fix llama
2024-04-23 23:12:59 +02:00
Yih-Dar
08a194fcd6
Fix slow tests for important models to be compatible with A10 runners (#29905)
* fix mistral and mixtral
* add pdb
* fix mixtral test
* fix
* fix mistral?
* add fix gemma
* fix mistral
* fix
* test
* another test
* fix
* fix
* fix mistral tests
* fix them again
* final fixes for mistral
* fix padding right
* fix whisper fa2
* fix
* fix
* fix gemma
* test
* fix llama
* fix
* fix
* fix llama gemma
* add class attribute
* fix CI
* clarify whisper
* compute_capability
* rename names in some comments
* Add # fmt: skip
* make style
* Update tests/models/mistral/test_modeling_mistral.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* update
* update
---------
Co-authored-by: Younes Belkada <younesbelkada@gmail.com>
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-04-09 13:28:54 +02:00
Yoach Lacombe
569f6c7d43
Fix FA2 tests (#29909)
* fix FA2 tests
* refactor inference test name
2024-04-01 07:51:00 +00:00
Lysandre Debut
39114c0383
Remove static pretrained maps from the library's internals (#29112)
* [test_all] Remove static pretrained maps from the library's internals
* Deprecate archive maps instead of removing them
* Revert init changes
* [test_all] Deprecate instead of removing
* [test_all] PVT v2 support
* [test_all] Tests should all pass
* [test_all] Style
* Address review comments
* Update src/transformers/models/deprecated/_archive_maps.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Update src/transformers/models/deprecated/_archive_maps.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* [test_all] trigger tests
* [test_all] LLAVA
* [test_all] Bad rebase
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
2024-03-25 10:33:38 +01:00
Lysandre Debut
11bbb505c7
Adds pretrained IDs directly in the tests (#29534)
* Adds pretrained IDs directly in the tests
* Fix tests
* Fix tests
* Review!
2024-03-13 14:53:27 +01:00
Marc Sun
cec773345a
Fix llama + gemma accelerate tests (#29380)
2024-03-01 10:32:36 -05:00
Younes Belkada
ad00c482c7
FIX [`Gemma` / `CI`] Make sure our runners have access to the model (#29242)
* put hf token in gemma tests
* update suggestion
* add to flax
* revert
* fix
* fixup
* forward contrib credits from discussion
---------
Co-authored-by: ArthurZucker <ArthurZucker@users.noreply.github.com>
2024-02-28 06:25:23 +01:00
Sanchit Gandhi
2a9b1f80c4
[Gemma] Fix eager attention (#29187)
* fix modelling code
* add tests
* fix tests
* add some logit tests
* style
* fix fix
2024-02-22 01:07:52 +01:00
Arthur
594c1277b2
[`gemma`] Adds support for Gemma 💎 (#29167)
* initial commit
* update
* update conversion checkpoint
* update conversion script
* nits
* some fixes
* nits
* merge
* fix permute
* nits
* fix
* nits
* nits
* nits
* fix rope
* fix both rope
* nits
* style
* make sure flax works
* fix flax init code
* fix forward
* nits
* print flax generation out
* current code
* nits
* SIIIIIIIIIIIIIIIIIII
* update
* add new tokenizer
* correct fast tokenizer
* fix conversion
* more comments
* fix modeling and conversion
* nits and nits
* nits testing
* add some tokenization tests
* add some edge cases
* add slow tests and fix them
* fixup
* fix copies for modeling
* fix copies
* add 7B slow tests
* fix
* fix
* fix tests
* make tokenizer CIs go green
* styling
* last tokenizer nits
* update jax tests
* fix flax for 7b
* add jit testing 🤗
* cleanups
* isolated nit, inv_freq for rotary_emb.inv_freq
* propagate to jax
* Apply suggestions from code review
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
* adjust test
* fix conversion script
* change name
* correct file names
* update conversion script
* Fix bos and eos token ids in the model configuration (#3)
* update modelling
* update conversion script
* add static cache for gemma
* fix sdpa generate
* fix batched
* multiple fixes
* fix FA2
* final fix
* Rename a few missing strings and filenames (#4)
* merge with upstream main
* fix copies
* fix copies
* fix fixup
* fix fixup
* fix
* fix
* final tests
* fix fx gemma tests
* fix fx bf16/fp16 tests
* update slow fx tests
* fx slow tests: one logits, one generation
* move jit test standalone
* Apply suggestions from code review
* nits
* tokenizer updates
* more tokenization updates: custom GemmaSentencepieceExtrator
* style
* Update src/transformers/cache_utils.py
* Update src/transformers/models/gemma/__init__.py
* Update tests/models/gemma/test_modeling_flax_gemma.py
* small nits
* style
* update tokenization test
* fix the rotary embedding
* with style
* fix slow tests
* WARNING this commit might be very important for precisions
* Update tests/models/gemma/test_modeling_flax_gemma.py
* Update src/transformers/models/gemma/configuration_gemma.py
Co-authored-by: Lysandre Debut <hi@lysand.re>
* Update src/transformers/models/gemma/modeling_flax_gemma.py
Co-authored-by: Lysandre Debut <hi@lysand.re>
* small nits here and there!
* forgotten nit
* remove on the fly computation of inv_freq
* revert previous change, let's be safe and for now re-compute freq cis to make sure it's in float
* Apply suggestions from code review
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/transformers/models/gemma/convert_gemma_weights_to_hf.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update src/transformers/models/gemma/convert_gemma_weights_to_hf.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_flax_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_tokenization_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* Update tests/models/gemma/test_modeling_gemma.py
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
* nit conversion script link
* fix some tests
* add not doctest and pr doctest
* repo consistency
* fix last CIs 🚀
* update all readmes
---------
Co-authored-by: younesbelkada <younesbelkada@gmail.com>
Co-authored-by: Sanchit Gandhi <93869735+sanchit-gandhi@users.noreply.github.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: Younes Belkada <49240599+younesbelkada@users.noreply.github.com>
Co-authored-by: sanchit-gandhi <sanchit@huggingface.co>
Co-authored-by: Lysandre Debut <hi@lysand.re>
2024-02-21 14:21:28 +01:00