* Add `return_full_text` option for text-generation models
* [wip] Support chat inputs in text-generation pipeline
* Align return type with python version
* Remove conversational task (moved to text-generation)
* Fix typos
* Add povey window function
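  As a rough illustration of the change above, the Povey window (Kaldi's default, used for Kaldi-compatible feature extraction) is a Hann window raised to the power 0.85. This is a hedged sketch, not the library's implementation; it assumes the symmetric form with denominator `length - 1`:

  ```javascript
  // Povey window: (0.5 - 0.5*cos(2*pi*n/(N-1)))^0.85
  function poveyWindow(length) {
    const window = new Float64Array(length);
    for (let i = 0; i < length; ++i) {
      const hann = 0.5 - 0.5 * Math.cos((2 * Math.PI * i) / (length - 1));
      window[i] = Math.pow(hann, 0.85);
    }
    return window;
  }

  // Like a Hann window, it tapers to 0 at both ends and peaks at 1 in the middle.
  const w = poveyWindow(400);
  ```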
* Add `SeamlessM4TFeatureExtractor`
* Add support for wav2vec2-bert models
* Add `SeamlessM4TFeatureExtractor` processor unit tests
* Add pipeline support for `wav2vec2-bert` models
* Update JSDoc
* Update SamModel
* Make `AutoModel.from_pretrained` work with SamModel
* Add listed support for SAM (Segment Anything Model)
* Update types of `calculateDimensions`
* Throw error if reading image from tensor with dims.length != 3
* Make SamProcessor input points optional
* Fix type errors
* `let` -> `const`
* `cat` -> `stack`
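  The distinction behind the `cat` -> `stack` change can be shown with plain nested arrays (not the library's Tensor class): `cat` joins tensors along an existing axis, while `stack` adds a new leading batch axis. A minimal sketch:

  ```javascript
  // Concatenate along axis 0: two [2,3] inputs -> one [4,3] output.
  function cat(tensors) {
    return tensors.flat(1);
  }

  // Stack along a new axis 0: two [2,3] inputs -> one [2,2,3] output.
  function stack(tensors) {
    return tensors.slice();
  }

  const a = [[1, 2, 3], [4, 5, 6]];
  const b = [[7, 8, 9], [10, 11, 12]];
  ```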
* Expose `reshape_input_points` in `SamProcessor`
* Add `input_labels` input parameter for SAM
* Add `input_labels` to sam processor
* Update SAM unit tests
* Remove TODOs
* Update JSDoc
* Add custom VITS tokenizer converter
* Do not decode if expected input_ids is empty
* Update vits tokenizer tests
* Implement `VitsTokenizer`
* Add support for VITS model
* Support VITS through pipeline API
* Update JSDoc
* Add TTS unit test
* Add speecht5 unit test
* Fix typo
* Fix typo
* Update speecht5 model id
* Add note about using quantized speecht5 in unit tests
* Monkey-patch `BigInt64Array` and `BigUint64Array`
* Add `RoFormerTokenizer`
* Use `clean_text` in bert normalizer config
* Add control characters test
* Add support for RoFormer models
* Use default label if id2label is not specified
* Update requirements.txt
* Skip roformer tokenizer tests
* Update `VitMatteImageProcessor` test comment
* Add support for ChineseCLIP models
* Add chinese-clip to list of supported models
* Sort zero-shot-image-classification results by score (desc)
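  The sort above amounts to ordering the pipeline's `{ label, score }` results by score, highest first. A minimal sketch (sample data is illustrative, not real model output):

  ```javascript
  // Return a copy of the results sorted by descending score.
  function sortByScore(results) {
    return results.slice().sort((a, b) => b.score - a.score);
  }

  const sorted = sortByScore([
    { label: "cat", score: 0.1 },
    { label: "dog", score: 0.7 },
    { label: "bird", score: 0.2 },
  ]);
  ```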
* Update expected zero-shot image classification output
* Add support for `VitMatte` models
* Add `VitMatteImageProcessor`
* Add `VitMatteImageProcessor` unit test
* Fix typo
* Add example code for `VitMatteForImageMatting`
* Fix JSDoc
* Fix typo
* Add support for ESM models
* Add ESM tokenizer conversion methods
* Add special test cases for ESM tokenizer
* Add special tokens in conversion script
* Do not save decoder
* Add special tokens tokenizer test
* Join tokens with space if decoder is null
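  The decoding fallback above can be sketched as follows: when a tokenizer defines no decoder (as with the converted ESM tokenizers), the token strings are simply joined with spaces. The `decode` callback here is hypothetical, standing in for whatever decoder the tokenizer config provides:

  ```javascript
  // If no decoder is configured, fall back to space-joining the tokens.
  function decodeTokens(tokens, decoder) {
    if (decoder === null) {
      return tokens.join(" ");
    }
    return decoder(tokens); // hypothetical decoder callback
  }
  ```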
* Treat all tokens as added tokens
* Use `WhitespaceSplit` pretokenizer
* `<eos>` and `<bos>` are not special tokens
* Update more supported ESM models
* Add `--tokenizer_id` to conversion script
* Add supported models comments
* Add link to optimum docs for supported architectures
Closes #288
* Refactor `SUPPORTED_MODELS` dict to include task
* Update example model id
* Update list of supported models
* Update generate_tests.py
* Remove requirement of `output_attentions` revision
* Add demo site to examples section (closes #233)
* Fix typo
* Include examples in docs index
* Update github issue templates
* Create config.yml
* Order supported models
* Cleanup
* Update 4_feature-request.yml
* Add FFT unit tests
* Refactor maths.js and audio.js
* Refactor audio processors
* Add support for AST models
* Add another audio-classification example
* Add audio processing unit tests
* Implement `log_mel='dB'` in `spectrogram` function
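  Roughly, the `log_mel='dB'` option maps power values to decibels via `10 * log10`, with a small floor to avoid taking the log of zero. This sketch assumes a floor of `1e-10` and omits the dynamic-range clipping a full implementation would apply:

  ```javascript
  // Convert a power spectrogram (flat array of power values) to decibels.
  function powerToDb(spectrogram, amin = 1e-10) {
    return spectrogram.map((x) => 10 * Math.log10(Math.max(x, amin)));
  }
  ```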
* Add `ClapFeatureExtractor`
* Implement `ClapFeatureExtractor` unit tests
* Add support for `CLAP`
* Add `ZeroShotAudioClassificationPipeline`
* Add listed support for `zero-shot-audio-classification` pipeline tag
* Cleanup
* `let` -> `const`
* Update `mel_filter_bank` unit test
* Add `'Xenova/tiny-random-ClapModel'`
* Add `ClapAudioModelWithProjection` and `ClapTextModelWithProjection`
* Move audio validation to helper function
* Optimize `mel_filter_bank` computation (-30 ms)
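  For context, `mel_filter_bank` construction rests on converting between hertz and the mel scale; the optimization above concerns how the triangular filter weights are computed from these conversions. A sketch of the conversions, assuming the HTK-style mel formula (the library also supports other variants):

  ```javascript
  // HTK-style hertz <-> mel conversions used when placing mel filter edges.
  function hertzToMel(hz) {
    return 2595 * Math.log10(1 + hz / 700);
  }

  function melToHertz(mel) {
    return 700 * (Math.pow(10, mel / 2595) - 1);
  }
  ```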
* Update mel filters unit test
* Cleanup
* Optimizations
* Fix jsdoc
* Optimizations
* Add WIP conversion scripts
Will be updated once https://github.com/huggingface/optimum/pull/1552 is merged