* Add custom VITS tokenizer converter
* Do not decode if expected input_ids is empty
* Update vits tokenizer tests
* Implement `VitsTokenizer`
* Add support for VITS model
* Support VITS through pipeline API
* Update JSDoc
* Add TTS unit test
* Add speecht5 unit test
* Fix typo
* Fix typo
* Update speecht5 model id
* Add note about using quantized speecht5 in unit tests
* Monkey-patch `BigInt64Array` and `BigUint64Array`
* Add `RoFormerTokenizer`
* Use `clean_text` in bert normalizer config
* Add control characters test
* Add support for RoFormer models
* Use default label if id2label is not specified
* Update requirements.txt
* Skip roformer tokenizer tests
* Add basic support for chat templates
* Cleanup
* JSDoc improvements
* Support conversion of user-defined functions
* Cleanup
* Fix function creation
* Add unit tests for templates
* Cleanup
* Improve JSDoc
* Add missing return types
* Add chat templates docs to table of contents
* Add support for logical negation
* Fix nested logical negation
* Add unit tests for logical operators
* Add loop variables
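  Jinja exposes a special `loop` variable inside `{% for %}` blocks. A minimal sketch of how such a context object can be constructed — field names follow Jinja's documented semantics; the actual `@huggingface/jinja` internals may differ:

  ```javascript
  // Sketch: the `loop` helper object Jinja makes available inside for-loops.
  // Field names follow Jinja's documented semantics; the real implementation
  // in @huggingface/jinja may differ.
  function makeLoopVariable(index, length) {
    return {
      index: index + 1,              // 1-based iteration count
      index0: index,                 // 0-based iteration count
      revindex: length - index,      // iterations remaining (1-based)
      revindex0: length - index - 1, // iterations remaining (0-based)
      first: index === 0,
      last: index === length - 1,
      length,
    };
  }
  ```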
* Add support for `RuntimeValue` built-in functions
* Add unit tests for string instance methods
* Fix conversion of normal function to `FunctionValue`
* Update object method unit tests
* Save chat template to tokenizer_config.json during conversion
* Fix `raise_exception` error
* Add `!=` operator for booleans
* Remember to increment loop index
* Cleanup for loop evaluator
* Use `is` helper function
* Add support for text nodes
i.e., non-Jinja statements/expressions
* Add auto-generated templating tests
* Update unit tests
* Remove unused function
* Add default chat templates
* Use repo with up-to-date tokenizer config
* Temporarily disable zephyr test
* Delete templates.test.js
* Move Jinja functionality to `@huggingface/jinja`
* Fix template cache type
* Update chat template unit tests
* Update `@huggingface/jinja` version
* Fix default llama2 system prompt usage
* Add unit test for llama2 w/o chat template set
* Update jinja version
* Update jinja version
* Add unit test for user-defined chat templates
Example from https://discuss.huggingface.co/t/issue-with-llama-2-chat-template-and-out-of-date-documentation/61645/3
* Add `AddedToken` for improved tokenization
* Add example usage for chat templates
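  Conceptually, a chat template turns an array of `{role, content}` messages into a single prompt string. A hand-rolled sketch of that idea — the real library renders a model-specific Jinja template via `@huggingface/jinja`; the tag format below is illustrative only:

  ```javascript
  // Illustrative only: real chat templates are model-specific Jinja strings,
  // not hard-coded JavaScript like this.
  function applySimpleChatTemplate(messages) {
    const prompt = messages
      .map((m) => `<|${m.role}|>\n${m.content}</s>`)
      .join('\n');
    // Leave the prompt open for the assistant's reply.
    return prompt + '\n<|assistant|>\n';
  }
  ```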
* Add 'first' Metaspace pretokenizer prepend scheme
* Formatting
* Update wav2vec2 converter special tokens whitespace split
* Fix Metaspace pretokenizer split criteria
* Update inputs of `PreTokenizerSequence`
* Improve Metaspace pretokenizer
* Update llama tokenizer tests
* Improve handling of legacy llama tokenizer
* Re-enable SPM tests
* Add static tokenizer test cases
* Add llama2 static tests
* Allow user to override legacy tokenizer behaviour in `.from_pretrained`
* Add legacy tokenizer unit tests
* Bump jinja version to 0.1.0
* Update `VitMatteImageProcessor` test comment
* Add support for ChineseCLIP models
* Add chinese-clip to list of supported models
* Sort zero-shot-image-classification results by score (desc)
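  The sorting change amounts to ordering the pipeline's output objects by `score`, highest first (sample labels/scores below are made up):

  ```javascript
  // Sort classification results by score, descending.
  const results = [
    { label: 'cat', score: 0.21 },
    { label: 'dog', score: 0.74 },
    { label: 'bird', score: 0.05 },
  ];
  results.sort((a, b) => b.score - a.score);
  // results[0] is now the highest-scoring label.
  ```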
* Update expected zero-shot image classification output
* Add support for `VitMatte` models
* Add `VitMatteImageProcessor`
* Add `VitMatteImageProcessor` unit test
* Fix typo
* Add example code for `VitMatteForImageMatting`
* Fix JSDoc
* Fix typo
* Add support for ESM models
* Add ESM tokenizer conversion methods
* Add special test cases for ESM tokenizer
* Add special tokens in conversion script
* Do not save decoder
* Add special tokens tokenizer test
* Join tokens with space if decoder is null
* Treat all tokens as added tokens
* Use `WhitespaceSplit` pretokenizer
* `<eos>` and `<bos>` are not special tokens
* Update more supported ESM models
* Add `--tokenizer_id` to conversion script
* Add supported models comments
* Add link to optimum docs for supported architectures
Closes #288
* Refactor `SUPPORTED_MODELS` dict to include task
* Update example model id
* Update list of supported models
* Update generate_tests.py
* Remove requirement of `output_attentions` revision
* Add demo site to examples section (closes #233)
* Fix typo
* Include examples in docs index
* Update github issue templates
* Create config.yml
* Order supported models
* Cleanup
* Update 4_feature-request.yml
* Add FFT unit tests
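  A common way to unit-test an FFT is to compare it against a naive O(N²) DFT. A self-contained reference sketch (not the actual `maths.js` implementation):

  ```javascript
  // Naive discrete Fourier transform, usable as a reference implementation
  // when testing an optimized FFT. Returns [re, im] pairs per frequency bin.
  function naiveDFT(signal) {
    const N = signal.length;
    const out = [];
    for (let k = 0; k < N; ++k) {
      let re = 0, im = 0;
      for (let n = 0; n < N; ++n) {
        const angle = (-2 * Math.PI * k * n) / N;
        re += signal[n] * Math.cos(angle);
        im += signal[n] * Math.sin(angle);
      }
      out.push([re, im]);
    }
    return out;
  }
  ```

  For example, a constant signal concentrates all energy in bin 0, which gives an easy first test case.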
* Refactor maths.js and audio.js
* Refactor audio processors
* Add support for AST models
* Add another audio-classification example
* Add audio processing unit tests
* Implement `log_mel='dB'` in `spectrogram` function
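  `log_mel='dB'` converts the (power) mel spectrogram to decibels. A sketch of the standard power-to-dB formula this option is assumed to implement — the `reference` and `minValue` parameter names are illustrative, not the library's:

  ```javascript
  // Standard power-to-dB conversion: 10 * log10(power / reference),
  // clamped below to avoid log(0). Parameter names are illustrative.
  function powerToDb(values, reference = 1.0, minValue = 1e-10) {
    return values.map((x) => 10 * Math.log10(Math.max(minValue, x) / reference));
  }
  ```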
* Add `ClapFeatureExtractor`
* Implement `ClapFeatureExtractor` unit tests
* Add support for `CLAP`
* Add `ZeroShotAudioClassificationPipeline`
* Add listed support for `zero-shot-audio-classification` pipeline tag
* Cleanup
* `let` -> `const`
* Update `mel_filter_bank` unit test
* Add `'Xenova/tiny-random-ClapModel'`
* Add `ClapAudioModelWithProjection` and `ClapTextModelWithProjection`
* Move audio validation to helper function
* Optimize `mel_filter_bank` computation
(~30ms faster)
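  `mel_filter_bank` is built on Hz↔mel conversions; the HTK-style formulas at its core look like this (a sketch of the underlying math, not the optimized implementation):

  ```javascript
  // HTK-style mel scale conversions used when constructing mel filter banks.
  function hertzToMel(freq) {
    return 2595.0 * Math.log10(1.0 + freq / 700.0);
  }
  function melToHertz(mel) {
    return 700.0 * (10 ** (mel / 2595.0) - 1.0);
  }
  ```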
* Update mel filters unit test
* Cleanup
* Optimizations
* Fix jsdoc
* Optimizations
* Add WIP conversion scripts
Will be updated once https://github.com/huggingface/optimum/pull/1552 is merged