* Add example `wav2vec2` models
* Add support for `CTCDecoder` and `Wav2Vec2CTCTokenizer`
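The CTC decoding added above can be sketched as a minimal greedy (best-path) decoder: take the argmax token per frame, collapse repeats, and drop blanks. This is an illustrative sketch, not the library's actual `CTCDecoder` implementation; the toy vocabulary and blank index are assumptions.

```javascript
// Minimal greedy CTC decode: argmax per frame, collapse repeats, drop blanks.
// The vocabulary and blank index here are illustrative, not wav2vec2's real ones.
function ctcGreedyDecode(logits, vocab, blankId = 0) {
  const ids = logits.map((frame) =>
    frame.reduce((best, v, i) => (v > frame[best] ? i : best), 0)
  );
  const collapsed = ids.filter((id, i) => i === 0 || id !== ids[i - 1]);
  return collapsed.filter((id) => id !== blankId).map((id) => vocab[id]).join('');
}

// Example: 5 frames over a toy vocabulary
const vocab = ['<pad>', 'c', 'a', 't'];
const logits = [
  [0.1, 0.8, 0.05, 0.05], // 'c'
  [0.1, 0.8, 0.05, 0.05], // 'c' (repeat, collapsed)
  [0.7, 0.1, 0.1, 0.1],   // blank, separates tokens
  [0.1, 0.05, 0.8, 0.05], // 'a'
  [0.1, 0.05, 0.05, 0.8], // 't'
];
console.log(ctcGreedyDecode(logits, vocab)); // → 'cat'
```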
* Generate tokenizer.json files for wav2vec2 models
* Fix wav2vec2 custom tokenizer generation
* Implement wav2vec2 audio-speech-recognition
* Add `Wav2Vec2` as a supported architecture
* Update README.md
* Update generate_tests.py
* Ignore invalid tests
* Update supported wav2vec2 models
* Update supported_models.py
* Simplify pipeline construction
* Implement basic audio classification pipeline
* Update default topk value for audio classification pipeline
* Add example usage for the audio classification pipeline
* Move `loadAudio` to utils file
* Add audio classification unit test
* Add wav2vec2 ASR unit test
* Improve generated wav2vec2 tokenizer json
* Update supported_models.py
* Allow `added_tokens_regex` to be null
* Support exporting MMS vocabs
* Support nested vocabularies
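The nested-vocabulary handling can be sketched like this: MMS-style tokenizer files nest one vocab per language code, so the target language's inner map must be selected before building the tokenizer. The file shape, language codes, and helper name below are illustrative assumptions, not the library's actual implementation.

```javascript
// MMS-style tokenizer files nest a vocab per language code (shape is illustrative).
const nestedVocab = {
  eng: { '<pad>': 0, a: 1, b: 2 },
  fra: { '<pad>': 0, à: 1, b: 2 },
};

// Select the inner vocab for the target language, falling back to a flat vocab
// (hypothetical helper for illustration).
function resolveVocab(vocab, language) {
  const isNested = Object.values(vocab).every((v) => typeof v === 'object');
  return isNested ? vocab[language] : vocab;
}

console.log(resolveVocab(nestedVocab, 'eng')); // → { '<pad>': 0, a: 1, b: 2 }
```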
* Update supported tasks and models
* Add warnings that `language` and `task` are ignored for wav2vec2 models (will be added in future)
* Mark internal methods as private
* Add typing to audio variable
* Update node-audio-processing.mdx
* Move node-audio-processing to guides
* Update table of contents
* Add example code for performing feature extraction w/ `Wav2Vec2Model`
NOTE: Feature extraction for MMS models is currently broken in the Python library, but it works correctly here. See
https://github.com/huggingface/transformers/issues/25485 for more info
* Refactor `Pipeline` class params
* Fix `pipeline` function
* Fix typo in `pipeline` JSDoc
* Fix second typo
* Fix typo in node tutorial
* Create node audio processing tutorial
* Point to tutorial in `read_audio` function
* Rename `.md` to `.mdx`
* Add node audio processing tutorial to table of contents
* Add link to model in tutorial
* Update error message grammar