* link to the conversion Space for maximum simplicity
* add some types to script (very optional)
* Fix typo
* no need for trailing slash here
* Node is also a valid option
* Document how to find a compatible checkpoint on the hub
* Update README
* Fix typing
* Update docs index
---------
Co-authored-by: Julien Chaumond <julien@huggingface.co>
* Recursively replace tensors with custom class
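The recursive replacement above can be sketched as follows. This is a minimal illustration, not the library's actual implementation: the `Tensor` class, the `isRawTensor` check, and the assumption that raw ONNX outputs expose `type`, `data`, and `dims` fields are all illustrative.

```javascript
// Hypothetical sketch: walk a (possibly nested) model-output object and
// wrap anything that looks like a raw tensor in a custom Tensor class.
class Tensor {
    constructor({ type, data, dims }) {
        this.type = type;
        this.data = data;
        this.dims = dims;
    }
}

function isRawTensor(x) {
    // Assumption: raw outputs expose `type`, `data`, and `dims` fields.
    return x !== null && typeof x === 'object'
        && 'type' in x && 'data' in x && 'dims' in x;
}

function replaceTensors(obj) {
    for (const key of Object.keys(obj)) {
        if (isRawTensor(obj[key])) {
            obj[key] = new Tensor(obj[key]);
        } else if (typeof obj[key] === 'object' && obj[key] !== null) {
            replaceTensors(obj[key]); // recurse into nested outputs
        }
    }
    return obj;
}
```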
* Add mobile vit models
* Add example code for `ImageClassificationPipeline`
* Fix example urls
* Add MobileViT models and processors
* Update optimum requirement in conversion script
Previous name is deprecated
* Update supported models
* Update supported_models.py
* Update supported_models.py
* Update tokenizer test generator script
* Add special test case for falcon tokenizers
* Update tokenizer test script
* Add support for `FalconTokenizer`
* Update `BertPreTokenizer` call parameter types
* Add `GPTNeoXTokenizer` (used by MPT)
* Use transformers from source when testing
* Reuse `prepare_model_inputs` function type
Better than using `@see {@link ... }` since it works with IntelliSense.
* Allow user to set `per_channel` and `reduce_range` quantization parameters (#156)
Also save quantization options
* Get operators of graph and subgraphs
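Collecting the operators of a graph and its subgraphs can be sketched like this. The field names (`node`, `opType`, `attribute`, `g`, `graphs`) assume a camelCase protobuf decoding of an ONNX graph and are illustrative, not the script's actual shape.

```javascript
// Hypothetical sketch: collect the set of operator types used by a
// graph and, recursively, by any subgraphs nested in node attributes
// (e.g. the branches of an `If` node or the body of a `Loop`).
function getOperators(graph, operators = new Set()) {
    for (const node of graph.node) {
        operators.add(node.opType);
        for (const attr of node.attribute ?? []) {
            if (attr.g) {
                getOperators(attr.g, operators);      // single subgraph attribute
            }
            for (const subgraph of attr.graphs ?? []) {
                getOperators(subgraph, operators);    // list-of-graphs attribute
            }
        }
    }
    return operators;
}
```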
* Only run encoder with required inputs
* Add basic whisper unit tests
* Add newline after heading for docs
* Add unit test for transcribing english with timestamps
* Add multilingual test case
* Fix typo in node tutorial
* Create node audio processing tutorial
* Point to tutorial in `read_audio` function
* Rename `.md` to `.mdx`
* Add node audio processing tutorial to table of contents
* Add link to model in tutorial
* Update error message grammar
* Override `LOAD_FUNCTION` for decoder-only models
* Use object destructuring in `_call` functions
* Allow decoder-only models to be called
* Fix detection of default call function
* Update default `_call` JSDoc
* Mark helper functions as private
* Remove outdated comments
* Fix JSDoc
* Rename functions
* Specify model types
Reduces major code duplication
* Improve model output classes
* Remove `encoder_input_name` from seq2seq forward method
* Extract `validateInputs` helper function from `sessionRun`
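The extracted helper can be sketched as below. This is an illustrative version, not the library's code: it assumes the session object exposes an `inputNames` array, and it simply drops inputs the session does not expect.

```javascript
// Hypothetical sketch of a `validateInputs` helper: before running the
// session, check that every input name the session expects is present,
// and keep only the inputs the session actually declares.
function validateInputs(session, inputs) {
    const checkedInputs = {};
    for (const name of session.inputNames) {
        if (!(name in inputs)) {
            throw new Error(`Missing required model input: "${name}".`);
        }
        checkedInputs[name] = inputs[name];
    }
    return checkedInputs;
}
```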
* Move `compare` helper function to separate utility file
* Default `model_type` to null
* Reduce duplication when loading models using `.from_pretrained`
* Add unit tests for loading models using `.from_pretrained()`
* Compute attention mask for decoder if not given
* Improve decoder attention computation
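Defaulting the attention mask when the caller omits it can be sketched as follows, with names and shapes assumed for illustration (plain nested arrays stand in for tensors).

```javascript
// Sketch (names and shapes assumed): if no decoder attention mask is
// given, default it to all ones with the same shape as `input_ids`,
// i.e. attend to every token in every sequence.
function prepareAttentionMask(input_ids, attention_mask = null) {
    if (attention_mask !== null) {
        return attention_mask; // caller-provided mask wins
    }
    return input_ids.map(sequence => sequence.map(() => 1));
}
```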
* Implement `flatten` and `view` tensor ops
* Add documentation for new tensor ops
* Fix `flatten` input types
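The semantics of the two new ops can be sketched on a plain object holding `data` and `dims` (not the library's actual Tensor class): `view` reinterprets the same data under new dimensions, and `flatten` is a `view` to a single dimension.

```javascript
// Minimal sketch of `view` and `flatten` semantics. A "tensor" here is
// just `{ data, dims }`; the real class holds typed arrays and more.
function view(tensor, ...dims) {
    const expected = dims.reduce((a, b) => a * b, 1);
    if (expected !== tensor.data.length) {
        throw new Error(
            `Cannot view tensor of size ${tensor.data.length} as shape [${dims}]`
        );
    }
    return { data: tensor.data, dims }; // same data, new shape
}

function flatten(tensor) {
    return view(tensor, tensor.data.length); // collapse to 1-D
}
```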
* Align `.generate()` return type with python library
* Add multilingual transcription + translation for whisper models (#87, #95)
* Include `return_timestamps` in calculation of `forced_decoder_ids`
* Only return non-null `forced_decoder_ids`
* Allow user to specify task in any letter case
* Only set `forced_decoder_ids` when non-empty
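The bullets above can be sketched together as below. The token ids are placeholders standing in for Whisper's real special-token ids, and the function shape is illustrative: language and task tokens are appended only when given, lookups are case-insensitive, and enabling timestamps works by *omitting* the no-timestamps token.

```javascript
// Illustrative sketch of building `forced_decoder_ids` for Whisper.
// Placeholder special-token ids (not taken from a real tokenizer):
const LANGUAGE_TOKENS = { english: 50259 };
const TASK_TOKENS = { transcribe: 50359, translate: 50358 };
const NO_TIMESTAMPS_TOKEN = 50363;

function getForcedDecoderIds({ language = null, task = null, return_timestamps = false } = {}) {
    const tokens = [];
    if (language !== null) {
        tokens.push(LANGUAGE_TOKENS[language.toLowerCase()]); // accept any letter case
    }
    if (task !== null) {
        tokens.push(TASK_TOKENS[task.toLowerCase()]);
    }
    if (!return_timestamps) {
        tokens.push(NO_TIMESTAMPS_TOKEN); // include timestamps by omitting this token
    }
    // Pair each token with its decoding position, starting at rank 1.
    // An empty result signals the caller not to set `forced_decoder_ids` at all.
    return tokens.map((token, i) => [i + 1, token]);
}
```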
* Implement `SuppressTokensAtBeginLogitsProcessor`
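The idea behind this logits processor can be sketched as follows. This is a simplified illustration operating on a plain array of logits, not the library's tensor-based implementation.

```javascript
// Sketch: at the first generation step (when the sequence length equals
// `begin_index`), force the suppressed token ids to -Infinity so they
// can never be sampled.
class SuppressTokensAtBeginLogitsProcessor {
    constructor(begin_suppress_tokens, begin_index) {
        this.begin_suppress_tokens = begin_suppress_tokens;
        this.begin_index = begin_index;
    }

    _call(input_ids, logits) {
        if (input_ids.length === this.begin_index) {
            for (const token_id of this.begin_suppress_tokens) {
                logits[token_id] = -Infinity; // never sampled
            }
        }
        return logits;
    }
}
```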