* Define custom CLIP ONNX configs
* Update conversion script
* Support specifying custom model file name
* Use int64 for CLIP input ids
* Add support for CLIP text and vision models
* Fix JSDoc
* Add docs for `CLIPTextModelWithProjection`
* Add docs for `CLIPVisionModelWithProjection`
* Add unit test for CLIP text models
* Add unit test for CLIP vision models
* Set resize precision to 3 decimal places
* Fix `RawImage.save()` function
* Throw an error when an image request returns a non-200 status
* Create basic semantic image search application
* Separate out components
* Add `update-database` script
* Update transformers.js version
* Override `LOAD_FUNCTION` for decoder-only models
* Use object destructuring in `_call` functions
* Allow decoder-only models to be called
* Fix detection of default call function
* Update default `_call` JSDoc
* Mark helper functions as private
* Remove outdated comments
* Fix JSDoc
* Rename functions
* Specify model types (reduces major code duplication)
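The "Use int64 for CLIP input ids" change above boils down to converting JavaScript `number` token ids into a `BigInt64Array` before handing them to the ONNX session, since the exported CLIP text encoder expects int64 inputs. A minimal sketch of that conversion (the helper name is illustrative, not the library's API):

```javascript
// Convert an array of JS numbers (token ids) into the int64 typed array
// that an ONNX session expects for CLIP's `input_ids`.
// Hypothetical helper name; only the conversion itself is the point.
function toInt64Ids(ids) {
    // BigInt64Array accepts an iterable of BigInts
    return new BigInt64Array(ids.map(id => BigInt(id)));
}

// Example: BOS, two tokens, EOS (ids shown are illustrative)
const input_ids = toInt64Ids([49406, 320, 1125, 49407]);
```

Passing a `Float64Array` or plain number array instead would fail the ONNX runtime's type check for an int64 input.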
* Improve model output classes
* Remove `encoder_input_name` from seq2seq forward method
* Extract `validateInputs` helper function from `sessionRun`
* Move `compare` helper function to separate utility file
* Default `model_type` to null
* Reduce duplication when loading models using `.from_pretrained`
* Add unit tests for loading models using `.from_pretrained()`
* Compute attention mask for decoder if not given
* Improve decoder attention computation
* Implement `flatten` and `view` tensor ops
* Add documentation for new tensor ops
* Fix `flatten` input types
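For "Compute attention mask for decoder if not given", the natural default when the caller omits a mask is to attend to every input token. A sketch of that fallback, assuming int64 masks matching the input ids (illustrative helper, not the library's internal function):

```javascript
// If no attention_mask is supplied for a decoder-only model, default to
// attending to all tokens: an int64 array of 1s with one entry per token.
// Hypothetical helper name; a sketch of the fallback only.
function defaultAttentionMask(input_ids) {
    return new BigInt64Array(input_ids.length).fill(1n);
}

const mask = defaultAttentionMask([101, 2023, 2003, 102]);
```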
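The shape arithmetic behind "Implement `flatten` and `view` tensor ops" can be sketched on plain dimension arrays: `flatten` collapses a contiguous run of dimensions into one, and `view` resolves a single `-1` placeholder from the total element count. These are illustrative stand-alone functions, not the library's `Tensor` methods:

```javascript
// Collapse dims[start..end] into one dimension, as flatten(start, end) does.
// E.g. [2, 3, 4] with start=1, end=2 becomes [2, 12].
function computeFlattenedDims(dims, start = 0, end = dims.length - 1) {
    const size = dims.slice(start, end + 1).reduce((a, b) => a * b, 1);
    return [...dims.slice(0, start), size, ...dims.slice(end + 1)];
}

// Resolve at most one -1 in a requested view shape from the element count.
// E.g. [2, 3, 4] viewed as [6, -1] becomes [6, 4].
function inferViewDims(dims, shape) {
    const total = dims.reduce((a, b) => a * b, 1);
    const known = shape.filter(d => d !== -1).reduce((a, b) => a * b, 1);
    return shape.map(d => (d === -1 ? total / known : d));
}
```

A `view` request whose known dimensions do not divide the total element count is invalid, which is the kind of input-type/shape check the "Fix `flatten` input types" commit concerns.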