Commit Graph

2 Commits

Author | SHA1 | Message | Date
Joshua Lochner 2fde656791
Add support for computing CLIP image and text embeddings separately (Closes #148) (#227)
* Define custom CLIP ONNX configs

* Update conversion script

* Support specifying custom model file name

* Use int64 for CLIP input ids

* Add support for CLIP text and vision models

* Fix JSDoc

* Add docs for `CLIPTextModelWithProjection`

* Add docs for `CLIPVisionModelWithProjection`

* Add unit test for CLIP text models

* Add unit test for CLIP vision models

* Set resize precision to 3 decimal places

* Fix `RawImage.save()` function

* Throw error when reading image and status != 200

* Create basic semantic image search application

* Separate out components

* Add `update-database` script

* Update transformers.js version
2023-08-01 14:01:04 +02:00
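
The classes this commit introduces, `CLIPTextModelWithProjection` and `CLIPVisionModelWithProjection`, let the two halves of CLIP run independently. A minimal usage sketch, assuming the `@xenova/transformers` package of that era; the checkpoint name and image URL below are illustrative, not taken from the commit:

```js
import {
    AutoTokenizer, CLIPTextModelWithProjection,
    AutoProcessor, CLIPVisionModelWithProjection, RawImage,
} from '@xenova/transformers';

// Text side: tokenize the queries and project them into the shared embedding space.
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const text_model = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');
const text_inputs = tokenizer(['a photo of a cat', 'a photo of a dog'], { padding: true, truncation: true });
const { text_embeds } = await text_model(text_inputs);

// Vision side: preprocess the image and project it into the same space.
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');
const image = await RawImage.read('https://example.com/cat.jpg'); // placeholder URL
const { image_embeds } = await vision_model(await processor(image));
```

Computing the two embedding sets separately is presumably what the semantic image search demo added in the same commit relies on: image embeddings can be precomputed ahead of time (see the `update-database` script) and only the query text needs to be embedded at search time.
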
Joshua Lochner 35b9e21193
Support calling of decoder-only models (Fixes #137) (#149)
* Override `LOAD_FUNCTION` for decoder-only models

* Use object destructuring in `_call` functions

* Allow decoder-only models to be called

* Fix detection of default call function

* Update default `_call` JSDoc

* Mark helper functions as private

* Remove outdated comments

* Fix JSDoc

* Rename functions

* Specify model types

Reduces major code duplication

* Improve model output classes

* Remove `encoder_input_name` from seq2seq forward method

* Extract `validateInputs` helper function from `sessionRun`

* Move `compare` helper function to separate utility file

* Default `model_type` to null

* Reduce duplication when loading models using `.from_pretrained`

* Add unit tests for loading models using `.from_pretrained()`

* Compute attention mask for decoder if not given

* Improve decoder attention computation

* Implement `flatten` and `view` tensor ops

* Add documentation for new tensor ops

* Fix `flatten` input types
2023-06-20 15:24:35 +02:00
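
A hedged sketch of what this commit enables: a decoder-only model can now be called directly for a single forward pass, and the new `flatten`/`view` tensor ops can reshape its output. The `Xenova/gpt2` checkpoint and the `@xenova/transformers` package name are assumptions for illustration:

```js
import { AutoTokenizer, AutoModelForCausalLM } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/gpt2');
const model = await AutoModelForCausalLM.from_pretrained('Xenova/gpt2');

// Direct forward call on a decoder-only model; per the commit, an attention
// mask is computed for the decoder if one is not supplied.
const inputs = tokenizer('Hello, my dog is');
const { logits } = await model(inputs);
console.log(logits.dims); // [batch_size, sequence_length, vocab_size]

// The tensor ops implemented in this commit:
const flat = logits.flatten();                // collapse all dimensions into one
const [batch, seq, vocab] = logits.dims;
const rows = logits.view(batch * seq, vocab); // same data viewed as a 2-D tensor
console.log(flat.dims, rows.dims);
```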