* Add FFT unit tests
* Refactor maths.js and audio.js
* Refactor audio processors
* Add support for AST models
* Add another audio-classification example
* Add audio processing unit tests
* Implement `log_mel='dB'` in `spectrogram` function
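A `dB` log-mel option converts power-spectrogram values to decibels. A minimal standalone sketch of that conversion (the reference value and the clamping floor here are illustrative defaults, not necessarily the library's exact ones):

```javascript
// Convert power spectrogram values to decibels: 10 * log10(x / reference).
// A small floor avoids taking log(0); both constants are illustrative.
function powerToDb(values, reference = 1.0, minValue = 1e-10) {
    return values.map(x => 10 * Math.log10(Math.max(x, minValue) / reference));
}
```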
* Add `ClapFeatureExtractor`
* Implement `ClapFeatureExtractor` unit tests
* Add support for `CLAP`
* Add `ZeroShotAudioClassificationPipeline`
* Add listed support for `zero-shot-audio-classification` pipeline tag
* Cleanup
* `let` -> `const`
* Update `mel_filter_bank` unit test
* Add `'Xenova/tiny-random-ClapModel'`
* Add `ClapAudioModelWithProjection` and `ClapTextModelWithProjection`
* Move audio validation to helper function
* Optimize `mel_filter_bank` computation
~30 ms faster
* Update mel filters unit test
* Cleanup
* Optimizations
* Fix jsdoc
* Optimizations
* Add WIP conversion scripts
Will be updated once https://github.com/huggingface/optimum/pull/1552 is merged
* Add `size` getter to `RawImage`
* Add `DPTFeatureExtractor`
* Add depth-estimation w/ DPT models
* Add GLPN models for depth estimation
* Add missing import in example
* Add `DPTFeatureExtractor` processor test
* Add unit test for GLPN processor
* Add support for `GLPNFeatureExtractor`
Uses `size_divisor` to determine resize width and height
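A sketch of the `size_divisor` idea: snap each target dimension to a multiple of the divisor so the model's strided layers divide evenly. Rounding down and the divisor value of 32 are assumptions for illustration, not GLPN's exact behavior:

```javascript
// Round a dimension down to the nearest multiple of size_divisor.
// The default of 32 is illustrative only.
function roundToMultiple(value, sizeDivisor = 32) {
    return Math.floor(value / sizeDivisor) * sizeDivisor;
}
```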
* Add `GLPNForDepthEstimation` example code
* Add DPT to list of supported models
* Add GLPN to list of supported models
* Add `DepthEstimationPipeline`
* Add listed support for depth estimation pipeline
* Add depth estimation pipeline unit tests
* Fix formatting
* Update `pipeline` JSDoc
* Fix typo from merge
* Add `NougatTokenizer`
* Add nougat unit tests
* Add support for `NougatImageProcessor`
* Add `crop` function to `RawImage`
* Fix `RawImage` save function
OffscreenCanvas does not have a `toDataURL` function
* Add listed support for nougat models
* Fix `min`/`max` function typing
* Add unknown token to tokenizer class
* Implement `NoBadWordsLogitsProcessor`
* Use `NoBadWordsLogitsProcessor` in `generate`
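The core idea behind a no-bad-words logits processor: when the end of the generated sequence matches all but the last token of a banned sequence, the banned final token's logit is pushed to `-Infinity`. This is a simplified sketch (flat logits array, one sequence), not the processor's actual implementation:

```javascript
// Ban the final token of each bad-word sequence whose prefix matches
// the tail of the current input ids. A single-token bad word has an
// empty prefix, so it is always banned.
function banWords(logits, inputIds, badWordsIds) {
    for (const bad of badWordsIds) {
        const prefix = bad.slice(0, -1);
        const tail = inputIds.slice(inputIds.length - prefix.length);
        const matches = prefix.every((id, i) => tail[i] === id);
        if (matches) logits[bad[bad.length - 1]] = -Infinity;
    }
    return logits;
}
```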
* Fix regex group substitutions
Python uses \1, \2, etc. for group substitutions, but JavaScript uses $1, $2, etc.
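The backreference mismatch above can be bridged by rewriting Python-style replacement strings before handing them to `String.prototype.replace`. A hypothetical helper sketching that translation:

```javascript
// Translate Python-style backreferences (\1, \2, ...) in a replacement
// string into the JavaScript equivalents ($1, $2, ...). In the pattern,
// '$$' emits a literal '$' and '$1' re-emits the captured digits.
function pythonReplacementToJs(replacement) {
    return replacement.replace(/\\(\d+)/g, '$$$1');
}
```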
* Create `regexSplit` helper function to split but keep delimiter
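A split-but-keep-delimiter helper behaves like Python's `re.split` with a capturing group. A self-contained sketch of that behavior (not necessarily the library's exact implementation):

```javascript
// Split `text` on `regex` (which must have the global flag for matchAll),
// keeping each delimiter as its own element in the result.
function regexSplit(text, regex) {
    const result = [];
    let prev = 0;
    for (const match of text.matchAll(regex)) {
        if (match.index > prev) result.push(text.slice(prev, match.index));
        if (match[0].length > 0) result.push(match[0]);
        prev = match.index + match[0].length;
    }
    if (prev < text.length) result.push(text.slice(prev));
    return result;
}
```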
* Fix splitting for String pattern types
* Fix docstring
* Set `batch_size=1` for owlvit exports
* Add support for owlvit models
* Update default quantization settings
* Add list of supported models
* Revert update of owlvit quantization settings
* Add `OwlViTProcessor`
* Move `get_bounding_box` to utils
* Add `ZeroShotObjectDetectionPipeline`
* Add unit tests
* Add owlvit processor test
* Add listed support for `zero-shot-object-detection`
* Add OWL-ViT to list of supported models
* Update README.md
* Fix typo from merge
* Move tensor clone to work around Worker-ownership NaN issue
* Update src/models.js - Use conditional operator
Co-authored-by: Joshua Lochner <admin@xenova.com>
* Update src/models.js - Object.create(null)
Co-authored-by: Joshua Lochner <admin@xenova.com>
* tensor.js: remove "Object" type to fix types (since ONNX exports correct type now)
* models.js / validateInputs(): Remove promise/await because it is not needed
Use a `tensor instanceof Tensor` check, because otherwise `validateInputs()` treats a missing input as present
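Why the `instanceof` check matters: only a real `Tensor` instance should count as a provided input, so stray truthy values (plain objects, promises) are rejected early. A hypothetical sketch of such a validator, with a stub `Tensor` class for illustration:

```javascript
// Stub standing in for the library's Tensor class.
class Tensor {
    constructor(data) { this.data = data; }
}

// Only genuine Tensor instances count as provided inputs; a plain
// truthy object would slip past a simple `if (inputs[name])` check.
function validateInputs(inputs, requiredNames) {
    const missing = requiredNames.filter(name => !(inputs[name] instanceof Tensor));
    if (missing.length > 0) {
        throw new Error(`Missing required inputs: ${missing.join(', ')}`);
    }
}
```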
* Fix JSDoc
* Update JSDoc
---------
Co-authored-by: Joshua Lochner <admin@xenova.com>
* Add `Swin2SRImageProcessor`
* Add `RawImage.fromTensor` helper function
* Add clamp tensor function
* Add support for `.to` data type conversion
* Add `round` tensor function
* Add support for `mul` tensor function
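Standalone sketches of the element-wise tensor helpers added above (`clamp`, `round`, `mul`), shown over flat `Float32Array`s; the real methods operate on `Tensor` objects, and `mul` is shown scalar-only for brevity:

```javascript
// Element-wise helpers over flat Float32Arrays. Float32Array#map
// returns a new Float32Array, so inputs are left untouched.
const clamp = (data, min, max) => data.map(x => Math.min(Math.max(x, min), max));
const round = (data) => data.map(x => Math.round(x));
const mul = (data, scalar) => data.map(x => x * scalar);
```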
* Fix image padding
* Only perform padding if it will affect size
* Create basic processors unit test suite
* Add SamProcessor test case
* Move `CONTENT_TYPE_MAP` outside `RawImage` class
* Perform reflective padding for swin2sr models
* Add swin2sr models for image super-resolution
* Add listed support for Swin2SR models
* Add image-to-image pipeline
* Add listed support for image-to-image task
* Add image-to-image unit tests
* Add `add` tensor functions
* Generalize `pad_image` helper function
* Add more unit tests for image processors
* Fix typo
* By default, do not add special tokens in text-generation
See 147e8ce4ae/src/transformers/pipelines/text_generation.py (L106)
* Add support for mistral models
* Add support for Falcon models
* Replace `batch_size` with variable
* Add Falcon to list of supported models
* Fix typing issue with bigint literals
* Add vocoder to export
* Add tokenizer.json export for speecht5 models
* Update speecht5 supported models
* Create `SpeechT5Tokenizer`
* Add `ones` and `ones_like` tensor functions
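The `ones` / `ones_like` pair follows the usual NumPy-style convention: build a one-filled buffer of a given length, or match an existing buffer's shape. A flat-array sketch (the real functions return `Tensor`s with shape information):

```javascript
// `ones` builds a one-filled buffer; `onesLike` matches the length
// of an existing buffer.
function ones(length) {
    return new Float32Array(length).fill(1);
}
function onesLike(other) {
    return ones(other.length);
}
```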
* Add support for speecht5 text-to-speech
* Disambiguate `SpeechSeq2Seq` and `Seq2SeqLM`
* Create `TextToAudioPipeline`
* Add listed support for `text-to-audio` / `text-to-speech`
* Use unquantized vocoder by default
* Skip speecht5 unit tests for now
Due to bug in transformers: https://github.com/huggingface/transformers/issues/26547
* Update example pipeline output
* Create simple in-browser TTS demo
* Add template README
* Delete package-lock.json
* Update required transformers.js version
* Add link to Transformers.js
* Double -> Single quotes
* Add link to text-to-speech demo
* Update sample speaker embeddings
* Update transformers.js version
* Use Singleton object in electron tutorial
* Create package-lock.json
* Remove models folder
* Remove step for copying models to local folder
* Add `add_special_tokens` option to tokenizers
* Improve error messages for loading processors
* Add `DonutFeatureExtractor`
* Add `DonutSwinModel` and `MBartForCausalLM` models
* Fix `addPastKeyValues` for `VisionEncoderDecoder` models
* Add `Donut` to list of supported models
* Make encode parameters optional
* Support batched decoder input ids
* Remove unused import
* Add `do_thumbnail` for donut image processing
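A `do_thumbnail` resize shrinks an image to fit within a target box while preserving aspect ratio, and never upscales. The rounding choice here is an assumption for illustration, not Donut's exact behavior:

```javascript
// Compute thumbnail dimensions: scale to fit (maxW, maxH), keep aspect
// ratio, cap the scale at 1 so images are never enlarged.
function thumbnailSize(width, height, maxW, maxH) {
    const scale = Math.min(maxW / width, maxH / height, 1);
    return [Math.floor(width * scale), Math.floor(height * scale)];
}
```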
* Fix `TypeError: decoder_input_ids[i].map is not a function`
* Only pad if width and height specified in size
* Only pad if `pad_size` is defined
* Only cut `decoder_input_ids` if there is past model output
* Add donut model
* Add example usage to JSDoc for `DonutSwinModel`
* Add support for `DocumentQuestionAnsweringPipeline`
* Add simple document question answering unit test
* Add listed support for document QA pipeline
* Add support for `Blenderbot` models
Closes #37
References #29
* Add support for `BlenderbotTokenizer`
* Add blenderbot to supported models
* Add support for `BlenderbotSmallTokenizer`
* Add custom tests for blenderbot-small
* Add support for `BlenderbotSmall` models
* Update list of supported models
* Improve `addPastKeyValues` function
* Allow skipping of adding encoder past key values