By default, Transformers.js uses [hosted pretrained models](https://huggingface.co/models?library=transformers.js) and [precompiled WASM binaries](https://cdn.jsdelivr.net/npm/@xenova/transformers@2.12.1/dist/), which should work out-of-the-box. You can customize this as follows:
### Settings

```javascript
import { env } from '@xenova/transformers';

// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';

// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;

// Set the location of the .wasm files (defaults to a CDN):
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';
```
For a full list of available settings, check out the [API Reference](./api/env).
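For example, when hosting everything yourself, you might combine these settings with a pipeline as follows (a minimal sketch; `'my-model'` and the paths are placeholders for illustration, not part of the library):

```javascript
import { pipeline, env } from '@xenova/transformers';

// Serve models and WASM binaries from your own server and never
// contact the Hugging Face Hub. The paths below are placeholders.
env.localModelPath = '/models/';
env.allowRemoteModels = false;
env.backends.onnx.wasm.wasmPaths = '/wasm/';

// With the settings above, this resolves to /models/my-model/ on the
// same origin instead of fetching from the Hub. 'my-model' is a
// placeholder for a model you have converted to ONNX.
const classifier = await pipeline('sentiment-analysis', 'my-model');
console.log(await classifier('I love Transformers.js!'));
```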
### Convert your models to ONNX
We recommend using our [conversion script](https://github.com/xenova/transformers.js/blob/main/scripts/convert.py) to convert your PyTorch, TensorFlow, or JAX models to ONNX in a single command. Behind the scenes, it uses [🤗 Optimum](https://huggingface.co/docs/optimum) to perform conversion and quantization of your model.
```bash
python -m scripts.convert --quantize --model_id <model_name_or_path>
```
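Since the script delegates to 🤗 Optimum, you can also run Optimum's exporter directly when you only need the plain ONNX export without the quantization step (a hedged sketch; see the Optimum docs for the exact flags):

```bash
# Export bert-base-uncased to ONNX using 🤗 Optimum's CLI directly.
# Note: this skips the quantization step that the conversion script performs.
optimum-cli export onnx --model bert-base-uncased bert-base-uncased-onnx/
```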
For example, convert and quantize [bert-base-uncased](https://huggingface.co/bert-base-uncased) using:

```bash
python -m scripts.convert --quantize --model_id bert-base-uncased
```
This will save the following files to `./models/`:

```
bert-base-uncased/
├── config.json
├── tokenizer.json
├── tokenizer_config.json
└── onnx/
    ├── model.onnx
    └── model_quantized.onnx
```
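You can then point Transformers.js at this output and load the converted model like any other. A minimal sketch, assuming the `./models/` directory is served at `/models/` on your web server (the fill-mask task is chosen here because bert-base-uncased is a masked language model):

```javascript
import { pipeline, env } from '@xenova/transformers';

// Resolve models from the directory written by the conversion script,
// assuming ./models/ is served at /models/ on your server.
env.localModelPath = '/models/';
env.allowRemoteModels = false;

// Transformers.js loads the quantized weights
// (onnx/model_quantized.onnx) by default.
const unmasker = await pipeline('fill-mask', 'bert-base-uncased');
console.log(await unmasker('The goal of life is [MASK].'));
```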
For the full list of supported architectures, see the [Optimum documentation](https://huggingface.co/docs/optimum/main/en/exporters/onnx/overview).