This is necessary while we wait for more models to support ONNX weights. In the future, we hope to remove the need for this separation.
When testing remotely (e.g., in GitHub Actions), we load models from the Hugging Face Hub under the `Xenova` username. When testing locally, we instead use the models exported by the conversion script.
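The selection logic above can be sketched as a small helper. Note that the `CI` environment variable check and the local `./models` directory are assumptions for illustration; the actual test harness may detect the environment and lay out converted models differently.

```javascript
// Hypothetical local output directory of the conversion script (an assumption).
const LOCAL_MODEL_DIR = './models';

/**
 * Resolve where to load a model from.
 * Remote runs (e.g., GitHub Actions) pull weights from the Hugging Face
 * Hub under the `Xenova` username; local runs use the converted copy.
 * The `CI` env var is a common convention, not necessarily this project's.
 */
function resolveModelPath(modelName, isRemote = Boolean(process.env.CI)) {
  return isRemote ? `Xenova/${modelName}` : `${LOCAL_MODEL_DIR}/${modelName}`;
}

console.log(resolveModelPath('bert-base-uncased', true));
console.log(resolveModelPath('bert-base-uncased', false));
```

On CI this resolves to a Hub model id (`Xenova/bert-base-uncased`); locally it points at the converted model on disk.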