burn/crates/burn-import
Guillaume Lagrange e718243748
Fix clippy errors w/ new rust stable (#3325)
2025-06-27 07:54:26 -04:00
| Name | Last commit | Date |
|------|-------------|------|
| onnx-tests | Add support onnx size (#3301) | 2025-06-25 12:07:18 -04:00 |
| pytorch-tests | Improve test tolerance assertions (#3024) | 2025-04-16 13:33:05 -04:00 |
| safetensors-tests | Support importing safetensors format (#2721) | 2025-05-06 12:51:08 -04:00 |
| src | Fix clippy errors w/ new rust stable (#3325) | 2025-06-27 07:54:26 -04:00 |
| Cargo.toml | Chain lint inheritance [was: Disable new default clippy tests] (#3200) | 2025-05-20 08:23:11 -04:00 |
| LICENSE-APACHE | Update licenses symlinks (#1613) | 2024-04-12 14:43:58 -04:00 |
| LICENSE-MIT | Update licenses symlinks (#1613) | 2024-04-12 14:43:58 -04:00 |
| README.md | Refactor: Move op_configuration.rs from burn-import to onnx-ir (#3126) | 2025-05-09 11:15:14 -05:00 |
| SUPPORTED-ONNX-OPS.md | Add support onnx size (#3301) | 2025-06-25 12:07:18 -04:00 |
| onnx_opset_upgrade.py | Restrict ONNX opset to 16 and up (#3051) | 2025-04-28 07:45:05 -05:00 |

README.md

Burn Import

The burn-import crate enables seamless integration of pre-trained models from popular machine learning frameworks into the Burn ecosystem. This functionality allows you to leverage existing models while benefiting from Burn's performance optimizations and native Rust integration.

Supported Import Formats

Burn currently supports three primary model import formats, each serving different use cases:

| Format | Description | Use Case |
|--------|-------------|----------|
| ONNX (Guide) | Open Neural Network Exchange format | Direct import of complete model architectures and weights from any framework that supports ONNX export |
| PyTorch (Guide) | PyTorch weights (.pt, .pth) | Loading weights from PyTorch models into a matching Burn architecture |
| Safetensors (Guide) | Hugging Face's model serialization format | Loading a model's tensor weights into a matching Burn architecture |
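
For ONNX, conversion typically happens at build time: a build script uses burn-import's ModelGen to generate Rust source (model definition plus weights) from the ONNX file, which is then included in your crate. Below is a minimal sketch, assuming a model stored at src/model/mnist.onnx; the path and module name are illustrative, and the ONNX guide covers the exact setup.

```rust
// build.rs: sketch of build-time ONNX import (paths are illustrative).
use burn_import::onnx::ModelGen;

fn main() {
    // Generate Rust code for the model into OUT_DIR from the ONNX file.
    ModelGen::new()
        .input("src/model/mnist.onnx")
        .out_dir("model/")
        .run_from_script();
}
```

The generated module can then be pulled into the crate; note that burn-import must be listed under [build-dependencies] for the build script to compile.

```rust
// src/model/mod.rs: include the generated code (the file name follows the ONNX file name).
pub mod mnist {
    include!(concat!(env!("OUT_DIR"), "/model/mnist.rs"));
}
```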

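The PyTorch and Safetensors paths work differently: instead of generating code, they load tensor weights into a Burn model you have already written in Rust, using a file recorder. The following is a hedged sketch of the PyTorch case; the module Net, its field, the dimensions, and the file path are illustrative, and the Safetensors path follows the same pattern with its own recorder (see the respective guides for details).

```rust
use burn::module::Module;
use burn::nn::{Linear, LinearConfig};
use burn::record::{FullPrecisionSettings, Recorder};
use burn::tensor::backend::Backend;
use burn_import::pytorch::{LoadArgs, PyTorchFileRecorder};

// Illustrative module: in practice this mirrors the structure of the original
// PyTorch model so that field names line up with the keys in the state dict.
#[derive(Module, Debug)]
struct Net<B: Backend> {
    fc1: Linear<B>,
}

fn load_pretrained<B: Backend>(device: &B::Device) -> Net<B> {
    // Decode the .pt file into a record matching Net's fields.
    let record = PyTorchFileRecorder::<FullPrecisionSettings>::default()
        .load(LoadArgs::new("weights/model.pt".into()), device)
        .expect("failed to decode PyTorch weights");

    // Build the model, then replace its parameters with the imported weights.
    let model = Net {
        fc1: LinearConfig::new(784, 10).init(device),
    };
    model.load_record(record)
}
```
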
ONNX Contributor Resources