mirror of https://github.com/tracel-ai/burn.git
# Importing Models
The Burn project supports the import of models from various frameworks, emphasizing efficiency and compatibility. Currently, it handles two primary model formats:
- **ONNX**: Facilitates direct import, ensuring the model's performance and structure are maintained.
- **PyTorch**: Enables the loading of PyTorch model weights into Burn's native model architecture, ensuring seamless integration.
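For ONNX models, import typically happens at build time: `burn-import` generates native Rust source from the ONNX graph. Below is a minimal `build.rs` sketch; the model path `src/model/my_model.onnx` is a placeholder, and it assumes `burn-import` is declared under `[build-dependencies]` in `Cargo.toml`.

```rust
// build.rs — code-generates a Burn model from an ONNX file at build time.
// "src/model/my_model.onnx" is a placeholder path for illustration.
use burn_import::onnx::ModelGen;

fn main() {
    ModelGen::new()
        .input("src/model/my_model.onnx") // path to the ONNX model
        .out_dir("model/")                // output dir under OUT_DIR
        .run_from_script();
}
```

The generated module can then be included from the crate's source (e.g. via `include!` of the file emitted under `OUT_DIR`), giving a type-safe Burn model with the weights embedded or loaded from a record.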
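For PyTorch, weights from a saved `.pt` state dict are loaded into an equivalent Burn module at runtime via a file recorder. The sketch below assumes a hand-written Burn module `Net` whose `Module` derive produced a `NetRecord` type matching the PyTorch parameter names; `Net`, `NetRecord`, and `"model.pt"` are placeholders.

```rust
// Sketch: load PyTorch weights into a Burn module.
// `Net`/`NetRecord` and "model.pt" are hypothetical names for illustration.
use burn::record::{FullPrecisionSettings, Recorder};
use burn_import::pytorch::PyTorchFileRecorder;

fn load<B: burn::tensor::backend::Backend>(device: &B::Device) -> Net<B> {
    // Deserialize the .pt file into a record for the module.
    let record: NetRecord<B> = PyTorchFileRecorder::<FullPrecisionSettings>::new()
        .load("model.pt".into(), device)
        .expect("failed to load PyTorch weights");
    // Instantiate the module and apply the loaded parameters.
    Net::new(device).load_record(record)
}
```

The Burn module's field names must line up with the keys in the PyTorch state dict; mismatches can be remapped through the recorder's load arguments.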
## Contribution

Interested in contributing to `burn-import`? Check out our [development guide](DEVELOPMENT.md) for more information.