burn/crates/onnx-ir
Mathias Insley 37822fdb51
Feat/Split ONNX Import (#2568)
* Add a Split node to burn-import

* Register operation in to_burn

* Create Split config function

* Dimension inference for split outputs

* Remove unnecessary f-strings from squeeze

* ONNX file for Split and script that generates it

* Add Split node to name function in Node impl

* Update supported ONNX ops list

* Add codegen test

* Include split onnx model in build

* Split values should be taken from inputs, make sure only num_outputs or split is provided

* Codegen should make a Vec<Tensor<B, D>>

* Fix up split codegen

* Remove panic if split is not provided

* Add basic split test

* Keep the number of output tensor sizes fixed

* Clippy fixes

* Update supported ops list

* Cleanup build errors

* Update onnx test now that return is tuple of static size

* Potential workaround to constant int node

* Change num_outputs to split_size in SplitConfig to follow burn implementation

* Convert from ONNX graph node to SplitConfig properly

* Revert attempt at sidestepping constant int node issue

* Copy override logic from @jameshiew

* Fill in placeholder docstrings

* Remove initializer helpers

* Move code for generating uninitialized tensors into burn-import

---------

Co-authored-by: James Hiew <james@hiew.net>
2025-02-17 10:28:36 -05:00
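The bullets above describe deriving a fixed number of output tensor sizes from a `split_size` along an axis. The following is a hypothetical sketch (not the actual burn-import code) of how that computation could work, mirroring ONNX Split semantics where the last chunk may be smaller than the rest:

```rust
// Sketch only: `split_sections` is an illustrative helper, not a real
// function from onnx-ir or burn-import. Given the size of the split axis
// and a per-chunk `split_size`, it returns one entry per output tensor.
fn split_sections(dim_size: usize, split_size: usize) -> Vec<usize> {
    assert!(split_size > 0, "split_size must be positive");
    // Full-sized chunks first...
    let mut sections = vec![split_size; dim_size / split_size];
    // ...then an uneven trailing chunk, if the axis is not evenly divisible.
    let remainder = dim_size % split_size;
    if remainder > 0 {
        sections.push(remainder);
    }
    sections
}

fn main() {
    // A dimension of 10 split into chunks of 3 yields sections [3, 3, 3, 1].
    assert_eq!(split_sections(10, 3), vec![3, 3, 3, 1]);
    // An evenly divisible axis produces equal chunks.
    assert_eq!(split_sections(8, 4), vec![4, 4]);
    println!("ok");
}
```

Because the section count is known once the axis size and `split_size` are fixed, codegen can emit a statically sized tuple of output tensors, which matches the "Keep the number of output tensor sizes fixed" change above.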
src        Feat/Split ONNX Import (#2568) · 2025-02-17 10:28:36 -05:00
Cargo.toml enable doc_auto_cfg to show feature-req-hint in docs.rs (#2271) · 2024-10-09 09:15:02 -04:00
README.md  Separating ONNX parsing from burn-import (#1921) · 2024-07-02 15:17:44 -05:00
build.rs   Separating ONNX parsing from burn-import (#1921) · 2024-07-02 15:17:44 -05:00

README.md

ONNX-IR

A pure Rust ONNX parser that creates an intermediate representation useful for generating code in any ML/DL framework.

For a full list of currently supported operators, please check here.

To see how to use this for generating burn graphs, see here.
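To give a feel for what an operator-level intermediate representation carries, here is an illustrative sketch. The names below (`IrOp`, `IrNode`) are hypothetical and are not onnx-ir's actual types; they only show the kind of information a consumer such as burn-import needs for code generation:

```rust
// Hypothetical IR types for illustration only -- not onnx-ir's real API.
#[derive(Debug, Clone, PartialEq)]
enum IrOp {
    // An op variant bundles its parsed configuration, e.g. a Split's
    // axis and chunk size.
    Split { axis: usize, split_size: usize },
    Squeeze { axes: Vec<usize> },
}

#[derive(Debug, Clone)]
struct IrNode {
    name: String,
    op: IrOp,
    inputs: Vec<String>,  // names of input tensors
    outputs: Vec<String>, // one name per generated output tensor
}

fn main() {
    let node = IrNode {
        name: "split1".into(),
        op: IrOp::Split { axis: 0, split_size: 2 },
        inputs: vec!["x".into()],
        outputs: vec!["y0".into(), "y1".into()],
    };
    // A backend walks nodes like this and emits framework-specific code;
    // the output count is known statically from the IR.
    assert_eq!(node.outputs.len(), 2);
    println!("{:?}", node.op);
}
```

A framework-agnostic IR like this is what lets the parsing live in its own crate while burn-import (or any other backend) handles code generation separately.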