mirror of https://github.com/tracel-ai/burn.git
* Add a Split node to burn-import
* Register operation in to_burn
* Create Split config function
* Dimension inference for split outputs
* Remove unnecessary f-strings from squeeze
* ONNX file for Split and script that generates it
* Add Split node to name function in Node impl
* Update supported onnx ops list
* Add codegen test
* Include split onnx model in build
* Split values should be taken from inputs; make sure only num_outputs or split is provided
* Codegen should make a Vec<Tensor<B, D>>
* Fix up split codegen
* Remove panic if split is not provided
* Add basic split test
* Keep the number of output tensor sizes fixed
* Clippy fixes
* Update supported ops list
* Clean up build errors
* Update onnx test now that return is a tuple of static size
* Potential workaround to constant int node
* Change num_outputs to split_size in SplitConfig to follow burn implementation
* Convert from ONNX graph node to SplitConfig properly
* Revert attempt at sidestepping constant int node issue
* Copy override logic from @jameshiew
* Fill in placeholder docstrings
* Remove initializer helpers
* Move code for generating uninitialized tensors into burn-import

---------

Co-authored-by: James Hiew <james@hiew.net>
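The change list above mentions two rules for the Split configuration: only one of `num_outputs` or `split` may be provided, and the config field was renamed from `num_outputs` to `split_size` to follow the burn implementation. The snippet below is a minimal sketch of that validation rule only; the struct name `SplitConfig` follows the wording of the list, but the field names, constructor, and error type here are assumptions for illustration, not burn-import's actual code.

```rust
// Hypothetical sketch of a Split configuration and its validation rule.
// Real burn-import types may differ; this only illustrates the constraint
// "make sure only num_outputs or split is provided" from the change list.
#[derive(Debug, Clone)]
pub struct SplitConfig {
    /// Axis along which the input tensor is split.
    pub axis: usize,
    /// Uniform chunk size, derived from the ONNX `num_outputs` attribute.
    pub split_size: Option<usize>,
    /// Explicit per-output sizes, taken from the optional `split` input.
    pub split_sizes: Option<Vec<usize>>,
}

impl SplitConfig {
    /// Builds the config, rejecting the case where both (or neither) of the
    /// two split specifications are present.
    pub fn new(
        axis: usize,
        split_size: Option<usize>,
        split_sizes: Option<Vec<usize>>,
    ) -> Result<Self, String> {
        match (split_size.is_some(), split_sizes.is_some()) {
            (true, true) => Err("only one of `num_outputs` or `split` may be provided".into()),
            (false, false) => Err("either `num_outputs` or `split` must be provided".into()),
            _ => Ok(Self { axis, split_size, split_sizes }),
        }
    }
}

fn main() {
    // Uniform split into chunks of size 2 along axis 0 is accepted.
    assert!(SplitConfig::new(0, Some(2), None).is_ok());
    // Providing both specifications, or neither, is rejected.
    assert!(SplitConfig::new(0, Some(2), Some(vec![1, 3])).is_err());
    assert!(SplitConfig::new(0, None, None).is_err());
}
```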
Directory listing:

* protos
* coalesce.rs
* dim_inference.rs
* from_onnx.rs
* ir.rs
* lib.rs
* node_remap.rs
* proto_conversion.rs
* util.rs
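Among the files listed above, `dim_inference.rs` handles dimension inference for imported nodes. As a rough sketch of what "dimension inference for split outputs" involves, the helper below computes per-output shapes for a Split: every output keeps the input rank, and only the size along the split axis changes. The function name and signature are hypothetical, not the crate's actual API.

```rust
/// Illustrative only: computes the shapes of Split outputs for a given input
/// shape. `splits` holds the per-output sizes along `axis`, either taken from
/// the ONNX `split` input or derived from a uniform chunk size.
fn split_output_shapes(input: &[usize], axis: usize, splits: &[usize]) -> Vec<Vec<usize>> {
    assert!(axis < input.len(), "split axis out of range");
    assert_eq!(
        splits.iter().sum::<usize>(),
        input[axis],
        "split sizes must add up to the input size along the split axis"
    );

    splits
        .iter()
        .map(|&size| {
            // Each output shape is the input shape with the split axis resized.
            let mut shape = input.to_vec();
            shape[axis] = size;
            shape
        })
        .collect()
}

fn main() {
    // A [6, 4] tensor split into chunks of 2 along axis 0 yields three [2, 4] outputs.
    let shapes = split_output_shapes(&[6, 4], 0, &[2, 2, 2]);
    assert_eq!(shapes, vec![vec![2, 4], vec![2, 4], vec![2, 4]]);
}
```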