Burn


This library strives to serve as a comprehensive deep learning framework written in Rust, offering exceptional flexibility. Our objective is to cater to both researchers and practitioners by simplifying the process of experimenting, training, and deploying models.

Features

  • Customizable, intuitive and user-friendly neural network module 🔥
  • Comprehensive training tools, including metrics, logging, and checkpointing 📈
  • Versatile Tensor crate equipped with pluggable backends 🔧
    • Torch backend, supporting both CPU and GPU 🚀
    • Ndarray backend with no_std compatibility, ensuring universal platform adaptability 👌
    • WebGPU backend, offering cross-platform, browser-inclusive, GPU-based computations 🌐
    • Candle backend 🕯️
    • Autodiff backend that enables differentiability across all backends 🌟
  • Dataset crate containing a diverse range of utilities and sources 📚
  • Import crate that simplifies the integration of pretrained models 📦

Get Started

The Burn Book 🔥

To begin working effectively with Burn, it is crucial to understand its key components and philosophy. For detailed examples and explanations covering every facet of the framework, please refer to The Burn Book 🔥.

Pre-trained Models

We keep an updated and curated list of models and examples built with Burn; see the burn-rs/models repository for more details.

Examples

Here is a code snippet showing how intuitive the framework is to use, where we declare a position-wise feed-forward module along with its forward pass.

use burn::module::Module;
use burn::nn::{Dropout, Linear, GELU};
use burn::tensor::{backend::Backend, Tensor};

#[derive(Module, Debug)]
pub struct PositionWiseFeedForward<B: Backend> {
    linear_inner: Linear<B>,
    linear_outer: Linear<B>,
    dropout: Dropout,
    gelu: GELU,
}

impl<B: Backend> PositionWiseFeedForward<B> {
    pub fn forward<const D: usize>(&self, input: Tensor<B, D>) -> Tensor<B, D> {
        let x = self.linear_inner.forward(input);
        let x = self.gelu.forward(x);
        let x = self.dropout.forward(x);

        self.linear_outer.forward(x)
    }
}

For more practical insights, you can clone the repository and experiment with the provided examples.

Supported Platforms

Burn-ndarray Backend

Option CPU GPU Linux macOS Windows Android iOS WASM
Pure Rust Yes No Yes Yes Yes Yes Yes Yes
Accelerate Yes No No Yes No No Yes No
Netlib Yes No Yes Yes Yes No No No
OpenBLAS Yes No Yes Yes Yes Yes Yes No

Burn-tch Backend

Option CPU GPU Linux macOS Windows Android iOS WASM
CPU Yes No Yes Yes Yes Yes Yes No
CUDA No Yes Yes No Yes No No No
MPS No Yes No Yes No No No No
Vulkan Yes Yes Yes Yes Yes Yes No No

Burn-wgpu Backend

Option CPU GPU Linux macOS Windows Android iOS WASM
Metal No Yes No Yes No No Yes No
Vulkan Yes Yes Yes Yes Yes Yes Yes No
OpenGL No Yes Yes Yes Yes Yes Yes No
WebGPU No Yes No No No No No Yes
DX11/DX12 No Yes No No Yes No No No

Support for no_std

Burn, including its burn-ndarray backend, can work in a no_std environment for inference, provided alloc is available. To accomplish this, simply turn off the default features in burn and burn-ndarray (the minimum requirement for running inference). You can find a reference example in burn-no-std-tests.
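As a sketch, disabling the default features in your Cargo.toml might look like the fragment below; the version number is illustrative, so pin whichever release you actually use.

```toml
[dependencies]
burn = { version = "0.10", default-features = false }
burn-ndarray = { version = "0.10", default-features = false }
```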

The burn-core and burn-tensor crates also support no_std with alloc. These crates can be directly added as dependencies if necessary, as they are reexported by the burn crate.

Please be aware that when using the no_std mode, a random seed will be generated at build time if one hasn't been set using the Backend::seed method. Also, the spin::mutex::Mutex is used instead of std::sync::Mutex in this mode.

Contributing

Before contributing, please take a moment to review our code of conduct. It's also highly recommended to read our architecture document, which explains our architectural decisions. Please see more details in our contributing guide.

Disclaimer

Burn is currently in active development, and there will be breaking changes. While any resulting issues are likely to be easy to fix, there are no guarantees at this stage.

Sponsors

Thanks to all current sponsors 🙏.

smallstepman premAI-io

License

Burn is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Opening a pull request is assumed to signal agreement with these licensing terms.