Burn

This library strives to serve as a comprehensive deep learning framework written in Rust, offering exceptional flexibility. Our objective is to cater to both researchers and practitioners by simplifying the process of experimenting with, training, and deploying models.

Features

  • Customizable, intuitive and user-friendly neural network module 🔥
  • Comprehensive training tools, including metrics, logging, and checkpointing 📈
  • Versatile Tensor crate equipped with pluggable backends 🔧
    • Torch backend, supporting both CPU and GPU 🚀
    • Ndarray backend with no_std compatibility, ensuring universal platform adaptability 👌
    • WebGPU backend, offering cross-platform, browser-inclusive, GPU-based computations 🌐
    • Candle backend 🕯️
    • Autodiff backend that enables differentiability across all backends 🌟
  • Dataset crate containing a diverse range of utilities and sources 📚
  • Import crate that simplifies the integration of pretrained models 📦

Get Started

The Burn Book 🔥

To begin working effectively with burn, it is crucial to understand its key components and philosophy. For detailed examples and explanations covering every facet of the framework, please refer to The Burn Book 🔥.

Pre-trained Models

We keep an updated and curated list of models and examples built with Burn; see the burn-rs/models repository for more details.

Examples

Here is a code snippet showing how intuitive the framework is to use, where we declare a position-wise feed-forward module along with its forward pass.

use burn::module::Module;
use burn::nn::{Dropout, GELU, Linear};
use burn::tensor::backend::Backend;
use burn::tensor::Tensor;

#[derive(Module, Debug)]
pub struct PositionWiseFeedForward<B: Backend> {
    linear_inner: Linear<B>,
    linear_outer: Linear<B>,
    dropout: Dropout,
    gelu: GELU,
}

impl<B: Backend> PositionWiseFeedForward<B> {
    /// Applies the feed-forward transformation to a tensor of any rank `D`,
    /// preserving its shape.
    pub fn forward<const D: usize>(&self, input: Tensor<B, D>) -> Tensor<B, D> {
        let x = self.linear_inner.forward(input);
        let x = self.gelu.forward(x);
        let x = self.dropout.forward(x);

        self.linear_outer.forward(x)
    }
}

For more practical insights, you can clone the repository and experiment with the examples it contains.

Supported Platforms

Burn-ndarray Backend

| Option     | CPU | GPU | Linux | MacOS | Windows | Android | iOS | WASM |
| ---------- | --- | --- | ----- | ----- | ------- | ------- | --- | ---- |
| Pure Rust  | Yes | No  | Yes   | Yes   | Yes     | Yes     | Yes | Yes  |
| Accelerate | Yes | No  | No    | Yes   | No      | No      | Yes | No   |
| Netlib     | Yes | No  | Yes   | Yes   | Yes     | No      | No  | No   |
| Openblas   | Yes | No  | Yes   | Yes   | Yes     | Yes     | Yes | No   |

Burn-tch Backend

| Option | CPU | GPU | Linux | MacOS | Windows | Android | iOS | WASM |
| ------ | --- | --- | ----- | ----- | ------- | ------- | --- | ---- |
| CPU    | Yes | No  | Yes   | Yes   | Yes     | Yes     | Yes | No   |
| CUDA   | No  | Yes | Yes   | No    | Yes     | No      | No  | No   |
| MPS    | No  | Yes | No    | Yes   | No      | No      | No  | No   |
| Vulkan | Yes | Yes | Yes   | Yes   | Yes     | Yes     | No  | No   |

Burn-wgpu Backend

| Option    | CPU | GPU | Linux | MacOS | Windows | Android | iOS | WASM |
| --------- | --- | --- | ----- | ----- | ------- | ------- | --- | ---- |
| Metal     | No  | Yes | No    | Yes   | No      | No      | Yes | No   |
| Vulkan    | Yes | Yes | Yes   | Yes   | Yes     | Yes     | Yes | No   |
| OpenGL    | No  | Yes | Yes   | Yes   | Yes     | Yes     | Yes | No   |
| WebGpu    | No  | Yes | No    | No    | No      | No      | No  | Yes  |
| Dx11/Dx12 | No  | Yes | No    | No    | Yes     | No      | No  | No   |

Support for no_std

Burn, including its burn-ndarray backend, can work in a no_std environment for inference, provided alloc is available. To accomplish this, simply disable the default features of burn and burn-ndarray (which is the minimum requirement for running inference). You can find a reference example in burn-no-std-tests.
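As a sketch, the Cargo.toml entries below disable the default features of both crates; the version number shown is illustrative, so substitute the release you are actually using:

```toml
[dependencies]
# Turning off default features strips the std-dependent functionality,
# leaving the alloc-based inference path for no_std targets.
burn = { version = "0.10", default-features = false }
burn-ndarray = { version = "0.10", default-features = false }
```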

The burn-core and burn-tensor crates also support no_std with alloc. These crates can be directly added as dependencies if necessary, as they are reexported by the burn crate.

Please be aware that when using the no_std mode, a random seed will be generated at build time if one hasn't been set using the Backend::seed method. Also, the spin::mutex::Mutex is used instead of std::sync::Mutex in this mode.

Contributing

Before contributing, please take a moment to review our code of conduct. We also highly recommend reading our architecture document, which explains our architectural decisions. For further details, please see our contributing guide.

Disclaimer

Burn is currently in active development, and there will be breaking changes. While any resulting issues are likely to be easy to fix, there are no guarantees at this stage.

Sponsors

Thanks to all current sponsors 🙏.

smallstepman premAI-io

License

Burn is distributed under the terms of both the MIT license and the Apache License (Version 2.0). See LICENSE-APACHE and LICENSE-MIT for details. Opening a pull request is assumed to signal agreement with these licensing terms.