Chore/release (#1031)

This commit is contained in:
Nathaniel Simard 2023-12-01 14:33:28 -05:00 committed by GitHub
parent 4192490b88
commit ab1b5890f5
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
55 changed files with 411 additions and 303 deletions

View File

@ -3,29 +3,29 @@ name: publish
on:
push:
tags:
- 'v*'
- "v*"
jobs:
publish-burn-derive:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
with:
crate: burn-derive
secrets: inherit
publish-burn-dataset:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
with:
crate: burn-dataset
secrets: inherit
publish-burn-common:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
with:
crate: burn-common
secrets: inherit
publish-burn-compute:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-common
with:
@ -33,13 +33,13 @@ jobs:
secrets: inherit
publish-burn-tensor-testgen:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
with:
crate: burn-tensor-testgen
secrets: inherit
publish-burn-tensor:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-tensor-testgen
- publish-burn-common
@ -47,8 +47,17 @@ jobs:
crate: burn-tensor
secrets: inherit
publish-burn-fusion:
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-tensor
- publish-burn-common
with:
crate: burn-fusion
secrets: inherit
publish-burn-autodiff:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-tensor
- publish-burn-tensor-testgen
@ -58,7 +67,7 @@ jobs:
secrets: inherit
publish-burn-tch:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-tensor
- publish-burn-autodiff
@ -67,7 +76,7 @@ jobs:
secrets: inherit
publish-burn-ndarray:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-tensor
- publish-burn-autodiff
@ -77,7 +86,7 @@ jobs:
secrets: inherit
publish-burn-wgpu:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-tensor
- publish-burn-compute
@ -89,7 +98,7 @@ jobs:
secrets: inherit
publish-burn-candle:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-tensor
- publish-burn-autodiff
@ -99,7 +108,7 @@ jobs:
secrets: inherit
publish-burn-core:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-dataset
- publish-burn-common
@ -115,7 +124,7 @@ jobs:
secrets: inherit
publish-burn-train:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-core
with:
@ -123,7 +132,7 @@ jobs:
secrets: inherit
publish-burn:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn-core
- publish-burn-train
@ -132,7 +141,7 @@ jobs:
secrets: inherit
publish-burn-import:
uses: burn-rs/burn/.github/workflows/publish-template.yml@main
uses: tracel-ai/burn/.github/workflows/publish-template.yml@main
needs:
- publish-burn
with:

View File

@ -1,21 +1,29 @@
<!--
<!--
TODO: Add the following sections:
# Tenets
# Design Philosophy
# Design Philosophy
-->
# Architecture
This file documents most major architectural decisions with the reasoning behind them.
__Sections__
**Sections**
* [Module](#module)
* [Optimization](#optimization)
* [Serialization](#serialization)
* [Tensor](#tensor)
* [Backend](#backend)
* [Autodiff](#autodiff)
- [Architecture](#architecture)
- [Module](#module)
- [Optimization](#optimization)
- [Constraints](#constraints)
- [Solution](#solution)
- [Serialization](#serialization)
- [Constraints](#constraints-1)
- [Solution](#solution-1)
- [Pros](#pros)
- [Cons](#cons)
- [Compatibility](#compatibility)
- [Tensor](#tensor)
- [Backend](#backend)
- [Autodiff](#autodiff)
## Module
@ -30,13 +38,13 @@ Optimization is normally done with gradient descent (or ascent for reinforcement
#### Constraints
1. __Users should be able to control what is optimized.__
Modules can contain anything for maximum flexibility, but not everything needs to be optimized.
2. __Optimizers should have a serializable state that is updated during training.__
Many optimizers keep track of previous gradients to implement some form of momentum.
However, the state can be anything, not just tensors, allowing for easy implementation of any kind of optimizer.
3. __The learning rate can be updated during training.__
Learning rate schedulers are often used during training and should be considered as a key aspect.
1. **Users should be able to control what is optimized.**
Modules can contain anything for maximum flexibility, but not everything needs to be optimized.
2. **Optimizers should have a serializable state that is updated during training.**
Many optimizers keep track of previous gradients to implement some form of momentum.
However, the state can be anything, not just tensors, allowing for easy implementation of any kind of optimizer.
3. **The learning rate can be updated during training.**
Learning rate schedulers are often used during training and should be considered as a key aspect.
#### Solution
@ -54,12 +62,12 @@ The `Module` trait has two ways to navigate over parameters.
The first one is the `map` function, which returns `Self` and makes it easy to implement any transformation and mutate all parameters.
The second one is the `visit` function, which has a similar signature but does not mutate the parameter tensors.
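As a rough illustration of these two traversal styles, the sketch below shows what a mapper and a visitor could look like. The trait and method names are simplified placeholders, not the exact burn-core definitions.
```rust
use burn_tensor::backend::Backend;
use burn_tensor::Tensor;

/// Sketch of a mapper: receives each parameter tensor by value and returns a
/// (possibly transformed) tensor, letting `map` rebuild the module as `Self`.
pub trait ParamMapperSketch<B: Backend> {
    fn map<const D: usize>(&mut self, tensor: Tensor<B, D>) -> Tensor<B, D>;
}

/// Sketch of a visitor: only reads each parameter tensor, without mutating it.
pub trait ParamVisitorSketch<B: Backend> {
    fn visit<const D: usize>(&mut self, tensor: &Tensor<B, D>);
}
```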
__SimpleOptimizer__
**SimpleOptimizer**
The `SimpleOptimizer` has two major assumptions:
1. The state of the optimizer is linked to each parameter.
In other words, each parameter has its own optimizer state, decoupled from the other parameters.
In other words, each parameter has its own optimizer state, decoupled from the other parameters.
2. The state of the optimizer implements `Record`, `Clone`, and has a `'static` lifetime.
The benefits of those assumptions materialize in simplicity with little loss in flexibility.
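A minimal sketch of an optimizer trait built around these two assumptions is shown below. The names are illustrative; the actual `SimpleOptimizer` trait in burn-core also carries the parameter id and richer bounds on the state.
```rust
use burn_tensor::backend::Backend;
use burn_tensor::Tensor;

/// Sketch of a per-parameter optimizer: one state per parameter, with a
/// `'static`, clonable state that can be recorded between training runs.
pub trait SimpleOptimizerSketch<B: Backend>: Send + Sync {
    /// State kept for a single parameter, e.g. a momentum tensor.
    type State<const D: usize>: Clone + Send + Sync + 'static;

    /// Performs one update for a single parameter tensor and its gradient.
    fn step<const D: usize>(
        &self,
        learning_rate: f64,
        tensor: Tensor<B, D>,
        grad: Tensor<B, D>,
        state: Option<Self::State<D>>,
    ) -> (Tensor<B, D>, Option<Self::State<D>>);
}
```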
@ -67,7 +75,7 @@ The state associative type is also generic over the dimension, making it extreme
To wrap a simple optimizer into the more general `Optimizer` trait, the `OptimizerAdaptor` struct is used.
__OptimizerAdaptor__
**OptimizerAdaptor**
The `OptimizerAdaptor` is a simple struct composed of a `SimpleOptimizer` and a hashmap with all records associated with each parameter ID.
When performing an optimization step, the adaptor handles the following:
@ -75,7 +83,7 @@ When performing an optimization step, the adaptor handles the following:
1. Updates each parameter tensor in the given module using the `Module::map` function.
2. Checks if a gradient for the current tensor exists.
3. Makes sure that the gradient, the tensor, and the optimizer state associated with the current parameter are on the same device.
The device can be different if the state is loaded from disk to restart training.
The device can be different if the state is loaded from disk to restart training.
4. Performs the simple optimizer step using the inner tensor since the operations done by the optimizer should not be tracked in the autodiff graph.
5. Updates the state for the current parameter and returns the updated tensor, making sure it's properly registered into the autodiff graph if gradients are marked as required.
@ -89,23 +97,23 @@ Despite appearing as a simple feature, it involves numerous constraints that req
#### Constraints
1. __Users should be able to declare the precision of the model to be saved, independent of the backend in use.__
1. **Users should be able to declare the precision of the model to be saved, independent of the backend in use.**
The modules should not be duplicated in RAM in another precision to support this.
Conversion should be done lazily during (de)serialization.
The modules should not be duplicated in RAM in another precision to support this.
Conversion should be done lazily during (de)serialization.
2. __Users should be able to add any field to a module, even fields that are not serializable.__
2. **Users should be able to add any field to a module, even fields that are not serializable.**
This can include constants, database connections, other module references, or any other information.
Only parameters should be serialized since the structure of the module itself should be encapsulated with module configurations (hyper-parameters).
This can include constants, database connections, other module references, or any other information.
Only parameters should be serialized since the structure of the module itself should be encapsulated with module configurations (hyper-parameters).
3. __Users should be able to declare the format in which the module should be saved.__
3. **Users should be able to declare the format in which the module should be saved.**
This can involve saving to a compressed JSON file or directly to bytes in memory for `no-std` environments.
This can involve saving to a compressed JSON file or directly to bytes in memory for `no-std` environments.
4. __Users should be able to create a module with its saved parameters without having to initialize the module first.__
4. **Users should be able to create a module with its saved parameters without having to initialize the module first.**
This will avoid unnecessary module initialization and tensor loading, resulting in reduced cold start when dealing with inference.
This will avoid unnecessary module initialization and tensor loading, resulting in reduced cold start when dealing with inference.
In addition to all of these constraints, the solution should be easy to use.
@ -143,20 +151,20 @@ In addition, you can extend the current system with your own `Recorder` and `Pre
##### Pros
* All constraints are respected.
* The code is simple and easy to maintain, with very few conditional statements.
It is just recursive data structures, where all the complexity is handled by the framework in primitive implementations.
* The user API is simple and small, with only two derives (`Record` and `Module`) and no additional attributes.
* Users can create their own `Module` and `Record` primitive types, which gives them the flexibility to control how their data is serialized without having to fork the framework.
- All constraints are respected.
- The code is simple and easy to maintain, with very few conditional statements.
It is just recursive data structures, where all the complexity is handled by the framework in primitive implementations.
- The user API is simple and small, with only two derives (`Record` and `Module`) and no additional attributes.
- Users can create their own `Module` and `Record` primitive types, which gives them the flexibility to control how their data is serialized without having to fork the framework.
##### Cons
* There are more types, but most of them are automatically generated and single-purpose, so users don't need to interact with them for common use cases.
However, they can do so if necessary.
* When instantiating a new record manually, each field must be set to something, even if the type itself is `()`, which represents no value.
Since the code generation step uses associative types, it doesn't know that a field type is actually nothing.
Creating a record manually without using the generated function `into_record` or loading it from a file is only useful to load a set of parameters into a module from an arbitrary source.
Using the record may not be the optimal solution to this problem, and another API could be created in the future.
- There are more types, but most of them are automatically generated and single-purpose, so users don't need to interact with them for common use cases.
However, they can do so if necessary.
- When instantiating a new record manually, each field must be set to something, even if the type itself is `()`, which represents no value.
Since the code generation step uses associated types, it doesn't know that a field type is actually nothing.
Creating a record manually without using the generated function `into_record` or loading it from a file is only useful to load a set of parameters into a module from an arbitrary source.
Using the record may not be the optimal solution to this problem, and another API could be created in the future.
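To make the resulting workflow concrete, here is a small sketch of saving a module through a recorder with an explicit precision setting. It follows the burn-core record API described above, but the exact recorder names and signatures may differ between versions.
```rust
use std::path::PathBuf;

use burn::module::Module;
use burn::record::{BinGzFileRecorder, FullPrecisionSettings, Recorder};
use burn::tensor::backend::Backend;

/// Saves a module's parameters to a compressed binary file. Precision is
/// chosen here, lazily at serialization time, not by duplicating the module.
fn save_model<B: Backend, M: Module<B>>(model: M, path: PathBuf) {
    let recorder = BinGzFileRecorder::<FullPrecisionSettings>::new();
    recorder
        .record(model.into_record(), path)
        .expect("should be able to save the model record");
}
```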
##### Compatibility
@ -171,34 +179,34 @@ The tensor API abstracts away backend implementation details and focuses on usab
To make it as easy as possible to use, there is only one tensor type, which is different from multiple tensor and deep learning crates in Rust.
Generic parameters are used instead to specialize the tensor type.
* __B: Backend:__
The first argument is the backend on which the tensor implementation lies.
* __const D: usize:__
The second argument is the dimensionality of the tensor.
* __K: TensorKind:__
The third argument is the tensor kind, which can be either Float, Int or Bool.
By default, the tensor kind is set to Float, so for most tensors, the kind argument is not necessary.
- **B: Backend:**
The first argument is the backend on which the tensor implementation lies.
- **const D: usize:**
The second argument is the dimensionality of the tensor.
- **K: TensorKind:**
The third argument is the tensor kind, which can be either Float, Int or Bool.
By default, the tensor kind is set to Float, so for most tensors, the kind argument is not necessary.
Having one struct for tensors reduces the complexity of the tensor API, which also means less duplicated documentation to write and maintain.
Tensors are thread-safe, which means that you can send a tensor to another thread, and everything will work, including auto-differentiation.
Note that there are no in-place tensor operations since all tensor operations take owned tensors as parameters, which makes it possible to mutate them.
Tensors can be shared simply by cloning them, but if there is only one reference to a tensor, the backend implementation is free to reuse the tensor's allocated data.
For more information about how it is done, you can have a look at this [blog post](https://burn-rs.github.io/blog/burn-rusty-approach-to-tensor-handling).
For more information about how it is done, you can have a look at this [blog post](https://burn.dev/blog/burn-rusty-approach-to-tensor-handling).
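The ownership-based API can be sketched as follows: operations consume owned tensors, and cloning is the cheap way to keep a value around, since the data is shared until a reuse opportunity appears.
```rust
use burn_tensor::backend::Backend;
use burn_tensor::Tensor;

/// Adds two tensors while keeping `x` usable afterwards. The addition consumes
/// its operands, so `x` is cloned (cheaply, the data is shared) and `y` is moved.
fn add_and_keep<B: Backend>(x: Tensor<B, 2>, y: Tensor<B, 2>) -> (Tensor<B, 2>, Tensor<B, 2>) {
    let sum = x.clone() + y;
    (x, sum)
}
```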
#### Backend
The Backend trait abstracts multiple things:
* Device type
* Float tensor type
* Bool tensor type
* Int tensor type
* Float element type
* Int element type
* Float tensor operations (kernels)
* Int tensor operations (kernels)
* Bool tensor operations (kernels)
- Device type
- Float tensor type
- Bool tensor type
- Int tensor type
- Float element type
- Int element type
- Float tensor operations (kernels)
- Int tensor operations (kernels)
- Bool tensor operations (kernels)
Even though having one type for tensors is convenient for the tensor API, it can be cumbersome when implementing a backend.
Therefore, backends can decide, through associated types, what types they want to use for their int, float, and bool tensors.
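An editorial skeleton of this abstraction is given below; the real `Backend` trait in burn-tensor has additional associated types and the float, int, and bool operation traits as supertraits.
```rust
/// Skeleton of the associated types a backend provides. Names are illustrative.
pub trait BackendSketch: Clone + Send + Sync + 'static {
    /// Device handle, e.g. a CPU or a specific GPU.
    type Device: Clone + Default;

    /// Element types used by float and int tensors.
    type FloatElem;
    type IntElem;

    /// Backend-specific tensor storage, one family per tensor kind.
    type FloatTensorPrimitive<const D: usize>: Clone + Send;
    type IntTensorPrimitive<const D: usize>: Clone + Send;
    type BoolTensorPrimitive<const D: usize>: Clone + Send;
}
```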
@ -219,4 +227,4 @@ Note that Burn is a dynamic graph deep learning framework, so backends may have
As of now, there is only one backend decorator that supports autodiff.
It follows the decorator pattern, making any backend differentiable.
However, the `AutodiffBackend` trait abstracts how gradients are calculated, and other approaches to autodiff might be added later.
For more information about how the current autodiff backend works, you can read this [blog post](https://burn-rs.github.io/blog/burn-rusty-approach-to-tensor-handling).
For more information about how the current autodiff backend works, you can read this [blog post](https://burn.dev/blog/burn-rusty-approach-to-tensor-handling).
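As a usage sketch, a function generic over `AutodiffBackend` can compute gradients and hand back tensors on the inner, non-differentiable backend. Method names follow the public tensor API but may vary slightly across versions.
```rust
use burn_tensor::backend::AutodiffBackend;
use burn_tensor::Tensor;

/// Computes d(sum(x * x)) / dx; the returned gradient lives on the inner backend.
fn grad_of_square_sum<B: AutodiffBackend>(x: Tensor<B, 2>) -> Tensor<B::InnerBackend, 2> {
    let x = x.require_grad();
    let loss = (x.clone() * x.clone()).sum();
    let grads = loss.backward();
    x.grad(&grads).expect("a tracked tensor has a gradient")
}
```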

View File

@ -10,7 +10,7 @@ Here are some steps to guide you through the process of contributing to the Burn
### Step 1: Review the Issue Tickets
Before you start working on a contribution, please take a moment to look through the open issues in
the [issue tracker](https://github.com/burn-rs/burn/issues) for this project. This will give you an
the [issue tracker](https://github.com/tracel-ai/burn/issues) for this project. This will give you an
idea of what kind of work is currently being planned or is in progress.
### Step 2: Get Familiar with the Project Architecture
@ -63,10 +63,10 @@ the issue or issues that your changes address.
1. Install the following extensions:
* [rust-lang.rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer)
* [tamasfe.even-better-toml](https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml)
* [serayuzgur.crates](https://marketplace.visualstudio.com/items?itemName=serayuzgur.crates)
* [vadimcn.vscode-lldb](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb)
- [rust-lang.rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer)
- [tamasfe.even-better-toml](https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml)
- [serayuzgur.crates](https://marketplace.visualstudio.com/items?itemName=serayuzgur.crates)
- [vadimcn.vscode-lldb](https://marketplace.visualstudio.com/items?itemName=vadimcn.vscode-lldb)
2. Open the `Command Palette` with Ctrl+Shift+P or F1, type `LLDB: Generate Launch Configurations from Cargo.toml`, and select it. This will generate a file that should be saved as `.vscode/launch.json`.
@ -121,7 +121,7 @@ where `crate_name` is the name of the crate to publish
## Others
To bump for the next version, use this command:
To bump for the next version, use this command:
```
cargo set-version --bump minor

View File

@ -4,8 +4,8 @@
[![Discord](https://img.shields.io/discord/1038839012602941528.svg?color=7289da&&logo=discord)](https://discord.gg/uPEBbYYDB6)
[![Current Crates.io Version](https://img.shields.io/crates/v/burn.svg)](https://crates.io/crates/burn)
[![Documentation](https://img.shields.io/badge/docs-latest-blue)](https://burn.dev/docs/burn)
[![Test Status](https://github.com/burn-rs/burn/actions/workflows/test.yml/badge.svg)](https://github.com/burn-rs/burn/actions/workflows/test.yml)
[![CodeCov](https://codecov.io/gh/burn-rs/burn/branch/main/graph/badge.svg)](https://codecov.io/gh/burn-rs/burn)
[![Test Status](https://github.com/tracel-ai/burn/actions/workflows/test.yml/badge.svg)](https://github.com/tracel-ai/burn/actions/workflows/test.yml)
[![CodeCov](https://codecov.io/gh/tracel-ai/burn/branch/main/graph/badge.svg)](https://codecov.io/gh/tracel-ai/burn)
[![Rust Version](https://img.shields.io/badge/Rust-1.71.0+-blue)](https://releases.rs/docs/1.71.0)
![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)
@ -379,7 +379,7 @@ fn main() {
```
Of note, we plan to implement automatic gradient checkpointing based on compute bound and memory bound operations, which will work gracefully with the fusion backend to make your code run even faster during training, see [this issue](https://github.com/burn-rs/burn/issues/936).
Of note, we plan to implement automatic gradient checkpointing based on compute-bound and memory-bound operations, which will work gracefully with the fusion backend to make your code run even faster during training; see [this issue](https://github.com/tracel-ai/burn/issues/936).
See the [Fusion Backend README](./burn-fusion/README.md) for more details.
@ -456,7 +456,7 @@ Pre-trained Models 🤖
</summary>
<br />
We keep an updated and curated list of models and examples built with Burn, see the [burn-rs/models repository](https://github.com/burn-rs/models) for more details.
We keep an updated and curated list of models and examples built with Burn; see the [tracel-ai/models repository](https://github.com/tracel-ai/models) for more details.
Don't see the model you want? Don't hesitate to open an issue, and we may prioritize it.
Built a model using Burn and want to share it?
@ -504,9 +504,9 @@ You can ask your questions and share what you built with the community!
**Contributing**
Before contributing, please take a moment to review our
[code of conduct](https://github.com/burn-rs/burn/tree/main/CODE-OF-CONDUCT.md).
[code of conduct](https://github.com/tracel-ai/burn/tree/main/CODE-OF-CONDUCT.md).
It's also highly recommended to read our
[architecture document](https://github.com/burn-rs/burn/tree/main/ARCHITECTURE.md), which explains some of our architectural decisions.
[architecture document](https://github.com/tracel-ai/burn/tree/main/ARCHITECTURE.md), which explains some of our architectural decisions.
Refer to our [contributing guide](/CONTRIBUTING.md) for more details.
## Status

View File

@ -6,7 +6,7 @@ edition = "2021"
license = "MIT OR Apache-2.0"
name = "backend-comparison"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/backend-comparison"
repository = "https://github.com/tracel-ai/burn/tree/main/backend-comparison"
version = "0.11.0"
[features]

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "data"]
license = "MIT OR Apache-2.0"
name = "burn-autodiff"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-autodiff"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-autodiff"
version = "0.11.0"
[features]
@ -15,9 +15,9 @@ default = ["export_tests"]
export_tests = ["burn-tensor-testgen"]
[dependencies]
burn-common = {path = "../burn-common", version = "0.11.0" }
burn-tensor = {path = "../burn-tensor", version = "0.11.0", default-features = false }
burn-tensor-testgen = {path = "../burn-tensor-testgen", version = "0.11.0", optional = true}
burn-common = { path = "../burn-common", version = "0.11.0" }
burn-tensor = { path = "../burn-tensor", version = "0.11.0", default-features = false }
burn-tensor-testgen = { path = "../burn-tensor-testgen", version = "0.11.0", optional = true }
derive-new = {workspace = true}
spin = {workspace = true}
derive-new = { workspace = true }
spin = { workspace = true }

View File

@ -1,8 +1,8 @@
# Burn Autodiff
> [Burn](https://github.com/burn-rs/burn) autodiff backend
> [Burn](https://github.com/tracel-ai/burn) autodiff backend
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-autodiff.svg)](https://crates.io/crates/burn-autodiff)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-autodiff/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-autodiff/blob/master/README.md)
For now only first order reverse mode autodiff is supported.

View File

@ -5,7 +5,7 @@ with the WGPU backend. We will take the example of a common workflow in the deep
where we create a kernel to fuse multiple operations together. We will fuse a matmul kernel followed
by an addition and the ReLU activation function, which is commonly found in various models. All the
code can be found under the
[examples directory](https://github.com/burn-rs/burn/tree/main/examples/custom-wgpu-kernel).
[examples directory](https://github.com/tracel-ai/burn/tree/main/examples/custom-wgpu-kernel).
## Custom Backend Trait

View File

@ -11,7 +11,7 @@ version = "0.1.0"
edition = "2021"
[dependencies]
burn = { version = "0.10.0", features=["train", "wgpu"]}
burn = { version = "0.11.0", features=["train", "wgpu"]}
# Serialization
serde = "1"

View File

@ -1,6 +1,6 @@
# Learner
The [burn-train](https://github.com/burn-rs/burn/tree/main/burn-train) crate encapsulates multiple
The [burn-train](https://github.com/tracel-ai/burn/tree/main/burn-train) crate encapsulates multiple
utilities for training deep learning models. The goal of the crate is to provide users with a
well-crafted and flexible training loop, so that projects do not have to write such components from
the ground up. Most of the interactions with `burn-train` will be with the `LearnerBuilder` struct,

View File

@ -7,8 +7,8 @@ training loop instead of using a pre-built one in general.
Burn's got you covered!
We will start from the same example shown in the [basic workflow](./basic-workflow)
section, but without using the `Learner` struct.
We will start from the same example shown in the [basic workflow](./basic-workflow) section, but
without using the `Learner` struct.
```rust, ignore
#[derive(Config)]
@ -144,7 +144,8 @@ specifically `MNISTBatcher<B::InnerBackend>`; not using `model.valid()` will res
error.
You can find the code above available as an
[example](https://github.com/burn-rs/burn/tree/main/examples/custom-training-loop) for you to test.
[example](https://github.com/tracel-ai/burn/tree/main/examples/custom-training-loop) for you to
test.
## Custom Type

View File

@ -66,7 +66,9 @@ By running `cargo run`, you should now see the result of the addition:
```console
Tensor {
data: [[3.0, 4.0], [5.0, 6.0]],
data:
[[3.0, 4.0],
[5.0, 6.0]],
shape: [2, 2],
device: BestAvailable,
backend: "wgpu",
@ -81,7 +83,12 @@ example for deep learning applications.
## Running examples
Burn uses HuggingFace's [datasets](https://huggingface.co/docs/datasets/index) library to load
datasets. `datasets` is a Python library, and therefore, in order to run examples, you will need to
install Python. Follow the instructions on the [official website](https://www.python.org/downloads/)
to install Python on your computer.
Burn uses a [Python library by HuggingFace](https://huggingface.co/docs/datasets/index) to download
datasets. Therefore, in order to run examples, you will need to install Python. Follow the
instructions on the [official website](https://www.python.org/downloads/) to install Python on your
computer.
Many Burn examples are available in the [examples](https://github.com/tracel-ai/burn/tree/main/examples)
directory.
To run one, please refer to the example's README.md for the specific command to
execute.

View File

@ -124,8 +124,8 @@ fn main() {
For practical examples, please refer to:
1. [MNIST Inference Example](https://github.com/burn-rs/burn/tree/main/examples/onnx-inference)
2. [SqueezeNet Image Classification](https://github.com/burn-rs/models/tree/main/squeezenet-burn)
1. [MNIST Inference Example](https://github.com/tracel-ai/burn/tree/main/examples/onnx-inference)
2. [SqueezeNet Image Classification](https://github.com/tracel-ai/models/tree/main/squeezenet-burn)
By combining ONNX's robustness with Burn's unique features, you'll have the flexibility and power to
streamline your deep learning workflows like never before.
@ -133,4 +133,4 @@ streamline your deep learning workflows like never before.
---
> 🚨**Note**: `burn-import` crate is in active development and currently supports a
> [limited set of ONNX operators](https://github.com/burn-rs/burn/blob/main/burn-import/SUPPORTED-ONNX-OPS.md).
> [limited set of ONNX operators](https://github.com/tracel-ai/burn/blob/main/burn-import/SUPPORTED-ONNX-OPS.md).

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "data"]
license = "MIT OR Apache-2.0"
name = "burn-candle"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-candle"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-candle"
version = "0.11.0"
[features]

View File

@ -1,8 +1,8 @@
# Burn Candle Backend
This crate provides a backend for [Burn](https://github.com/burn-rs/burn) based on the [Candle](https://github.com/huggingface/candle) framework.
This crate provides a backend for [Burn](https://github.com/tracel-ai/burn) based on the [Candle](https://github.com/huggingface/candle) framework.
It is still in alpha stage, not all operations are supported. It is usable for some use cases, like for inference.
It is still in the alpha stage, and not all operations are supported. It is usable for some use cases, such as inference.
It can be used with CPU or CUDA. On macOS, computations can be accelerated using the Accelerate framework.

View File

@ -7,7 +7,7 @@ keywords = []
license = "MIT OR Apache-2.0"
name = "burn-common"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-common"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-common"
version = "0.11.0"
[features]
@ -25,7 +25,7 @@ getrandom = { workspace = true, features = ["js"] }
# ** Please make sure all dependencies support no_std when std is disabled **
rand = { workspace = true }
spin = { workspace = true } # using in place of use std::sync::Mutex;
spin = { workspace = true } # using in place of use std::sync::Mutex;
uuid = { workspace = true }
derive-new = { workspace = true }

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "data"]
license = "MIT OR Apache-2.0"
name = "burn-compute"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-compute"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-compute"
version = "0.11.0"
[features]

View File

@ -7,13 +7,13 @@ keywords = ["deep-learning", "machine-learning", "tensor", "pytorch", "ndarray"]
license = "MIT OR Apache-2.0"
name = "burn-core"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-core"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-core"
version = "0.11.0"
[features]
default = [
"std",
"dataset",
"burn-dataset?/default",
"burn-ndarray?/default",
"burn-candle?/default",
"burn-wgpu?/default",

View File

@ -1,15 +1,14 @@
# Burn Core
This crate should be used with [burn](https://github.com/burn-rs/burn).
This crate should be used with [burn](https://github.com/tracel-ai/burn).
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-core.svg)](https://crates.io/crates/burn-core)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-core/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-core/blob/master/README.md)
## Feature Flags
This crate can be used without the standard library (`#![no_std]`) with `alloc` by disabling
the default `std` feature.
* `std` - enables the standard library. Enabled by default.
* `experimental-named-tensor` - enables experimental named tensor.
- `std` - enables the standard library. Enabled by default.
- `experimental-named-tensor` - enables experimental named tensor.

View File

@ -1,7 +1,9 @@
/// Dataloader module.
#[cfg(feature = "dataset")]
pub mod dataloader;
/// Dataset module.
#[cfg(feature = "dataset")]
pub mod dataset {
pub use burn_dataset::*;
}

View File

@ -7,7 +7,7 @@ use alloc::vec::Vec;
pub use burn_derive::Module;
use burn_tensor::{Bool, Int, Tensor};
/// Type alias to `Vec<B::Device>` which supports `no_std` environements, but automatically using
/// Type alias to `Vec<B::Device>` which supports `no_std` environments by automatically using
/// the `alloc` crate.
pub type Devices<B> = Vec<<B as Backend>::Device>;

View File

@ -36,7 +36,7 @@ pub struct AvgPool1dConfig {
/// `torch.nn.AvgPool2d` with `count_include_pad=True`.
///
/// TODO: Add support for `count_include_pad=False`, see
/// [Issue 636](https://github.com/burn-rs/burn/issues/636)
/// [Issue 636](https://github.com/tracel-ai/burn/issues/636)
#[derive(Module, Debug, Clone)]
pub struct AvgPool1d {

View File

@ -36,7 +36,7 @@ pub struct AvgPool2dConfig {
/// `torch.nn.AvgPool2d` with `count_include_pad=True`.
///
/// TODO: Add support for `count_include_pad=False`, see
/// [Issue 636](https://github.com/burn-rs/burn/issues/636)
/// [Issue 636](https://github.com/tracel-ai/burn/issues/636)
#[derive(Module, Debug, Clone)]
pub struct AvgPool2d {
stride: [usize; 2],

View File

@ -7,15 +7,13 @@ keywords = ["deep-learning", "machine-learning", "data"]
license = "MIT OR Apache-2.0"
name = "burn-dataset"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-dataset"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-dataset"
version = "0.11.0"
[features]
default = ["sqlite-bundled"]
audio = [
"hound",
]
audio = ["hound"]
fake = ["dep:fake"]
@ -23,34 +21,40 @@ sqlite = ["__sqlite-shared", "dep:rusqlite"]
sqlite-bundled = ["__sqlite-shared", "rusqlite/bundled"]
# internal
__sqlite-shared = ["dep:r2d2", "dep:r2d2_sqlite", "dep:serde_rusqlite", "dep:image", "dep:gix-tempfile"]
__sqlite-shared = [
"dep:r2d2",
"dep:r2d2_sqlite",
"dep:serde_rusqlite",
"dep:image",
"dep:gix-tempfile",
]
[dependencies]
csv = {workspace = true}
derive-new = {workspace = true}
dirs = {workspace = true}
fake = {workspace = true, optional = true}
gix-tempfile = {workspace = true, optional = true}
hound = {version = "3.5.1", optional = true}
image = {version = "0.24.7", features = ["png"], optional = true}
r2d2 = {workspace = true, optional = true}
r2d2_sqlite = {workspace = true, optional = true}
rand = {workspace = true, features = ["std"]}
rmp-serde = {workspace = true}
rusqlite = {workspace = true, optional = true}
sanitize-filename = {workspace = true}
serde = {workspace = true, features = ["std", "derive"]}
serde_json = {workspace = true, features = ["std"]}
serde_rusqlite = {workspace = true, optional = true}
strum = {workspace = true}
strum_macros = {workspace = true}
tempfile = {workspace = true}
thiserror = {workspace = true}
csv = { workspace = true }
derive-new = { workspace = true }
dirs = { workspace = true }
fake = { workspace = true, optional = true }
gix-tempfile = { workspace = true, optional = true }
hound = { version = "3.5.1", optional = true }
image = { version = "0.24.7", features = ["png"], optional = true }
r2d2 = { workspace = true, optional = true }
r2d2_sqlite = { workspace = true, optional = true }
rand = { workspace = true, features = ["std"] }
rmp-serde = { workspace = true }
rusqlite = { workspace = true, optional = true }
sanitize-filename = { workspace = true }
serde = { workspace = true, features = ["std", "derive"] }
serde_json = { workspace = true, features = ["std"] }
serde_rusqlite = { workspace = true, optional = true }
strum = { workspace = true }
strum_macros = { workspace = true }
tempfile = { workspace = true }
thiserror = { workspace = true }
[dev-dependencies]
rayon = {workspace = true}
rstest = {workspace = true}
fake = {workspace = true}
rayon = { workspace = true }
rstest = { workspace = true }
fake = { workspace = true }
[package.metadata.cargo-udeps.ignore]
normal = ["strum", "strum_macros"]

View File

@ -1,9 +1,9 @@
# Burn Dataset
> [Burn](https://github.com/burn-rs/burn) dataset library
> [Burn](https://github.com/tracel-ai/burn) dataset library
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-dataset.svg)](https://crates.io/crates/burn-dataset)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-dataset/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-dataset/blob/master/README.md)
The Burn Dataset library is designed to streamline your machine learning (ML) data pipeline creation
process. It offers a variety of dataset implementations, transformation functions, and data sources.

View File

@ -7,14 +7,14 @@ keywords = []
license = "MIT OR Apache-2.0"
name = "burn-derive"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-derive"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-derive"
version = "0.11.0"
[lib]
proc-macro = true
[dependencies]
proc-macro2 = {workspace = true}
quote = {workspace = true}
syn = {workspace = true}
derive-new = {workspace = true}
proc-macro2 = { workspace = true }
quote = { workspace = true }
syn = { workspace = true }
derive-new = { workspace = true }

View File

@ -1,6 +1,6 @@
# Burn Derive
This crate should only be used with [burn](https://github.com/burn-rs/burn).
This crate should only be used with [burn](https://github.com/tracel-ai/burn).
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-derive.svg)](https://crates.io/crates/burn-derive)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-derive/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-derive/blob/master/README.md)

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "data"]
license = "MIT OR Apache-2.0"
name = "burn-fusion"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-fusion"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-fusion"
version = "0.11.0"
[features]

View File

@ -8,7 +8,7 @@ edition = "2021"
license = "MIT OR Apache-2.0"
name = "burn-import"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-import"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-import"
version = "0.11.0"
@ -17,27 +17,27 @@ default = ["onnx"]
onnx = []
[dependencies]
burn = {path = "../burn", version = "0.11.0" }
burn-ndarray = {path = "../burn-ndarray", version = "0.11.0" }
burn = { path = "../burn", version = "0.11.0" }
burn-ndarray = { path = "../burn-ndarray", version = "0.11.0" }
bytemuck = {workspace = true}
derive-new = {workspace = true}
half = {workspace = true}
log = {workspace = true}
proc-macro2 = {workspace = true}
protobuf = {version = "3.3", features = ["with-bytes"]}
quote = {workspace = true}
rust-format = {version = "0.3", features = ["token_stream", "post_process"]}
serde = {workspace = true}
serde_json = {workspace = true, features = ["std"]}
strum = {workspace = true}
strum_macros = {workspace = true}
syn = {workspace = true, features = ["parsing"]}
bytemuck = { workspace = true }
derive-new = { workspace = true }
half = { workspace = true }
log = { workspace = true }
proc-macro2 = { workspace = true }
protobuf = { version = "3.3", features = ["with-bytes"] }
quote = { workspace = true }
rust-format = { version = "0.3", features = ["token_stream", "post_process"] }
serde = { workspace = true }
serde_json = { workspace = true, features = ["std"] }
strum = { workspace = true }
strum_macros = { workspace = true }
syn = { workspace = true, features = ["parsing"] }
tracing-subscriber.workspace = true
tracing-core.workspace = true
[build-dependencies]
protobuf-codegen = {workspace = true}
protobuf-codegen = { workspace = true }
[dev-dependencies]
pretty_assertions = {workspace = true}
pretty_assertions = { workspace = true }

View File

@ -12,8 +12,8 @@ compatibility.
For practical examples, please refer to:
1. [ONNX Inference Example](https://github.com/burn-rs/burn/tree/main/examples/onnx-inference)
2. [SqueezeNet Image Classification](https://github.com/burn-rs/models/tree/main/squeezenet-burn)
1. [ONNX Inference Example](https://github.com/tracel-ai/burn/tree/main/examples/onnx-inference)
2. [SqueezeNet Image Classification](https://github.com/tracel-ai/models/tree/main/squeezenet-burn)
## Usage

View File

@ -9,7 +9,7 @@ import torch.nn as nn
class Model(nn.Module):
def __init__(self):
# TODO enable this after https://github.com/burn-rs/burn/issues/665 is fixed
# TODO enable this after https://github.com/tracel-ai/burn/issues/665 is fixed
# Declare a constant int tensor with ones
# self.a = torch.ones(1, 1, 1, 4, dtype=torch.int32)

View File

@ -8,7 +8,7 @@ import torch.nn as nn
class Model(nn.Module):
def __init__(self):
# TODO enable this after https://github.com/burn-rs/burn/issues/665 is fixed
# TODO enable this after https://github.com/tracel-ai/burn/issues/665 is fixed
# Declare a constant int tensor with ones
# self.a = torch.ones(1, 1, 1, 4)

View File

@ -24,7 +24,7 @@ pub enum RecordType {
/// Compressed Named MessagePack.
///
/// Note: This may cause infinite build.
/// See [#952 bug](https://github.com/Tracel-AI/burn/issues/952).
/// See [#952 bug](https://github.com/tracel-ai/burn/issues/952).
NamedMpkGz,
/// Uncompressed Named MessagePack.

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "data"]
license = "MIT OR Apache-2.0"
name = "burn-ndarray"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-ndarray"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-ndarray"
version = "0.11.0"
[features]
@ -25,7 +25,10 @@ std = [
"matrixmultiply/threading",
]
blas-accelerate = ["ndarray/blas", "blas-src/accelerate"] # Accelerate framework (macOS only)
blas-accelerate = [
"ndarray/blas",
"blas-src/accelerate",
] # Accelerate framework (macOS only)
blas-netlib = ["ndarray/blas", "blas-src/netlib"]
blas-openblas = ["ndarray/blas", "blas-src/openblas", "openblas-src"]
blas-openblas-system = [
@ -38,19 +41,23 @@ blas-openblas-system = [
# ** Please make sure all dependencies support no_std when std is disabled **
burn-autodiff = {path = "../burn-autodiff", version = "0.11.0", features = ["export_tests"], optional = true}
burn-common = {path = "../burn-common", version = "0.11.0", default-features = false}
burn-tensor = {path = "../burn-tensor", version = "0.11.0", default-features = false, features = ["export_tests"]}
burn-autodiff = { path = "../burn-autodiff", version = "0.11.0", features = [
"export_tests",
], optional = true }
burn-common = { path = "../burn-common", version = "0.11.0", default-features = false }
burn-tensor = { path = "../burn-tensor", version = "0.11.0", default-features = false, features = [
"export_tests",
] }
matrixmultiply = {version = "0.3.8", default-features = false}
rayon = {workspace = true, optional = true}
matrixmultiply = { version = "0.3.8", default-features = false }
rayon = { workspace = true, optional = true }
blas-src = {version = "0.9.0", default-features = false, optional = true}# no-std compatible
blas-src = { version = "0.9.0", default-features = false, optional = true } # no-std compatible
derive-new = {workspace = true}
libm = {workspace = true}
ndarray = {workspace = true}
num-traits = {workspace = true}
openblas-src = {version = "0.10.8", optional = true}
rand = {workspace = true}
spin = {workspace = true}# using in place of use std::sync::Mutex;
derive-new = { workspace = true }
libm = { workspace = true }
ndarray = { workspace = true }
num-traits = { workspace = true }
openblas-src = { version = "0.10.8", optional = true }
rand = { workspace = true }
spin = { workspace = true } # using in place of use std::sync::Mutex;

View File

@ -1,9 +1,9 @@
# Burn NdArray
> [Burn](https://github.com/burn-rs/burn) ndarray backend
> [Burn](https://github.com/tracel-ai/burn) ndarray backend
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-ndarray.svg)](https://crates.io/crates/burn-ndarray)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-ndarray/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-ndarray/blob/master/README.md)
## Feature Flags
@ -17,7 +17,7 @@ The following flags support various BLAS options:
- `blas-openblas` - OpenBLAS static linked
- `blas-openblas-system` - OpenBLAS from the system
Note, under the `no_std` mode, a random seed is generated during the build time if the seed is not
Note: in `no_std` mode, the seed is fixed if it is not
initialized by the `Backend::seed` method.
### Platform Support

View File

@ -7,14 +7,14 @@ edition = "2021"
license = "MIT OR Apache-2.0"
name = "burn-no-std-tests"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-no-std-tests"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-no-std-tests"
version = "0.11.0"
[dependencies]
# ** Please make sure all dependencies support no_std **
burn = {path = "../burn", version = "0.11.0", default-features = false}
burn-ndarray = {path = "../burn-ndarray", version = "0.11.0", default-features = false}
burn = { path = "../burn", version = "0.11.0", default-features = false }
burn-ndarray = { path = "../burn-ndarray", version = "0.11.0", default-features = false }
serde = {workspace = true}
serde = { workspace = true }

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "data"]
license = "MIT OR Apache-2.0"
name = "burn-tch"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-tch"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-tch"
version = "0.11.0"
[features]
@ -15,17 +15,17 @@ default = []
doc = ["tch/doc-only"]
[dependencies]
burn-tensor = {path = "../burn-tensor", version = "0.11.0" }
burn-tensor = { path = "../burn-tensor", version = "0.11.0" }
half = {workspace = true, features = ["std"]}
half = { workspace = true, features = ["std"] }
libc = "0.2.150"
rand = {workspace = true, features = ["std"]}
tch = {version = "0.14.0", features = ["download-libtorch"]}
rand = { workspace = true, features = ["std"] }
tch = { version = "0.14.0", features = ["download-libtorch"] }
[dev-dependencies]
burn-autodiff = {path = "../burn-autodiff", version = "0.11.0", default-features = false, features = [
burn-autodiff = { path = "../burn-autodiff", version = "0.11.0", default-features = false, features = [
"export_tests",
]}
burn-tensor = {path = "../burn-tensor", version = "0.11.0", default-features = false, features = [
] }
burn-tensor = { path = "../burn-tensor", version = "0.11.0", default-features = false, features = [
"export_tests",
]}
] }

View File

@ -1,11 +1,11 @@
# Burn Torch Backend
[Burn](https://github.com/burn-rs/burn) Torch backend
[Burn](https://github.com/tracel-ai/burn) Torch backend
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-tch.svg)](https://crates.io/crates/burn-tch)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-tch/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-tch/blob/master/README.md)
This crate provides a Torch backend for [Burn](https://github.com/burn-rs/burn) utilizing the
This crate provides a Torch backend for [Burn](https://github.com/tracel-ai/burn) utilizing the
[tch-rs](https://github.com/LaurentMazare/tch-rs) crate, which offers a Rust interface to the
[PyTorch](https://pytorch.org/) C++ API.

View File

@ -5,12 +5,12 @@ edition = "2021"
license = "MIT OR Apache-2.0"
name = "burn-tensor-testgen"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-tensor-testgen"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-tensor-testgen"
version = "0.11.0"
[lib]
proc-macro = true
[dependencies]
proc-macro2 = {workspace = true}
quote = {workspace = true}
proc-macro2 = { workspace = true }
quote = { workspace = true }

View File

@ -1,6 +1,6 @@
# Burn Tensor Test Generation
> [Burn](https://github.com/burn-rs/burn) tensor test generation
> [Burn](https://github.com/tracel-ai/burn) tensor test generation
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-tensor-testgen.svg)](https://crates.io/crates/burn-tensor-testgen)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-tensor-testgen/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-tensor-testgen/blob/master/README.md)

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "tensor", "pytorch", "ndarray"]
license = "MIT OR Apache-2.0"
name = "burn-tensor"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-tensor"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-tensor"
version = "0.11.0"
[features]

View File

@ -1,33 +1,33 @@
# Burn Tensor
> [Burn](https://github.com/burn-rs/burn) Tensor Library
> [Burn](https://github.com/tracel-ai/burn) Tensor Library
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-tensor.svg)](https://crates.io/crates/burn-tensor)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-tensor/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-tensor/blob/master/README.md)
This library provides multiple tensor implementations hidden behind an easy to use API that supports reverse mode automatic differentiation.
## Features
* Flexible ✨
* CPU + GPU 🙏
* Multi-Threads 🚀
* Intuitive Usage 😌
* No Global State 🚫
* Multiple Backends 🦾
* Reverse Mode Autodiff 🔥
- Flexible ✨
- CPU + GPU 🙏
- Multi-Threads 🚀
- Intuitive Usage 😌
- No Global State 🚫
- Multiple Backends 🦾
- Reverse Mode Autodiff 🔥
### Backends
For now, three backends are implemented, and some more are planned.
For now, three backends are implemented, and some more are planned.
* [X] Pytorch using [tch-rs](https://github.com/LaurentMazare/tch-rs)
* [X] 100% Rust backend using [ndarray](https://github.com/rust-ndarray/ndarray)
* [X] [WGPU](https://github.com/gfx-rs/wgpu) backend
* [ ] [Candle](https://github.com/huggingface/candle) backend
* [ ] Tensorflow using [tensorflow-rust](https://github.com/tensorflow/rust)
* [ ] CuDNN using RustCUDA[tensorflow-rust](https://github.com/Rust-GPU/Rust-CUDA)
* [ ] ...
- [x] Pytorch using [tch-rs](https://github.com/LaurentMazare/tch-rs)
- [x] 100% Rust backend using [ndarray](https://github.com/rust-ndarray/ndarray)
- [x] [WGPU](https://github.com/gfx-rs/wgpu) backend
- [ ] [Candle](https://github.com/huggingface/candle) backend
- [ ] Tensorflow using [tensorflow-rust](https://github.com/tensorflow/rust)
- [ ] CuDNN using [Rust-CUDA](https://github.com/Rust-GPU/Rust-CUDA)
- [ ] ...
### Autodiff
@ -56,12 +56,10 @@ To run with CUDA set `TORCH_CUDA_VERSION=cu113`.
This crate can be used alone without the entire burn stack and with only selected backends for smaller binaries.
## Feature Flags
This crate can be used without the standard library (`#![no_std]`) with `alloc` by disabling
the default `std` feature.
* `std` - enables the standard library.
* `burn-tensor-testgen` - enables test macros for generating tensor tests.
- `std` - enables the standard library.
- `burn-tensor-testgen` - enables test macros for generating tensor tests.

View File

@ -7,25 +7,18 @@ keywords = ["deep-learning", "machine-learning", "tensor", "pytorch", "ndarray"]
license = "MIT OR Apache-2.0"
name = "burn-train"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-train"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-train"
version = "0.11.0"
[features]
default = ["metrics", "tui"]
metrics = [
"nvml-wrapper",
"sysinfo",
"systemstat"
]
tui = [
"ratatui",
"crossterm"
]
metrics = ["nvml-wrapper", "sysinfo", "systemstat"]
tui = ["ratatui", "crossterm"]
[dependencies]
burn-core = {path = "../burn-core", version = "0.11.0" }
burn-core = { path = "../burn-core", version = "0.11.0" }
log = {workspace = true}
log = { workspace = true }
tracing-subscriber.workspace = true
tracing-appender.workspace = true
tracing-core.workspace = true
@ -40,8 +33,8 @@ ratatui = { version = "0.23", optional = true, features = ["all-widgets"] }
crossterm = { version = "0.27", optional = true }
# Utilities
derive-new = {workspace = true}
serde = {workspace = true, features = ["std", "derive"]}
derive-new = { workspace = true }
serde = { workspace = true, features = ["std", "derive"] }
[dev-dependencies]
burn-ndarray = {path = "../burn-ndarray", version = "0.11.0" }
burn-ndarray = { path = "../burn-ndarray", version = "0.11.0" }

View File

@ -1,6 +1,6 @@
# Burn Train
This crate should be used with [burn](https://github.com/burn-rs/burn).
This crate should be used with [burn](https://github.com/tracel-ai/burn).
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-train.svg)](https://crates.io/crates/burn-train)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-train/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-train/blob/master/README.md)

View File

@ -7,7 +7,7 @@ keywords = ["deep-learning", "machine-learning", "gpu", "wgpu", "webgpu"]
license = "MIT OR Apache-2.0"
name = "burn-wgpu"
readme = "README.md"
repository = "https://github.com/burn-rs/burn/tree/main/burn-wgpu"
repository = "https://github.com/tracel-ai/burn/tree/main/burn-wgpu"
version = "0.11.0"
[features]

View File

@ -1,11 +1,11 @@
# Burn WGPU Backend
[Burn](https://github.com/burn-rs/burn) WGPU backend
[Burn](https://github.com/tracel-ai/burn) WGPU backend
[![Current Crates.io Version](https://img.shields.io/crates/v/burn-wgpu.svg)](https://crates.io/crates/burn-wgpu)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/burn-rs/burn-wgpu/blob/master/README.md)
[![license](https://shields.io/badge/license-MIT%2FApache--2.0-blue)](https://github.com/tracel-ai/burn-wgpu/blob/master/README.md)
This crate provides a WGPU backend for [Burn](https://github.com/burn-rs/burn) using the
This crate provides a WGPU backend for [Burn](https://github.com/tracel-ai/burn) using the
[wgpu](https://github.com/gfx-rs/wgpu) crate.
The backend supports Vulkan, Metal, DirectX11/12, OpenGL, WebGPU.

View File

@ -7,12 +7,12 @@ keywords = ["deep-learning", "machine-learning", "tensor", "pytorch", "ndarray"]
license = "MIT OR Apache-2.0"
name = "burn"
readme = "README.md"
repository = "https://github.com/burn-rs/burn"
repository = "https://github.com/tracel-ai/burn"
version = "0.11.0"
rust-version = "1.71"
[features]
default = ["burn-core/default", "burn-train?/default"]
default = ["burn-core/default", "burn-train?/default", "std"]
std = ["burn-core/std"]
# Training with full features
@ -60,4 +60,12 @@ burn-core = { path = "../burn-core", version = "0.11.0", default-features = fals
burn-train = { path = "../burn-train", version = "0.11.0", optional = true, default-features = false }
[package.metadata.docs.rs]
features = ["dataset", "default", "std", "train", "train-tui", "train-metrics", "dataset-sqlite"]
features = [
"dataset",
"default",
"std",
"train",
"train-tui",
"train-metrics",
"dataset-sqlite",
]

View File

@ -2,15 +2,88 @@
#![warn(missing_docs)]
//! # Burn
//! This library strives to serve as a comprehensive **deep learning framework**,
//! offering exceptional flexibility and written in Rust. The main objective is to cater
//! to both researchers and practitioners by simplifying the process of experimenting,
//! training, and deploying models.
//!
//! Burn is a new comprehensive dynamic Deep Learning Framework built using Rust
//! with extreme flexibility, compute efficiency and portability as its primary goals.
//!
//! ## Performance
//!
//! Because we believe the goal of a deep learning framework is to convert computation
//! into useful intelligence, we have made performance a core pillar of Burn.
//! We strive to achieve top efficiency by leveraging multiple optimization techniques:
//!
//! - Automatic kernel fusion
//! - Asynchronous execution
//! - Thread-safe building blocks
//! - Intelligent memory management
//! - Automatic kernel selection
//! - Hardware specific features
//! - Custom Backend Extension
//!
//! ## Training & Inference
//!
//! The whole deep learning workflow is made easy with Burn, as you can monitor your training progress
//! with an ergonomic dashboard, and run inference everywhere from embedded devices to large GPU clusters.
//!
//! Burn was built from the ground up with training and inference in mind. It's also worth noting how Burn,
//! in comparison to frameworks like PyTorch, simplifies the transition from training to deployment,
//! eliminating the need for code changes.
//!
//! ## Backends
//!
//! Burn strives to be as fast as possible on as much hardware as possible, with robust implementations.
//! We believe this flexibility is crucial for modern needs where you may train your models in the cloud,
//! then deploy on customer hardware, which varies from user to user.
//!
//! Compared to other frameworks, Burn has a very different approach to supporting many backends.
//! By design, most code is generic over the Backend trait, which allows us to build Burn with swappable backends.
//! This makes composing backends possible, augmenting them with additional functionality such as
//! autodifferentiation and automatic kernel fusion.
//!
//! - WGPU (WebGPU): Cross-Platform GPU Backend
//! - Candle: Backend using the Candle bindings
//! - LibTorch: Backend using the LibTorch bindings
//! - NdArray: Backend using the NdArray primitive as data structure
//! - Autodiff: Backend decorator that brings backpropagation to any backend
//! - Fusion: Backend decorator that brings kernel fusion to backends that support it
//!
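//! A minimal composition sketch (hedged: the `Autodiff`, `Fusion`, and `Wgpu` paths below are
//! assumptions about the re-exports, not verified against this release):
//!
//! ```rust,ignore
//! // Hypothetical: start from the WGPU backend, add kernel fusion, then wrap the result
//! // with autodifferentiation. Every layer still implements the same Backend trait.
//! use burn::backend::{Autodiff, Fusion, Wgpu};
//!
//! type Inference = Fusion<Wgpu>;       // WGPU + automatic kernel fusion
//! type Training = Autodiff<Inference>; // + backpropagation on top
//! ```
//!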
//! ## Feature Flags
//!
//! The following feature flags are available; a small usage sketch follows the list.
//! By default, the feature `std` is activated.
//!
//! - Training
//! - `train`: Enables features `dataset` and `autodiff` and provides a training environment
//! - `tui`: Includes Text UI with progress bar and plots
//! - `metrics`: Includes system info metrics (CPU/GPU usage, etc.)
//! - Dataset
//! - `dataset`: Includes a datasets library
//! - `audio`: Enables audio datasets (SpeechCommandsDataset)
//! - `sqlite`: Stores datasets in SQLite database
//! - `sqlite_bundled`: Use bundled version of SQLite
//! - Backends
//! - `wgpu`: Makes available the WGPU backend
//! - `candle`: Makes available the Candle backend
//! - `tch`: Makes available the LibTorch backend
//! - `ndarray`: Makes available the NdArray backend
//! - Backend specifications
//! - `cuda`: If supported, CUDA will be used
//! - `accelerate`: If supported, Accelerate will be used
//! - `blas-netlib`: If supported, Blas Netlib will be used
//! - `openblas`: If supported, Openblas will be used
//! - `openblas-system`: If supported, Openblas installed on the system will be used
//! - `wasm-sync`: Provides a synchronous API when targeting wasm (won't work with WGPU)
//! - Backend decorators
//! - `autodiff`: Makes available the Autodiff backend
//! - `fusion`: Makes available the Fusion backend
//! - Others:
//! - `std`: Activates the standard library (deactivate for no_std)
//! - `experimental-named-tensor`: Enables named tensors (experimental)
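//!
//! As a hedged usage sketch (the dependency line and import paths below are assumptions, not
//! taken from this repository):
//!
//! ```rust,ignore
//! // Hypothetical downstream crate, assuming a Cargo.toml dependency such as
//! // `burn = { version = "0.11.0", features = ["train", "wgpu"] }`.
//! use burn::backend::Wgpu;         // made available by the `wgpu` feature
//! use burn::train::LearnerBuilder; // made available by the `train` feature
//! ```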
pub use burn_core::*;
/// Train module
#[cfg(any(feature = "train", feature = "train-minimal"))]
#[cfg(feature = "train")]
pub mod train {
pub use burn_train::*;
}

View File

@ -1,10 +1,10 @@
/**
*
* This demo is part of Burn project: https://github.com/burn-rs/burn
* This demo is part of Burn project: https://github.com/tracel-ai/burn
*
* Released under a dual license:
* https://github.com/burn-rs/burn/blob/main/LICENSE-MIT
* https://github.com/burn-rs/burn/blob/main/LICENSE-APACHE
* https://github.com/tracel-ai/burn/blob/main/LICENSE-MIT
* https://github.com/tracel-ai/burn/blob/main/LICENSE-APACHE
*
*/

View File

@ -30,7 +30,7 @@ special system library, such as [WASI](https://wasi.dev/). (See [Cargo.toml](./C
include burn dependencies without `std`).
For this demo, we use the trained parameters (`model.bin`) and the model (`model.rs`) from the
[`burn` MNIST example](https://github.com/burn-rs/burn/tree/main/examples/mnist).
[`burn` MNIST example](https://github.com/tracel-ai/burn/tree/main/examples/mnist).
The inference API for JavaScript is exposed with the help of the
[`wasm-bindgen`](https://github.com/rustwasm/wasm-bindgen) library and tools.
@ -56,7 +56,7 @@ The total number of parameters is 376,952.
The model is trained for 4 epochs and the final test accuracy is 98.67%.
The training and hyperparameter information can be found in the
[`burn` MNIST example](https://github.com/burn-rs/burn/tree/main/examples/mnist).
[`burn` MNIST example](https://github.com/tracel-ai/burn/tree/main/examples/mnist).
## Comparison
@ -72,9 +72,9 @@ byte file is the model's parameters. The rest of 356,744 bytes contain all the c
There are several planned enhancements:
- [#202](https://github.com/burn-rs/burn/issues/202) - Saving model's params in half-precision and
- [#202](https://github.com/tracel-ai/burn/issues/202) - Saving model's params in half-precision and
loading them back in full precision. This could roughly halve the size of the wasm file.
- [#243](https://github.com/burn-rs/burn/issues/243) - New WebGPU backend would allow computation
- [#243](https://github.com/tracel-ai/burn/issues/243) - New WebGPU backend would allow computation
using the GPU in the browser.
- [#1271](https://github.com/rust-ndarray/ndarray/issues/1271) -
[WASM SIMD](https://github.com/WebAssembly/simd/blob/master/proposals/simd/SIMD.md) support in

View File

@ -1,9 +1,9 @@
<!-- This demo is part of Burn project: https://github.com/burn-rs/burn
<!-- This demo is part of Burn project: https://github.com/tracel-ai/burn
Released under a dual license:
https://github.com/burn-rs/burn/blob/main/LICENSE-MIT
https://github.com/tracel-ai/burn/blob/main/LICENSE-MIT
https://github.com/burn-rs/burn/blob/main/LICENSE-APACHE
https://github.com/tracel-ai/burn/blob/main/LICENSE-APACHE
-->
<!DOCTYPE html>
<html>

View File

@ -1,10 +1,10 @@
/**
*
* This demo is part of Burn project: https://github.com/burn-rs/burn
* This demo is part of Burn project: https://github.com/tracel-ai/burn
*
* Released under a dual license:
* https://github.com/burn-rs/burn/blob/main/LICENSE-MIT
* https://github.com/burn-rs/burn/blob/main/LICENSE-APACHE
* https://github.com/tracel-ai/burn/blob/main/LICENSE-MIT
* https://github.com/tracel-ai/burn/blob/main/LICENSE-APACHE
*
*/

View File

@ -2,14 +2,14 @@
This example shows how to:
* Define your own custom module (MLP).
* Create the data pipeline from a raw dataset to a batched multi-threaded fast DataLoader.
* Configure a learner to display and log metrics as well as to keep training checkpoints.
- Define your own custom module (MLP).
- Create the data pipeline from a raw dataset to a batched multi-threaded fast DataLoader.
- Configure a learner to display and log metrics as well as to keep training checkpoints (a sketch follows this list).
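
As a hedged sketch of the learner configuration (the builder methods and metric types below are
assumptions about the `burn::train` API, not copied from this example):

```rust
// Hypothetical: configure a learner that reports accuracy and loss metrics on the
// dashboard and trains for a fixed number of epochs. `model`, `optim`, `lr`, `device`,
// and the dataloaders are placeholders assumed to be built elsewhere in the example.
use burn::train::metric::{AccuracyMetric, LossMetric};
use burn::train::LearnerBuilder;

let learner = LearnerBuilder::new("/tmp/mnist-artifacts")
    .metric_train_numeric(AccuracyMetric::new())
    .metric_valid_numeric(AccuracyMetric::new())
    .metric_train_numeric(LossMetric::new())
    .metric_valid_numeric(LossMetric::new())
    .devices(vec![device])
    .num_epochs(4)
    // Checkpointing is also configured on this builder (details omitted here).
    .build(model, optim, lr);

let _trained_model = learner.fit(dataloader_train, dataloader_valid);
```
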
The example can be run like so:
```bash
git clone https://github.com/burn-rs/burn.git
git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
echo "Using ndarray backend"

View File

@ -15,15 +15,14 @@ models on AG News and DbPedia datasets using the Rust-based Burn Deep Learning L
# Usage
## Torch GPU backend
```bash
git clone https://github.com/burn-rs/burn.git
git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
# Use the f16 feature if your CUDA device supports FP16 (half precision) operations. May not work well on every device.
# Use the f16 feature if your CUDA device supports FP16 (half precision) operations. May not work well on every device.
export TORCH_CUDA_VERSION=cu117 # Set the cuda version (CUDA users)
@ -39,7 +38,7 @@ cargo run --example db-pedia-infer --release --features tch-gpu # Run inference
## Torch CPU backend
```bash
git clone https://github.com/burn-rs/burn.git
git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
@ -56,7 +55,7 @@ cargo run --example db-pedia-infer --release --features tch-cpu # Run inference
## ndarray backend
```bash
git clone https://github.com/burn-rs/burn.git
git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
@ -75,7 +74,7 @@ cargo run --example db-pedia-infer --release --features ndarray # Run inference
## WGPU backend
```bash
git clone https://github.com/burn-rs/burn.git
git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
@ -87,4 +86,4 @@ cargo run --example ag-news-infer --release --features wgpu # Run inference on
# DbPedia
cargo run --example db-pedia-train --release --features wgpu # Train on the db pedia dataset
cargo run --example db-pedia-infer --release --features wgpu # Run inference db pedia dataset
```
```

View File

@ -5,7 +5,7 @@ The example can be run like so:
## CUDA users
```bash
git clone https://github.com/burn-rs/burn.git
git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
@ -16,7 +16,7 @@ cargo run --example text-generation --release
## Mac users
```bash
git clone https://github.com/burn-rs/burn.git
git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.