Add models and examples reference (#1966)

Co-authored-by: Sylvain Benner <sylvain@benner.online>

Guillaume Lagrange 2024-07-04 16:22:08 -04:00 committed by GitHub
parent f709858a8b
commit 5236e12c81
5 changed files with 183 additions and 103 deletions

README.md

@ -524,8 +524,48 @@ impl<B: Backend> PositionWiseFeedForward<B> {
```
We have a somewhat large number of [examples](./examples) in the repository that show how to use
the framework in different scenarios. For more practical insights, you can clone the repository and
run any of them directly on your computer!
the framework in different scenarios.
Following [the book](https://burn.dev/book/):
- [Basic Workflow](./examples/guide) : Creates a custom CNN `Module` to train on the MNIST dataset
and use for inference.
- [Custom Training Loop](./examples/custom-training-loop) : Implements a basic training loop instead
of using the `Learner`.
- [Custom WGPU Kernel](./examples/custom-wgpu-kernel) : Learn how to create your own custom
operation with the WGPU backend.
Additional examples:
- [Custom CSV Dataset](./examples/custom-csv-dataset) : Implements a dataset to parse CSV data for a
regression task.
- [Regression](./examples/simple-regression) : Trains a simple MLP on the CSV dataset for the
regression task.
- [Custom Image Dataset](./examples/custom-image-dataset) : Trains a simple CNN on a custom image
dataset that follows a simple folder structure.
- [Custom Renderer](./examples/custom-renderer) : Implements a custom renderer to display the
[`Learner`](./building-blocks/learner.md) progress.
- [Simple CubeCL Kernel](./examples/gelu) : Implements a simple GELU kernel with `CubeCL`.
- [Image Classification Web](./examples/image-classification-web) : Image classification web browser
demo using Burn, WGPU and WebAssembly.
- [MNIST Inference on Web](./examples/mnist-inference-web) : An interactive MNIST inference demo in
the browser. The demo is available [online](https://burn.dev/demo/).
- [MNIST Training](./examples/mnist) : Demonstrates how to train a custom `Module` (MLP) with the
`Learner` configured to log metrics and keep training checkpoints.
- [Named Tensor](./examples/named-tensor) : Performs operations with the experimental `NamedTensor`
feature.
- [ONNX Import Inference](./examples/onnx-inference) : Imports an ONNX model pre-trained on MNIST to
perform inference on a sample image with Burn.
- [PyTorch Import Inference](./examples/pytorch-import) : Imports a PyTorch model pre-trained on
MNIST to perform inference on a sample image with Burn.
- [Text Classification](./examples/text-classification) : Trains a text classification transformer
model on the AG News or DbPedia dataset. The trained model can then be used to classify a text
sample.
- [Text Generation](./examples/text-generation) : Trains a text generation transformer model on the
DbPedia dataset.
For more practical insights, you can clone the repository and run any of them directly on your
computer!
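For instance, assuming you have Rust installed, running an example might look like this (replace
`guide` with the name of whichever example binary you want to run):

```bash
# A sketch: clone the repository and run one of the bundled examples.
git clone https://github.com/tracel-ai/burn.git
cd burn
cargo run --example guide
```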
</details>

burn-book/src/SUMMARY.md

@ -1,6 +1,7 @@
- [Overview](./overview.md)
- [Why Burn?](./motivation.md)
- [Getting started](./getting-started.md)
- [Examples](./examples.md)
- [Basic Workflow: From Training to Inference](./basic-workflow/README.md)
- [Model](./basic-workflow/model.md)
- [Data](./basic-workflow/data.md)
@ -23,6 +24,7 @@
- [Import Models](./import/README.md)
- [ONNX Model](./import/onnx-model.md)
- [PyTorch Model](./import/pytorch-model.md)
- [Models & Pre-Trained Weights](./models-and-pretrained-weights.md)
- [Advanced](./advanced/README.md)
- [Backend Extension](./advanced/backend-extension/README.md)
- [Custom WGPU Kernel](./advanced/backend-extension/custom-wgpu-kernel.md)

burn-book/src/examples.md (new file, 102 lines)

@ -0,0 +1,102 @@
# Examples
In the [next chapter](./basic-workflow) you'll have the opportunity to implement the whole Burn
`guide` example yourself in a step-by-step manner.
Many additional Burn examples are available in the
[examples](https://github.com/tracel-ai/burn/tree/main/examples) directory. Burn examples are
organized as library crates with one or more examples that are executable binaries. An example can
then be executed using the following cargo command line in the root of the Burn repository:
```bash
cargo run --example <example name>
```
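For instance, to run the MNIST training example (assuming its executable is named `mnist`, the
usual convention for a file `examples/mnist.rs`):

```bash
cargo run --example mnist
```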
To learn more about crates and examples, read the Rust section below.
<details>
<summary><strong>🦀 About Rust crates</strong></summary>
Each Burn example is a **package** located in its own subdirectory of the `examples` directory. A
package is composed of one or more **crates**.
A package is a bundle of one or more crates that provides a set of functionality. A package contains
a `Cargo.toml` file that describes how to build those crates.
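As an illustrative sketch (the exact dependencies vary per example), such a manifest might look
like the following; note that Cargo automatically builds each file under `examples/` as an example
binary:

```
# examples/burn-example/Cargo.toml — a hypothetical manifest sketch
[package]
name = "burn-example"
version = "0.1.0"
edition = "2021"

[dependencies]
# Path to the main burn crate inside the repository workspace (illustrative):
burn = { path = "../../crates/burn" }
```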
A crate is a compilation unit in Rust. It could be a single file, but it is often easier to split up
crates into multiple **modules**.
A module lets us organize code within a crate for readability and easy reuse. Modules also allow us
to control the _privacy_ of items. For instance, the `pub(crate)` keyword makes a module publicly
available inside the crate only. In the snippet below, four modules are declared: two are public and
visible to the users of the crate, one is public inside the crate only (crate users cannot see it),
and the last, with no keyword, is private. These modules can be single files or directories with a
`mod.rs` file inside.
```rust, ignore
pub mod data;         // public: visible to users of the crate
pub mod inference;    // public: visible to users of the crate
pub(crate) mod model; // public within this crate only; hidden from crate users
mod training;         // private: no visibility keyword
```
A crate can come in one of two forms: a **binary crate** or a **library crate**. When compiling a
crate, the compiler first looks in the crate root file (`src/lib.rs` for a library crate and
`src/main.rs` for a binary crate). Any module declared in the crate root file will be inserted in
the crate for compilation.
All Burn examples are library crates, and they can contain one or more executable examples that use
the library. We even have some Burn examples that use the library crate of other examples.
Each example is a separate file under the `examples` directory, and each file produces an executable
with the same name. An example can then be executed with `cargo run --example <executable name>`.
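Such an example file is typically just a thin binary that calls into the package's library crate. A
minimal sketch (`burn_example::run` is a hypothetical entry point, not an actual API):

```rust, ignore
// examples/example1.rs — sketch of an example binary using the package's library crate.
fn main() {
    // Call into the library crate (hypothetical function):
    burn_example::run();
}
```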
Below is a file tree of a typical Burn example package:
```
examples/burn-example
├── Cargo.toml
├── examples
│ ├── example1.rs ---> compiled to example1 binary
│ ├── example2.rs ---> compiled to example2 binary
│ └── ...
└── src
├── lib.rs ---> this is the root file for a library
├── module1.rs
├── module2.rs
└── ...
```
</details><br>
The following additional examples are currently available if you want to check them out:
| Example | Description |
| :-------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Custom CSV Dataset](https://github.com/tracel-ai/burn/tree/main/examples/custom-csv-dataset) | Implements a dataset to parse CSV data for a regression task. |
| [Regression](https://github.com/tracel-ai/burn/tree/main/examples/simple-regression) | Trains a simple MLP on the CSV dataset for the regression task. |
| [Custom Image Dataset](https://github.com/tracel-ai/burn/tree/main/examples/custom-image-dataset) | Trains a simple CNN on a custom image dataset that follows a simple folder structure. |
| [Custom Renderer](https://github.com/tracel-ai/burn/tree/main/examples/custom-renderer) | Implements a custom renderer to display the [`Learner`](./building-blocks/learner.md) progress. |
| [Simple CubeCL Kernel](https://github.com/tracel-ai/burn/tree/main/examples/gelu) | Implements a simple GELU kernel with `CubeCL`. |
| [Image Classification Web](https://github.com/tracel-ai/burn/tree/main/examples/image-classification-web) | Image classification web browser demo using Burn, WGPU and WebAssembly. |
| [MNIST Inference on Web](https://github.com/tracel-ai/burn/tree/main/examples/mnist-inference-web) | An interactive MNIST inference demo in the browser. The demo is available [online](https://burn.dev/demo/). |
| [MNIST Training](https://github.com/tracel-ai/burn/tree/main/examples/mnist) | Demonstrates how to train a custom [`Module`](./building-blocks/module.md) (MLP) with the [`Learner`](./building-blocks/learner.md) configured to log metrics and keep training checkpoints. |
| [Named Tensor](https://github.com/tracel-ai/burn/tree/main/examples/named-tensor) | Performs operations with the experimental `NamedTensor` feature. |
| [ONNX Import Inference](https://github.com/tracel-ai/burn/tree/main/examples/onnx-inference) | Imports an ONNX model pre-trained on MNIST to perform inference on a sample image with Burn. |
| [PyTorch Import Inference](https://github.com/tracel-ai/burn/tree/main/examples/pytorch-import) | Imports a PyTorch model pre-trained on MNIST to perform inference on a sample image with Burn. |
| [Text Classification](https://github.com/tracel-ai/burn/tree/main/examples/text-classification) | Trains a text classification transformer model on the AG News or DbPedia datasets. The trained model can then be used to classify a text sample. |
| [Text Generation](https://github.com/tracel-ai/burn/tree/main/examples/text-generation) | Trains a text generation transformer model on the DbPedia dataset. |
For more information on each example, see its respective `README.md` file. Be sure to check out
the [examples](https://github.com/tracel-ai/burn/tree/main/examples) directory for an up-to-date
list.
<div class="warning">
Note that some examples use the
[`datasets` library by HuggingFace](https://huggingface.co/docs/datasets/index) to download the
datasets required in the examples. This is a Python library, which means that you will need to
install Python before running these examples. This requirement will be clearly indicated in the
example's README when applicable.
</div>

burn-book/src/getting-started.md

@ -28,7 +28,7 @@ libraries/packages your code depends on, and build said libraries.
Below is a quick cheat sheet of the main `cargo` commands you might use throughout this guide.
| Command | Description |
|---------------------|----------------------------------------------------------------------------------------------|
| ------------------- | -------------------------------------------------------------------------------------------- |
| `cargo new` _path_ | Create a new Cargo package in the given directory. |
| `cargo add` _crate_ | Add dependencies to the Cargo.toml manifest file. |
| `cargo build` | Compile the local package and all of its dependencies (in debug mode, use `-r` for release). |
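As a quick illustration, starting a project combines a few of these commands; the package name
`my_burn_app` and the `wgpu` feature flag here follow what this guide uses a bit later:

```bash
cargo new my_burn_app            # create a new package
cd my_burn_app
cargo add burn --features wgpu   # add Burn with the WGPU backend enabled
cargo build                      # compile the package and its dependencies
```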
@ -126,9 +126,10 @@ of the Rust Book or the
If you're new to Rust, you're probably wondering why we had to use `Tensor::<Backend, 2>::...`.
That's because the `Tensor` struct is [generic](https://doc.rust-lang.org/book/ch10-01-syntax.html)
over multiple concrete data types. More specifically, a `Tensor` can be defined using three generic
parameters: the backend, the number of dimensions (rank) and the data type (defaults to `Float`).
Here, we only specify the backend and number of dimensions since a `Float` tensor is used by default.
For more details on the `Tensor` struct, take a look at [this section](./building-blocks/tensor.md).
parameters: the backend, the number of dimensions (rank) and the data type (defaults to `Float`).
Here, we only specify the backend and number of dimensions since a `Float` tensor is used by
default. For more details on the `Tensor` struct, take a look at
[this section](./building-blocks/tensor.md).
Most of the time when generics are involved, the compiler can infer the generic parameters
automatically. In this case, the compiler needs a little help. This can usually be done in one of
@ -141,11 +142,10 @@ let tensor_1: Tensor<Backend, 2> = Tensor::from_data([[2., 3.], [4., 5.]]);
let tensor_2 = Tensor::ones_like(&tensor_1);
```
You probably noticed that we provided a type annotation for the first tensor only and yet this example
still works.
That's because the compiler (correctly) inferred that `tensor_2` had the same generic parameters.
The same could have been done in the original example, but specifying the parameters for both is
more explicit.
You probably noticed that we provided a type annotation for the first tensor only and yet this
example still works. That's because the compiler (correctly) inferred that `tensor_2` had the same
generic parameters. The same could have been done in the original example, but specifying the
parameters for both is more explicit.
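To make the two spellings concrete, here is a small sketch mirroring the snippet above (`Backend`
stands for whichever backend type alias is in scope, as earlier in this chapter):

```rust, ignore
// Two equivalent ways to pin down the generic parameters:
let tensor_a: Tensor<Backend, 2> = Tensor::from_data([[2., 3.], [4., 5.]]); // type annotation
let tensor_b = Tensor::<Backend, 2>::from_data([[2., 3.], [4., 5.]]);       // turbofish syntax
```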
</details><br>
@ -164,17 +164,17 @@ Tensor {
}
```
While the previous example is somewhat trivial, the upcoming
basic workflow section will walk you through a much more relevant example for
deep learning applications.
While the previous example is somewhat trivial, the upcoming basic workflow section will walk you
through a much more relevant example for deep learning applications.
## Using `prelude`
Burn comes with a variety of things in its core library.
When creating a new model or using an existing one for inference,
you may need to import every single component you used, which could be a little verbose.
Burn comes with a variety of things in its core library. When creating a new model or using an
existing one for inference, you may need to import every single component you used, which could be a
little verbose.
To address it, a `prelude` module is provided, allowing you to easily import commonly used structs and macros as a group:
To address it, a `prelude` module is provided, allowing you to easily import commonly used structs
and macros as a group:
```rust, ignore
use burn::prelude::*;
@ -196,90 +196,8 @@ use burn::{
<div class="warning">
For the sake of simplicity, the subsequent chapters of this book will all use this form of importing except in the [Building Blocks](./building-blocks) chapter, as explicit importing aids users in grasping the usage of particular structures and macros.
For the sake of simplicity, the subsequent chapters of this book will all use this form of importing
except in the [Building Blocks](./building-blocks) chapter, as explicit importing aids users in
grasping the usage of particular structures and macros.
</div>
## Explore examples
In the [next chapter](./basic-workflow) you'll have the opportunity to implement the whole Burn
`guide` example yourself in a step-by-step manner.
Many additional Burn examples are available in the
[examples](https://github.com/tracel-ai/burn/tree/main/examples) directory. Burn examples are
organized as library crates with one or more examples that are executable binaries. An example
can then be executed using the following cargo command line in the root of the Burn repository:
```bash
cargo run --example <example name>
```
To learn more about crates and examples, read the Rust section below.
<details>
<summary><strong>🦀 About Rust crates</strong></summary>
Each Burn example is a **package** located in its own subdirectory of the `examples` directory. A
package is composed of one or more **crates**.
A package is a bundle of one or more crates that provides a set of functionality. A package
contains a `Cargo.toml` file that describes how to build those crates.
A crate is a compilation unit in Rust. It could be a single file, but it is often easier to
split up crates into multiple **modules**.
A module lets us organize code within a crate for readability and easy reuse. Modules also allow
us to control the _privacy_ of items. For instance, the `pub(crate)` keyword makes a module
publicly available inside the crate only. In the snippet below, four modules are declared: two are
public and visible to the users of the crate, one is public inside the crate only (crate users
cannot see it), and the last, with no keyword, is private.
These modules can be single files or directories with a `mod.rs` file inside.
```rust, ignore
pub mod data;
pub mod inference;
pub(crate) mod model;
mod training;
```
A crate can come in one of two forms: a **binary crate** or a **library crate**. When compiling a crate,
the compiler first looks in the crate root file (`src/lib.rs` for a library crate and `src/main.rs`
for a binary crate). Any module declared in the crate root file will be inserted in the crate for
compilation.
All Burn examples are library crates, and they can contain one or more executable examples that
use the library. We even have some Burn examples that use the library crate of other examples.
The examples are unique files under the `examples` directory. Each file produces an executable file
with the same name. Each example can then be executed with `cargo run --example <executable name>`.
Below is a file tree of a typical Burn example package:
```
examples/burn-example
├── Cargo.toml
├── examples
│ ├── example1.rs
│ ├── example2.rs
│ └── ...
└── src
├── lib.rs
├── module1.rs
├── module2.rs
└── ...
```
</details><br>
For more information on each example, see its respective `README.md` file.
<div class="warning">
Note that some examples use the
[`datasets` library by HuggingFace](https://huggingface.co/docs/datasets/index) to download the
datasets required in the examples. This is a Python library, which means that you will need to
install Python before running these examples. This requirement will be clearly indicated in the
example's README when applicable.
</div>

burn-book/src/models-and-pretrained-weights.md (new file, 18 lines)

@ -0,0 +1,18 @@
# Models and Pre-Trained Weights
The [`models`](https://github.com/tracel-ai/models) repository contains definitions of various
deep learning models, with examples for domains such as computer vision and natural language
processing.
This includes image classification models such as
[`MobileNetV2`](https://github.com/tracel-ai/models/tree/main/mobilenetv2-burn),
[`SqueezeNet`](https://github.com/tracel-ai/models/tree/main/squeezenet-burn) and
[`ResNet`](https://github.com/tracel-ai/models/tree/main/resnet-burn), object detection models such
as [`YOLOX`](https://github.com/tracel-ai/models/tree/main/yolox-burn) and language models like
[`BERT` and `RoBERTa`](https://github.com/tracel-ai/models/tree/main/bert-burn).
Be sure to check out the up-to-date
[collection of models](https://github.com/tracel-ai/models?tab=readme-ov-file#collection-of-official-models)
to get you started. Pre-trained weights are available for every supported architecture in this
collection. You will also find a spotlight of
[community contributed models](https://github.com/tracel-ai/models?tab=readme-ov-file#community-contributions).
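As a purely hypothetical sketch of how such a model crate might be consumed (the crate name
`resnet_burn`, the `ResNet` type, and the `resnet18_pretrained` constructor below are illustrative
only; each model crate's README documents its real API):

```rust, ignore
// Hypothetical sketch — not a confirmed API; see the model crate's README.
use burn::backend::NdArray;
use resnet_burn::ResNet; // illustrative crate and type names

fn main() {
    let device = Default::default();
    // Construct the architecture and load pre-trained weights (illustrative call):
    let model: ResNet<NdArray> = ResNet::resnet18_pretrained(&device);
    // ... feed an image tensor to the model's forward pass for classification.
}
```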