MNIST

This example shows how to:

  • Define your own custom module (MLP).
  • Create the data pipeline from a raw dataset to a fast, batched, multi-threaded DataLoader.
  • Configure a learner to display and log metrics as well as to keep training checkpoints.
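
As context for the first bullet, a custom module in Burn is a plain struct deriving `Module`, with a hand-written forward pass. The sketch below is illustrative only (the type and field names, layer sizes, and use of `Relu` are assumptions, not the example's actual code in src):

```rust
use burn::nn::{Linear, LinearConfig, Relu};
use burn::prelude::*;

/// Illustrative MLP for flattened 28x28 MNIST images (784 inputs, 10 classes).
/// Names and sizes are hypothetical; see src/ for the real model.
#[derive(Module, Debug)]
pub struct Mlp<B: Backend> {
    linear1: Linear<B>,
    linear2: Linear<B>,
    activation: Relu,
}

impl<B: Backend> Mlp<B> {
    pub fn new(device: &B::Device) -> Self {
        Self {
            linear1: LinearConfig::new(784, 128).init(device),
            linear2: LinearConfig::new(128, 10).init(device),
            activation: Relu::new(),
        }
    }

    /// [batch, 784] -> [batch, 10] logits.
    pub fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
        let x = self.activation.forward(self.linear1.forward(input));
        self.linear2.forward(x)
    }
}
```

Because the module is generic over `Backend`, the same struct runs unchanged on any of the backends listed below.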

The example can be run like so:

git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
echo "Using ndarray backend"
cargo run --example mnist --release --features ndarray                # CPU NdArray Backend - f32 - single thread
cargo run --example mnist --release --features ndarray-blas-openblas  # CPU NdArray Backend - f32 - blas with openblas
cargo run --example mnist --release --features ndarray-blas-netlib    # CPU NdArray Backend - f32 - blas with netlib
echo "Using tch backend"
export TORCH_CUDA_VERSION=cu124                                       # Set the CUDA version
cargo run --example mnist --release --features tch-gpu                # GPU Tch Backend - f32
cargo run --example mnist --release --features tch-cpu                # CPU Tch Backend - f32
echo "Using wgpu backend"
cargo run --example mnist --release --features wgpu