Regression

The example shows you how to:

  • Define a custom dataset for regression problems. We implement the Diabetes toy dataset from the HuggingFace Hub; the same dataset is also available among scikit-learn's toy regression datasets (sklearn.datasets).
  • Create a data pipeline that takes the raw dataset to a batched, fast DataLoader with min-max feature scaling.
  • Define a simple neural network model for regression using Burn modules (an illustrative sketch follows this list).
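
The sketch below is a minimal illustration of the last two points, not the example's actual source: the names RegressionModel and min_max_scale, the hidden size of 64, and the exact Burn API (assumed to be roughly the 0.13 release) are assumptions made for this snippet.

use burn::module::Module;
use burn::nn::{Linear, LinearConfig, Relu};
use burn::tensor::{backend::Backend, Tensor};

// Hypothetical two-layer regression model built from Burn modules.
#[derive(Module, Debug)]
pub struct RegressionModel<B: Backend> {
    input_layer: Linear<B>,
    output_layer: Linear<B>,
    activation: Relu,
}

impl<B: Backend> RegressionModel<B> {
    // The diabetes dataset has 10 input features and a single regression target;
    // the hidden size of 64 is an arbitrary choice for this sketch.
    pub fn new(device: &B::Device) -> Self {
        Self {
            input_layer: LinearConfig::new(10, 64).init(device),
            output_layer: LinearConfig::new(64, 1).init(device),
            activation: Relu::new(),
        }
    }

    // Forward pass: [batch_size, 10] features -> [batch_size, 1] prediction.
    pub fn forward(&self, input: Tensor<B, 2>) -> Tensor<B, 2> {
        let x = self.input_layer.forward(input);
        let x = self.activation.forward(x);
        self.output_layer.forward(x)
    }
}

// Min-max feature scaling to the [0, 1] range, as applied in the data pipeline
// before batching; `min` and `max` are per-feature tensors broadcast over the batch.
fn min_max_scale<B: Backend>(
    features: Tensor<B, 2>,
    min: Tensor<B, 2>,
    max: Tensor<B, 2>,
) -> Tensor<B, 2> {
    (features - min.clone()) / (max - min)
}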

Note
This example uses the HuggingFace datasets library to download the dataset, so make sure you have Python installed on your machine.

The example can be run like so:

git clone https://github.com/tracel-ai/burn.git
cd burn
# Use the --release flag to really speed up training.
echo "Using ndarray backend"
cargo run --example regression --release --features ndarray                # CPU NdArray Backend - f32 - single thread
cargo run --example regression --release --features ndarray-blas-openblas  # CPU NdArray Backend - f32 - blas with openblas
cargo run --example regression --release --features ndarray-blas-netlib    # CPU NdArray Backend - f32 - blas with netlib
echo "Using tch backend"
export TORCH_CUDA_VERSION=cu121                                            # Set the CUDA version
cargo run --example regression --release --features tch-gpu                # GPU Tch Backend - f32
cargo run --example regression --release --features tch-cpu                # CPU Tch Backend - f32
echo "Using wgpu backend"
cargo run --example regression --release --features wgpu