Running ONNX Inference on the Raspberry Pi Pico

This example shows how to run ONNX inference in a no_std environment with no atomic pointers and no heap allocation provided by the target.
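
As a rough sketch of what that looks like in code (illustrative only; the Model type and the exact Burn API calls below are assumptions based on Burn's generated-model conventions, not the literal contents of src/bin/main.rs), inference with the NdArray backend goes roughly like this:

use burn::backend::ndarray::{NdArray, NdArrayDevice};
use burn::tensor::Tensor;

// The NdArray backend is a pure-Rust CPU backend that works
// without std and without atomic pointers.
type Backend = NdArray<f32>;

// `Model` stands in for the struct generated from sine.onnx
// (exposed by src/model/mod.rs in this example).
fn infer(model: &Model<Backend>, x: f32) -> f32 {
    let device = NdArrayDevice::Default;
    // The sine model maps a single f32 input to a single f32 output.
    let input = Tensor::<Backend, 2>::from_floats([[x]], &device);
    model.forward(input).into_scalar()
}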

Setup

  1. Install the Raspberry Pi Pico target: rustup target add thumbv6m-none-eabi

  2. Install probe-rs. This step is optional; to flash over USB boot instead, install elf2uf2-rs with cargo install elf2uf2-rs.

  3. Have a compatible debug probe to flash the Raspberry Pi Pico. This is also optional; alternatively, modify .cargo/config.toml and uncomment the runner that uses elf2uf2-rs (see the sketch below).

If you are using elf2uf2-rs, logging will not go to your serial port; add logging over USB by using embassy-usb.
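
For reference, the runner selection lives in .cargo/config.toml. A hedged sketch of the relevant section (the actual file in this directory is authoritative and may differ in detail):

[target.thumbv6m-none-eabi]
# Default: flash and run through a debug probe with probe-rs.
runner = "probe-rs run --chip RP2040"
# Alternative: uncomment to flash over USB boot with elf2uf2-rs instead.
# runner = "elf2uf2-rs -d"

[build]
target = "thumbv6m-none-eabi"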

Running

Run as usual with cargo run. The configured runner builds the firmware and flashes it to the board.

Project Structure

The project is structured as follows:

raspberry-pi-pico
├── Cargo.lock
├── Cargo.toml
├── README.md
├── build.rs
├── memory.x
├── src
│   ├── bin
│   │   └── main.rs
│   ├── lib.rs
│   └── model
│       ├── mod.rs
│       └── sine.onnx
└── tensorflow
    ├── requirements.txt
    └── train.py

Everything is standard for a Cargo project except for memory.x, the model directory, and the tensorflow directory.

The memory.x file is a linker script fragment that describes the memory layout of the RP2040 chip.
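
For the RP2040 it typically follows the canonical layout below (shown as an illustration; the memory.x in this directory is authoritative):

MEMORY {
    /* The second-stage bootloader occupies the first 256 bytes of flash. */
    BOOT2 : ORIGIN = 0x10000000, LENGTH = 0x100
    FLASH : ORIGIN = 0x10000100, LENGTH = 2048K - 0x100
    RAM   : ORIGIN = 0x20000000, LENGTH = 256K
}

EXTERN(BOOT2_FIRMWARE)

SECTIONS {
    /* Place the boot2 blob at the very start of flash. */
    .boot2 ORIGIN(BOOT2) : {
        KEEP(*(.boot2));
    } > BOOT2
} INSERT BEFORE .text;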

The tensorflow directory contains a Python script that trains a small model and exports it as ONNX using TensorFlow; its dependencies are listed in requirements.txt. The exported model is written to src/model/sine.onnx. At build time, build.rs generates Rust code from sine.onnx, and the generated module is exposed through mod.rs.
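
A minimal sketch of such a build.rs, using burn-import's ModelGen (the actual script in this example may pass additional options, for instance to embed the weights for a no_std target):

use burn_import::onnx::ModelGen;

fn main() {
    // Re-run code generation whenever the ONNX file changes.
    println!("cargo:rerun-if-changed=src/model/sine.onnx");

    // Generate Rust source for the sine model into OUT_DIR,
    // where src/model/mod.rs can include it.
    ModelGen::new()
        .input("src/model/sine.onnx")
        .out_dir("model/")
        .run_from_script();
}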