# ScaleHLS Project

ScaleHLS is a High-level Synthesis (HLS) framework on [MLIR](https://mlir.llvm.org). ScaleHLS can compile HLS C/C++ code or ONNX models to optimized HLS C/C++ in order to generate high-efficiency RTL designs using downstream tools, such as Vivado HLS.

By using the MLIR framework, which can be better tuned to particular algorithms at different representation levels, ScaleHLS is more scalable and customizable towards various applications that come with intrinsic structural or functional hierarchies. ScaleHLS represents HLS designs at multiple levels of abstraction and provides an HLS-dedicated analysis and transform library (in both C++ and Python) to solve optimization problems at the suitable representation levels. Using this library, we've developed a design space exploration engine to generate optimized HLS designs automatically.
For more details, please see our [HPCA'22 paper](https://arxiv.org/abs/2107.11673).
## Quick Start
### Prerequisites
- cmake
- ninja (recommended)
- clang and lld (recommended)
- pybind11
- python3 with numpy
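
Before building, you can quickly check which of these tools are already available. This is a minimal sketch (not part of the official build flow); package names and installation methods vary by platform:

```sh
# Report which prerequisites are present on this machine.
for tool in cmake ninja clang lld python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found:   $tool"
  else
    echo "missing: $tool"
  fi
done
```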

### Build ScaleHLS
First, make sure this repository has been cloned recursively.

```sh
$ git clone --recursive git@github.com:hanchenye/scalehls.git
$ cd scalehls
```

Then, run the following script to build ScaleHLS. Note that you can use `-j xx` to specify the number of parallel linking jobs.

```sh
$ ./build-scalehls.sh
```

After the build, we suggest exporting the following paths:

```sh
$ export PATH=$PATH:$PWD/build/bin:$PWD/polygeist/build/mlir-clang
$ export PYTHONPATH=$PYTHONPATH:$PWD/build/tools/scalehls/python_packages/scalehls_core
```
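
If you source these exports from more than one shell session, an idempotent helper avoids growing `PATH` with duplicate entries. A minimal sketch, assuming `$PWD/build/bin` is the binary directory produced by the build script above:

```sh
# Append a directory to PATH only if it is not already present.
dir="$PWD/build/bin"
case ":$PATH:" in
  *":$dir:"*) echo "already on PATH: $dir" ;;
  *) export PATH="$PATH:$dir"; echo "added to PATH: $dir" ;;
esac
```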
### Try ScaleHLS
To launch the automatic kernel-level design space exploration, run:
```sh
$ mlir-clang samples/polybench/gemm/test_gemm.c -function=test_gemm -memref-fullrank -raise-scf-to-affine -S \
  | scalehls-opt -dse="top-func=test_gemm target-spec=samples/polybench/target-spec.ini" -debug-only=scalehls > /dev/null \
  && scalehls-translate -emit-hlscpp test_gemm_pareto_0.mlir > test_gemm_pareto_0.cpp
```
Meanwhile, we provide a `pyscalehls` tool to showcase the `scalehls` Python library:

```sh
$ pyscalehls.py samples/polybench/syrk/test_syrk.c -f test_syrk
```
## Integration with ONNX-MLIR

If you have installed [ONNX-MLIR](https://github.com/onnx/onnx-mlir) or set up an ONNX-MLIR docker at `$ONNXMLIR_DIR`, you should be able to run the following integration test:

```sh
$ cd samples/onnx-mlir/resnet18

$ # Export PyTorch model to ONNX.
$ python3 export_resnet18.py

$ # Parse ONNX model to MLIR.
$ $ONNXMLIR_DIR/build/bin/onnx-mlir -EmitONNXIR resnet18.onnx

$ # Lower from ONNX dialect to Affine dialect.
$ $ONNXMLIR_DIR/build/bin/onnx-mlir-opt resnet18.onnx.mlir \
    -shape-inference -convert-onnx-to-krnl -pack-krnl-constants \
    -convert-krnl-to-affine > resnet18.mlir

$ # (Optional) Print model graph.
$ scalehls-opt resnet18.mlir -print-op-graph 2> resnet18.gv
$ dot -Tpng resnet18.gv > resnet18.png

$ # Legalize the output of ONNX-MLIR, optimize and emit C++ code.
$ scalehls-opt resnet18.mlir -allow-unregistered-dialect -legalize-onnx \
    -affine-loop-normalize -canonicalize -legalize-dataflow="insert-copy=true min-gran=3" \
    -split-function -convert-linalg-to-affine-loops -legalize-to-hlscpp="top-func=main_graph" \
    -affine-loop-perfection -affine-loop-order-opt -loop-pipelining -simplify-affine-if \
    -affine-store-forward -simplify-memref-access -array-partition -cse -canonicalize \
    | scalehls-translate -emit-hlscpp > resnet18.cpp
```

Please refer to the `samples/onnx-mlir` folder for more test cases, and to `samples/onnx-mlir/ablation_int_test.sh` for how to conduct the graph, loop, and directive optimizations.
## References

- [CIRCT](https://github.com/llvm/circt): Circuit IR Compilers and Tools
- [CIRCT-HLS](https://github.com/circt-hls/circt-hls): An HLS flow around the CIRCT project