ScaleHLS Project

ScaleHLS is a High-level Synthesis (HLS) framework built on MLIR. It compiles HLS C/C++ or ONNX models into optimized HLS C/C++, from which downstream tools such as Vivado HLS can generate high-efficiency RTL designs.

By building on MLIR, which can be tuned to particular algorithms at different representation levels, ScaleHLS is more scalable and customizable for applications that come with intrinsic structural or functional hierarchies. ScaleHLS represents HLS designs at multiple levels of abstraction and provides an HLS-dedicated analysis and transform library (in both C++ and Python) to solve optimization problems at the most suitable representation level. On top of this library, we have developed a design space exploration (DSE) engine that generates optimized HLS designs automatically.

For more details, please see our HPCA'22 paper.

Quick Start

Prerequisites

  • cmake
  • ninja (recommended)
  • clang and lld (recommended)
  • pybind11
  • python3 with numpy
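
On Ubuntu, for example, the prerequisites can be installed roughly as follows. This is a minimal sketch; the package names are assumptions and may differ across distributions and versions.

$ # Hypothetical Ubuntu setup; adjust package names for your system.
$ sudo apt-get install cmake ninja-build clang lld python3 python3-pip
$ python3 -m pip install numpy pybind11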

Build ScaleHLS

First, make sure this repository has been cloned recursively.

$ git clone --recursive git@github.com:hanchenye/scalehls.git
$ cd scalehls
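
If the repository was already cloned without --recursive, the submodules (e.g., polygeist) can still be fetched afterwards with the standard git command:

$ git submodule update --init --recursive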

Then, run the following script to build ScaleHLS. Note that you can pass -j xx to specify the number of parallel linking jobs (see the example below).

$ ./build-scalehls.sh
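
For example, to limit the build to 8 parallel linking jobs (the number 8 is only illustrative; choose a value that fits your machine's memory):

$ ./build-scalehls.sh -j 8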

After the build, we suggest exporting the following paths.

$ export PATH=$PATH:$PWD/build/bin:$PWD/polygeist/build/bin
$ export PYTHONPATH=$PYTHONPATH:$PWD/build/tools/scalehls/python_packages/scalehls_core
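
As a quick sanity check (assuming the build succeeded and the paths above are exported), each tool should print its usage text:

$ scalehls-opt --help
$ scalehls-translate --help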

Try ScaleHLS

To launch the automatic kernel-level design space exploration, run:

$ mlir-clang samples/polybench/gemm/test_gemm.c -function=test_gemm -memref-fullrank -raise-scf-to-affine -S \
    | scalehls-opt -dse="top-func=test_gemm target-spec=samples/polybench/config.json" -debug-only=scalehls > /dev/null \
    && scalehls-translate -emit-hlscpp test_gemm_pareto_0.mlir > test_gemm_pareto_0.cpp

$ mlir-clang samples/rosetta/spam-filter/sgd_sw.c -function=SgdLR_sw -memref-fullrank -raise-scf-to-affine -S \
    | scalehls-opt -materialize-reduction -dse="top-func=SgdLR_sw target-spec=samples/rosetta/config.json" -debug-only=scalehls > /dev/null \
    && scalehls-translate -emit-hlscpp SgdLR_sw_pareto_0.mlir > SgdLR_sw_pareto_0.cpp

Meanwhile, we provide a pyscalehls tool to showcase the ScaleHLS Python library:

$ pyscalehls.py samples/polybench/syrk/test_syrk.c -f test_syrk

Integration with ONNX-MLIR

If you have installed ONNX-MLIR or set up the ONNX-MLIR docker at $ONNXMLIR_DIR, you should be able to run the following integration test:

$ cd samples/onnx-mlir/resnet18

$ # Export PyTorch model to ONNX.
$ python3 export_resnet18.py

$ # Parse ONNX model to MLIR.
$ $ONNXMLIR_DIR/build/bin/onnx-mlir -EmitMLIRIR resnet18.onnx

$ # Legalize the output of ONNX-MLIR, optimize and emit C++ code.
$ scalehls-opt resnet18.onnx.mlir -allow-unregistered-dialect -legalize-onnx \
    -affine-loop-normalize -canonicalize -legalize-dataflow="insert-copy=true min-gran=3" \
    -split-function -convert-linalg-to-affine-loops -legalize-to-hlscpp="top-func=main_graph" \
    -affine-loop-perfection -affine-loop-order-opt -loop-pipelining -simplify-affine-if \
    -affine-store-forward -simplify-memref-access -array-partition -cse -canonicalize \
    | scalehls-translate -emit-hlscpp > resnet18.cpp

Please refer to the samples/onnx-mlir folder for more test cases, and to samples/onnx-mlir/ablation_int_test.sh for how to conduct the graph, loop, and directive optimizations.

References

  • CIRCT: Circuit IR Compilers and Tools
  • CIRCT-HLS: An HLS flow around the CIRCT project