ScaleHLS Project (scalehls)

This project aims to create a framework that ultimately converts an algorithm written in a high-level language into an efficient hardware implementation. With its multiple levels of intermediate representation (IR), MLIR appears to be the ideal tool for exploring ways to optimize the eventual design at various levels of abstraction (e.g., various levels of parallelism). Our framework will be based on MLIR and will incorporate a backend for high-level synthesis (HLS) C/C++ code. However, the key contribution will be our parameterization and optimization of a tremendously large design space.

Quick Start

1. Install LLVM and MLIR

IMPORTANT: This step assumes that you have cloned LLVM from https://github.com/circt/llvm to $LLVM_DIR. To build LLVM and MLIR, run:

$ mkdir $LLVM_DIR/build
$ cd $LLVM_DIR/build
$ cmake -G Ninja ../llvm \
    -DLLVM_ENABLE_PROJECTS="mlir" \
    -DLLVM_TARGETS_TO_BUILD="X86;RISCV" \
    -DLLVM_ENABLE_ASSERTIONS=ON \
    -DCMAKE_BUILD_TYPE=DEBUG
$ ninja
$ ninja check-mlir

2. Install ScaleHLS

This step assumes this repository is cloned to $SCALEHLS_DIR. To build ScaleHLS and run its tests, run:

$ mkdir $SCALEHLS_DIR/build
$ cd $SCALEHLS_DIR/build
$ cmake -G Ninja .. \
    -DMLIR_DIR=$LLVM_DIR/build/lib/cmake/mlir \
    -DLLVM_DIR=$LLVM_DIR/build/lib/cmake/llvm \
    -DLLVM_ENABLE_ASSERTIONS=ON \
    -DCMAKE_BUILD_TYPE=DEBUG
$ ninja check-scalehls

3. Test ScaleHLS

Once the build and tests have completed successfully, you should be able to play with:

$ export PATH=$SCALEHLS_DIR/build/bin:$PATH
$ cd $SCALEHLS_DIR
$
$ benchmark-gen -type "cnn" -config "$SCALEHLS_DIR/config/cnn-config.ini" -number 1
$ scalehls-opt -hlskernel-to-affine test/Conversion/HLSKernelToAffine/test_*.mlir
$
$ scalehls-opt -convert-to-hlscpp test/Conversion/ConvertToHLSCpp/test_*.mlir
$ scalehls-opt -convert-to-hlscpp test/EmitHLSCpp/test_*.mlir | scalehls-translate -emit-hlscpp
$
$ scalehls-opt -qor-estimation test/Analysis/QoREstimation/test_for.mlir
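To give a sense of what these passes consume, here is a hedged sketch of a minimal input in the MLIR syntax of that era. This is a made-up example (the function name `@scale` and its body are illustrative, not one of the shipped test cases under test/); real inputs are the test_*.mlir files referenced above.

```mlir
// Hypothetical minimal kernel: a single affine loop that scales a
// buffer in place. Affine loops like this are the kind of IR that
// passes such as -insert-pipeline-pragma and -qor-estimation analyze.
func @scale(%arg0: memref<16xf32>) {
  %c2 = constant 2.0 : f32
  affine.for %i = 0 to 16 {
    %0 = affine.load %arg0[%i] : memref<16xf32>
    %1 = mulf %0, %c2 : f32
    affine.store %1, %arg0[%i] : memref<16xf32>
  }
  return
}
```

Saving this as a .mlir file and passing it to scalehls-opt with one of the flags above is a reasonable way to experiment outside the provided test suite.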

If Vivado HLS (tested with 2019.1) is installed on your machine, running the following script will report HLS results for some benchmarks.

$ cd $SCALEHLS_DIR/samples
$ source ./test_run.sh rerun

References

  1. MLIR documents
  2. mlir-npcomp github
  3. onnx-mlir github
  4. circt github
  5. comba github
  6. dahlia github