ScaleHLS Project

ScaleHLS is a High-level Synthesis (HLS) framework built on MLIR. It compiles HLS C/C++ or PyTorch models into optimized HLS C/C++, from which downstream tools such as Xilinx Vivado HLS can generate high-efficiency RTL designs.

By building on MLIR, which can be tuned to particular algorithms at different representation levels, ScaleHLS is scalable and customizable for applications that come with intrinsic structural or functional hierarchies. ScaleHLS represents HLS designs at multiple levels of abstraction and provides an HLS-dedicated analysis and transform library (in both C++ and Python) to solve optimization problems at the most suitable representation level. Using this library, we have developed a design space exploration engine that generates optimized HLS designs automatically.

For more details, please see our HPCA'22 paper:

@article{ye2021scalehls,
  title={ScaleHLS: A New Scalable High-Level Synthesis Framework on Multi-Level Intermediate Representation},
  author={Ye, Hanchen and Hao, Cong and Cheng, Jianyi and Jeong, Hyunmin and Huang, Jack and Neuendorffer, Stephen and Chen, Deming},
  journal={arXiv preprint arXiv:2107.11673},
  year={2021}
}

Framework Architecture

(Figure: the ScaleHLS framework architecture.)

Setting this up

Prerequisites

  • python3
  • cmake
  • ninja
  • clang and lld
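
On Debian/Ubuntu, for example, these can typically be installed with the distribution package manager (package names may differ on other platforms):

$ sudo apt-get install python3 cmake ninja-build clang lld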

If the Python binding is enabled, the following packages are also required (an example installation command follows the list).

  • pybind11
  • numpy
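
Both can typically be installed via pip, for example:

$ python3 -m pip install pybind11 numpy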

Clone ScaleHLS

$ git clone --recursive git@github.com:hanchenye/scalehls.git
$ cd scalehls

Build ScaleHLS

Run the following script to build ScaleHLS. Optionally, add -p ON to enable the Python binding and -j xx to specify the number of parallel linking jobs.

$ ./build-scalehls.sh
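
For example, the following invocation enables the Python binding and uses 16 parallel link jobs (the job count here is only an illustration; choose one that suits your machine):

$ ./build-scalehls.sh -p ON -j 16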

After the build completes, we suggest exporting the following paths:

$ export PATH=$PATH:$PWD/build/bin:$PWD/polygeist/build/bin
$ export PYTHONPATH=$PYTHONPATH:$PWD/build/tools/scalehls/python_packages/scalehls_core

Compiling HLS C/C++

To optimize C/C++ kernels with the design space exploration (DSE) engine, run:

$ cd samples/polybench/gemm

$ # Parse C/C++ kernel into MLIR.
$ mlir-clang test_gemm.c -function=test_gemm -S \
    -memref-fullrank -raise-scf-to-affine > test_gemm.mlir

$ # Launch the DSE and emit the optimized design as C++ code.
$ scalehls-opt test_gemm.mlir -debug-only=scalehls \
    -scalehls-dse-pipeline="top-func=test_gemm target-spec=../config.json" \
    | scalehls-translate -emit-hlscpp > test_gemm_dse.cpp

If the Python binding is enabled, we provide a pyscalehls tool to showcase the scalehls Python library:

$ pyscalehls.py test_gemm.c -f test_gemm > test_gemm_pyscalehls.cpp

Compiling PyTorch Model

This flow has been verified against Torch-MLIR at SHA ea371a9. With that version of Torch-MLIR installed, you should be able to run the test below.
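
If you build Torch-MLIR from source, one possible way to pin it to the verified commit is the usual git workflow sketched here; this is only an illustration, so follow the official Torch-MLIR build instructions afterwards:

$ git clone https://github.com/llvm/torch-mlir.git
$ cd torch-mlir
$ git checkout ea371a9
$ git submodule update --init --recursive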

$ cd samples/pytorch/resnet18

$ # Parse PyTorch model to TOSA dialect (with Torch-MLIR mlir_venv activated).
$ # This may take several minutes to compile due to the large number of weights.
$ python3 export_resnet18_mlir.py | torch-mlir-opt \
    -torchscript-module-to-torch-backend-pipeline="optimize=true" \
    -torch-backend-to-tosa-backend-pipeline="optimize=true" > resnet18.mlir

$ # Optimize the model and emit C++ code.
$ scalehls-opt resnet18.mlir \
    -scalehls-pytorch-pipeline="top-func=forward opt-level=2" \
    | scalehls-translate -emit-hlscpp > resnet18.cpp

Repository Layout

The project follows the conventions of typical MLIR-based projects:

  • include/scalehls and lib for C++ MLIR dialects/passes.
  • polygeist for the C/C++ front-end.
  • samples for C/C++ and PyTorch examples.
  • test for regression tests.
  • tools for command line tools.