ScaleHLS Project
ScaleHLS is a High-Level Synthesis (HLS) framework built on MLIR. It compiles HLS C/C++ code or PyTorch models into optimized HLS C/C++, from which downstream tools such as Xilinx Vivado HLS can generate high-efficiency RTL designs.
Because the MLIR framework can be tuned to particular algorithms at different representation levels, ScaleHLS is more scalable and customizable for applications that come with intrinsic structural or functional hierarchies. ScaleHLS represents HLS designs at multiple levels of abstraction and provides an HLS-dedicated analysis and transform library (in both C++ and Python) to solve optimization problems at the most suitable representation levels. Using this library, we have developed a design space exploration (DSE) engine that generates optimized HLS designs automatically.
For more details, please see our HPCA’22 and DAC’22 papers:
@inproceedings{yehpca2022scalehls,
title={ScaleHLS: A New Scalable High-Level Synthesis Framework on Multi-Level Intermediate Representation},
author={Ye, Hanchen and Hao, Cong and Cheng, Jianyi and Jeong, Hyunmin and Huang, Jack and Neuendorffer, Stephen and Chen, Deming},
booktitle={2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA)},
year={2022}
}
@inproceedings{yedac2022scalehls,
title={ScaleHLS: a scalable high-level synthesis framework with multi-level transformations and optimizations},
author={Ye, Hanchen and Jun, HyeGang and Jeong, Hyunmin and Neuendorffer, Stephen and Chen, Deming},
booktitle={Proceedings of the 59th ACM/IEEE Design Automation Conference},
year={2022}
}
Framework Architecture
Setting this up
Prerequisites
python3
cmake
ninja
clang and lld
The following packages are additionally required to enable the Python binding.
pybind11
numpy
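If you plan to enable the Python binding, these extra packages can typically be installed with pip, for example:
$ python3 -m pip install pybind11 numpy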
Clone ScaleHLS
$ git clone --recursive git@github.com:hanchenye/scalehls.git
$ cd scalehls
Build ScaleHLS
Run the following script to build ScaleHLS. Optionally, add -p ON to enable the Python binding and -j xx to specify the number of parallel linking jobs.
$ ./build-scalehls.sh
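For example, to build with the Python binding enabled and eight parallel linking jobs (the job count here is only an illustration):
$ ./build-scalehls.sh -p ON -j 8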
After the build, we suggest exporting the following paths.
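The exact directories depend on your checkout and build tree; the paths below are only a sketch assuming the default build/ layout and, for the Python binding, the generated python_packages directory. Adjust them to match your setup.
$ # Assumed locations of the ScaleHLS binaries and Python packages; adjust as needed.
$ export PATH=$PATH:$PWD/build/bin
$ export PYTHONPATH=$PYTHONPATH:$PWD/build/tools/scalehls/python_packages/scalehls_core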
Compiling HLS C/C++
To optimize C/C++ kernels with the design space exploration (DSE) engine, run the DSE pipeline on the kernel (see the sketch below). If the Python binding is enabled, we also provide a pyscalehls tool to showcase the scalehls Python library.
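The commands below are only a sketch of the flow, not exact invocations: the -scalehls-dse-pipeline pass name, its top-func option, the pyscalehls -f flag, and the kernel.mlir/kernel.c inputs are assumptions. Check scalehls-opt --help, pyscalehls --help, and the samples directory for the actual names and options.
$ # Hypothetical DSE invocation; only scalehls-opt and scalehls-translate -emit-hlscpp
$ # are confirmed tools, the pipeline name and options are assumptions.
$ scalehls-opt kernel.mlir -scalehls-dse-pipeline="top-func=top" \
    | scalehls-translate -emit-hlscpp > kernel_dse.cpp
$ # Hypothetical pyscalehls invocation on the original C kernel.
$ pyscalehls kernel.c -f top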
Compiling PyTorch Model
If you have installed Torch-MLIR with SHA ea371a9, you should be able to run the following test:
$ cd samples/pytorch/resnet18
$ # Parse PyTorch model to TOSA dialect (with Torch-MLIR mlir_venv activated).
$ # This may take several minutes to compile due to the large number of weights.
$ python3 export_resnet18_mlir.py | torch-mlir-opt \
-torchscript-module-to-torch-backend-pipeline="optimize=true" \
-torch-backend-to-tosa-backend-pipeline="optimize=true" > resnet18.mlir
$ # Optimize the model and emit C++ code.
$ scalehls-opt resnet18.mlir \
-scalehls-pytorch-pipeline-v2="top-func=forward loop-tile-size=4 loop-unroll-factor=2" \
| scalehls-translate -emit-hlscpp > resnet18.cpp
Repository Layout
The project follows the conventions of typical MLIR-based projects:
include/scalehls and lib for C++ MLIR dialects/passes.
polygeist for the C/C++ front-end.
samples for C/C++ and PyTorch examples.
test for holding regression tests.
tools for command line tools.