hanchenye-llvm-project/polly
Zachary Turner a0e55b6403 [lit] Force site configs to be run before source-tree configs
This patch simplifies LLVM's lit infrastructure by enforcing that a site
config is always run before a source-tree config.

A significant amount of the complexity in lit config files arises from
the fact that, inside a source-tree config file, we don't yet know
whether the site config has been run.  However, the site config is
*always* required to run first, because it carries the various variables,
set through CMake, that the main config depends on.  As a result, every
config file has to do a bunch of magic to reverse-engineer the location
of its site config file if it detects (heuristically) that the site
config file has not yet been run.

This patch solves the problem by emitting, in llvm-lit.py, a mapping from
each source-tree config file to its binary-tree site config file. Then,
during discovery, when we find a config file we check whether we have a
target mapping for it, and if so we use that instead.
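
As a rough sketch of that idea (the helper names map_config and
resolve_config are illustrative, not the exact functions in lit; in the
real setup CMake emits the table into the generated llvm-lit.py, and the
paths below are placeholders):

    import os

    # Maps a source-tree config file to the site config generated for it
    # in the build tree.
    config_map = {}

    def map_config(source_cfg, site_cfg):
        # Normalize so lookups succeed however the path is spelled.
        source_cfg = os.path.normcase(os.path.realpath(source_cfg))
        config_map[source_cfg] = os.path.normpath(site_cfg)

    def resolve_config(cfg_path):
        # During discovery, prefer the mapped site config if one exists;
        # otherwise fall back to the config file that was found.
        key = os.path.normcase(os.path.realpath(cfg_path))
        return config_map.get(key, cfg_path)

    map_config('/src/llvm/test/lit.cfg', '/build/test/lit.site.cfg')
    print(resolve_config('/src/llvm/test/lit.cfg'))  # -> /build/test/lit.site.cfg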

This mechanism is generic enough that it does not affect external users
of lit. They will just not have a config mapping defined, and everything
will work as normal.

On the other hand, it allows us to make many simplifications in our own
configs:

* We are guaranteed that a site config will be executed first
* Inside of a main config, we no longer have to assume that attributes
  might not be present and use getattr everywhere (see the sketch after
  this list).
* We no longer have to pass parameters such as --param llvm_site_config=<path>
  on the command line.
* It is future-proof, meaning you don't have to edit llvm-lit.in to add
  support for new projects.
* All of the duplicated logic for the various fallback mechanisms of
  finding a site config from the main config is now gone.
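
To illustrate the getattr point above, here is a hedged before/after
sketch of a main config excerpt (the attribute name is only an example;
config and lit_config are the objects lit provides when executing a
config file):

    # Before: the main config could not assume the site config had run,
    # so it probed defensively and bailed out (or tried to locate and
    # execute the site config itself).
    llvm_tools_dir = getattr(config, 'llvm_tools_dir', None)
    if llvm_tools_dir is None:
        lit_config.fatal('No site specific configuration available!')

    # After: the site config is guaranteed to have executed first, so
    # the attribute can be read directly.
    llvm_tools_dir = config.llvm_tools_dir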

One potentially noteworthy thing that was required to implement this
change is that the ninja check targets now spawn lit through the
generated llvm-lit.py rather than by running lit.py directly against the
source tree. In particular, you can no longer run lit.py against the
source tree while specifying the various `foo_site_config=<path>`
parameters.  Instead, you need to run llvm-lit.py.

Differential Revision: https://reviews.llvm.org/D37756

llvm-svn: 313270
2017-09-14 16:47:58 +00:00
cmake [CMake] FindJsoncpp.cmake: Use descriptive variable name for libjsoncpp.so path. 2017-07-18 10:10:02 +00:00
docs Tiny docs fix 2017-07-27 18:14:00 +00:00
include/polly Revert "[ScopDetect/Info] Look through PHIs that follow an error block" 2017-09-06 19:05:40 +00:00
lib Unroll and separate the remaining parts of isolation 2017-09-11 17:46:47 +00:00
test [lit] Force site configs to be run before source-tree configs 2017-09-14 16:47:58 +00:00
tools [Polly][GPGPU] Fixed undefined reference for CUDA's managed memory in Runtime library. 2017-08-27 12:50:51 +00:00
unittests [test] Add some test cases for computeArrayUnused. 2017-08-21 23:04:55 +00:00
utils
www [WWW] Add a section to Getting Started about building out-of-tree 2017-07-11 20:37:28 +00:00
.arcconfig
.arclint [External] Move lib/JSON to lib/External/JSON. NFC. 2017-02-05 15:26:56 +00:00
.gitattributes
.gitignore Do not track the isl PDF manual in SVN 2017-01-16 11:48:03 +00:00
CMakeLists.txt [Polly][CMake] Skip unit-tests in lit if gtest is not available 2017-07-11 11:37:35 +00:00
CREDITS.txt
LICENSE.txt [External] Move lib/JSON to lib/External/JSON. NFC. 2017-02-05 15:26:56 +00:00
README Test commit 2017-06-28 12:58:44 +00:00

README

Polly - Polyhedral optimizations for LLVM
-----------------------------------------
http://polly.llvm.org/

Polly uses a mathematical representation, the polyhedral model, to represent and
transform loops and other control flow structures. Using an abstract
representation, it is possible to reason about transformations in a more general
way and to use highly optimized linear programming libraries to figure out the
optimal loop structure. These transformations can be used to do constant
propagation through arrays, remove dead loop iterations, optimize loops for
cache locality, optimize arrays, apply advanced automatic parallelization, drive
vectorization, or perform software pipelining.