hanchenye-llvm-project/polly
Johannes Doerfert 96e5471139 Separate invariant equivalence classes by type
We now distinguish invariant loads to the same memory location if they
  have different types. This will cause us to pre-load an invariant
  location once for each type that is used to access it. However, we can
  thereby avoid invalid casting, especially if an array is accessed
  through differently typed/sized invariant loads.

  This basically reverts the changes in r260023 but keeps the test
  cases.

llvm-svn: 260045
2016-02-07 17:30:13 +00:00
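
A small, hypothetical C fragment (not taken from the Polly test suite; all
names are made up) makes the situation concrete: a loop-invariant location is
read through two differently sized types, so with this change Polly hoists one
pre-load per access type instead of reusing a single pre-loaded value through a
cast.

  int Bound;

  void f(float *A, long N) {
    for (long i = 0; i < N; i++) {
      int Full = Bound;                             /* 4-byte invariant load          */
      unsigned char Low = *(unsigned char *)&Bound; /* 1-byte load of the same memory */
      if (i < Full + Low)
        A[i] += 1.0f;
    }
  }
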
cmake            Compile ISL into its own library                                  2015-09-24 11:30:22 +00:00
docs             Support accesses with differently sized types to the same array   2016-02-04 13:18:42 +00:00
include/polly    Separate invariant equivalence classes by type                     2016-02-07 17:30:13 +00:00
lib              Separate invariant equivalence classes by type                     2016-02-07 17:30:13 +00:00
test             Separate invariant equivalence classes by type                     2016-02-07 17:30:13 +00:00
tools            Remove autotools build system                                      2016-01-28 12:00:33 +00:00
utils            Revise polly-{update|check}-format targets                         2015-09-14 16:59:50 +00:00
www              www: Remove some spaces                                            2016-02-04 06:41:03 +00:00
.arcconfig       Adjusted arc linter config for modern version of arcanist          2015-08-12 09:01:16 +00:00
.arclint         Adjusted arc linter config for modern version of arcanist          2015-08-12 09:01:16 +00:00
.gitattributes
.gitignore       Add git patch files to .gitignore                                  2015-06-23 20:55:01 +00:00
CMakeLists.txt   Add basic doxygen infrastructure for Polly                         2016-02-04 07:16:36 +00:00
CREDITS.txt
LICENSE.txt
README

README

Polly - Polyhedral optimizations for LLVM
-----------------------------------------
http://polly.llvm.org/

Polly uses a mathematical representation, the polyhedral model, to represent and
transform loops and other control flow structures. Using this abstract
representation, it is possible to reason about transformations in a more general
way and to use highly optimized linear programming libraries to derive an
optimized loop structure. These transformations can be used to perform constant
propagation through arrays, remove dead loop iterations, optimize loops for
cache locality, optimize array accesses, apply advanced automatic
parallelization, drive vectorization, or perform software pipelining.
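
As a rough illustration of the kind of rescheduling this enables, consider a
hypothetical copy kernel (the function names are invented; Polly derives such
schedules automatically from the polyhedral representation rather than from
source-level rewrites):

  #define N 1024
  float A[N][N], B[N][N];

  /* Original loop nest: the inner loop walks down a column, so consecutive
   * iterations touch elements that are N floats apart in memory. */
  void copy_naive(void) {
    for (int j = 0; j < N; j++)
      for (int i = 0; i < N; i++)
        A[i][j] = B[i][j];
  }

  /* After loop interchange, a transformation a polyhedral scheduler can apply
   * for cache locality: consecutive iterations now touch consecutive memory. */
  void copy_interchanged(void) {
    for (int i = 0; i < N; i++)
      for (int j = 0; j < N; j++)
        A[i][j] = B[i][j];
  }

In the polyhedral model, both versions are simply different schedules of the
same iteration space, which is what makes such rewrites amenable to analysis
with linear programming tools.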