Commit Graph

70 Commits

Author SHA1 Message Date
Tobias Grosser 7d136d952e [tests] Specify the dependence on the NVPTX backend for Polly ACC test cases
Some Polly ACC test cases fail without a working NVPTX backend. We explicitly
specify this dependence in REQUIRES. Alternatively, we could have only marked
polly-acc as supported in case the NVPTX backend is available, but as we might
use other backends in the future, this does not seem to be the best choice.

For this to work, we also need to make the 'targets_to_build' information
available.

Suggested-by: Michael Kruse <llvm@meinersbur.de>
llvm-svn: 296853
2017-03-03 03:38:50 +00:00
Tobias Grosser 9d551da5c1 [test] Do not emit binary data to output
Suggested-by: Michael Kruse <llvm@meinersbur.de>
llvm-svn: 296852
2017-03-03 03:24:34 +00:00
Tobias Grosser 93eb7e321f Un-XFAIL test case after half support was added to the PTX backend in r291956
llvm-svn: 292124
2017-01-16 14:08:14 +00:00
Tobias Grosser a9cac6a732 [tests] Adjust test output to recently changed SCEV canonicalization [NFC]
LLVM recently changed its SCEV canonicalization, which changed the output of
one of our GPGPU test cases.

llvm-svn: 286770
2016-11-13 19:27:17 +00:00
Tobias Grosser 05ee64e67a GPGPU: add missing REQUIRES line to test case
llvm-svn: 281850
2016-09-18 08:57:38 +00:00
Tobias Grosser bc653f2031 GPGPU: Do not run mostly sequential kernels on the GPU
In case sequential kernels are found deeper in the loop tree than any parallel
kernel, the overall scop is probably mostly sequential. Hence, run it on the
CPU.

llvm-svn: 281849
2016-09-18 08:31:09 +00:00
Tobias Grosser 82f2af3508 GPGPU: Dynamically ensure 'sufficient compute'
Offloading to a GPU is only beneficial if there is a sufficient amount of
compute that can be accelerated. Many kernels perform only a very small amount
of dynamic compute, which means GPU acceleration is not beneficial. We
compute at run-time an approximation of how many dynamic instructions will be
executed and fall back to CPU code in case this number is not sufficiently
large. To keep the run-time checking code simple, we over-approximate the
number of instructions executed in each statement by computing the volume of
the rectangular hull of its iteration space.
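
A rough illustration of the emitted check (hypothetical names; the real check is
generated as an isl AST expression over the parametric loop bounds):

    // Over-approximate the dynamic instruction count by the volume of the
    // rectangular hull of the iteration space (here a 2D loop nest) and only
    // offload to the GPU if it exceeds a threshold.
    static bool hasSufficientCompute(long TripCountI, long TripCountJ,
                                     long MinCompute) {
      long Volume = (TripCountI > 0 ? TripCountI : 0) *
                    (TripCountJ > 0 ? TripCountJ : 0);
      return Volume >= MinCompute; // otherwise run the CPU version
    }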

llvm-svn: 281848
2016-09-18 06:50:35 +00:00
Tobias Grosser cfdee6582b GPGPU: Make test cases independent of register numbering [NFC]
llvm-svn: 281847
2016-09-18 06:50:28 +00:00
Tobias Grosser 51dfc27589 GPGPU: Store back non-read-only scalars
We may generate GPU kernels that store into scalars in case we run some
sequential code on the GPU because the remaining data is expected to already be
on the GPU. For these kernels it is important to not keep the scalar values
in thread-local registers, but to store them back to the corresponding device
memory objects that back them.

We currently only store scalars back at the end of a kernel. This is only
correct if precisely one thread is executed. In case more than one thread may
be run, we currently invalidate the scop. To support such cases correctly,
we would need to always load and store back from a corresponding global
memory slot instead of a thread-local alloca slot.

llvm-svn: 281838
2016-09-17 19:22:31 +00:00
Tobias Grosser fe74a7a1f5 GPGPU: Detect read-only scalar arrays ...
and pass these by value rather than by reference.

llvm-svn: 281837
2016-09-17 19:22:18 +00:00
Tobias Grosser aaabbbf886 GPGPU: Do not assume arrays start at 0
Our alias checks precisely check that the minimal and maximal accessed elements
do not overlap in a kernel. Hence, we must ensure that our host <-> device
transfers do not touch additional memory locations that are not covered in
the alias check. To ensure this, we make sure that the data we copy for a
given array is only the data from the smallest element accessed to the largest
element accessed.

We also adjust the size of the array according to the offset at which the array
is actually accessed.

An interesting result of this is: In case arrays are accessed with negative
subscripts, e.g., A[-100], we automatically allocate and transfer _more_ data to
cover the full array. This is important as such code indeed exists in the wild.
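
A minimal sketch of the transfer logic this implies, written against the plain
CUDA driver API (Polly goes through its GPURuntime wrapper; names here are
illustrative):

    #include <cuda.h>
    #include <cstddef>

    // Copy only the accessed range [MinIdx, MaxIdx] of a host array, so the
    // transferred region matches exactly what the alias check covers. For
    // negative subscripts such as A[-100], MinIdx is negative and the region
    // grows accordingly.
    void copyAccessedRange(CUdeviceptr DevArray, const char *HostBase,
                           long MinIdx, long MaxIdx, size_t ElemSize) {
      size_t Bytes = (size_t)(MaxIdx - MinIdx + 1) * ElemSize;
      const char *HostStart = HostBase + MinIdx * (long)ElemSize;
      cuMemcpyHtoD(DevArray, HostStart, Bytes);
    }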

llvm-svn: 281611
2016-09-15 14:05:58 +00:00
Tobias Grosser a82c4b5df8 GPGPU: Allow region statements
llvm-svn: 281305
2016-09-13 08:42:10 +00:00
Tobias Grosser b79f4d3970 GPGPU: Extend types when array sizes have smaller types
This prevents a compiler crash.

llvm-svn: 281303
2016-09-13 08:02:14 +00:00
Tobias Grosser 5857b701a3 GPGPU: Bail out gracefully in case of invalid IR
Instead of aborting, we now bail out gracefully in case the kernel IR we
generate is invalid. This can currently happen in case the SCoP stores
pointer values, which we model as arrays, as data values into other arrays. In
this case, the original pointer value is not available on the device and can
consequently not be stored. As detecting this ahead of time is not so easy, we
detect these situations after the invalid IR has been generated and bail out.
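
A sketch of the bail-out idea using LLVM's IR verifier (the function name is
illustrative, not Polly's actual API):

    #include "llvm/IR/Module.h"
    #include "llvm/IR/Verifier.h"
    #include "llvm/Support/raw_ostream.h"
    #include <string>

    // Run the verifier on the generated kernel module and skip GPU code
    // generation instead of aborting if the IR is broken.
    static bool kernelIRIsValid(const llvm::Module &KernelM) {
      std::string Msg;
      llvm::raw_string_ostream OS(Msg);
      if (llvm::verifyModule(KernelM, &OS)) { // true means the IR is invalid
        llvm::errs() << "skipping GPU code generation: " << OS.str() << "\n";
        return false;
      }
      return true;
    }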

llvm-svn: 281193
2016-09-12 06:06:31 +00:00
Tobias Grosser 0bf4cc6499 Add missing 'REQUIRES' line
llvm-svn: 281166
2016-09-11 13:42:42 +00:00
Tobias Grosser 02293ed755 GPGPU: Do not fail in case of arrays never accessed
If an array has never been accessed, we failed to derive an upper bound
on its accesses and consequently a size for the outermost dimension. We
now explicitly check for empty access sets and then just use zero as size
for the outermost dimension.

llvm-svn: 281165
2016-09-11 13:30:12 +00:00
Tobias Grosser d58acf866a [GPGPU] Ensure arrays where only parts are modified are copied to GPU
To do so we change the way array extents are computed. Instead of the precise
set of memory locations accessed, we now compute the extent as the range between
minimal and maximal address in the first dimension and the full extent defined
by the sizes of the inner array dimensions.

We also move the computation of the may_persist region after the construction
of the arrays, as it relies on array information. Without arrays being
constructed, no useful information is computed at all.

llvm-svn: 278212
2016-08-10 10:58:19 +00:00
Tobias Grosser b06ff4574e [GPGPU] Support PHI nodes used in GPU kernel
Ensure the right scalar allocations are used as the host location of data
transfers. For the device code, we clear the allocation cache before device
code generation to be able to generate new device-specific allocations, and
we make sure to add back the old host allocations as soon as the
device code generation is finished.

llvm-svn: 278126
2016-08-09 15:35:06 +00:00
Tobias Grosser 750160e260 [GPGPU] Use separate basic block for GPU initialization code
This increases the readability of the IR and also clarifies that the GPU
initialization is executed _after_ the scalar initialization, which needs to
happen before the code of the transformed scop is executed.

Besides increased readability, the IR should not change. Specifically, I
do not expect any changes in program semantics due to this patch.

llvm-svn: 278125
2016-08-09 15:35:03 +00:00
Tobias Grosser 77f76788dc [tests] Add two missing 'REQUIRES' lines
llvm-svn: 278104
2016-08-09 09:11:39 +00:00
Tobias Grosser c59b3ce044 [BlockGenerator] Also eliminate dead code not originating from BB
After having generated the code for a ScopStmt, we run a simple dead-code
elimination that drops all instructions that are known to be and remain unused.
Until this change, we only considered instructions for dead-code elimination if
they had a corresponding instruction in the original BB that belongs to the
ScopStmt. However, when generating code we do not only copy code from the BB
belonging to a ScopStmt, but also generate code for operands referenced from that BB.
After this change, we also consider code for dead-code elimination that
does not have a corresponding instruction in the BB.

This fixes a bug in Polly-ACC where such dead-code referenced CPU code from
within a GPU kernel, which is possible as we do not guarantee that all variables
that are used in known-dead-code are moved to the GPU.
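
A simplified stand-in for the sweep described above (BlockGenerator's actual
logic differs in detail):

    #include "llvm/ADT/STLExtras.h"
    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Instruction.h"
    #include "llvm/Transforms/Utils/Local.h"

    // Repeatedly drop trivially dead instructions from the generated block,
    // regardless of whether they have a counterpart in the original BB.
    static void removeDeadGeneratedCode(llvm::BasicBlock &CopyBB) {
      bool Changed = true;
      while (Changed) {
        Changed = false;
        for (llvm::Instruction &I : llvm::make_early_inc_range(CopyBB))
          if (llvm::isInstructionTriviallyDead(&I)) {
            I.eraseFromParent();
            Changed = true;
          }
      }
    }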

llvm-svn: 278103
2016-08-09 08:59:05 +00:00
Tobias Grosser cf66ef26f3 [GPGPU] Pass parameters always by using their own type
llvm-svn: 278100
2016-08-09 07:22:08 +00:00
Tobias Grosser 124534038a [GPGPU] Support Values referenced from both isl expr and llvm instructions
When adding code that avoids passing values used in isl expressions and
LLVM instructions twice, we forgot to make a variable passed to the
kernel available in the ValueMap that makes it usable for instructions that
are not replaced with isl ast expressions. This change adds the variable
that is passed to the kernel to the ValueMap to ensure it is available
for such use cases as well.

llvm-svn: 278039
2016-08-08 19:22:19 +00:00
Tobias Grosser cb1aef8de4 [GPGPU] Create code to verify run-time conditions
llvm-svn: 278026
2016-08-08 17:35:55 +00:00
Tobias Grosser 928d7573dd GPGPU: Sort dimension sizes of multi-dimensional shared memory arrays correctly
Before this commit we generated the array type in reverse order and we also
added the outermost dimension size to the new array declaration, which is
incorrect as Polly assumes an additional unsized outermost
dimension, such that we ended up with an off-by-one error in the linearization of access
expressions.
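
A sketch of the corrected construction (helper name invented): wrap the inner
dimension sizes from innermost to outermost and leave the outermost dimension
unsized, matching Polly's array model:

    #include "llvm/ADT/ArrayRef.h"
    #include "llvm/ADT/STLExtras.h"
    #include "llvm/IR/DerivedTypes.h"
    #include <cstdint>

    // InnerDimSizes holds the sizes of all but the outermost dimension,
    // ordered from outermost to innermost.
    static llvm::Type *buildSharedArrayType(llvm::Type *ElemTy,
                                            llvm::ArrayRef<uint64_t> InnerDimSizes) {
      llvm::Type *Ty = ElemTy;
      for (uint64_t Size : llvm::reverse(InnerDimSizes)) // innermost first
        Ty = llvm::ArrayType::get(Ty, Size);
      return Ty;
    }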

llvm-svn: 277802
2016-08-05 08:27:24 +00:00
Tobias Grosser 470608e3e4 Add missing 'REQUIRES' line
llvm-svn: 277800
2016-08-05 07:08:45 +00:00
Tobias Grosser c1c6a2a61b GPGPU: Add cuda annotations to specify maximal number of threads per block
These annotations ensure that the NVIDIA PTX assembler limits the number of
registers used such that we can be certain the resulting kernel can be executed
for the number of threads in a thread block that we are planning to use.
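
A sketch of attaching such an annotation through the nvvm.annotations metadata
(the helper name and the concrete block-size value are only illustrative):

    #include "llvm/IR/Constants.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/Metadata.h"
    #include "llvm/IR/Module.h"

    // Tell the PTX assembler the maximal number of threads per block in the
    // x-dimension so it can limit register usage accordingly.
    static void addMaxThreadsAnnotation(llvm::Function *Kernel,
                                        unsigned MaxThreadsX) {
      llvm::Module *M = Kernel->getParent();
      llvm::LLVMContext &Ctx = M->getContext();
      llvm::Metadata *Ops[] = {
          llvm::ValueAsMetadata::get(Kernel),
          llvm::MDString::get(Ctx, "maxntidx"),
          llvm::ConstantAsMetadata::get(llvm::ConstantInt::get(
              llvm::Type::getInt32Ty(Ctx), MaxThreadsX))};
      M->getOrInsertNamedMetadata("nvvm.annotations")
          ->addOperand(llvm::MDNode::get(Ctx, Ops));
    }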

llvm-svn: 277799
2016-08-05 06:47:43 +00:00
Tobias Grosser f919d8b360 GPGPU: Support scalars that are mapped to shared memory
llvm-svn: 277726
2016-08-04 13:57:29 +00:00
Tobias Grosser 130ca30f92 GPGPU: Add private memory support
llvm-svn: 277722
2016-08-04 12:39:03 +00:00
Tobias Grosser b513b4916b GPGPU: Add support for shared memory
llvm-svn: 277721
2016-08-04 12:18:14 +00:00
Tobias Grosser 00bb5a99f5 GPGPU: Handle scalar array references
Pass the content of scalar array references to the alloca on the kernel side
and do not additionally pass them as normal LLVM scalar values.

llvm-svn: 277699
2016-08-04 06:55:59 +00:00
Tobias Grosser 576932728d GPGPU: Pass subtree values correctly to the kernel
llvm-svn: 277697
2016-08-04 06:55:49 +00:00
Tobias Grosser 629109b633 GPGPU: Mark kernel functions as polly.skip
Otherwise, we would try to re-optimize them with Polly-ACC and possibly even
generate kernels that try to offload themselves, which does not work as the
GPURuntime is not available on the accelerator and also does not make any
sense.

llvm-svn: 277589
2016-08-03 12:00:07 +00:00
Tobias Grosser 8af38ecaa3 Add missing REQUIRES line
llvm-svn: 276964
2016-07-28 07:08:34 +00:00
Tobias Grosser d8b94bcac1 GPGPU: Pass context parameters to GPU kernel
llvm-svn: 276963
2016-07-28 06:47:59 +00:00
Tobias Grosser a490147c90 GPGPU: Pass host iterators to kernel
llvm-svn: 276962
2016-07-28 06:47:56 +00:00
Tobias Grosser 44143bb927 GPGPU: use current 'Index' to find slot in parameter array
Before this change we used the array index, which would result in us accessing
the parameter array out-of-bounds. This bug was visible for test cases where not
all arrays in a scop are passed to a given kernel.

llvm-svn: 276961
2016-07-28 06:47:53 +00:00
Tobias Grosser 4e18d71c71 GPGPU: Generate kernel parameter allocation with right size
Before this change we miscounted the number of function parameters.

llvm-svn: 276960
2016-07-28 06:47:50 +00:00
Tobias Grosser 79a947c233 GPGPU: Add basic support for kernel launches
llvm-svn: 276863
2016-07-27 13:20:16 +00:00
Tobias Grosser 5779359624 GPGPU: Load GPU kernels
We embed the PTX code into the host IR as a global variable and compile it
at run-time into a GPU kernel.
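
What the generated host code amounts to in terms of the CUDA driver API (Polly
calls this through GPURuntime; error handling elided, names illustrative):

    #include <cuda.h>

    // JIT-compile the embedded PTX string with the CUDA driver and look up
    // the kernel function by name.
    CUfunction loadKernel(const char *PTXString, const char *KernelName) {
      CUmodule Module;
      CUfunction Kernel;
      cuModuleLoadData(&Module, PTXString);
      cuModuleGetFunction(&Kernel, Module, KernelName);
      return Kernel;
    }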

llvm-svn: 276645
2016-07-25 16:31:21 +00:00
Tobias Grosser 13c78e4d51 GPGPU: Emit data-transfer code
Also factor out getArraySize() to avoid code duplication and reorder some
function arguments to indicate the direction in which data is transferred.

llvm-svn: 276636
2016-07-25 12:47:39 +00:00
Tobias Grosser 7287aeddf1 GPGPU: Complete code to allocate and free device arrays
At the beginning of each SCoP, we allocate device arrays for all arrays
used on the GPU and we free such arrays after the SCoP has been executed.
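
In CUDA driver API terms this corresponds roughly to the following (Polly emits
calls into its GPURuntime wrapper rather than direct driver calls):

    #include <cuda.h>
    #include <cstddef>

    // Allocate a device buffer for an array before the SCoP executes ...
    CUdeviceptr allocDeviceArray(size_t Bytes) {
      CUdeviceptr DevPtr = 0;
      cuMemAlloc(&DevPtr, Bytes);
      return DevPtr;
    }

    // ... and release it again once the SCoP has finished.
    void freeDeviceArray(CUdeviceptr DevPtr) { cuMemFree(DevPtr); }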

llvm-svn: 276635
2016-07-25 12:47:33 +00:00
Tobias Grosser fa7b080218 GPGPU: initialize GPU context and simplify the corresponding GPURuntime interface.
There is no need to expose the selected device at the moment. We also pass back
pointers as return values, as this simplifies the interface.

llvm-svn: 276623
2016-07-25 09:16:01 +00:00
Tobias Grosser 74dc3cb431 GPGPU: Generate PTX assembly code for the kernel modules
Run the NVPTX backend over the GPUModule IR and write the resulting assembly
code into a string.

To work correctly, it is important to invalidate analysis results that still
reference the IR in the kernel module. Hence, this change clears all references
to dominators, loop info, and scalar evolution.

Finally, the NVPTX backend has trouble generating code for various special
floating-point types (not surprisingly), but also for uncommon integer types. This
commit does not resolve these issues, but pulls out problematic test cases into
separate files to XFAIL them individually and resolve them in future (not
immediate) changes one by one.
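
A sketch of the backend invocation (a suitable NVPTX TargetMachine is assumed to
exist already; the exact addPassesToEmitFile signature differs between LLVM
versions, so treat this as illustrative):

    #include "llvm/ADT/SmallString.h"
    #include "llvm/IR/LegacyPassManager.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"
    #include "llvm/Target/TargetMachine.h"
    #include <string>

    // Run the target's code generation passes over the kernel module and
    // capture the emitted assembly in an in-memory buffer.
    static std::string emitKernelAssembly(llvm::Module &KernelM,
                                          llvm::TargetMachine &TM) {
      llvm::SmallString<0> Buffer;
      llvm::raw_svector_ostream OS(Buffer);
      llvm::legacy::PassManager PM;
      TM.addPassesToEmitFile(PM, OS, llvm::TargetMachine::CGFT_AssemblyFile);
      PM.run(KernelM);
      return Buffer.str().str();
    }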

llvm-svn: 276396
2016-07-22 07:11:12 +00:00
Tobias Grosser edb885cb12 GPGPU: generate code for ScopStatements
This change introduces the actual compute code in the GPU kernels. To ensure
all values referenced from the statements in the GPU kernel are indeed available,
we scan all ScopStmts in the GPU kernel for references to llvm::Values that
are not yet covered by already modeled outer loop iterators, parameters, or
array base pointers and also pass these additional llvm::Values to the
GPU kernel.

For arrays used in the GPU kernel we introduce a new ScopArrayInfo object, which
is referenced by the newly generated access functions within the GPU kernel and
which is used to help with code generation.

llvm-svn: 276270
2016-07-21 13:15:59 +00:00
Tobias Grosser 2d58a64e7f GPGPU: Bail out of scops with hoisted invariant loads
This is currently not supported and will only be added later. Also update the
test cases to ensure no invariant code hoisting is applied.

llvm-svn: 275987
2016-07-19 15:56:25 +00:00
Tobias Grosser f95c5cd06a test: Add missing 'REQUIRES' line
llvm-svn: 275962
2016-07-19 07:47:27 +00:00
Tobias Grosser 92852dbe78 test: Add missing 'REQUIRES' line
llvm-svn: 275960
2016-07-19 07:39:54 +00:00
Tobias Grosser 5260c041ea GPGPU: Emit in-kernel synchronization statements
We use this opportunity to further classify the different user statements that
can arise and add TODOs for the ones not yet implemented.

llvm-svn: 275957
2016-07-19 07:33:16 +00:00
Tobias Grosser 59ab070523 GPGPU: generate control flow within the kernel
llvm-svn: 275956
2016-07-19 07:33:11 +00:00