Commit Graph

172542 Commits

Author SHA1 Message Date
Todd Fiala 18f4540f61 Fix TestPtrRef2Typedef with new const adornment on expected ref type.
llvm-svn: 206619
2014-04-18 17:04:21 +00:00
Todd Fiala 3dbe13f6a8 Address hung tests in Linux.
Follow-up patch coming to address test failures exposed by this change.

llvm-svn: 206618
2014-04-18 17:01:01 +00:00
Benjamin Kramer 889873d890 LineIterator: Add DataTypes.h for int64_t on MSVC.
llvm-svn: 206617
2014-04-18 16:57:01 +00:00
Benjamin Kramer 753a7ab8c5 Add some missing includes for various standard library implementations.
llvm-svn: 206616
2014-04-18 16:46:29 +00:00
Benjamin Kramer e677b8dab1 Make the copy member of StringRef/ArrayRef generic wrt allocators.
Doesn't make sense to restrict this to BumpPtrAllocator. While there,
replace an explicit loop with std::equal. Some standard libraries know
how to compile this down to a ::memcmp call if possible.

llvm-svn: 206615
2014-04-18 16:36:15 +00:00
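A minimal sketch of the two changes described above, using toy types rather than the real StringRef/ArrayRef: copy() templated over any allocator that exposes Allocate<T>(Count), and equality via std::equal. ToyArrayRef and ToyAllocator are illustrative names, not LLVM API.

```cpp
#include <algorithm>
#include <cstddef>
#include <memory>

// Toy stand-in for any allocator exposing Allocate<T>(Count); BumpPtrAllocator
// is just one such type. (Leaks on purpose, like a bump allocator would until
// it is destroyed.)
struct ToyAllocator {
  template <typename T> T *Allocate(size_t Count) {
    return static_cast<T *>(::operator new(Count * sizeof(T)));
  }
};

template <typename T> class ToyArrayRef {
  const T *Data = nullptr;
  size_t Length = 0;

public:
  ToyArrayRef(const T *D, size_t N) : Data(D), Length(N) {}
  const T *begin() const { return Data; }
  const T *end() const { return Data + Length; }
  size_t size() const { return Length; }

  // Generic wrt allocators: any type with an Allocate<T>(Count) member works.
  template <typename Allocator> ToyArrayRef<T> copy(Allocator &A) const {
    T *Buffer = A.template Allocate<T>(Length);
    std::uninitialized_copy(begin(), end(), Buffer);
    return ToyArrayRef<T>(Buffer, Length);
  }
};

// Equality via std::equal; standard libraries can lower this to a single
// memcmp for trivially comparable element types instead of an explicit loop.
template <typename T>
bool operator==(ToyArrayRef<T> LHS, ToyArrayRef<T> RHS) {
  return LHS.size() == RHS.size() &&
         std::equal(LHS.begin(), LHS.end(), RHS.begin());
}

int main() {
  int Values[] = {1, 2, 3, 4};
  ToyArrayRef<int> Ref(Values, 4);
  ToyAllocator A;
  return Ref.copy(A) == Ref ? 0 : 1;
}
```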
Timur Iskhodzhanov 675bf27ab7 Split out the no-thunk multiple inheritance tests
Also, intentionally duplicate base class definitions per test, so it's easier to copy tests while debugging failures

llvm-svn: 206614
2014-04-18 15:10:05 +00:00
Tim Northover ff046b64d9 AArch64/ARM64: add more NEON tests.
Mostly no testing this time, since they were just wrangling
target-specific intrinsics.

llvm-svn: 206613
2014-04-18 14:54:53 +00:00
Benjamin Kramer 29af3ed457 Allocator: Remove ReferenceAdder hack.
This was a workaround for compilers that had issues with reference
collapsing.

llvm-svn: 206612
2014-04-18 14:54:51 +00:00
Tim Northover 37d9a9cebf ARM64: disable generation of .loh directives outside MachO.
Part of PR19455.

llvm-svn: 206611
2014-04-18 14:54:46 +00:00
Tim Northover be1d1b6681 ARM64: don't emit .subsections_via_symbols on ELF.
Part of PR19455.

llvm-svn: 206610
2014-04-18 14:54:41 +00:00
Tim Northover be3941cc79 ARM64: add extra NEG pattern.
llvm-svn: 206609
2014-04-18 14:54:35 +00:00
Dmitri Gribenko 62bcd925c0 Add more constness to module-related APIs
llvm-svn: 206595
2014-04-18 14:36:51 +00:00
Tim Northover 4dab69815c ARM64: make sure the caller is expected to extend in AAPCS.
This is one of those DarwinPCS differences. It'd been caught in
arguments, but not return values.

llvm-svn: 206594
2014-04-18 13:46:08 +00:00
Evgeniy Stepanov 561c4db707 [asan] Reenable tests that should pass since PR19207 is fixed.
llvm-svn: 206593
2014-04-18 13:24:03 +00:00
Tim Northover 0dbdfb8522 AArch64/ARM64: port more AArch64 tests to ARM64.
llvm-svn: 206592
2014-04-18 13:16:55 +00:00
Tim Northover e3028832d1 AArch64/ARM64: add non-scalar lowering for more FCVT operations.
llvm-svn: 206591
2014-04-18 13:16:42 +00:00
Aaron Ballman 0491afaf5f Updating to use more range-based for loops, nullptr and auto. No functional changes.
llvm-svn: 206590
2014-04-18 13:13:15 +00:00
Evgeniy Stepanov 474011d55d [msan] Add missing quotes.
llvm-svn: 206589
2014-04-18 13:03:54 +00:00
Tim Northover 01f315a556 AArch64/ARM64: improve spotting of EXT instructions from VECTOR_SHUFFLE.
We couldn't cope if the first mask element was UNDEF before, which
isn't ideal.

llvm-svn: 206588
2014-04-18 12:50:58 +00:00
Evgeniy Stepanov 191ebd874f [msan] Run msan_test in the new with-calls mode.
llvm-svn: 206587
2014-04-18 12:19:28 +00:00
Evgeniy Stepanov 83477cb93b [msan] Missing declarations for the new interface functions.
llvm-svn: 206586
2014-04-18 12:18:00 +00:00
Evgeniy Stepanov 65120ec8c6 [msan] Add -msan-instrumentation-with-call-threshold.
This flag replaces inline instrumentation for checks and origin stores with
calls into MSan runtime library. This is a workaround for PR17409.

Disabled by default.

llvm-svn: 206585
2014-04-18 12:17:20 +00:00
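A hedged illustration of what "instrumentation with calls" means, written by hand rather than compiler-generated: the per-use inline branch is replaced with a single call into a runtime helper. __toy_maybe_warning and the shadow-as-parameter scheme are stand-ins; the real MSan callbacks and shadow propagation are not reproduced here.

```cpp
#include <cstdint>
#include <cstdio>

// Toy runtime helper standing in for an MSan callback; the real callback
// names added by this commit are not reproduced here.
extern "C" void __toy_maybe_warning(uint32_t Shadow) {
  if (Shadow)
    std::fprintf(stderr, "use of uninitialized value\n");
}

// Inline mode: the instrumentation emits a branch at every checked use, which
// bloats functions that contain many checks (the problem behind PR17409).
void useValueInline(uint32_t Value, uint32_t Shadow) {
  if (Shadow)
    std::fprintf(stderr, "use of uninitialized value\n");
  std::printf("%u\n", Value);
}

// With-calls mode: the same check becomes one call into the runtime, keeping
// the instrumented function small at the cost of a call per check.
void useValueWithCalls(uint32_t Value, uint32_t Shadow) {
  __toy_maybe_warning(Shadow);
  std::printf("%u\n", Value);
}

int main() {
  useValueInline(1, 0);
  useValueWithCalls(2, 0);
  return 0;
}
```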
Evgeniy Stepanov 8f41674719 [msan] Add new MSan callbacks for instrumentation-with-calls mode.
llvm-svn: 206584
2014-04-18 12:15:24 +00:00
Chandler Carruth d8d865e266 [LCG] Remove all of the complexity stemming from supporting copying.
Reality is that we're never going to copy one of these. Supporting this
was becoming a nightmare because nothing even causes it to compile most
of the time. Lots of subtle errors built up that wouldn't have been
caught by any "normal" testing.

Also, make the move assignment actually work rather than the bogus swap
implementation that would just infloop if used. As part of that, factor
out the graph pointer updates into a helper to share between move
construction and move assignment.

llvm-svn: 206583
2014-04-18 11:02:33 +00:00
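A rough sketch of the move-semantics pattern described above, assuming a type whose nodes keep back-pointers to their owning graph: move construction and move assignment both steal the node storage and then share one helper that re-points the back-pointers. Graph, Node, and updateNodeOwners are illustrative names, not the LazyCallGraph API.

```cpp
#include <memory>
#include <utility>
#include <vector>

class Graph {
  struct Node { Graph *Owner; int Id; };
  std::vector<std::unique_ptr<Node>> Nodes;

  // Shared by move construction and move assignment: after stealing the node
  // storage, every node's back-pointer must refer to *this*, not the source.
  void updateNodeOwners() {
    for (auto &N : Nodes)
      N->Owner = this;
  }

public:
  Graph() = default;
  Graph(const Graph &) = delete;            // copying is never needed
  Graph &operator=(const Graph &) = delete;

  Graph(Graph &&Other) : Nodes(std::move(Other.Nodes)) { updateNodeOwners(); }

  Graph &operator=(Graph &&Other) {
    if (this != &Other) {
      Nodes = std::move(Other.Nodes);       // actually move, don't swap
      updateNodeOwners();
    }
    return *this;
  }

  Node &addNode(int Id) {
    Nodes.push_back(std::unique_ptr<Node>(new Node{this, Id}));
    return *Nodes.back();
  }
};

int main() {
  Graph G1;
  G1.addNode(0);
  Graph G2 = std::move(G1);   // move construction
  Graph G3;
  G3 = std::move(G2);         // move assignment, no bogus swap
  return 0;
}
```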
Chandler Carruth 54125a2ba8 [Allocator] Fix an obvious think-o with the move assignment
implementation of the SpecificBumpPtrAllocator -- we have to actually
move the subobject. =] Noticed when using this code more directly.

llvm-svn: 206582
2014-04-18 11:02:29 +00:00
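A minimal sketch of the fix, assuming the wrapper allocator owns a bump-allocator subobject: the move operations have to actually move that subobject. Toy names only, not the real SpecificBumpPtrAllocator.

```cpp
#include <utility>
#include <vector>

struct ToyBumpAllocator {
  std::vector<char *> Slabs;   // stand-in for owned slab storage
  // ... allocation machinery elided ...
};

template <typename T> class ToySpecificAllocator {
  ToyBumpAllocator Alloc;      // the subobject that actually owns memory

public:
  ToySpecificAllocator() = default;
  ToySpecificAllocator(ToySpecificAllocator &&Other)
      : Alloc(std::move(Other.Alloc)) {}

  ToySpecificAllocator &operator=(ToySpecificAllocator &&Other) {
    Alloc = std::move(Other.Alloc);   // the step the original think-o omitted
    return *this;
  }
};

int main() {
  ToySpecificAllocator<int> A, B;
  B = std::move(A);   // exercises the fixed move assignment
  return 0;
}
```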
Chandler Carruth 18eadd9260 [LCG] Add support for building persistent and connected SCCs to the
LazyCallGraph. This is the start of the whole point of this different
abstraction, but it is just the initial bits. Here is a run-down of
what's going on here. I'm planning to incorporate some (or all) of this
into comments going forward, hopefully with better editing and wording.
=]

The crux of the problem with the traditional way of building SCCs is
that they are ephemeral. The new pass manager however really needs the
ability to associate analysis passes and results of analysis passes with
SCCs in order to expose these analysis passes to the SCC passes. Making
this work is kind-of the whole point of the new pass manager. =]

So, when we're building SCCs for the call graph, we actually want to
build persistent nodes that stick around and can be reasoned about
later. We'd also like the ability to walk the SCC graph in more complex
ways than just the traditional postorder traversal of the current CGSCC
walk. That means that in addition to being persistent, the SCCs need to
be connected into a useful graph structure.

However, we still want the SCCs to be formed lazily where possible.

These constraints are quite hard to satisfy with the SCC iterator. Also,
using that would bypass our ability to actually add data to the nodes of
the call graph to facilitate implementing the Tarjan walk. So I've
re-implemented things in a more direct and embedded way. This
immediately makes it easy to get the persistence and connectivity
correct, and it also allows leveraging the existing nodes to simplify
the algorithm. I've worked somewhat to make this implementation more
closely follow the traditional paper's nomenclature and strategy,
although it is still a bit obtuse because it isn't recursive, using
an explicit stack and a tail call instead, and it is interruptible,
resuming each time we need another SCC.

The other tricky bit here, and what actually took almost all the time
and trials and errors I spent building this, is exactly *what* graph
structure to build for the SCCs. The naive thing to build is the call
graph in its newly acyclic form. I wrote about 4 versions of this which
did precisely this. Inevitably, when I experimented with them across
various use cases, they became incredibly awkward. It was all
implementable, but it felt like a complete wrong fit. Square peg, round
hole. There were two overriding aspects that pushed me in a different
direction:

1) We want to discover the SCC graph in a postorder fashion. That means
   the root node will be the *last* node we find. Using the call-SCC DAG
   as the graph structure of the SCCs results in an orphaned graph until
   we discover a root.

2) We will eventually want to walk the SCC graph in parallel, exploring
   distinct sub-graphs independently, and synchronizing at merge points.
   This again is not helped by the call-SCC DAG structure.

The structure which, quite surprisingly, ended up being completely
natural to use is the *inverse* of the call-SCC DAG. We add the leaf
SCCs to the graph as "roots", and have edges to the caller SCCs. Once
I switched to building this structure, everything just fell into place
elegantly.

Aside from general cleanups (there are FIXMEs and too few comments
overall) that are still needed, the other missing piece of this is
support for iterating across levels of the SCC graph. These will become
useful for implementing #2, but they aren't an immediate priority.

Once SCCs are in good shape, I'll be working on adding mutation support
for incremental updates and adding the pass manager that this analysis
enables.

llvm-svn: 206581
2014-04-18 10:50:32 +00:00
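For reference, a generic sketch of the non-recursive Tarjan walk the commit alludes to: an explicit DFS stack instead of recursion, with SCCs produced in postorder so the root SCC comes out last. This is a standalone illustration of the algorithm, not the LazyCallGraph implementation (which is interruptible and built on the call graph's own nodes).

```cpp
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

// Iterative Tarjan SCC over an adjacency-list graph. Each SCC is emitted only
// after every SCC it has edges into, i.e. in postorder of the condensation.
std::vector<std::vector<int>>
findSCCs(const std::vector<std::vector<int>> &Adj) {
  int N = Adj.size();
  std::vector<int> Index(N, -1), Lowlink(N, 0), Stack;
  std::vector<bool> OnStack(N, false);
  std::vector<std::vector<int>> SCCs;
  int NextIndex = 0;

  for (int Start = 0; Start < N; ++Start) {
    if (Index[Start] != -1)
      continue;
    // Each frame is (node, next out-edge to visit): the explicit DFS stack.
    std::vector<std::pair<int, size_t>> DFS = {{Start, 0}};
    Index[Start] = Lowlink[Start] = NextIndex++;
    Stack.push_back(Start);
    OnStack[Start] = true;

    while (!DFS.empty()) {
      int U = DFS.back().first;
      if (DFS.back().second < Adj[U].size()) {
        int V = Adj[U][DFS.back().second++];
        if (Index[V] == -1) {
          // Tree edge: first visit of V, descend into it.
          Index[V] = Lowlink[V] = NextIndex++;
          Stack.push_back(V);
          OnStack[V] = true;
          DFS.push_back({V, 0});
        } else if (OnStack[V]) {
          // Edge back into the current search: V belongs to U's candidate SCC.
          Lowlink[U] = std::min(Lowlink[U], Index[V]);
        }
      } else {
        // All out-edges of U processed: propagate lowlink, maybe pop an SCC.
        DFS.pop_back();
        if (!DFS.empty()) {
          int Parent = DFS.back().first;
          Lowlink[Parent] = std::min(Lowlink[Parent], Lowlink[U]);
        }
        if (Lowlink[U] == Index[U]) {
          // U is the root of an SCC: everything above it on the stack belongs.
          std::vector<int> SCC;
          int V;
          do {
            V = Stack.back();
            Stack.pop_back();
            OnStack[V] = false;
            SCC.push_back(V);
          } while (V != U);
          SCCs.push_back(std::move(SCC));
        }
      }
    }
  }
  return SCCs;
}

int main() {
  // 0 -> 1 -> 2 -> 0 forms a cycle; 2 -> 3 is a plain call edge.
  std::vector<std::vector<int>> Adj = {{1}, {2}, {0, 3}, {}};
  for (const auto &SCC : findSCCs(Adj)) {
    for (int V : SCC)
      std::printf("%d ", V);
    std::printf("\n");   // prints the leaf SCC {3} first, the root SCC last
  }
  return 0;
}
```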
Tim Northover 07f1624aa2 ARM64: make sure HFAs on the stack get properly aligned.
Another AAPCS bug, part of PR19432.

llvm-svn: 206580
2014-04-18 10:47:44 +00:00
Benjamin Kramer e6c821ef4c X86: Pattern match scalar loads + vcvtph2ps into just vcvtph2ps.
vcvtph2ps only reads the lower 64 bits of the address passed to the
intrinsic.

llvm-svn: 206579
2014-04-18 10:45:33 +00:00
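A hedged example of the kind of source the fold is aimed at: four half-precision values loaded as one 64-bit scalar and converted with vcvtph2ps, which only reads the low 64 bits of its input. The exact pattern the backend matches is an assumption here; compile with -mf16c.

```cpp
#include <immintrin.h>

__m128 load4HalfToFloat(const void *P) {
  // Scalar 64-bit load into the low half of an XMM register...
  __m128i Halfs = _mm_loadl_epi64(static_cast<const __m128i *>(P));
  // ...followed by the conversion. With this change the load should fold into
  // a memory-operand vcvtph2ps instead of a separate move plus a register
  // vcvtph2ps.
  return _mm_cvtph_ps(Halfs);
}
```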
Dmitry Vyukov 10d4b14a16 tsan: add a benchmark that makes it possible to investigate shadow memory consumption
llvm-svn: 206578
2014-04-18 10:37:39 +00:00
Tobias Grosser 954939842f Really fix the load case.
Commit r206510 claimed to fix the load case, but it only fixed the store case.
This commit adds the same fix for the load case, including the missing test
coverage.

llvm-svn: 206577
2014-04-18 09:46:35 +00:00
Chandler Carruth 1911882569 Revert r206565 (and r206566 which updated tests).
This commit was attributed to a different person from the one who posted the
patch to the list, and the person who posted it to the list stated at the time
that they were not the author either, but that the author was yet a third
person. I don't know what is going on here, but reverting until the
attribution is clear and the author has explicitly contributed the patch.

Also, the review hasn't really involved any of the MC maintainers and
that seems questionable too.

llvm-svn: 206576
2014-04-18 09:35:51 +00:00
Tim Northover 66c36b814f AArch64/ARM64: port atomics test to ARM64.
Covers quite a few extra instructions (like any of the max/min ones
which were broken until recently on ARM64).

llvm-svn: 206575
2014-04-18 09:31:31 +00:00
Tim Northover a2c4c71c12 AArch64/ARM64: spot a greater variety of concat_vector operations.
Code mostly copied from AArch64, just tidied up a trifle and plumbed
into the ARM64 way of doing things.

This also enables the AArch64 tests which inspired the previous
untested commits.

llvm-svn: 206574
2014-04-18 09:31:27 +00:00
Tim Northover 848bb3ced5 ARM64: implement cunning optimisation from AArch64
A vector extract followed by a dup can become a single instruction even if the
types don't match. AArch64 handled this in ISelLowering, but a few reasonably
simple patterns can take care of it in TableGen, so that's where I've put it.

llvm-svn: 206573
2014-04-18 09:31:20 +00:00
Tim Northover 5ec51a8981 ARM64: spot a vector_shuffle that maps to INS and expand.
Tests will be coming very shortly when all the optimisations needed to
support AArch64's neon-copy.ll file are committed.

llvm-svn: 206572
2014-04-18 09:31:15 +00:00
Tim Northover 46d98ea8de ARM64: nick some AArch64 patterns for extract/insert -> INS.
Tests will be committed shortly when all optimisations needed to
support AArch64's neon-copy.ll file are supported.

llvm-svn: 206571
2014-04-18 09:31:11 +00:00
Tim Northover 8b2fa3dfef AArch64/ARM64: emit all vector FP comparisons as such.
ARM64 was scalarizing some vector comparisons which don't quite map to
AArch64's compare and mask instructions. AArch64's approach of sacrificing a
little efficiency to emulate them with the limited set available was better, so
I ported it across.

More "inspired by" than copy/paste since the backend's internal expectations
were a bit different, but the tests were invaluable.

llvm-svn: 206570
2014-04-18 09:31:07 +00:00
Tim Northover 0a44e66bb8 AArch64/ARM64: port BSL logic from AArch64 & enable test.
I enhanced it a little in the process. The decision shouldn't really be based
on whether a BUILD_VECTOR is a splat: any set of constants will do the job
provided they're related in the correct way.

Also, the BUILD_VECTOR could be any operand of the incoming AND nodes, so it's
best to check for all 4 possibilities rather than assuming it'll be the RHS.

llvm-svn: 206569
2014-04-18 09:31:01 +00:00
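A scalar toy of the relationship the commit describes: (A & C1) | (B & C2) is a single bit select whenever C2 == ~C1, whether or not the constants are splats, and a real matcher has to allow the constant to be either operand of either AND. This only demonstrates the identity, not the SelectionDAG code.

```cpp
#include <cassert>
#include <cstdint>

// BSL semantics: take bits of A where Mask is set, bits of B elsewhere.
uint64_t bsl(uint64_t Mask, uint64_t A, uint64_t B) {
  return (A & Mask) | (B & ~Mask);
}

int main() {
  uint64_t A = 0x123456789abcdef0, B = 0x0fedcba987654321;
  uint64_t C1 = 0x00ff00ffff00ff00;   // not a splat, still foldable
  uint64_t C2 = ~C1;                  // the required relationship
  assert(((A & C1) | (B & C2)) == bsl(C1, A, B));
  return 0;
}
```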
Tim Northover 547a4ae6fa AArch64/ARM64: copy byval implementation from AArch64.
It's not actually used to handle C or C++ ABI rules on ARM64, but could well be
emitted by other language front-ends, so it's as well to have a sensible
implementation.

llvm-svn: 206568
2014-04-18 09:30:52 +00:00
Jiangning Liu 300a6b84f2 Add missing config file for newly added test case introduced by r206563.
llvm-svn: 206567
2014-04-18 09:05:50 +00:00
Yaron Keren d0751ce197 Updated test with register names following r206565.
llvm-svn: 206566
2014-04-18 08:50:09 +00:00
Yaron Keren 8ca45e0c05 Patch by Ray Donnelly.
Emit WIN64 SEH registers by name instead of just number.

llvm-svn: 206565
2014-04-18 08:03:38 +00:00
Kostya Serebryany 22e8810838 [asan] one more workaround for PR17409: don't do BB-level coverage instrumentation if there are more than N (=1500) basic blocks. This makes ASanCoverage work on libjpeg_turbo/jchuff.c used by Chrome, which has 1824 BBs
llvm-svn: 206564
2014-04-18 08:02:42 +00:00
Jiangning Liu ad874fca28 This commit allows vectorized loops to be unrolled by a factor of 2 for AArch64.
A new test case is also added for ARM64.

Patched by Z.Zheng

llvm-svn: 206563
2014-04-18 07:57:54 +00:00
Matt Arsenault 209a7b92b5 R600: Minor cleanups.
Fix indentation, better line wrapping, unused includes.

llvm-svn: 206562
2014-04-18 07:40:20 +00:00
Lang Hames bc876017c2 [ExecutionEngine] Allow JIT clients to enable/disable module verification.
Previously, module verification was always enabled, with no way to turn it off.
As of this commit, module verification is on by default in Debug builds and off
by default in Release builds. The default behaviour can be overridden by calling
setVerifyModules(bool) on the JIT instance (this works for both the old JIT and
MCJIT).

<rdar://problem/16150008>

llvm-svn: 206561
2014-04-18 06:48:23 +00:00
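A hedged usage sketch of the new knob: forcing verification on for a JIT instance in a Release build. setVerifyModules(bool) is the call this commit adds; the surrounding EngineBuilder boilerplate follows the API of that era and may differ in other LLVM versions.

```cpp
#include "llvm/ExecutionEngine/ExecutionEngine.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/TargetSelect.h"
#include <string>

using namespace llvm;

int main() {
  InitializeNativeTarget();
  InitializeNativeTargetAsmPrinter();

  LLVMContext Context;
  Module *M = new Module("jit_module", Context);   // the engine takes ownership

  std::string Err;
  ExecutionEngine *EE = EngineBuilder(M).setErrorStr(&Err).create();
  if (!EE)
    return 1;

  // Default is on for Debug builds and off for Release; force it on here.
  EE->setVerifyModules(true);

  delete EE;   // also frees the module it owns
  return 0;
}
```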
Rui Ueyama d48665b1fd [ELF] Fix GNU_RELRO section name.
llvm-svn: 206560
2014-04-18 06:01:43 +00:00
Jiangning Liu 40d81e10c5 This is one of the optimizations ported from ARM64 to AArch64 to address the performance gap between these two back ends. The test case newly added for AArch64 already exists in ARM64.
Patched by Z.Zheng

llvm-svn: 206559
2014-04-18 05:58:09 +00:00
Matt Arsenault 78b8670aac R600/SI: Try to use scalar BFE.
Use scalar BFE with constant shift and offset when possible.
This is complicated by the fact that the scalar version packs
the two operands of the vector version into one.

llvm-svn: 206558
2014-04-18 05:19:26 +00:00
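A small sketch of the packing issue mentioned above: the vector form takes offset and width as two operands, while the scalar form takes both packed into one. The bitfield-extract semantics are standard; the specific field positions used in packBFEOperand are illustrative assumptions, not a statement of the SI encoding.

```cpp
#include <cassert>
#include <cstdint>

// Bitfield extract: pull Width bits starting at Offset out of Src.
uint32_t bfe(uint32_t Src, uint32_t Offset, uint32_t Width) {
  if (Width == 0)
    return 0;
  return (Src >> Offset) & (Width < 32 ? (1u << Width) - 1 : ~0u);
}

// The scalar instruction takes one operand carrying both fields; the field
// positions here (offset in the low bits, width 16 bits higher) are assumed
// for illustration only.
uint32_t packBFEOperand(uint32_t Offset, uint32_t Width) {
  return (Offset & 0x3f) | ((Width & 0x7f) << 16);
}

int main() {
  assert(bfe(0xabcd1234u, 8, 8) == 0x12);
  assert(packBFEOperand(8, 8) == (8u | (8u << 16)));
  return 0;
}
```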
Jiangning Liu e56c30614f This commit enables unaligned memory accesses of vector types in the AArch64 back end. This should boost vectorized code performance.
Patched by Z. Zheng

llvm-svn: 206557
2014-04-18 03:58:38 +00:00