Commit Graph

4029 Commits

Author SHA1 Message Date
Matt Arsenault 0be1cb1c7b Don't use runtime bounds check between address spaces.
Don't vectorize with a runtime check if it requires a
comparison between pointers with different address spaces.
The values can't be assumed to be directly comparable.
Previously it would create an illegal bitcast.
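
A minimal sketch (hypothetical IR) of a loop this now rejects: proving
that %a and %b do not overlap would require comparing pointers from
different address spaces.

define void @f(i32 addrspace(1)* %a, i32* %b, i64 %n) {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %pa = getelementptr i32 addrspace(1)* %a, i64 %i
  %pb = getelementptr i32* %b, i64 %i
  %v = load i32* %pb
  store i32 %v, i32 addrspace(1)* %pa
  %i.next = add i64 %i, 1
  %done = icmp eq i64 %i.next, %n
  br i1 %done, label %exit, label %loop
exit:
  ret void
}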

llvm-svn: 191862
2013-10-02 22:38:17 +00:00
Matt Arsenault e64c7c7530 Fix missing CHECK-LABELs
llvm-svn: 191853
2013-10-02 20:29:00 +00:00
Yi Jiang 8fd1a806d5 Apply slp vectorization on fully-vectorizable tree of height 2
llvm-svn: 191852
2013-10-02 20:20:39 +00:00
Benjamin Kramer b9add84ef6 SLPVectorizer: Make store chain finding more aggressive with GetUnderlyingObject.
This recursively strips all GEPs like the existing code. It also handles bitcasts and
other operations that do not change the pointer value.
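
A hypothetical illustration: both stores below trace back to %buf
through a bitcast and a GEP, so the chain finder can now relate them.

define void @chain(i8* %buf) {
  %p0 = bitcast i8* %buf to i32*
  store i32 0, i32* %p0
  %g = getelementptr i8* %buf, i64 4
  %p1 = bitcast i8* %g to i32*
  store i32 1, i32* %p1
  ret void
}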

llvm-svn: 191847
2013-10-02 19:06:06 +00:00
Tom Stellard d3e916eb6a StructurizeCFG: Add dependency on LowerSwitch pass
Switch instructions were crashing the StructurizeCFG pass, and it's
probably easier anyway if we don't need to handle them in this pass.

Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 191841
2013-10-02 17:04:59 +00:00
Alexey Samsonov 31540172d0 Remove "localize global" optimization
Summary:
As discussed in http://llvm-reviews.chandlerc.com/D1754,
this optimization isn't really valid for C, and fires too rarely anyway.

Reviewers: rafael, nicholas

Reviewed By: nicholas

CC: rnk, llvm-commits, nicholas

Differential Revision: http://llvm-reviews.chandlerc.com/D1769

llvm-svn: 191834
2013-10-02 15:31:34 +00:00
Manman Ren 9a0a67035e Debug Info: In DIBuilder, the derived-from field of a DW_TAG_pointer_type
is updated to use DITypeRef.

Move isUnsignedDIType and getOriginalTypeSize from DebugInfo.h to be static
helper functions in DwarfCompileUnit. We already have a static helper function
"isTypeSigned" in DwarfCompileUnit, and a pointer to DwarfDebug is added to
resolve the derived-from field. All three functions need to go across the link
for derived-from fields, so we need to get hold of a type identifier map.

A pointer to DwarfDebug is also added to DbgVariable in order to resolve the
derived-from field.

The debug info verifier is updated to check that a derived-from field is a
TypeRef. The verifier will not go across the link for derived-from fields; in
the debug info finder, we go across the link to add derived-from fields to types.

The function getDICompositeType is only used by dragonegg, and since dragonegg
does not generate identifiers for types, we use an empty map to resolve the
derived-from field.

When printing a derived-from field, we use DITypeRef::getName to either return
the type identifier or getName of the DIType.

A paired commit in clang is required due to the changes to DIBuilder.

llvm-svn: 191800
2013-10-01 23:45:54 +00:00
Matt Arsenault 517d84e268 Don't merge tiny functions.
It's silly to merge functions like these:

define void @foo(i32 %x) {
  ret void
}

define void @bar(i32 %x) {
  ret void
}

to get

define void @bar(i32) {
  tail call void @foo(i32 %0)
  ret void
}

llvm-svn: 191786
2013-10-01 18:05:30 +00:00
Benjamin Kramer 58f1ced564 SCEVExpander: Fix a regression I introduced by too eagerly adding RAII objects.
PR17425.

llvm-svn: 191741
2013-10-01 12:17:11 +00:00
Matt Arsenault 8468062c6e Use right address space size in InstCombineCompares
The test's output doesn't change, but this ensures
this is actually hit with a different address space.

llvm-svn: 191701
2013-09-30 21:11:01 +00:00
Matt Arsenault 06adecabe7 Constant fold ptrtoint + compare with address spaces
llvm-svn: 191699
2013-09-30 21:06:18 +00:00
Manman Ren adf4cc171e TBAA: update tbaa format from scalar format to struct-path aware format.
llvm-svn: 191690
2013-09-30 18:17:55 +00:00
Manman Ren 1047fe452f TBAA: remove !tbaa from testing cases when they are not needed.
llvm-svn: 191689
2013-09-30 18:17:35 +00:00
Benjamin Kramer d36f1abefd IRBuilder: Add RAII objects to reset insertion points or fast math flags.
Inspired by the object from the SLPVectorizer. This found a minor bug in the
debug loc restoration in the vectorizer where the location of a following
instruction was attached instead of the location from the original instruction.

llvm-svn: 191673
2013-09-30 15:39:48 +00:00
Joey Gouly d51a35c6a0 Fix a bug in InstCombine where it attempted to cast a Value* to an Instruction*
when it was actually a Constant*.

There are quite a few other casts to Instruction that might have the same problem,
but this is the only one I have a test case for.

llvm-svn: 191668
2013-09-30 14:18:35 +00:00
Benjamin Kramer d75c8ebdd1 Add a test that large offsets on GEPs on 32-bit targets are handled correctly.
llvm-svn: 191628
2013-09-28 21:27:49 +00:00
Matt Arsenault 31cfc78f81 Use right pointer type in DebugIR
llvm-svn: 191576
2013-09-27 22:26:25 +00:00
Matt Arsenault 29f31735a2 Fix SLPVectorizer using wrong address space for load/store
llvm-svn: 191564
2013-09-27 21:24:57 +00:00
Justin Bogner 4a9ac8cd75 InstCombine: Only foldSelectICmpAndOr for integer types
Currently foldSelectICmpAndOr asserts if the "or" involves a vector
containing several of the same power of two. We can easily avoid this by
only performing the fold on integer types, like foldSelectICmpAnd does.
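
For reference, a scalar instance of the pattern (sketch only); the
assert was hit by the analogous vector form with a splat of the same
power of two.

define i32 @sel(i32 %x, i32 %y) {
  %and = and i32 %x, 4
  %cmp = icmp eq i32 %and, 0
  %or = or i32 %y, 8
  %sel = select i1 %cmp, i32 %y, i32 %or
  ret i32 %sel
}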

Fixes <rdar://problem/15012516>

llvm-svn: 191552
2013-09-27 20:35:39 +00:00
Manman Ren 0ed04fc9ab TBAA: handle scalar TBAA format and struct-path aware TBAA format.
Remove the command line argument "struct-path-tbaa" since we should not depend
on a command line argument to decide which format the IR file is using. Instead,
we check the first operand of the tbaa tag node: if it is an MDNode, we treat
it as the struct-path aware TBAA format; otherwise, we treat it as the scalar
TBAA format.
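
A sketch of the two formats (illustrative metadata only, 2013-era
syntax):

; Scalar format: the tag's first operand is an MDString.
!0 = metadata !{metadata !"int", metadata !1}
!1 = metadata !{metadata !"Simple C/C++ TBAA"}

; Struct-path format: the tag's first operand is an MDNode (the base
; type), followed by the access type and the offset.
!2 = metadata !{metadata !3, metadata !3, i64 0}
!3 = metadata !{metadata !"int", metadata !1, i64 0}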

Once clang emits the struct-path aware TBAA format unconditionally and we can
auto-upgrade existing bc files, support for the scalar TBAA format can be
dropped.

Existing testing cases are updated to use the struct-path aware TBAA format.

llvm-svn: 191538
2013-09-27 18:34:27 +00:00
Justin Bogner ca9bd8fac1 Transforms: Use getFirstNonPHI to set the insertion point for PHIs
We were previously using getFirstInsertionPt to insert PHI
instructions when vectorizing, but getFirstInsertionPt also skips past
landingpads, causing this to generate invalid IR.

We can avoid this issue by using getFirstNonPHI instead.
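
A sketch of the situation (hypothetical IR): in a landing pad block,
new PHIs must go before the landingpad instruction, which is exactly
where getFirstNonPHI points.

declare i32 @__gxx_personality_v0(...)
declare void @may_throw()

define i32 @f(i1 %c) {
entry:
  br i1 %c, label %a, label %b
a:
  invoke void @may_throw() to label %cont unwind label %lpad
b:
  invoke void @may_throw() to label %cont unwind label %lpad
cont:
  ret i32 0
lpad:
  ; A PHI merging values from %a and %b must be inserted here;
  ; getFirstInsertionPt would skip past the landingpad instead.
  %lp = landingpad { i8*, i32 }
          personality i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*)
          cleanup
  ret i32 1
}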

llvm-svn: 191526
2013-09-27 15:30:25 +00:00
Arnold Schwaighofer 07520324f5 SLPVectorize: Put horizontal reductions feeding a store under separate flag
Put them under a separate flag for experimentation. They are more likely to
interfere with loop vectorization, which happens later in the pass pipeline.

llvm-svn: 191371
2013-09-25 14:02:32 +00:00
Yi Jiang 582ba6c808 Test case for r191314.
Some supplemental information for r191314: We would like to make sure the SLP Vectorizer will not try to vectorize tiny trees even with a negative threshold, so we set the cost to INT_MAX.

llvm-svn: 191327
2013-09-24 19:33:53 +00:00
Benjamin Kramer d59bf255d5 Verify that we don't optimize null return checks to the nothrow_t version of operator new.
llvm-svn: 191325
2013-09-24 18:37:49 +00:00
Benjamin Kramer 2939dd3d11 MemoryBuiltins: Reinstate optimizing (uninitialized) loads from operator new.
llvm-svn: 191315
2013-09-24 17:34:29 +00:00
Benjamin Kramer 4d4df04353 MemoryBuiltins: Fix operator new bits.
We really don't want to optimize malloc return value checks away.

llvm-svn: 191313
2013-09-24 17:15:14 +00:00
Benjamin Kramer fd4777c046 Teach MemoryBuiltins and InstructionSimplify that operator new never returns NULL.
This is safe per C++11 18.6.1.1p3: [operator new returns] a non-null pointer to
suitably aligned storage (3.7.4), or else throw a bad_alloc exception. This
requirement is binding on a replacement version of this function.

Brings us a tiny bit closer to eliminating more vector push_backs.
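
A minimal sketch of the new fold (hypothetical IR; _Znwm is the
Itanium-mangled throwing operator new):

declare noalias i8* @_Znwm(i64)

define i1 @notnull() {
  %p = call i8* @_Znwm(i64 4)
  ; Now folds to false: the throwing operator new never returns null.
  %isnull = icmp eq i8* %p, null
  ret i1 %isnull
}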

llvm-svn: 191310
2013-09-24 16:37:51 +00:00
Arnold Schwaighofer 22639407d7 Revert "LoopVectorizer: Only allow vectorization of intrinsics."
Revert 191122 - with extra checks we are allowed to vectorize math library
function calls.

Standard library identifiers are reserved names, so functions with external
linkage must not override them. However, functions with internal linkage can.

Therefore, we can vectorize calls to math library functions with a check for
external linkage and a matching signature. This matches what we do during
SelectionDAG building.

llvm-svn: 191206
2013-09-23 14:54:39 +00:00
Benjamin Kramer b517194f33 Expand test case a bit.
llvm-svn: 191205
2013-09-23 14:41:35 +00:00
Benjamin Kramer 942dfe625b InstSimplify: Fold equality comparisons between non-inbounds GEPs.
Overflow doesn't affect the correctness of equalities. Computing this is cheap;
we just reuse the computation for the inbounds case and try to peel off more
non-inbounds GEPs. This pattern is unlikely to ever appear in code generated by
Clang, but SCEV occasionally produces it.
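
A minimal sketch of the fold:

define i1 @same(i8* %p, i64 %i) {
  ; Neither GEP is inbounds, but equality is unaffected by overflow,
  ; so this still folds to true.
  %a = getelementptr i8* %p, i64 %i
  %b = getelementptr i8* %p, i64 %i
  %c = icmp eq i8* %a, %b
  ret i1 %c
}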

llvm-svn: 191200
2013-09-23 14:16:38 +00:00
Benjamin Kramer 90901a35ce SROA: Handle casts involving vectors of pointers and integer scalars.
SROA wants to convert between any two types of equivalent width, but it's not possible to
convert vectors of pointers to an integer scalar with a single cast. As a
workaround we add a bitcast to the corresponding int ptr type first. This type
of cast used to be an edge case but has become common with SLP vectorization.
Fixes PR17271.
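
A sketch of the workaround's cast sequence, assuming 64-bit pointers:

define i128 @workaround(<2 x i8*> %v) {
  ; A single bitcast from <2 x i8*> to i128 is illegal, so go through
  ; the corresponding integer vector first.
  %iv = ptrtoint <2 x i8*> %v to <2 x i64>
  %s = bitcast <2 x i64> %iv to i128
  ret i128 %s
}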

llvm-svn: 191143
2013-09-21 20:36:04 +00:00
Arnold Schwaighofer 500242d4fe Reapply "SLPVectorizer: Handle more horizontal reductions (disabled)"
Reapply r191108 with a fix for a memory corruption error I introduced.  Of
course, we can't reference the scalars that we replace by vectorizing and then
call their eraseFromParent method. I only 'needed' the scalars to get the
DebugLoc. Just store the DebugLoc before actually vectorizing instead. As a nice
side effect, this also simplifies the interface between BoUpSLP and the
HorizontalReduction class to returning a value pointer (the vectorized tree
root).

radar://14607682

llvm-svn: 191123
2013-09-21 01:06:00 +00:00
Nadav Rotem 3371172a67 LoopVectorizer: Only allow vectorization of intrinsics. We can't know for sure that the functions 'abs' or 'round' are the functions from libm.
rdar://15012650

llvm-svn: 191122
2013-09-21 00:27:05 +00:00
Arnold Schwaighofer f1dfbfdde1 Revert "SLPVectorizer: Handle more horizontal reductions (disabled)"
This reverts commit r191108.

The horizontal.ll test case fails under libgmalloc. Thanks Shuxin for pointing
this out to me.

llvm-svn: 191121
2013-09-21 00:06:20 +00:00
Shuxin Yang 6e35094bbf Resurrect r191017 "GVN proceeds in the presence of dead code" plus a fix to PR17307 & 17308.
The problem with r191017 is that when GVN fabricates a val-number for a dead instruction (in order
to make the following expr-PRE happy), it forgets to fabricate a leader-table entry for it as well.

llvm-svn: 191118
2013-09-20 23:12:57 +00:00
Arnold Schwaighofer 4724963112 SLPVectorizer: Handle more horizontal reductions (disabled)
Match reductions starting at a binary operation feeding into a phi. The code
handles trees like

 r += v1 + v2 + v3 ...

and

 r += v1
 r += v2
 ...

and

 r *= v1 + v2 + ...

We currently only handle associative operations (add, fadd fast).

The code can now also handle reductions feeding into stores.

 a[i] = v1 + v2 + v3 + ...

The code is currently disabled behind the flag "-slp-vectorize-hor".  The cost
model for most architectures is not there yet.

I found one opportunity of a horizontal reduction feeding a phi in TSVC
(LoopRerolling-flt) and there are several opportunities where reductions feed
into stores.

radar://14607682

llvm-svn: 191108
2013-09-20 21:18:20 +00:00
Joerg Sonnenberger cf90a12170 Delete empty files.
llvm-svn: 191105
2013-09-20 20:40:22 +00:00
Joerg Sonnenberger 1fbe323649 Revert r191017, it results in segmentation faults in Qt.
llvm-svn: 191104
2013-09-20 20:33:57 +00:00
Benjamin Kramer e6461e3053 InstCombine: Canonicalize (gep i8* X, -(ptrtoint Y)) to (sub (ptrtoint X), (ptrtoint Y))
The GEP pattern is what the SCEV expander emits for "ugly geps", which is what
you get for pointer subtraction in C code. The rest of instcombine already
knows how to deal with that, so just canonicalize on that.
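
A sketch of both forms (hypothetical IR):

define i64 @ptrdiff(i8* %x, i8* %y) {
  ; The "ugly gep" form:
  %yi = ptrtoint i8* %y to i64
  %ny = sub i64 0, %yi
  %ugly = getelementptr i8* %x, i64 %ny
  %d = ptrtoint i8* %ugly to i64
  ; ...which is now canonicalized to:
  ;   %xi = ptrtoint i8* %x to i64
  ;   %d  = sub i64 %xi, %yi
  ret i64 %d
}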

llvm-svn: 191090
2013-09-20 14:38:44 +00:00
Shuxin Yang 3a7ca6ec87 [Fast-math] Disable "(C1/X)*C2 => (C1*C2)/X" if C1/X has multiple uses.
If "C1/X" has multiple uses, the only benefit of this
transformation is to potentially shorten the critical path. But it comes at the
cost of introducing an additional div.

  The additional div may or may not incur a cost depending on how div is
implemented. If it is implemented using Newton–Raphson iteration, it doesn't
seem to incur any cost (FIXME). However, if the div blocks the entire
pipeline, that sounds pretty expensive. Let CodeGen take care of
this transformation.

  This patch shows a 6% improvement on a benchmark.

rdar://15032743

llvm-svn: 191037
2013-09-19 21:13:46 +00:00
Benjamin Kramer 0b37cdf9af InstCombine: Don't allow turning vector-of-pointer loads into vector-of-integer.
The code below can't handle any pointers. PR17293.

llvm-svn: 191036
2013-09-19 20:59:04 +00:00
Shuxin Yang 74c9a170b8 GVN proceeds in the presence of dead code.
This is how it ignores the dead code:
1) When a dead branch target, say block B, is identified, all the
   blocks dominated by B are dead as well.

2) The PHIs of the blocks in dominance-frontier(B) are updated such
   that the operands corresponding to dead predecessors are replaced
   by "UndefVal".

   In lattice jargon, the "UndefVal" is essentially the "Top". A PHI
   node like "phi(v1 bb1, undef xx)" will be optimized into "v1" if
   v1 is a constant, or if v1 is an instruction which dominates this
   PHI node (see the sketch below).

3) When analyzing the availability of a load L, all dead mem-ops which
   L depends on are treated as loads which evaluate to exactly the
   same value as L.

4) The dead mem-ops will be materialized as "UndefVal" during code motion.
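
A sketch of point 2 (hypothetical IR):

define i32 @g(i32 %v1) {
entry:
  br i1 true, label %live, label %dead
dead:                                     ; provably unreachable
  br label %merge
live:
  br label %merge
merge:
  ; The operand from the dead predecessor is replaced by undef, and
  ; phi(%v1, undef) simplifies to %v1 since %v1 dominates this PHI.
  %m = phi i32 [ %v1, %live ], [ undef, %dead ]
  ret i32 %m
}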

llvm-svn: 191017
2013-09-19 17:22:51 +00:00
Chandler Carruth b5a34963c8 Name the XCore target-specific subdirectories canonically.
llvm-svn: 190940
2013-09-18 14:08:30 +00:00
NAKAMURA Takumi 69ae1b9aa2 A couple of tests, in llvm/test/Transforms/*/xcore, are XCore-specific. They should be excluded when XCore is not built.
llvm-svn: 190938
2013-09-18 13:56:16 +00:00
Robert Lytton f637e2cb23 Prevent LoopVectorizer and SLPVectorizer from running if the target has no vector registers.
XCore target: Add XCoreTargetTransformInfo
This is where getNumberOfRegisters() resides, which in turn returns the
number of vector registers (=0).

llvm-svn: 190936
2013-09-18 12:43:35 +00:00
Andrea Di Biagio 1f5d74d8ae Re-add tests from r179291 which were accidentally removed by r181177.
llvm-svn: 190934
2013-09-18 12:06:59 +00:00
Matt Arsenault d12e8020ec Fix a constant folding address space case I missed.
If address space 0 was smaller than the address space
in a constant inttoptr/ptrtoint pair, the wrong mask size
would be used.

llvm-svn: 190899
2013-09-17 23:23:16 +00:00
Quentin Colombet 870b662779 Revert the load slicing done in r190870.
To avoid regressions with bitfield optimizations, this slicing should take place
later, e.g. at ISel time.

llvm-svn: 190891
2013-09-17 22:01:26 +00:00
Matt Arsenault e6952f28ca Cleanup handling of constant function casts.
Some of this code is no longer necessary since int<->ptr casts no
longer occur as of r187444.

This also fixes handling of vectors of pointers, and adds a bunch of new
testcases for vectors and address spaces.

llvm-svn: 190885
2013-09-17 21:10:14 +00:00
Arnold Schwaighofer 4a3dcaa193 SLPVectorizer: Don't vectorize phi nodes that use invoke values
We can't insert an insertelement after an invoke. We would have to split a
critical edge. So when we see a phi node that uses an invoke we just give up.

radar://14990770

llvm-svn: 190871
2013-09-17 17:03:29 +00:00
Quentin Colombet b8d672ef5b [InstCombiner] Slice a big load into two loads when the elements are next to
each other in memory.

The motivation was to get rid of truncate and shift right instructions that get
in the way of paired load or floating point load.
E.g.,
Consider the following example:
struct Complex {
  float real;
  float imm;
};

When accessing a Complex, llvm was generating a 64-bit load and the imm field
was obtained by a trunc(lshr) sequence, resulting in poor code generation, at
least for x86.

The idea is to declare that two load instructions are the canonical form for
loading two arithmetic types which are next to each other in memory.

Two scalar loads at a constant offset from each other are pretty
easy to detect for the sorts of passes that like to mess with loads. 
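
A sketch of the transformation on the imm field (hypothetical IR,
assuming a little-endian target):

; Before: one 64-bit load; imm is extracted with lshr + trunc.
define float @imm_before(i64* %c) {
  %wide = load i64* %c
  %shift = lshr i64 %wide, 32
  %narrow = trunc i64 %shift to i32
  %imm = bitcast i32 %narrow to float
  ret float %imm
}

; After slicing: a plain 32-bit load at offset 4.
define float @imm_after(i64* %c) {
  %base = bitcast i64* %c to float*
  %p = getelementptr float* %base, i64 1
  %imm = load float* %p
  ret float %imm
}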

<rdar://problem/14477220>

llvm-svn: 190870
2013-09-17 16:57:34 +00:00
Stepan Dyatkovskiy dc2c4b4462 Bugfix for PR17099: wrong cast operation.
MergeFunctions emits a bitcast instead of a pointer-to-integer cast.
The patch fixes the MergeFunctions::writeThunk function. It replaces
unconditional bitcast creation with a "Value* createCast(...)" method that
checks the operand types and selects the proper instruction.
See the unit test for an example.

llvm-svn: 190859
2013-09-17 09:36:11 +00:00
Krzysztof Parzyszek 3c463aa5e7 Add testcase for r190631
llvm-svn: 190807
2013-09-16 21:24:30 +00:00
Arnold Schwaighofer 53e622cef4 Don't vectorize if there are outside loop users of the induction variable.
We would have to compute the pre-increment value, either by computing it on
every loop iteration or by splitting the edge out of the loop and inserting a
computation for it there.

For now, just give up vectorizing such loops.
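
A sketch of such a loop (hypothetical IR); the use of %i in the exit
block is the outside user of the pre-increment value:

define i64 @f(i64* %a, i64 %n) {
entry:
  br label %loop
loop:
  %i = phi i64 [ 0, %entry ], [ %i.next, %loop ]
  %p = getelementptr i64* %a, i64 %i
  store i64 %i, i64* %p
  %i.next = add i64 %i, 1
  %cmp = icmp ult i64 %i.next, %n
  br i1 %cmp, label %loop, label %exit
exit:
  ret i64 %i
}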

Fixes PR17179.

llvm-svn: 190790
2013-09-16 16:17:24 +00:00
Chandler Carruth ebeac5cb89 Remove the long, long defunct IR block placement pass.
This pass was based on the previous (essentially unused) profiling
infrastructure and the assumption that by ordering the basic blocks at
the IR level in a particular way, the correct layout would happen in the
end. This sometimes worked, and mostly didn't. It also was a really
naive implementation of the classical paper that dates from when branch
predictors were primarily directional and when loop structure wasn't
commonly available. It also didn't factor into the equation
non-fallthrough branches and other machine level details.

Anyways, for all of these reasons and more, I wrote
MachineBlockPlacement, which completely supersedes this pass. It both
uses modern profile information infrastructure, and actually works. =]

llvm-svn: 190748
2013-09-14 09:28:14 +00:00
Matt Arsenault 2e5f5b2e78 Add missing CHECK-LABEL
llvm-svn: 190740
2013-09-14 02:44:06 +00:00
Matt Arsenault 8e48a7f911 Add test for untested path in SimplifyCFG
This case wasn't checked with a pointer condition.

llvm-svn: 190739
2013-09-14 02:44:02 +00:00
Hal Finkel 71780ec4fd Implement TTI getUnrollingPreferences for PowerPC
The PowerPC A2 core greatly benefits from aggressive concatenation unrolling;
use the new getUnrollingPreferences to enable this by default when targeting
the PPC A2 core.

llvm-svn: 190549
2013-09-11 21:20:40 +00:00
Matt Arsenault 009faed1be Teach loop-idiom about address space pointer sizes
llvm-svn: 190491
2013-09-11 05:09:42 +00:00
Matt Arsenault 1cee407a9b Fix missing CHECK-LABELs
llvm-svn: 190426
2013-09-10 19:57:05 +00:00
Eli Friedman 33d3700716 Don't shrink atomic ops to bool in GlobalOpt.
LLVM IR doesn't currently allow atomic bool load/store operations, and the
transformation is dubious anyway because it isn't profitable on all platforms.

PR17163.

llvm-svn: 190357
2013-09-09 22:00:13 +00:00
Quentin Colombet 5ab555532b [InstCombiner] Expose opportunities to merge subtract and comparison.
Several architectures use the same instruction to perform both a comparison and
a subtract. The instruction selection framework does not allow considering
different basic blocks to expose such fusion opportunities.

Therefore, these instructions are “merged” by CSE at the MI IR level.

To increase the likelihood that CSE applies in such situations, we reorder the
operands of the comparison, when they have the same complexity, so that they
match the order of the most frequent subtract.
E.g.,

icmp A, B
...
sub B, A

<rdar://problem/14514580>

llvm-svn: 190352
2013-09-09 20:56:48 +00:00
Bob Wilson e407736a06 Revert patches to add case-range support for PR1255.
The work on this project was left in an unfinished and inconsistent state.
Hopefully someone will eventually get a chance to implement this feature, but
in the meantime, it is better to put things back the way they were. I have
left support in the bitcode reader to handle the case-range bitcode format,
so that we do not lose bitcode compatibility with the llvm 3.3 release.

This reverts the following commits: 155464, 156374, 156377, 156613, 156704,
156757, 156804 156808, 156985, 157046, 157112, 157183, 157315, 157384, 157575,
157576, 157586, 157612, 157810, 157814, 157815, 157880, 157881, 157882, 157884,
157887, 157901, 158979, 157987, 157989, 158986, 158997, 159076, 159101, 159100,
159200, 159201, 159207, 159527, 159532, 159540, 159583, 159618, 159658, 159659,
159660, 159661, 159703, 159704, 160076, 167356, 172025, 186736

llvm-svn: 190328
2013-09-09 19:14:35 +00:00
Manman Ren f2a88f3622 Debug Info Testing: update context from empty string to null.
Context should be either null or MDNode.

llvm-svn: 190267
2013-09-08 03:11:54 +00:00
Manman Ren deeafd8a58 Debug Info Testing: updated to use NULL instead of "i32 0" in a few fields.
Field 2 of DIType (Context), field 9 of DIDerivedType (TypeDerivedFrom),
field 12 of DICompositeType (ContainingType), fields 2, 7, 12 of DISubprogram
(Context, Type, ContainingType).

llvm-svn: 190205
2013-09-06 21:03:58 +00:00
Rafael Espindola 75a3ccd177 Merge these 2 tests into a single file.
llvm-svn: 189975
2013-09-04 19:19:32 +00:00
Rafael Espindola 128c5ea902 Revert "Add r159136 back now that pr13124 has been fixed."
This reverts commit r189886.

I found a corner case where this optimization is not valid:

Say we have a "linkonce_odr unnamed_addr" in two translation units:
* In TU 1 this optimization kicks in and makes it hidden.
* In TU 2 it gets const merged with a constant that is *not* unnamed_addr,
  resulting in a non unnamed_addr constant with default visibility.
* The static linker rules for combining visibility then produce a hidden
  symbol, which is incorrect from the point of view of the non unnamed_addr
  constant.

The one place we can do this is when we know that the symbol is not used from
another TU in the same shared object, i.e., during LTO. I will move it there.

llvm-svn: 189954
2013-09-04 16:09:01 +00:00
Tim Northover dc647a2603 InstCombine: allow unmasked icmps to be combined with logical ops
"(icmp op i8 A, B)" is equivalent to "(icmp op i8 (A & 0xff), B)" as a
degenerate case. Allowing this as a "masked" comparison when analysing "(icmp)
&/| (icmp)" allows us to combine them in more cases.
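
A sketch (hypothetical IR): the second compare has no explicit mask,
but can be analysed as if it were masked with 0xff.

define i1 @both(i8 %a) {
  %m = and i8 %a, 15
  %c1 = icmp eq i8 %m, 0
  ; Treated as the degenerate masked compare (a & 0xff) == 3.
  %c2 = icmp eq i8 %a, 3
  %r = or i1 %c1, %c2
  ret i1 %r
}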

rdar://problem/7625728

llvm-svn: 189931
2013-09-04 11:57:17 +00:00
Tim Northover c0756c454c InstCombine: look for masked compares with subset relation
Even in cases which aren't universally optimisable like "(A & B) != 0 && (A &
C) != 0", the masks can make one of the comparisons completely redundant. In
this case, since we've gone to the effort of spotting masked comparisons we
should combine them.
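
A sketch of the subset case (hypothetical IR): the 0x1 mask is a
subset of 0x3, so the first compare is redundant and the conjunction
reduces to the second.

define i1 @subset(i8 %a) {
  %m1 = and i8 %a, 3
  %c1 = icmp ne i8 %m1, 0
  %m2 = and i8 %a, 1
  %c2 = icmp ne i8 %m2, 0
  %r = and i1 %c1, %c2
  ret i1 %r
}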

rdar://problem/7625728

llvm-svn: 189930
2013-09-04 11:57:13 +00:00
Rafael Espindola 5eb7df68bf Add r159136 back now that pr13124 has been fixed.
Original message:
If a constant or a function has linkonce_odr linkage and unnamed_addr, mark it
hidden. Being linkonce_odr guarantees that it is available in every dso that
needs it. Being a constant/function with unnamed_addr guarantees that the
copies don't have to be merged.
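
The effect on a global, sketched before and after:

@g1 = linkonce_odr unnamed_addr constant i32 42         ; before
@g2 = linkonce_odr hidden unnamed_addr constant i32 42  ; after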

llvm-svn: 189886
2013-09-03 23:34:36 +00:00
Michael Gottesman e29b1c1825 [objc-arc] Turn off the objc_retainBlock -> objc_retain optimization.
The reason that I am turning off this optimization is that an additional
case where a block can escape has come up: specifically, when a block is
used outside of the scope in which it was created.

This can cause a captured retainable object pointer whose life is preserved by
the objc_retainBlock to be deallocated before the block is invoked.

An example of the code needed to trigger the bug is:

----
#import <Foundation/Foundation.h>
int main(int argc, const char * argv[]) {
  void (^somethingToDoLater)();

  {
    NSObject *obj = [NSObject new];

    somethingToDoLater = ^{
      [obj self]; // Crashes here
    };
  }

  NSLog(@"test.");

  somethingToDoLater();
  return 0;
}
----

In the next commit, I remove all the dead code that results from this.

Once I put in the fixing commit I will bring back the tests that I deleted in
this commit.

rdar://14802782.
rdar://14868830.

llvm-svn: 189869
2013-09-03 22:40:54 +00:00
Michael Gottesman 9506431fac [objc-arc] Move some block tests from basic.ll -> retain-block.ll and add some missing CHECK-LABELS.
llvm-svn: 189868
2013-09-03 22:40:50 +00:00
Matt Arsenault 3dfe54e954 Teach InstCombineLoadCast about address spaces.
This is another one that doesn't matter much,
but uses the right GEP index types in the first
place.

llvm-svn: 189854
2013-09-03 21:05:48 +00:00
Yi Jiang aeb5b46a85 In this patch we are trying to do two things:
1) If the width of a vectorization list candidate is bigger than the vector register width, we break it down to fit the vector register.
2) We do not vectorize widths that are not a power of two.

The performance results show it helps some SPEC benchmarks: mesa improved by 6.97% and ammp by 1.54%.

llvm-svn: 189830
2013-09-03 17:26:04 +00:00
Benjamin Kramer 2702caad08 SimplifyLibCalls: When emitting an overloaded fp function check that it's available.
The existing code missed some edge cases, e.g. when we're going to emit sqrtf
but only the availability of sqrt was checked. This happens on odd platforms
like Windows.

llvm-svn: 189724
2013-08-31 18:19:35 +00:00
Benjamin Kramer 010f108382 InstCombine: Check for zero shift amounts before subtracting one causing integer overflow.
PR17026. Also avoid undefined shifts and shift amounts larger than 64 bits
(those are always undef because we can't represent integer types that large).

llvm-svn: 189672
2013-08-30 14:35:35 +00:00
Daniel Dunbar 673bcfea83 Fix a test to not fail for users with my name. :)
llvm-svn: 189547
2013-08-29 00:41:22 +00:00
Matt Arsenault 80ecd77303 Convert tests to FileCheck
llvm-svn: 189529
2013-08-28 23:04:41 +00:00
Matt Arsenault 54c3cbcefe Handle address spaces in TargetTransformInfo
llvm-svn: 189527
2013-08-28 22:41:57 +00:00
Hal Finkel 6d09904cc9 Disable unrolling in the loop vectorizer when disabled in the pass manager
When unrolling is disabled in the pass manager, the loop vectorizer should also
not unroll loops. This will allow the -fno-unroll-loops option in Clang to
behave as expected (even for vectorizable loops). The loop vectorizer's
-force-vector-unroll option will (continue to) override the pass-manager
setting (including -force-vector-unroll=0 to force use of the internal
auto-selection logic).

In order to test this, I added a flag to opt (-disable-loop-unrolling) to force
disable unrolling through opt (the analog of -fno-unroll-loops in Clang). Also,
this fixes a small bug in opt where the loop vectorizer was enabled only after
the pass manager populated the queue of passes (the global_alias.ll test needed
a slight update to the RUN line as a result of this fix).

llvm-svn: 189499
2013-08-28 18:33:10 +00:00
Matt Arsenault ed9f76d37b Fix inserting instructions before last in bundle.
The builder inserts before the insert point,
not after, so this would insert before the last
instruction in the bundle instead of after it.

I'm not sure if this can actually be a problem
with any of the current insertions.

llvm-svn: 189285
2013-08-26 23:08:37 +00:00
Manman Ren 0ed70aeb85 Debug Info: add an identifier field to DICompositeType.
DICompositeType will have an identifier field at position 14. For now, the
field is set to null in DIBuilder.
For DICompositeTypes where the template argument field (the 13th field)
was optional, modify DIBuilder to make sure the template argument field is set.
Now DICompositeType has 15 fields.

Update DIBuilder to use NULL instead of "i32 0" for the null value of an MDNode.
Update verifier to check that DICompositeType has 15 fields and the last
field is null or a MDString.

Update testing cases to include an extra field for DICompositeType.
The identifier field will be used by type uniquing so a front end can
generate a DICompositeType with a unique identifier.

llvm-svn: 189282
2013-08-26 22:39:55 +00:00
Nadav Rotem bdc9ff4498 LoopVectorize: Implement partial loop unrolling when vectorization is not profitable.
This patch enables unrolling of loops when vectorization is legal but not profitable.
We add a new class InnerLoopUnroller, that extends InnerLoopVectorizer and replaces some of the vector-specific logic with scalars.

This patch does not introduce any runtime regressions and improves the following workloads:

SingleSource/Benchmarks/Shootout/matrix -22.64%
SingleSource/Benchmarks/Shootout-C++/matrix -13.06%
External/SPEC/CINT2006/464_h264ref/464_h264ref  -3.99%
SingleSource/Benchmarks/Adobe-C++/simple_types_constant_folding -1.95%

llvm-svn: 189281
2013-08-26 22:33:26 +00:00
Matt Arsenault b3d8b48353 Forgot to add slp threshold to test
llvm-svn: 189248
2013-08-26 18:08:35 +00:00
Matt Arsenault 39274be65f Vectorize starting from insertelements building a vector
llvm-svn: 189233
2013-08-26 17:56:35 +00:00
Michael Gottesman e5904417f2 Filecheckize some tests.
llvm-svn: 189079
2013-08-23 00:23:28 +00:00
Michael Gottesman 823aaffd37 Update StripDeadDebugInfo to use DebugInfoFinder so that it is no longer stale to the point of not working and more resilient to debug info changes.
The current version of StripDeadDebugInfo became stale and no longer actually
worked since it was expecting an older version of debug info.

This patch updates it to use DebugInfoFinder and the modern DebugInfo classes as
much as possible to make it more resilient to such changes. Additionally, in the
only place where that was avoided (the code where we replace the old sets with
the new), I call verify on the DIContextUnit, implying that if the format changes
and my live-set changes no longer make sense, an assert will be hit. In order to
ensure that this occurs I have included a test case.

The actual stripping of the dead debug info follows the same strategy as was
used before in this class: find the live set and replace the old set in the
given compile unit (which may contain dead global variables/functions) with the
new live one.

llvm-svn: 189078
2013-08-23 00:23:24 +00:00
Manman Ren 64ba24a325 [Debug Info Tests] Update testing cases.
A single metadata will not span multiple lines. This also helps me with
my script to automatically update the testing cases.
A debug info testing case should have an llvm.dbg.cu.
Do not use hard-coded id for debug nodes.

llvm-svn: 189033
2013-08-22 17:11:18 +00:00
Chandler Carruth 1c34afcb61 Teach the SLP vectorizer the correct way to check for consecutive access
using GEPs. Previously, it used a number of different heuristics for
analyzing the GEPs. Several of these were conservatively correct, but
failed to fall back to SCEV even when SCEV might have given a reasonable
answer. One was simply incorrect in how it was formulated.

There was already good code to recursively evaluate the constant offsets
in GEPs, look through pointer casts, etc. In a previous commit I gathered
this into a form that code like the SLP vectorizer can use, which allows all
of this code to become quite simple.
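
A minimal sketch of consecutive access (hypothetical IR): walking the
GEP chains shows the two stores sit at byte offsets 0 and 4 from %p,
exactly one float apart.

define void @consec(float* %p) {
  store float 0.0, float* %p
  %q = getelementptr float* %p, i64 1
  store float 1.0, float* %q
  ret void
}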

There is some performance (compile time) concern here at first glance as
we're directly attempting to walk both pointers' constant GEP chains.
However, a couple of thoughts:

1) In the very common case where there is a dynamic pointer, and a second
   pointer at a constant offset (usually a stride) from it, this code
   will actually not do any unnecessary work.

2) InstCombine and other passes work very hard to collapse constant
   GEPs, so it will be rare that we iterate here for a long time.

That said, if there remain performance problems here, there are some
obvious things that can improve the situation immensely. Doing
a vectorizer-pass-wide memoizer for each individual layer of pointer
values, their base values, and the constant offset is likely to be able
to completely remove redundant work and strictly limit the scaling of
the work to scrape these GEPs. Since this optimization was not done on
the prior version (which would still benefit from it), I've not done it
here. But if folks have benchmarks that slow down, it should be
straightforward for them to add.

I've added a test case, but I'm not really confident of the amount of
testing done for different access patterns, strides, and pointer
manipulation.

llvm-svn: 189007
2013-08-22 12:45:17 +00:00
Matt Arsenault f599d97449 Teach LoopVectorize about address space sizes
llvm-svn: 188980
2013-08-22 02:42:55 +00:00
Manman Ren a2e9a98b06 TBAA: remove !tbaa from testing cases when they are not needed.
This will make it easier to turn on struct-path aware TBAA since the metadata
format will change.

llvm-svn: 188944
2013-08-21 22:20:53 +00:00
Matt Arsenault 745101d666 Teach InstCombine about address spaces
llvm-svn: 188926
2013-08-21 19:53:10 +00:00
Matt Arsenault bf1adaa05c Add test for bitcast array ptrs with address spaces
llvm-svn: 188919
2013-08-21 19:09:28 +00:00
Matt Arsenault e5e9f8911f Add enforce known alignment test with address space
llvm-svn: 188917
2013-08-21 18:54:53 +00:00
Arnold Schwaighofer e1f3ab69d1 SLPVectorizer: Fix invalid iterator errors
Update iterator when the SLP vectorizer changes the instructions in the basic
block by restarting the traversal of the basic block.

Patch by Yi Jiang!

Fixes PR16899.

llvm-svn: 188832
2013-08-20 21:21:45 +00:00
Matt Arsenault 7a960a8455 Teach ConstantFolding about pointer address spaces
llvm-svn: 188831
2013-08-20 21:20:04 +00:00
Hal Finkel 0c5c01aa4a Add a llvm.copysign intrinsic
This adds an llvm.copysign intrinsic; we already have Libfunc recognition for
copysign (which is turned into the FCOPYSIGN SDAG node). In order to
autovectorize calls to copysign in the loop vectorizer, we need a corresponding
intrinsic as well.
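
As it appears in IR (sketch):

declare float @llvm.copysign.f32(float, float)

define float @cs(float %mag, float %sgn) {
  %r = call float @llvm.copysign.f32(float %mag, float %sgn)
  ret float %r
}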

In addition to the expected changes to the language reference, the loop
vectorizer, BasicTTI, and the SDAG builder (the intrinsic is transformed into
an FCOPYSIGN node, just like the function call), this also adds FCOPYSIGN to a
few lists in LegalizeVector{Ops,Types} so that vector copysigns can be
expanded.

In TargetLoweringBase::initActions, I've made the default action for FCOPYSIGN
be Expand for vector types. This seems correct for all in-tree targets, and I
think is the right thing to do because, previously, there was no way to generate
vector-valued FCOPYSIGN nodes (and most targets don't specify an action for
vector-typed FCOPYSIGN).

llvm-svn: 188728
2013-08-19 23:35:46 +00:00
Matt Arsenault d79f7d9ea1 Teach InstCombine visitGetElementPtr about address spaces
llvm-svn: 188721
2013-08-19 22:17:40 +00:00
Matt Arsenault 74742a1bb0 Fix assert with GEP ptr vector indexing structs
Also fix it calculating the wrong value. The struct index
is not a ConstantInt, so it was being interpreted as an array
index.

llvm-svn: 188713
2013-08-19 21:43:16 +00:00
Matt Arsenault 5aeae18e9d Revert non-test parts of r188507
Re-add the inbounds-less tests I didn't add originally

llvm-svn: 188710
2013-08-19 21:40:31 +00:00