Commit Graph

21746 Commits

Author SHA1 Message Date
Tom Stellard c0845334da R600/SI: Fixing handling of condition codes
We were ignoring the ordered/unordered bits and also the signed/unsigned
bits of condition codes when lowering the DAG to MachineInstrs.

NOTE: This is a candidate for the 3.4 branch.
llvm-svn: 195514
2013-11-22 23:07:58 +00:00
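
A minimal illustration of the distinction this commit restores, written against the ISD condition-code enum; this is illustrative C++ (the helper name is made up), not the SI lowering code itself:

  #include "llvm/CodeGen/ISDOpcodes.h"
  using namespace llvm;

  // Three different "less than" condition codes that must not be conflated:
  // dropping the ordered/unordered or signed/unsigned bits maps them all to
  // the same compare and miscompiles NaN handling and unsigned comparisons.
  const char *describeLessThan(ISD::CondCode CC) {
    switch (CC) {
    case ISD::SETOLT: return "FP <, ordered (false if either input is NaN)";
    case ISD::SETULT: return "FP <, unordered (true if either input is NaN); "
                             "also unsigned integer <";
    case ISD::SETLT:  return "signed integer <";
    default:          return "other";
    }
  }
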
Manman Ren 409558f81e Debug Info: update testing cases to specify the debug info version number.
We are going to drop debug info without a version number or with a different
version number, to make sure we don't crash when we see bitcode files with
a different debug info metadata format.

llvm-svn: 195504
2013-11-22 21:49:45 +00:00
Jim Grosbach 860934a924 X86: Perform integer comparisons at i32 or larger.
Utilizing the 8 and 16 bit comparison instructions, even when an input can
be folded into the comparison instruction itself, is typically not worth it.
There are too many partial register stalls as a result, leading to significant
slowdowns. By always performing comparisons on at least 32-bit
registers, performance of the calculation chain leading to the
comparison improves. Continue to use the smaller comparisons when
minimizing size, as that allows better folding of loads into the
comparison instructions.

rdar://15386341

llvm-svn: 195496
2013-11-22 19:57:47 +00:00
Matt Arsenault 6ea0aade26 StructurizeCFG: Fix verification failure with some loops.
If the beginning of the loop was also the entry block
of the function, branches were inserted to the entry block
which isn't allowed. If this occurs, create a new dummy
function entry block that branches to the start of the loop.

llvm-svn: 195493
2013-11-22 19:24:39 +00:00
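
A hedged sketch of the workaround described above (function and block names are illustrative; this is not the StructurizeCFG code itself):

  #include "llvm/IR/BasicBlock.h"
  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Give the function a fresh entry block whose only job is to branch to the
  // old entry block (the loop header), so no branch ever has to target the
  // real function entry.
  static void insertDummyEntry(Function &F) {
    BasicBlock *OldEntry = &F.getEntryBlock();
    BasicBlock *NewEntry =
        BasicBlock::Create(F.getContext(), "dummy.entry", &F, OldEntry);
    BranchInst::Create(OldEntry, NewEntry);
  }
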
Matt Arsenault 9fb6e0ba58 StructurizeCFG: Fix inverting a branch on an argument
llvm-svn: 195492
2013-11-22 19:24:37 +00:00
Paul Robinson d89125a5d8 Teach ISel not to optimize 'optnone' functions (revised).
Improvements over r195317:
- Set/restore EnableFastISel flag instead of just running FastISel within
  SelectAllBasicBlocks; the flag is checked in various places, and
  FastISel won't run properly if those places don't do the right thing.
- Test looks for normal ISel versus FastISel behavior, and not
  something more subtle that doesn't work everywhere.

Based on work by Andrea Di Biagio.

llvm-svn: 195491
2013-11-22 19:11:24 +00:00
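
A hedged sketch of the save/restore pattern this commit describes (the wrapper function and its parameters are illustrative, not the committed SelectionDAGISel change):

  #include "llvm/IR/Attributes.h"
  #include "llvm/IR/Function.h"
  #include "llvm/Target/TargetMachine.h"
  using namespace llvm;

  // Force fast-isel for 'optnone' functions and put the flag back afterwards,
  // so every place that consults EnableFastISel sees a consistent value.
  static void selectFunction(TargetMachine &TM, const Function &F) {
    const bool SavedFastISel = TM.Options.EnableFastISel;
    if (F.hasFnAttribute(Attribute::OptimizeNone))
      TM.Options.EnableFastISel = true;
    // ... run instruction selection for F here ...
    TM.Options.EnableFastISel = SavedFastISel;
  }
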
Andrew Trick 4a1abb7ab5 patchpoint: factor SD builder code for live vars. Plain stackmap also optimizes Constant values now.
llvm-svn: 195488
2013-11-22 19:07:36 +00:00
Rafael Espindola 6597992c69 Add a fixed version of r195470 back.
The fix is simply to use CurI instead of I when handling aliases to
avoid accessing an invalid iterator.

original message:

Convert linkonce* to weak* instead of strong.

Also refactor the logic into a helper function. This is an important improvement
on mingw where the linker complains about mixed weak and strong symbols.
Converting to weak ensures that the symbol is not dropped, but keeps it in a
comdat, making the linker happy.

llvm-svn: 195477
2013-11-22 17:58:12 +00:00
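
A hedged sketch of the linkage mapping described above (the helper name is made up; this is not the committed helper function):

  #include "llvm/IR/GlobalValue.h"
  using namespace llvm;

  // linkonce symbols become the matching weak linkage rather than strong:
  // the symbol can no longer be dropped, but it stays in its comdat, which
  // avoids the mixed weak/strong symbols the mingw linker complains about.
  static GlobalValue::LinkageTypes convertLinkOnceToWeak(
      GlobalValue::LinkageTypes L) {
    switch (L) {
    case GlobalValue::LinkOnceAnyLinkage: return GlobalValue::WeakAnyLinkage;
    case GlobalValue::LinkOnceODRLinkage: return GlobalValue::WeakODRLinkage;
    default:                              return L;
    }
  }
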
Michael Liao 02160d580b Fix PR18014
- When simplifying the mask generation for BLEND, check whether that mask is
  also consumed by other non-BLEND insns. If true, skip that simplification.

llvm-svn: 195476
2013-11-22 17:56:57 +00:00
Richard Sandiford f03789ca3f [SystemZ] Fix TMHH and TMHL usage for z10 with -O0
I've no idea why I decided to handle TMxx differently from all the other
high/low logic operations, but it was a stupid thing to do.  The high
registers aren't available as separate 32-bit registers on z10,
so subreg_h32 can't be used on a GR64 there.

I've normally been testing with z196 and with -O3 and so hadn't noticed
this until now.

llvm-svn: 195473
2013-11-22 17:28:28 +00:00
Rafael Espindola 77aa674cc4 Revert "Convert linkonce* to weak* instead of strong."
This reverts commit r195470.
Debugging failure in some bots.

llvm-svn: 195472
2013-11-22 17:09:34 +00:00
Richard Sandiford 8ee1b77de3 Add a Scalarizer pass.
llvm-svn: 195471
2013-11-22 16:58:05 +00:00
Rafael Espindola 5574032575 Convert linkonce* to weak* instead of strong.
Also refactor the logic into a helper function. This is an important improvement
on mingw where the linker complains about mixed weak and strong symbols.
Converting to weak ensures that the symbol is not dropped, but keeps it in a
comdat, making the linker happy.

llvm-svn: 195470
2013-11-22 16:14:30 +00:00
Daniel Sanders b516aae48e [mips][msa] Add test case that should have been added in r195456.
llvm-svn: 195469
2013-11-22 15:47:18 +00:00
Rafael Espindola 5a8e985ad3 Don't produce tail calls when the caller is x86_thiscallcc.
The callee will not pop the stack for us.

llvm-svn: 195467
2013-11-22 15:18:28 +00:00
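
A hedged sketch of the kind of guard this commit describes; the helper name is made up and the actual tail-call eligibility check in the X86 backend is more involved:

  #include "llvm/IR/CallingConv.h"
  #include "llvm/IR/Function.h"
  using namespace llvm;

  // An x86_thiscallcc caller is expected to pop its own incoming stack
  // arguments on return; a tail-called function returns straight to the
  // caller's caller and will not pop them for us, so refuse tail calls.
  static bool callerAllowsTailCall(const Function &Caller) {
    return Caller.getCallingConv() != CallingConv::X86_ThisCall;
  }
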
Tim Northover 74e3637a0c ARM: use CHECK-LABEL on a test.
llvm-svn: 195457
2013-11-22 13:25:07 +00:00
Richard Barton c31078cded Add support for Cortex-A12.
Patch by Oliver Stannard!

llvm-svn: 195448
2013-11-22 11:53:16 +00:00
Daniel Sanders fd8e416879 [mips][msa] Float vector constants cannot use ldi.[wd] directly. Bitcast from the appropriate integer vector type.
Fixes an instruction selection failure detected by llvm-stress.

llvm-svn: 195444
2013-11-22 11:24:50 +00:00
Kostya Serebryany 4007009815 Revert r195318 as it causes miscompilation (PR18029)
llvm-svn: 195439
2013-11-22 10:30:39 +00:00
Hao Liu 25aed9bb5b Fix bugs with AArch64 load/store of vector types and with bitcast between i64 and vector types.
e.g. "%tmp = load <2 x i64>* %ptr" can't be selected. 
     "%tmp = bitcast i64 %in to <2 x i32>" can't be selected.

llvm-svn: 195424
2013-11-22 08:47:22 +00:00
Jiangning Liu a91633a435 For AArch64 back-end instruction selection, lower Neon_Lowxxx with EXTRACT_SUBREG.
llvm-svn: 195408
2013-11-22 02:45:13 +00:00
NAKAMURA Takumi 08be5afea4 Tweak 3 tests in llvm/test/CodeGen/X86 to add -mcpu=generic since r195383.
They failed on the bdver2 buildslave.

FIXME: FileCheck-ize them.
llvm-svn: 195407
2013-11-22 02:28:04 +00:00
Yi Jiang 79a2b0a6d1 SLP Vectorizer: Extract cost will only be added once even if the scalar has multiple external uses.
llvm-svn: 195406
2013-11-22 01:57:02 +00:00
Tom Stellard 06c67bcbe4 SelectionDAG: Optimize expansion of vec_type = BITCAST scalar_type
The legalizer can now do this type of expansion for more
type combinations without loading and storing to and
from the stack.

NOTE: This is a candidate for the 3.4 branch.
llvm-svn: 195398
2013-11-22 00:41:05 +00:00
Eric Christopher 33ff697cb1 In Dwarf 3 (and Dwarf 2), attributes whose values are offsets into a
section use the form DW_FORM_data4, whilst in Dwarf 4 and later they
use the form DW_FORM_sec_offset.

This patch updates the places where such attributes are generated to
use the appropriate form depending on the Dwarf version. The DIE entries
affected have the following tags:
DW_AT_stmt_list, DW_AT_ranges, DW_AT_location, DW_AT_GNU_pubnames,
DW_AT_GNU_pubtypes, DW_AT_GNU_addr_base, DW_AT_GNU_ranges_base

It also adds a hidden command line option "--dwarf-version=<uint>"
to llc which allows the version of Dwarf to be generated to override
what is specified in the metadata; this makes it possible to update
existing tests to check the debugging information generated for both
Dwarf 4 (the default) and Dwarf 3 using the same metadata.

Patch (slightly modified) by Keith Walker!

llvm-svn: 195391
2013-11-21 23:46:41 +00:00
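
A hedged sketch of the version-dependent form choice described above (header path is for current LLVM and the function name is made up; the committed change updates each place such attributes are generated):

  #include "llvm/BinaryFormat/Dwarf.h"
  #include <cstdint>
  using namespace llvm;

  // Section-offset attributes (DW_AT_stmt_list, DW_AT_ranges, ...) use
  // DW_FORM_data4 for DWARF 2/3 and DW_FORM_sec_offset for DWARF 4 and later.
  static uint16_t sectionOffsetForm(unsigned DwarfVersion) {
    return DwarfVersion >= 4 ? dwarf::DW_FORM_sec_offset
                             : dwarf::DW_FORM_data4;
  }

The hidden flag can then exercise both encodings from the same metadata, e.g. "llc --dwarf-version=3 foo.ll" (the file name here is only a placeholder).
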
Ekaterina Romanova d5fa55470c SHLD/SHRD are VectorPath (microcode) instructions known to have poor latency on certain architectures.
While generating SHLD/SHRD instructions is acceptable when optimizing for size,
optimizing for speed on these platforms should instead use alternative sequences
of instructions composed of add, adc, shr, shl, or, and lea, which are DirectPath
instructions. These alternatives not only have lower latency but also increase
the decode bandwidth by allowing simultaneous decoding of a third DirectPath
instruction.
AMD's processor families K7, K8, K10, K12, K15 and K16 are known to have
SHLD/SHRD instructions with very poor latency. The optimization guides for these
processors recommend using an alternative sequence of instructions. For these
AMD processors, I disabled folding (or (x << c) | (y >> (64 - c))) when we are
not optimizing for size.

It might be beneficial to disable this folding for some Intel processors as well. However, since I couldn't find specific recommendations regarding the use of SHLD/SHRD instructions on Intel processors, I haven't disabled this peephole for Intel.

llvm-svn: 195383
2013-11-21 23:21:26 +00:00
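
For reference, the pattern whose folding is being disabled corresponds to source like the following; on the listed AMD CPUs, when not optimizing for size, the backend now keeps the explicit shifts and OR (roughly shl/shr/or, add/adc/lea sequences) instead of a single microcoded SHLD/SHRD:

  #include <cstdint>

  // Double-precision shift: take the high bits from x and fill from y.
  // c is assumed to be in 1..63 so that neither shift count is 0 or 64.
  uint64_t shiftLeftDouble(uint64_t x, uint64_t y, unsigned c) {
    return (x << c) | (y >> (64 - c));
  }
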
Peter Collingbourne 0be79e1ade Introduce two command-line flags for the instrumentation pass to control whether the labels of pointers should be ignored in load and store instructions
The new command line flags are -dfsan-ignore-pointer-label-on-store and -dfsan-ignore-pointer-label-on-load. Their default value matches the current labelling scheme.

Additionally, the function __dfsan_union_load is marked as readonly.

Patch by Lorenzo Martignoni!

Differential Revision: http://llvm-reviews.chandlerc.com/D2187

llvm-svn: 195382
2013-11-21 23:20:54 +00:00
Artyom Skrobov 97af2ec096 [ARM] add the overlooked tests for Cortex-A7 build attributes
llvm-svn: 195365
2013-11-21 16:22:39 +00:00
Daniel Sanders c8c50fb41f [mips][msa] Fix a corner case in performORCombine() when combining nodes into VSELECT.
Mask == ~InvMask asserts if the widths of Mask and InvMask differ.
The combine isn't valid (with two exceptions, see below) if the widths differ,
so test for this before testing Mask == ~InvMask.

In the specific cases of Mask=~0 and InvMask=0, as well as Mask=0 and
InvMask=~0, the combine is still valid. However, there are more appropriate
combines that could be used in these cases such as folding x & 0 to 0, or
x & ~0 to x.

llvm-svn: 195364
2013-11-21 16:11:31 +00:00
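
A hedged sketch of the guard being added (the helper name is made up and this is not the exact performORCombine() code):

  #include "llvm/ADT/APInt.h"
  using namespace llvm;

  // APInt::operator== requires both operands to have the same bit width, so
  // check the widths before asking whether Mask == ~InvMask.
  static bool masksAreComplementary(const APInt &Mask, const APInt &InvMask) {
    if (Mask.getBitWidth() != InvMask.getBitWidth())
      return false;   // the VSELECT combine is not attempted in this case
    return Mask == ~InvMask;
  }
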
Daniel Sanders edc071b815 Add support for legalizing SETNE/SETEQ by inverting the condition code and the result of the comparison.
Summary:
LegalizeSetCCCondCode can now legalize SETEQ and SETNE by returning the inverse
condition and requesting that the caller invert the result of the condition.

The caller of LegalizeSetCCCondCode must handle the inverted CC, and they do
so as follows:
  SETCC, BR_CC:
    Invert the result of the SETCC with SelectionDAG::getNOT()
  SELECT_CC:
    Swap the true/false operands.

This is necessary for MSA which lacks an integer SETNE instruction.

Reviewers: resistor

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2229

llvm-svn: 195355
2013-11-21 13:24:49 +00:00
Evgeniy Stepanov cb5bdffc4e [msan] Propagate condition origin in select instruction.
llvm-svn: 195349
2013-11-21 12:00:24 +00:00
Daniel Sanders 6e664bcef3 [mips][msa/dsp] Only do DSP combines if DSP is enabled.
Fixes a crash (null pointer dereference) when MSA is enabled.

llvm-svn: 195343
2013-11-21 11:40:14 +00:00
Evgeniy Stepanov b55c7e7cc4 Use multiple filecheck prefixes in msan instrumentation tests.
llvm-svn: 195342
2013-11-21 11:37:16 +00:00
NAKAMURA Takumi 43aa939625 Revert r195317 (and r195333), "Teach ISel not to optimize 'optnone' functions."
It broke, at least, the i686 target. It is reproducible with "llc -mtriple=i686-unknown".

FYI, it didn't appear to add either "-O0" or "-fast-isel".

llvm-svn: 195339
2013-11-21 10:55:15 +00:00
Kostya Serebryany ef26f4ed56 add 'REQUIRES: asserts' to a test that uses 'llc -debug'; this fixes the no-asserts build
llvm-svn: 195333
2013-11-21 09:28:16 +00:00
Ana Pazos 9ac2fc85d2 Implemented Neon scalar vdup_lane intrinsics.
Fixed scalar dup alias and added test case.

llvm-svn: 195330
2013-11-21 08:16:15 +00:00
Ana Pazos fbc1adbaa7 Implemented Neon scalar by element intrinsics.
Intrinsics implemented: vqdmull_lane, vqdmulh_lane, vqrdmulh_lane,
vqdmlal_lane, vqdmlsl_lane scalar Neon intrinsics.

llvm-svn: 195327
2013-11-21 07:37:04 +00:00
Kostya Serebryany 0b458286e1 Don't speculate loads under ThreadSanitizer
Summary:
Don't speculate loads under ThreadSanitizer.
This fixes https://code.google.com/p/thread-sanitizer/issues/detail?id=40
Also discussed here: http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-November/067929.html

Reviewers: chandlerc

Reviewed By: chandlerc

CC: llvm-commits, dvyukov

Differential Revision: http://llvm-reviews.chandlerc.com/D2227

llvm-svn: 195324
2013-11-21 07:29:28 +00:00
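
A hedged sketch of the policy described above (function and variable names are illustrative, not the committed code):

  #include "llvm/IR/Function.h"
  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  // Speculatively hoisting a load creates a memory access the original
  // program might never perform, which ThreadSanitizer can then report as a
  // false race. Refuse speculation inside sanitize_thread functions.
  static bool maySpeculateLoad(const LoadInst &LI) {
    const Function *F = LI.getParent()->getParent();
    if (F->hasFnAttribute(Attribute::SanitizeThread))
      return false;
    return true; // ...subject to the usual safety checks, elided here
  }
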
Bill Wendling 07787f8747 The basic problem is that some mainstream programs cannot deal with the way
clang optimizes tail calls, as in this example:

int foo(void);
int bar(void) {
 return foo();
}

where the call is transformed to:

  calll .L0$pb
.L0$pb:
  popl  %eax
.Ltmp0:
  addl  $_GLOBAL_OFFSET_TABLE_+(.Ltmp0-.L0$pb), %eax
  movl  foo@GOT(%eax), %eax
  popl  %ebp
  jmpl  *%eax                   # TAILCALL

However, the GOT references must all be resolved at dlopen() time, and so this
approach cannot be used with lazy dynamic linking (e.g. using RTLD_LAZY), which
usually populates the PLT with stubs that perform the actual resolving.

This patch changes X86TargetLowering::LowerCall() to skip tail call
optimization, if the called function is a global or external symbol.

Patch by Dimitry Andric!

PR15086

llvm-svn: 195318
2013-11-21 07:04:30 +00:00
Paul Robinson b379efeb53 Teach ISel not to optimize 'optnone' functions.
Based on work by Andrea Di Biagio.

llvm-svn: 195317
2013-11-21 06:33:32 +00:00
Reed Kotler 2fc05be887 Add long jumps to constant islands, similar to the ARM far branch.
llvm-svn: 195312
2013-11-21 05:13:23 +00:00
Hal Finkel 884bde3031 PPC popcnt[dw] do not have record forms
The instruction definitions incorrectly specified that popcntd and popcntw have
record forms; they do not. This mistake was causing invalid code generation.

llvm-svn: 195272
2013-11-20 20:54:55 +00:00
Benjamin Kramer c8160d6523 MachineBlockPlacement: Strengthen the source order bias when picking an exit block.
We now only allow breaking source order if the exit block frequency is
significantly higher than the other exit block. The actual bias is
currently under a flag so the best cut-off can be found; the flag
defaults to the old behavior. The idea is to get some benchmark coverage
over different values for the flag and pick the best one.

When we require the new frequency to be at least 20% higher than the old
frequency I see a 5% speedup on zlib's deflate when compressing a random
file on x86_64/westmere. Hal reported a small speedup on Fhourstones on
a BG/Q and no regressions in the test suite.

The test case is the full long_match function from zlib's deflate. I was
reluctant to add it for previous tweaks to branch probabilities because
it's large and potentially fragile, but changed my mind since it's an
important use case and more likely to break with all the current work
going into the PGO infrastructure.

Differential Revision: http://llvm-reviews.chandlerc.com/D2202

llvm-svn: 195265
2013-11-20 19:08:44 +00:00
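
A hedged sketch of the comparison behind the new bias (the helper and flag-like parameter are made up; the real code compares machine block frequencies):

  #include <cstdint>

  // Break source order only when the candidate exit block is hotter than the
  // in-order choice by at least BiasPercent (e.g. 20 for the 20% threshold
  // mentioned above).
  static bool preferOutOfOrderExit(uint64_t CandidateFreq, uint64_t InOrderFreq,
                                   unsigned BiasPercent) {
    return CandidateFreq * 100 > InOrderFreq * (100 + BiasPercent);
  }
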
Daniel Sanders 43b5f572fa FileCheck: fix a bug with multiple --check-prefix options. Similar to r194565
Summary:
Directives are being ignored when they occur between a partial-word false
match and any match on another prefix.

For example, with FOO and BAR prefixes:
   _FOO
   FOO: foo
   BAR: bar
FileCheck incorrectly matches:
   fog
   bar

This happens because FOO falsely matched as a partial word at '_FOO' and was
ignored while BAR matched at 'BAR:'. The match of BAR is incorrectly returned
as the 'first match' causing the FOO directive to be discarded.

Fixed this the same way as r194565 (D2166) did for a similar test case.
The partial-word false match should be counted as a match for the purposes of
finding the first match of a prefix, but should be returned as a false match
using CheckTy::CheckNone so that it isn't treated as a directive.

Fixes PR17995

Reviewers: samsonov, arsenm

Reviewed By: samsonov

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2228

llvm-svn: 195248
2013-11-20 13:25:05 +00:00
Elena Demikhovsky e1f9bf054f AVX-512: Concat 4 128-bit vectors in one 512-bit vector.
llvm-svn: 195229
2013-11-20 09:10:40 +00:00
Yuchen Wu babe749125 llvm-cov: Added file checksum to gcno and gcda files.
Instead of permanently outputting "MVLL" as the file checksum, clang
will create gcno and gcda checksums by hashing the destination block
numbers of every arc. This allows for llvm-cov to check if the two gcov
files are synchronized.

Regenerated the test files so they contain the checksum. Also added
negative test to ensure error when the checksums don't match.

llvm-svn: 195191
2013-11-20 04:15:05 +00:00
Hal Finkel 22498fa6e3 PPC: Optimize rldicl generation for masked shifts
Masking operations (where only some number of the low bits are being kept) are
selected to rldicl(x, 0, mb). If x is a logical right shift (which would become
rldicl(y, 64-n, n)), we might be able to fold the two instructions together:

  rldicl(rldicl(x, 64-n, n), 0, mb) -> rldicl(x, 64-n, mb) for n <= mb

The right shift is really a left rotate followed by a mask, and if the explicit
mask is a more-restrictive sub-mask of the mask implied by the shift, only one
rldicl is needed.

llvm-svn: 195185
2013-11-20 01:10:15 +00:00
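
The fold can be spot-checked directly; a small self-contained C++ program modelling rldicl (rotate left doubleword, then keep the low 64-MB bits) verifies the identity for every n <= mb:

  #include <cassert>
  #include <cstdint>

  // rldicl(x, SH, MB): rotate x left by SH, then clear the top MB bits.
  static uint64_t rotl64(uint64_t X, unsigned SH) {
    return SH ? (X << SH) | (X >> (64 - SH)) : X;
  }
  static uint64_t rldicl(uint64_t X, unsigned SH, unsigned MB) {
    uint64_t Mask = MB ? (~0ULL >> MB) : ~0ULL;
    return rotl64(X, SH) & Mask;
  }

  int main() {
    uint64_t X = 0x0123456789abcdefULL;
    // rldicl(rldicl(x, 64-n, n), 0, mb) == rldicl(x, 64-n, mb) for n <= mb
    for (unsigned N = 1; N < 64; ++N)
      for (unsigned MB = N; MB < 64; ++MB)
        assert(rldicl(rldicl(X, 64 - N, N), 0, MB) == rldicl(X, 64 - N, MB));
    return 0;
  }
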
David Blaikie 409dd9c34a DebugInfo: Partial implementation of DWARF type units.
Emit DW_TAG_type_units into the debug_info section using compile unit
headers. This is bogus/unusable by debuggers, but testable and provides
more isolated review.

Subsequent patches will include support for type unit headers and
emission into the debug_types section, as well as comdat grouping the
types based on their hash. Also the CompileUnit type will be renamed
'Unit' and relevant portions pulled out into respective CompileUnit and
TypeUnit types.

llvm-svn: 195166
2013-11-19 23:08:21 +00:00
Arnold Schwaighofer 8bc4a0ba14 SLPVectorizer: Fix stale Value pointer array
We are slicing an array of Value pointers and processing those slices in a loop.
The problem is that we might invalidate a later slice by vectorizing a former
slice.

Use a WeakVH to track the pointer. If the pointer is deleted or RAUW'ed we can
tell.

The test case will only fail when running with libgmalloc.

radar://15498655

llvm-svn: 195162
2013-11-19 22:20:20 +00:00
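
A hedged sketch of the value-handle pattern this commit describes, against current LLVM headers (names are illustrative, not the SLPVectorizer code; the handle flavor that also follows RAUW is WeakTrackingVH in current LLVM, which is the behavior WeakVH had when this commit landed):

  #include "llvm/ADT/ArrayRef.h"
  #include "llvm/ADT/SmallVector.h"
  #include "llvm/IR/Value.h"
  #include "llvm/IR/ValueHandle.h"
  using namespace llvm;

  // Track each slice through value handles instead of raw Value* so entries
  // deleted while an earlier slice was vectorized read back as null.
  static void processSlices(ArrayRef<Value *> Ops) {
    SmallVector<WeakVH, 8> Slice(Ops.begin(), Ops.end());
    for (WeakVH &VH : Slice) {
      Value *V = VH;       // null if the underlying value has been deleted
      if (!V)
        continue;
      // ... try to vectorize starting at V ...
    }
  }
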
Petar Jovanovic 45115f877c [mips] Resolve relocation for the stubs in MCJIT when load address is known
Instead of processing relocations for branches to stubs right away, emit a
modified relocation and add it to a queue to be resolved later, when the final
load address is known.
This resolves seven MIPS MCJIT issues that were caused by missing relocation
fixups at the end.

llvm-svn: 195157
2013-11-19 21:56:00 +00:00