Commit Graph

18501 Commits

Author SHA1 Message Date
Akira Hatanaka 1454ed8ad3 [mips] Remove android calling convention.
This calling convention was added just to handle functions which return vectors
of floats. The fix committed in r165585 solves the problem.

llvm-svn: 176530
2013-03-05 23:22:30 +00:00
Akira Hatanaka e092f72956 [mips] Fix MipsCC::analyzeReturn so that, in soft-float mode, fp128 gets
returned in registers $2 and $4.

llvm-svn: 176527
2013-03-05 22:54:59 +00:00
Akira Hatanaka 5f3ba9e595 [mips] Fix MipsTargetLowering::LowerCallResult and LowerReturn to correctly
handle fp128 returns.

llvm-svn: 176523
2013-03-05 22:41:55 +00:00
Akira Hatanaka 3b7391d140 [mips] Fix MipsTargetLowering::LowerCall to pass fp128 arguments in floating
point registers.

llvm-svn: 176521
2013-03-05 22:20:28 +00:00
Akira Hatanaka 4b634fa3b3 [mips] Correct handling of fp128 (long double) formals and read long double
parameters from floating point registers if target is mips64 hard float.

llvm-svn: 176520
2013-03-05 22:13:04 +00:00
Jyotsna Verma 457801f7ab reverting patch 176508.
llvm-svn: 176513
2013-03-05 20:29:23 +00:00
Jyotsna Verma 7179e712dd Hexagon: Add support for lowering block address.
llvm-svn: 176508
2013-03-05 19:37:46 +00:00
Jyotsna Verma 0eeea14e3e Hexagon: Expand addc, adde, subc and sube.
llvm-svn: 176505
2013-03-05 19:04:47 +00:00
Eli Bendersky 59d7cb2386 Fixes a test by replacing .align by .p2align and setting triples explicitly.
Patch by David Sehr

llvm-svn: 176502
2013-03-05 18:56:14 +00:00
Jyotsna Verma f4e324f4fb Hexagon: Add encoding bits to the TFR64 instructions.
Set isMoveImm, isAsCheapAsAMove flags for TFRI instructions.

llvm-svn: 176499
2013-03-05 18:42:28 +00:00
David Sehr af76f18fc3 Add a test that .align directives on capable processors use long NOPs.
llvm-svn: 176490
2013-03-05 16:46:54 +00:00
Vincent Lejeune 3b6f20e944 R600: Turn BUILD_VECTOR into Reg_Sequence
Reviewed-by: Tom Stellard <thomas.stellard at amd.com>
llvm-svn: 176487
2013-03-05 15:04:49 +00:00
Vincent Lejeune a199d01e4d R600: Use MUL_IEEE for trig/fdiv intrinsic
Reviewed-by: Tom Stellard <thomas.stellard at amd.com>
llvm-svn: 176485
2013-03-05 15:04:37 +00:00
NAKAMURA Takumi 3ae057c6ab llvm/test/CodeGen/Mips/mips64-f128.ll: Add explicit -mtriple=mips64el-unknown-unknown to appease win32.
FIXME: Is it expected for win32 to affect mips targets?
llvm-svn: 176471
2013-03-05 02:18:59 +00:00
NAKAMURA Takumi cc91d50289 llvm/test/CodeGen/Thumb/iabs.ll: Add explicit -mtriple=thumb-unknown-unknown to appease win32 hosts.
llvm-svn: 176470
2013-03-05 02:18:52 +00:00
David Sehr 4c8979cd4d The current X86 NOP padding uses one long NOP followed by the remainder in
one-byte NOPs.  If the processor actually executes those NOPs, as it sometimes
does with aligned bundling, this can have a performance impact.  From my
micro-benchmarks run on my one machine, a 15-byte NOP followed by twelve
one-byte NOPs is about 20% worse than a 15-byte NOP followed by a 12-byte NOP.  This patch
changes NOP emission to emit as many 15-byte (the maximum) as possible followed
by at most one shorter NOP.

llvm-svn: 176464
2013-03-05 00:02:23 +00:00
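
A minimal self-contained C++ sketch of the padding scheme described in the commit above (illustrative names, not the actual MCAsmBackend code):

#include <cstdio>

static const unsigned MaxNopLength = 15; // longest x86 NOP encoding

// Emit as many maximum-length NOPs as possible, then at most one shorter NOP.
void emitNopPadding(unsigned Count) {
  while (Count > MaxNopLength) {
    std::printf("emit %u-byte NOP\n", MaxNopLength);
    Count -= MaxNopLength;
  }
  if (Count)
    std::printf("emit %u-byte NOP\n", Count); // single remainder NOP
}

int main() {
  emitNopPadding(27); // one 15-byte NOP + one 12-byte NOP, not 15 + 12x1
  return 0;
}
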
Lang Hames 30be8a30cc Check isDiscardableIfUnused, rather than hasLocalLinkage, when bumping
GlobalValue linkage up to ExternalLinkage in the ExtractGV pass. This
prevents linkonce and linkonce_odr symbols from being DCE'd.

llvm-svn: 176459
2013-03-04 22:40:44 +00:00
Akira Hatanaka c7828356aa [mips] Print move instructions.
"move $4, $5" is printed instead of "or $4, $5, $zero".

llvm-svn: 176455
2013-03-04 22:25:01 +00:00
Jack Carter 0e149b04f6 Mips specific inline assembler constraint 'R'
'R' An address that can be used in a non-macro load or store.
This patch includes a positive test case.

llvm-svn: 176452
2013-03-04 21:33:15 +00:00
Eli Bendersky 4e1db8d7f7 Reapply r176381, writing the CHECKs in a more forgiving manner to account for
running llvm-objdump on Darwin.

llvm-svn: 176443
2013-03-04 18:20:31 +00:00
Preston Gurd 485296d1e8 Bypass Slow Divides
* Only apply divide bypass optimization when not optimizing for size. 
* Fixed a bug caused by generating the constant for the value 0 with type Int32;
  use the dividend's type to generate the constant instead.
* For Atom x86-64, apply the divide bypass to use 16-bit divides instead of
  64-bit divides when the operand values are small enough.
* Added lit tests for 64-bit divide bypass.

Patch by Tyler Nowicki!

llvm-svn: 176442
2013-03-04 18:13:57 +00:00
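
The operand-size check behind the bypass can be pictured with a small C++ sketch (illustrative only; the real transform rewrites IR rather than C-level source):

#include <cstdint>

// If both 64-bit operands fit in 16 bits, a much cheaper narrow divide suffices.
uint64_t bypassedDiv(uint64_t a, uint64_t b) {
  if ((a | b) <= 0xFFFF)                    // both values are small
    return (uint16_t)a / (uint16_t)b;       // fast 16-bit divide
  return a / b;                             // slow 64-bit divide
}
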
Jim Grosbach a3c5c769d6 ARM: Creating a vector from a lane of another.
The VDUP instruction source register doesn't allow a non-constant lane
index, so make sure we don't construct an ARM::VDUPLANE node asking it to
do so.

rdar://13328063
http://llvm.org/bugs/show_bug.cgi?id=13963

llvm-svn: 176413
2013-03-02 20:16:24 +00:00
Arnold Schwaighofer 99cba9697a ARM NEON: Fix v2f32 float intrinsics
Mark them as Expand; they are not legal, as our backend does not match them.

llvm-svn: 176410
2013-03-02 19:38:33 +00:00
Nuno Lopes 589443bd93 recommit r172363 & r171325 (reverted in r172756)
This adds minimalistic support for PHI nodes to llvm.objectsize() evaluation

fingers crossed so that it doesn't break the clang bootstrap again...

llvm-svn: 176408
2013-03-02 11:36:24 +00:00
Arnold Schwaighofer 20ef54f4c1 X86 cost model: Adjust cost for custom lowered vector multiplies
This matters for example in following matrix multiply:

int **mmult(int rows, int cols, int **m1, int **m2, int **m3) {
  int i, j, k, val;
  for (i=0; i<rows; i++) {
    for (j=0; j<cols; j++) {
      val = 0;
      for (k=0; k<cols; k++) {
        val += m1[i][k] * m2[k][j];
      }
      m3[i][j] = val;
    }
  }
  return(m3);
}

Taken from the test-suite benchmark Shootout.

We estimate the cost of the multiply to be 2 while we generate 9 instructions
for it, and we end up being quite a bit slower than the scalar version (48% on my
machine).

Also, properly differentiate between AVX1 and AVX2. On AVX1 we still split the
vector into two 128-bit halves and handle the subvector multiplies as above with
9 instructions. Only on AVX2 will we have a cost of 9 for v4i64.

I changed the test case in test/Transforms/LoopVectorize/X86/avx1.ll to use an
add instead of a mul because with a mul we now no longer vectorize. I did
verify that the mul would indeed be more expensive when vectorized, using 3
kernels:

for (i ...)
   r += a[i] * 3;
for (i ...)
  m1[i] = m1[i] * 3; // This matches the test case in avx1.ll
and a matrix multiply.

In each case the vectorized version was considerably slower.

radar://13304919

llvm-svn: 176403
2013-03-02 04:02:52 +00:00
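
A rough sketch of the cost calculation described above (the numbers and structure are assumptions for illustration, not the actual X86 cost tables):

// Custom-lowered vector multiplies are charged their real instruction count;
// on AVX1 a 256-bit multiply is split into two 128-bit halves, so it pays twice.
unsigned v4i64MulCost(bool HasAVX2) {
  const unsigned LoweredMulInsts = 9;      // lowered multiply sequence length
  return HasAVX2 ? LoweredMulInsts         // single 256-bit sequence on AVX2
                 : 2 * LoweredMulInsts;    // two subvector multiplies on AVX1
}
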
Nadav Rotem 739e37a0d2 PR14448 - prevent the loop vectorizer from vectorizing the same loop twice.
The LoopVectorizer often runs multiple times on the same function due to inlining.
When this happens, the loop vectorizer often vectorizes the same loops multiple times, increasing code size and adding unneeded branches.
With this patch, the vectorizer puts metadata on scalar loops during vectorization, marking them as 'already vectorized' so that it knows to ignore them when it sees them a second time.

PR14448.

llvm-svn: 176399
2013-03-02 01:33:49 +00:00
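
A toy C++ sketch of the 'already vectorized' marker described above (the real pass records this as loop metadata in the IR; the names used here are assumptions):

#include <set>
#include <string>

struct Loop { std::set<std::string> Metadata; };  // stand-in for IR loop metadata

void vectorizeLoop(Loop &L) {
  if (L.Metadata.count("already_vectorized"))
    return;                                  // later runs skip this loop
  // ... emit the vector loop ...
  L.Metadata.insert("already_vectorized");   // mark the remaining scalar loop
}
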
Michael Gottesman ee45c03fec Revert "Rewrite a test to count emitted instructions without using -stats"
This reverts commit aac7922b8fe7ae733d3fe6697e6789fd730315dc. I am reverting the
commit since it broke the phase 1 public buildbot for a few hours.

http://lab.llvm.org:8013/builders/clang-x86_64-darwin11-nobootstrap-RA/builds/2137

llvm-svn: 176394
2013-03-02 00:53:20 +00:00
Akira Hatanaka ece459bb66 [mips] Fix inefficient code generation.
This patch eliminates the need to emit a constant move instruction when this
pattern is matched:

(select (setgt a, Constant), T, F)

The pattern above effectively turns into this:

(conditional-move (setlt a, Constant + 1), F, T)

llvm-svn: 176384
2013-03-01 21:52:08 +00:00
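
As a worked example of the rewrite above (illustrative only; the actual matching happens on SelectionDAG nodes): for integers, (a > 7) is the same predicate as !(a < 8), so a greater-than compare against a constant can become a less-than compare against Constant + 1, which fits MIPS's slti immediate form, with the true/false operands of the conditional move swapped:

int pick(int a, int T, int F)           { return (a > 7) ? T : F; }  // original
int pick_rewritten(int a, int T, int F) { return (a < 8) ? F : T; }  // equivalent
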
Eli Bendersky 0091e2ff00 Rewrite a test to count emitted instructions without using -stats
Also removed the "should produce..." comments because they don't match the
actually produced output at all.

llvm-svn: 176381
2013-03-01 21:34:37 +00:00
Akira Hatanaka 3d055580a9 Set properties for f128 type.
llvm-svn: 176378
2013-03-01 21:11:44 +00:00
Eli Bendersky 10ab5e72e1 Rewrite a test to check actual output rather than intermediate implementation
detail.

The was this test was written, it was relying on an implementation detail
(fixups) and hence was very brittle (relying, among other things, on the
exact ordering of statistics printed by MC).

The test was rewritten to check a more observable output difference. While it
doesn't cover 100% of the things the original test covered, it's a good
practice to write regression tests this way. If we want to check that
internal details and invariants hold, such tests should be expressed as unit
tests.

llvm-svn: 176377
2013-03-01 20:54:00 +00:00
Edwin Vane 510c341517 No need to force-create clang-tools-extra lit.site.cfg
The make (all) target takes care of creating lit configs and auto-generating
tests. The problem with the original 'lit.site.cfg' target is that it's not
recursive and doesn't fully create everything necessary for testing
clang-tools-extra.

llvm-svn: 176374
2013-03-01 19:58:58 +00:00
Michael Liao d10584e38b Add regression tests (WORKSFORME)
- These tests won't crash on trunk, but it would be better to add them so that
  they don't break again in the future.

llvm-svn: 176369
2013-03-01 19:23:37 +00:00
Chad Rosier b3864609cf Generate an error message instead of asserting or segfaulting when we can't
handle indirect register inputs.
rdar://13322011

llvm-svn: 176367
2013-03-01 19:12:05 +00:00
Benjamin Kramer 12f98fae98 LoopVectorize: Don't hang forever if a PHI only has skipped PHI uses.
Fixes PR15384.

llvm-svn: 176366
2013-03-01 19:07:31 +00:00
Michael Liao 6af16fc3b7 Fix PR10475
- ISD::SHL/SRL/SRA must have either both scalar or both vector operands,
  but TLI.getShiftAmountTy() so far only returns a scalar type. As a
  result, backend logic that assumes this breaks.
- Rename the original TLI.getShiftAmountTy() to
  TLI.getScalarShiftAmountTy() and re-define TLI.getShiftAmountTy() to
  return the target-specified scalar type or the same vector type as the
  1st operand.
- Fix most TICG logic that assumes TLI.getShiftAmountTy() returns a simple
  scalar type.

llvm-svn: 176364
2013-03-01 18:40:30 +00:00
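
The redefinition can be pictured with a self-contained C++ sketch (a toy Type stand-in, not the real EVT/TargetLowering API):

#include <cassert>

struct Ty { bool IsVector; unsigned ScalarBits; };

Ty getScalarShiftAmountTy(const Ty &) { return {false, 32}; }  // target choice (i32 assumed)

// New behavior: vector shift operands get a matching vector shift-amount type;
// scalar operands keep the target-specific scalar type.
Ty getShiftAmountTy(const Ty &LHSTy) {
  return LHSTy.IsVector ? LHSTy : getScalarShiftAmountTy(LHSTy);
}

int main() {
  Ty V4i32{true, 32}, I64{false, 64};
  assert(getShiftAmountTy(V4i32).IsVector);   // vector shift -> vector amount type
  assert(!getShiftAmountTy(I64).IsVector);    // scalar shift -> scalar amount type
  return 0;
}
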
Chad Rosier 9660343b42 Add support for using non-pic code for arm and thumb1 when emitting the sjlj
dispatch code.  As far as I can tell the thumb2 code is behaving as expected.
I was able to compile and run the associated test case for both arm and thumb1.
rdar://13066352

llvm-svn: 176363
2013-03-01 18:30:38 +00:00
Christian Konig 3c54770365 R600/SI: fix sampler tests after fixing wait insertions
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
llvm-svn: 176359
2013-03-01 17:39:05 +00:00
Jyotsna Verma 8425643728 Hexagon: Add constant extender support framework.
llvm-svn: 176358
2013-03-01 17:37:13 +00:00
Akira Hatanaka e9e588dd72 [mips] Remove unused option. Fix 80-column violations.
llvm-svn: 176330
2013-03-01 02:17:02 +00:00
Akira Hatanaka 8f7bfb39be [mips] Add the capability to search delay slot filling instructions in
successor basic blocks.

Currently this is off by default.

llvm-svn: 176329
2013-03-01 02:03:51 +00:00
Akira Hatanaka e01ff9dc60 [mips] Add capability to search in the forward direction for instructions that
can fill the delay slot.

Currently, this is off by default.

llvm-svn: 176320
2013-03-01 00:50:52 +00:00
Akira Hatanaka eb33ced08f [mips] Define class MemDefsUses.
This class tracks dependences between memory instructions using the underlying
objects of their memory operands.

llvm-svn: 176313
2013-03-01 00:16:31 +00:00
Quentin Colombet e684a6d4aa Fix a bug in instcombine for fmul in fast math mode.
The instcombine recognized pattern looks like:
a = b * c
d = a +/- Cst
or
a = b * c
d = Cst +/- a

When creating the new operands for the fadd or fsub instruction following the related fmul, the first operand was created from the second original operand (M0 was created with C1) and the second from the first (M1 with Opnd0).

The fix consists of creating the new operands from the appropriate original operands, i.e., M0 with Opnd0 and M1 with C1.

llvm-svn: 176300
2013-02-28 21:12:40 +00:00
Benjamin Kramer f7cfac7a14 Cost model support for lowered math builtins.
We make the cost for calling libm functions extremely high as emitting the
calls is expensive and causes spills (on x86) so performance suffers. We still
vectorize important calls like ceilf and friends on SSE4.1. and fabs.

Differential Revision: http://llvm-reviews.chandlerc.com/D466

llvm-svn: 176287
2013-02-28 19:09:33 +00:00
Tim Northover ce17020c97 AArch64: remove post-encoder method from FCMP (immediate) instructions.
The work done by the post-encoder (setting architecturally unused bits to 0 as
required) can be done by the existing operand that covers the "#0.0". This
removes at least one of the discouraged PostEncoderMethod uses.

llvm-svn: 176261
2013-02-28 14:46:14 +00:00
Tim Northover c3c5c0971d AArch64: be more careful resorting to inefficient addressing for weak vars.
If an otherwise weak var is actually defined in this unit, it can't be
undefined at runtime so we can use normal global variable sequences (ADRP/ADD)
to access it.

llvm-svn: 176259
2013-02-28 14:36:31 +00:00
Tim Northover b9d4fd210b AArch64: don't drop GlobalAddress offset when handling extern_weak decls.
llvm-svn: 176258
2013-02-28 14:36:24 +00:00
Tim Northover 9fafdf6d5a AArch64: Use cbnz instead of cmp/b.ne pair for atomic operations.
llvm-svn: 176253
2013-02-28 13:52:07 +00:00
Evgeniy Stepanov 00062b4498 [msan] Implement sanitize_memory attribute.
Shadow checks are disabled and memory loads always produce fully initialized
values in functions that don't have a sanitize_memory attribute. Value and
argument shadow is propagated as usual.

This change also updates blacklist behaviour to match the above.

llvm-svn: 176247
2013-02-28 11:25:14 +00:00