Commit Graph

63965 Commits

Author SHA1 Message Date
Chris Lattner 7a05e6dca2 I have manually decoded the imm field of an insertps one too many
times.  This patch causes llc and llvm-mc (which both default to
verbose-asm) to print out comments after a few common shuffle
instructions that indicate the shuffle mask, e.g.:

	insertps	$113, %xmm3, %xmm0     ## xmm0 = zero,xmm0[1,2],xmm3[1]
	unpcklps	%xmm1, %xmm0    ## xmm0 = xmm0[0],xmm1[0],xmm0[1],xmm1[1]
	pshufd	$1, %xmm1, %xmm1        ## xmm1 = xmm1[1,0,0,0]

This is carefully factored to keep the information extraction (of the
shuffle mask) separate from the printing logic.  I plan to move the
extraction part out somewhere else at some point for other parts of
the x86 backend that want to introspect on the behavior of shuffles.

llvm-svn: 112387
2010-08-28 20:42:31 +00:00
Chris Lattner 112b6ee3f2 fixme accomplished
llvm-svn: 112386
2010-08-28 20:40:28 +00:00
Chris Lattner d39d2aad3f tidy up
llvm-svn: 112385
2010-08-28 20:34:35 +00:00
NAKAMURA Takumi d1ca3e145f Add me to the "blame list"!
And it is my 1st test commit.

llvm-svn: 112384
2010-08-28 20:24:43 +00:00
Dan Gohman d13b1a3581 Remove obsolete keywords which are no longer relevant.
llvm-svn: 112382
2010-08-28 20:14:05 +00:00
Dan Gohman 5458b9ddc6 Remove unions from the vim syntax highlighting.
llvm-svn: 112381
2010-08-28 20:11:28 +00:00
Chris Lattner 94656b1c8c fix the buildvector->insertp[sd] logic to not always create a redundant
insertp[sd] $0, which is a noop.  Before:

_f32:                                   ## @f32
	pshufd	$1, %xmm1, %xmm2
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm2, %xmm3
	addss	%xmm1, %xmm0
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
	insertps	$0, %xmm0, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

after:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movdqa	%xmm2, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

The extra movs are due to a random (poor) scheduling decision.

llvm-svn: 112379
2010-08-28 17:59:08 +00:00
Chris Lattner bcb6090ad0 fix the BuildVector -> unpcklps logic to not do pointless shuffles
when the top elements of a vector are undefined.  This happens all
the time for X86-64 ABI stuff because only the low 2 elements of
a 4 element vector are defined.  For example, on:

_Complex float f32(_Complex float A, _Complex float B) {
  return A+B;
}

We used to produce (with SSE2, SSE4.1+ uses insertps):

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$16, %xmm2, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm1
	movdqa	%xmm2, %xmm0
	unpcklps	%xmm1, %xmm0
	ret

We now produce:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movaps	%xmm2, %xmm0
	unpcklps	%xmm3, %xmm0
	ret

This implements rdar://8368414

llvm-svn: 112378
2010-08-28 17:28:30 +00:00
Chris Lattner 96db6e66f4 improve comments in the unpcklps generating logic, introduce
a new EltStride variable instead of reusing NumElems variable
for a non-obvious purpose.  No functionality change.

llvm-svn: 112377
2010-08-28 17:15:43 +00:00
Michael J. Spencer d75cf22f72 Don't cast Win32 FILETIME structs to int64. Patch by Dimitry Andric!
According to the Microsoft documentation here:
http://msdn.microsoft.com/en-us/library/ms724284%28VS.85%29.aspx

this cast used in lib/System/Win32/Path.inc:

__int64 ft = *reinterpret_cast<__int64*>(&fi.ftLastWriteTime);

should not be done.  The documentation says: "Do not cast a pointer to a
FILETIME structure to either a ULARGE_INTEGER* or __int64* value because
it can cause alignment faults on 64-bit Windows."

llvm-svn: 112376
2010-08-28 16:39:32 +00:00
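
The same documentation recommends copying the two 32-bit halves into a
ULARGE_INTEGER instead. A minimal sketch of that conversion in C (the helper
name is made up for illustration; this is not the code from the commit):

#include <windows.h>

/* Convert a FILETIME to a 64-bit value the way the documentation
   recommends: copy the two DWORD halves into a ULARGE_INTEGER instead
   of reinterpreting the struct's address. */
static ULONGLONG FileTimeTo64(const FILETIME *ft) {
  ULARGE_INTEGER ull;
  ull.LowPart  = ft->dwLowDateTime;
  ull.HighPart = ft->dwHighDateTime;
  return ull.QuadPart;
}
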
Chris Lattner bd24404718 remove the MSIL backend. It isn't maintained, is buggy, has no testcases
and hasn't kept up with ToT.  Approved by Anton.

llvm-svn: 112375
2010-08-28 16:33:36 +00:00
Benjamin Kramer 2e5c14713c Update ocaml test.
llvm-svn: 112364
2010-08-28 10:29:41 +00:00
Benjamin Kramer 1d872f5a9f Remove unions from the ocaml bindings.
llvm-svn: 112363
2010-08-28 09:47:42 +00:00
Bob Wilson 950882be07 Use pseudo instructions for VST1 and VST2.
llvm-svn: 112357
2010-08-28 05:12:57 +00:00
Chris Lattner 13ee795c42 remove unions from LLVM IR. They are severely buggy and not
being actively maintained, improved, or extended.

llvm-svn: 112356
2010-08-28 04:09:24 +00:00
Chris Lattner 504e5100d3 remove the ABCD and SSI passes. They don't have any clients that
I'm aware of, aren't maintained, and LVI will be replacing the value they provide.
nlewycky approved this on irc.

llvm-svn: 112355
2010-08-28 03:51:24 +00:00
Chris Lattner a5217a19a4 remove dead proto
llvm-svn: 112354
2010-08-28 03:45:03 +00:00
Chris Lattner 2c9e253ca9 more dead thing zapping.
llvm-svn: 112353
2010-08-28 03:43:50 +00:00
Chris Lattner d069114613 zap dead method
llvm-svn: 112352
2010-08-28 03:42:45 +00:00
Chris Lattner 50df36ac0a for completeness, allow undef also.
llvm-svn: 112351
2010-08-28 03:36:51 +00:00
Chris Lattner 95bb297c26 squish dead code.
llvm-svn: 112350
2010-08-28 03:21:03 +00:00
Chris Lattner ca936ac966 zap dead code
llvm-svn: 112349
2010-08-28 03:18:45 +00:00
Bruno Cardoso Lopes a982aa24ef Clean up the logic of vector shuffles -> vector shifts.
Also teach this logic how to handle target-specific shuffles if
needed; this is necessary while searching recursively for zeroed
scalar elements in vector shuffle operands.

llvm-svn: 112348
2010-08-28 02:46:39 +00:00
Chris Lattner d0214f3efe handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Duncan Sands 3bd97fec8f Straighten out any triple strings passed on the command line before
they hit the rest of the system.

llvm-svn: 112344
2010-08-28 01:30:02 +00:00
Chris Lattner dd6601048e optimize bitcasts from large integers to vectors into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 ABI.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Dan Gohman e06905d1f0 Completely disable tail calls when fast-isel is enabled, as fast-isel
doesn't currently support dealing with this.

llvm-svn: 112341
2010-08-28 00:51:03 +00:00
Dan Gohman 1e06dbf881 Trim a #include.
llvm-svn: 112340
2010-08-28 00:49:13 +00:00
Dan Gohman fe22f1d3cc Fix an index calculation thinko.
llvm-svn: 112337
2010-08-28 00:39:27 +00:00
Bob Wilson 8ee9394750 We don't need to custom-select VLDMQ and VSTMQ anymore.
llvm-svn: 112336
2010-08-28 00:20:11 +00:00
Benjamin Kramer 83f9ff0452 Update CMake build. Add newline at end of file.
llvm-svn: 112332
2010-08-28 00:11:12 +00:00
Bob Wilson ca5af12920 When merging Thumb2 loads/stores, do not give up when the offset is one of
the special values that for ARM would be used with IB or DA modes.  Fall
through and consider materializing a new base address if it would be
profitable.

llvm-svn: 112329
2010-08-27 23:57:52 +00:00
Owen Anderson cf7f941121 Add a prototype of a new peephole optimizing pass that uses LazyValueInfo to simplify PHIs and selects.
This pass addresses the missed optimizations from PR2581 and PR4420.

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
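
A hypothetical C fragment showing the sort of missed fold such a pass targets
(not taken from PR2581 or PR4420): on the taken path the branch establishes a
range fact that makes a later comparison, and hence the select, trivial.

/* On the taken path x > 0 is known, so x != 0 is always true and the
   select collapses to its first operand. */
int select_under_branch(int x, int a, int b) {
  if (x > 0)
    return (x != 0) ? a : b;   /* foldable to: return a; */
  return b;
}
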
Owen Anderson 38f6b7fe3b Improve the precision of getConstant().
llvm-svn: 112323
2010-08-27 23:29:38 +00:00
Bob Wilson 13ce07fa92 Change ARM VFP VLDM/VSTM instructions to use addressing mode #4, just like
all the other LDM/STM instructions.  This fixes asm printer crashes when
compiling with -O0.  I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode #5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier.  Much of the backend
was not aware of these special cases.  The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode.  I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON.  Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode #4 and clean things up in a lot of places.

llvm-svn: 112322
2010-08-27 23:18:17 +00:00
Chris Lattner 954e9557e3 tidy up test.
llvm-svn: 112321
2010-08-27 23:15:21 +00:00
Chris Lattner b8b7d52631 no really, fix the test.
llvm-svn: 112317
2010-08-27 23:05:54 +00:00
Chris Lattner c8908b4cdb fix this test. It's not clear what it's really testing.
llvm-svn: 112316
2010-08-27 23:05:27 +00:00
Chris Lattner 6c1395f62a Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
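
In the simplest case, with no intervening operations, when x is known to fit
in its low 22 bits the left shift loses nothing and the bits the logical
right shift pulls back in are already zero, so the pair of shifts is
equivalent to a single shl by the difference. A small C check of that
identity (illustrative, not the commit's testcase):

#include <assert.h>
#include <stdint.h>

int main(void) {
  /* x uses only its low 22 bits, so shifting left by 42 drops no set
     bits and the bits the logical right shift by 38 brings back in
     are zero anyway. */
  uint64_t x = 0x3ABCDEu;                /* fits in 22 bits */
  uint64_t two_shifts = (x << 42) >> 38;
  uint64_t one_shift  = x << 4;          /* 42 - 38 */
  assert(two_shifts == one_shift);
  return 0;
}
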
Devang Patel f2855b147f Simplify.
llvm-svn: 112305
2010-08-27 22:25:51 +00:00
Chris Lattner 18d7fc8fc6 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
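
The (x << c) >> c case mentioned above, for unsigned (logical) shifts, is the
same as masking off the top c bits. A quick C illustration, assuming 32-bit
operands and 0 < c < 32:

#include <assert.h>
#include <stdint.h>

/* For logical shifts, (x << c) >> c simply clears the top c bits, which
   is the same as ANDing with an all-ones mask shifted right by c. */
static uint32_t shift_pair(uint32_t x, unsigned c) { return (x << c) >> c; }
static uint32_t masked(uint32_t x, unsigned c)     { return x & (0xFFFFFFFFu >> c); }

int main(void) {
  assert(shift_pair(0xDEADBEEFu, 12) == masked(0xDEADBEEFu, 12));
  return 0;
}
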
Bob Wilson aaff8f539a Fix a comment typo.
llvm-svn: 112302
2010-08-27 21:56:59 +00:00
Bob Wilson af371b49a8 Unsigned value cannot be < 0.
llvm-svn: 112300
2010-08-27 21:44:35 +00:00
Dan Gohman 15871f23e3 When merging adjacent operands, scan ahead and merge all equal
adjacent operands at once, instead of just two at a time.

llvm-svn: 112299
2010-08-27 21:39:59 +00:00
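
A generic sketch of the scan-ahead idea in C, with hypothetical names (this is
not the actual SCEV code): instead of folding one adjacent pair per pass, walk
forward over the entire run of equal operands and fold it in one step.

#include <stddef.h>

/* Collapse each run of equal adjacent values into a single entry whose
   count records how many operands were merged, in one forward scan. */
static size_t merge_adjacent(const int *ops, size_t n,
                             int *merged, size_t *counts) {
  size_t out = 0;
  for (size_t i = 0; i < n; ) {
    size_t j = i + 1;
    while (j < n && ops[j] == ops[i])   /* scan ahead over the whole run */
      ++j;
    merged[out] = ops[i];
    counts[out] = j - i;                /* merge all of them at once */
    ++out;
    i = j;
  }
  return out;
}
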
Eric Christopher ec5030b213 Fix a couple of typos.
Patch by Cameron Esfahani!

llvm-svn: 112297
2010-08-27 21:38:11 +00:00
Chris Lattner 25a198e72b remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.

llvm-svn: 112291
2010-08-27 21:04:34 +00:00
Dan Gohman c866bf4fec Make the {A,+,B}<L> + {C,+,D}<L> --> Other + {A+C,+,B+D}<L>
transformation collect all the addrecs with the same loop
and combine them at once rather than starting everything over
at the first chance.

llvm-svn: 112290
2010-08-27 20:45:56 +00:00
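
For reference, an affine addrec {A,+,B}<L> evaluates to A + B*i at iteration i
of loop L, so the sum of two addrecs over the same loop is (A+C) + (B+D)*i,
which is exactly {A+C,+,B+D}<L>. A tiny numeric check of that identity in C
(illustrative only):

#include <assert.h>

/* Value of the affine addrec {start,+,step} at iteration i. */
static long addrec(long start, long step, long i) { return start + step * i; }

int main(void) {
  long A = 7, B = 3, C = -2, D = 5;
  for (long i = 0; i < 10; ++i)
    assert(addrec(A, B, i) + addrec(C, D, i) == addrec(A + C, B + D, i));
  return 0;
}
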
Chris Lattner 606b76eba6 merge and filecheckize test
llvm-svn: 112289
2010-08-27 20:44:45 +00:00
Chris Lattner c665156e9f merge two tests
llvm-svn: 112288
2010-08-27 20:42:10 +00:00
Bill Wendling 6628431a91 Remove now unneeded command line flag that enables 'optimize compares.'
llvm-svn: 112287
2010-08-27 20:39:09 +00:00