Commit Graph

280 Commits

Chris Lattner 9c1efcd4f6 In 32-bit PIC mode, for targets with a GOT, x86 emits jump table
entries with @GOTOFF, which is EK_GPRel32BlockAddress.

llvm-svn: 94474
2010-01-25 23:38:14 +00:00
Mon P Wang 586d997e98 Improved widening loads by adding support for wider loads if
the alignment allows.  Fixed a bug where we didn't use a
vector load/store for PR5626.

llvm-svn: 94338
2010-01-24 00:05:03 +00:00
Evan Cheng 0e8b9e32d1 Use sbb x, x to materialize the carry bit in a GPR. The result is all ones or all zeros.
llvm-svn: 91381
2009-12-15 00:53:42 +00:00
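
Illustration (not part of the commit): a minimal C++ model of what sbb x, x computes. Subtracting a register from itself with borrow yields 0 - CF, so the result is all ones when the carry flag is set and zero otherwise.

#include <cassert>
#include <cstdint>

// Models sbb %eax, %eax: eax = eax - eax - CF = -CF, so the register
// becomes all ones exactly when the carry flag was set.
uint32_t materializeCarry(bool carryFlag) {
  return static_cast<uint32_t>(0) - static_cast<uint32_t>(carryFlag);
}

int main() {
  assert(materializeCarry(true) == 0xFFFFFFFFu);
  assert(materializeCarry(false) == 0u);
}
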
Evan Cheng 493b882f80 Optimize splat of a scalar load into a shuffle of a vector load when it's legal. e.g.
vector_shuffle (scalar_to_vector (i32 load (ptr + 4))), undef, <0, 0, 0, 0>
=>
vector_shuffle (v4i32 load ptr), undef, <1, 1, 1, 1>

iff ptr is 16-byte aligned (or can be made 16-byte aligned).

llvm-svn: 90984
2009-12-09 21:00:30 +00:00
Nate Begeman 3a313df69b x86 vector shuffle cleanup/fixes:
1. rename the movhp patfrag to movlhps, since that's what it actually matches
2. eliminate the bogus movhps load and store patterns; they were incorrect.  The load transforms are already handled (correctly) by shufps/unpack.
3. revert a recent test change to its correct form.

llvm-svn: 86415
2009-11-07 23:17:15 +00:00
Kenneth Uildriks 07119737aa Add code to check at SelectionDAGISel::LowerArguments time to see if return values can be lowered to registers. Coming soon, code to perform sret-demotion if return values cannot be lowered to registers
llvm-svn: 86324
2009-11-07 02:11:54 +00:00
Dan Gohman f7c4299312 Initial x86 support for BlockAddresses.
llvm-svn: 85557
2009-10-30 01:28:02 +00:00
Evan Cheng 83896a59e1 Add a second ValueType argument to isFPImmLegal.
llvm-svn: 85361
2009-10-28 01:43:28 +00:00
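
Illustration: a self-contained mock of the hook as it stands after this change (the commit below introduced it). The names and types are stand-ins, not LLVM's actual API; the point is that the target now sees the immediate's value type too.

#include <cassert>

// Mock of isFPImmLegal with the second (value type) argument: a target can
// accept an immediate for one FP type while rejecting it for another.
enum class ValTy { f32, f64 };

struct MockTargetLowering {
  bool isFPImmLegal(double Imm, ValTy VT) const {
    if (Imm == 0.0 || Imm == 1.0)
      return true;               // constants many FPUs materialize directly
    return VT == ValTy::f32;     // hypothetical f32-only fast path
  }
};

int main() {
  MockTargetLowering TLI;
  assert(TLI.isFPImmLegal(1.0, ValTy::f64));
  assert(TLI.isFPImmLegal(3.5, ValTy::f32));
  assert(!TLI.isFPImmLegal(3.5, ValTy::f64));
}
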
Evan Cheng 16993aa30b Do away with addLegalFPImmediate. Add a target hook isFPImmLegal which returns true if the fp immediate can be natively codegened by the target.
llvm-svn: 85281
2009-10-27 19:56:55 +00:00
Nate Begeman 18df82a20c Add support for matching shuffle patterns with palignr.
llvm-svn: 84459
2009-10-19 02:17:23 +00:00
Dan Gohman 48b185d6f7 Improve MachineMemOperand handling.
- Allocate MachineMemOperands and MachineMemOperand lists in MachineFunctions.
  This eliminates MachineInstr's std::list member and allows the data to be
  created by isel and live for the remainder of codegen, avoiding a lot of
  copying and unnecessary translation. This also shrinks MemSDNode.
- Delete MemOperandSDNode. Introduce MachineSDNode which has dedicated
  fields for MachineMemOperands.
- Change MemSDNode to have a MachineMemOperand member instead of its own
  fields with the same information. This introduces some redundancy, but
  it's more consistent with what MachineInstr will eventually want.
- Ignore alignment when searching for redundant loads for CSE, but remember
  the greatest alignment.

Target-specific code which previously used MemOperandSDNodes with generic
SDNodes now use MemIntrinsicSDNodes, with opcodes in a designated range
so that the SelectionDAG framework knows that MachineMemOperand information
is available.

llvm-svn: 82794
2009-09-25 20:36:54 +00:00
Evan Cheng b82b5514fe Fix funky comments.
llvm-svn: 82314
2009-09-19 10:09:15 +00:00
Evan Cheng 9827ad39a7 Fix PR4926. When the target hook EmitInstrWithCustomInserter() inserts new basic blocks and updates the CFG, it should also inform sdisel of the changes so the phi source operands will come from the right basic blocks.
llvm-svn: 82311
2009-09-19 09:51:03 +00:00
Evan Cheng 270d0f986f Enhance EmitInstrWithCustomInserter() so the target can specify CFG changes that sdisel will use to properly complete phi nodes.
No functionality change yet.

llvm-svn: 82273
2009-09-18 21:02:19 +00:00
Dan Gohman 722b1eefdb Add support for using the FLAGS result of or, xor, and and instructions
on x86, to avoid explicit test instructions. A few existing tests changed
due to arbitrary register allocation differences.

llvm-svn: 82263
2009-09-18 19:59:53 +00:00
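
Illustration: the kind of source pattern that benefits. The and already sets ZF, so the compare against zero needs no separate test; the assembly in the comments is indicative, not taken from the commit.

#include <cstdint>

//   before: andl %esi, %edi ; testl %edi, %edi ; jne taken
//   after:  andl %esi, %edi ; jne taken
bool anyCommonBits(uint32_t a, uint32_t b) {
  return (a & b) != 0;  // the comparison folds into the and's FLAGS result
}

int main() { return anyCommonBits(6, 3) ? 0 : 1; }
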
Sandeep Patel 68c5f477fa Retype from unsigned to CallingConv::ID accordingly. Approved by Bob Wilson.
llvm-svn: 80773
2009-09-02 08:44:58 +00:00
Chris Lattner d5f4fcceae refactor select 'sched insertion' out to its own method.
llvm-svn: 80764
2009-09-02 05:57:00 +00:00
Duncan Sands 9cf8bcb69d Revert commit 80428. It completely broke exception
handling on x86-32 Linux.

llvm-svn: 80592
2009-08-31 16:45:16 +00:00
Bill Wendling 39bb29f7fe - Add target lowering methods to get the preferred format for the FDE and LSDA
encodings.
- Make some of the values emitted by the FDEs dependent upon the pointer
  size. This is in line with how GCC does things. And it has the benefit of
  working for Darwin in 64-bit mode now.

llvm-svn: 80428
2009-08-29 12:20:54 +00:00
Eric Christopher 9fe912de5f Implement sse4.2 string/text processing instructions:
Add patterns and instruction encoding information.
Add custom lowering to deal with hardwired return register of
uncertain type (xmm0).

llvm-svn: 79377
2009-08-18 22:50:32 +00:00
Bill Wendling bae6b2cca3 Reapply r79127. It was fixed by d0k.
llvm-svn: 79136
2009-08-15 21:21:19 +00:00
Bill Wendling d3fade656f Revert r79127. It was causing compilation errors.
llvm-svn: 79135
2009-08-15 21:14:01 +00:00
Evan Cheng 52d4e64711 Change allowsUnalignedMemoryAccesses to take a type argument, since some targets
support unaligned mem access only for certain types. (Should it be size
instead?)

ARM v7 supports unaligned access for i16 and i32, some v6 variants support it
as well.

llvm-svn: 79127
2009-08-15 19:23:44 +00:00
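
Illustration: a mock of the per-type query this commit introduces (names and types are stand-ins, not LLVM's API), shaped after the ARM example in the message.

#include <cassert>

enum class ValTy { i8, i16, i32, i64 };

// Unaligned-access legality is now answered per type rather than globally.
bool allowsUnalignedMemoryAccesses(ValTy VT, bool hasV7Ops) {
  switch (VT) {
  case ValTy::i16:
  case ValTy::i32:
    return hasV7Ops;  // ARMv7 (and some v6 variants) handle these in hardware
  default:
    return false;
  }
}

int main() {
  assert(allowsUnalignedMemoryAccesses(ValTy::i32, true));
  assert(!allowsUnalignedMemoryAccesses(ValTy::i64, true));
  assert(!allowsUnalignedMemoryAccesses(ValTy::i32, false));
}
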
Dan Gohman 0700a56860 On x86-64, for a varargs function, don't store the xmm registers to
the register save area if %al is 0. This avoids touching xmm
registers when they aren't actually used.

llvm-svn: 79061
2009-08-15 01:38:56 +00:00
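
For context, the %al convention here is the standard x86-64 SysV varargs rule: the caller passes in %al an upper bound on the number of vector registers used by the call. A hedged sketch of calls that do and do not involve XMM state:

#include <cstdio>

int main() {
  std::printf("%d\n", 42);   // no FP varargs: the caller may set %al = 0
  std::printf("%f\n", 2.5);  // one XMM argument: the caller sets %al >= 1
}
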
Owen Anderson 9f94459d24 Split EVT into MVT and EVT, the former representing _just_ a primitive type, while
the latter is capable of representing either a primitive or an extended type.

llvm-svn: 78713
2009-08-11 20:47:22 +00:00
Owen Anderson 53aa7a960c Rename MVT to EVT, in preparation for splitting SimpleValueType out into its own struct type.
llvm-svn: 78610
2009-08-10 22:56:29 +00:00
Owen Anderson c30530d105 Start moving TargetLowering away from using full MVTs and towards SimpleValueType, which will simplify the privatization of IntegerType in the future.
llvm-svn: 78584
2009-08-10 18:56:59 +00:00
Anton Korobeynikov 741ea0d7fd Better handle the kernel code model. Also, generalize things and fix one
subtle bug with the small code model.

llvm-svn: 78255
2009-08-05 23:01:26 +00:00
Dan Gohman f9bbcd1afd Major calling convention code refactoring.
Instead of awkwardly encoding calling-convention information with ISD::CALL,
ISD::FORMAL_ARGUMENTS, ISD::RET, and ISD::ARG_FLAGS nodes, TargetLowering
provides three virtual functions for targets to override:
LowerFormalArguments, LowerCall, and LowerRet, which replace the custom
lowering done on the special nodes. They provide the same information, but
in a more immediately usable format.

This also reworks much of the target-independent tail call logic. The
decision of whether or not to perform a tail call is now cleanly split
between the target-independent portions and the target-dependent portion
in IsEligibleForTailCallOptimization.

This also synchronizes all in-tree targets, to help enable future
refactoring and feature work.

llvm-svn: 78142
2009-08-05 01:29:28 +00:00
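
Illustration: a mock of the new shape (hook names follow the commit; parameter lists are elided placeholders, not the real signatures). Targets override virtual functions instead of custom-lowering special nodes.

#include <iostream>

struct TargetLowering {
  virtual ~TargetLowering() = default;
  virtual void LowerFormalArguments() = 0;  // replaces ISD::FORMAL_ARGUMENTS
  virtual void LowerCall() = 0;             // replaces ISD::CALL
  virtual void LowerRet() = 0;              // replaces ISD::RET
};

struct X86TargetLowering : TargetLowering {
  void LowerFormalArguments() override { std::cout << "lower x86 args\n"; }
  void LowerCall() override { std::cout << "lower x86 call\n"; }
  void LowerRet() override { std::cout << "lower x86 ret\n"; }
};

int main() {
  X86TargetLowering TLI;
  TLI.LowerFormalArguments();
  TLI.LowerCall();
  TLI.LowerRet();
}
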
Dan Gohman 9139b02cda Fix typos in comments.
llvm-svn: 77806
2009-08-01 21:25:00 +00:00
Evan Cheng e62288fdd4 Optimize some common usage patterns of atomic built-ins __sync_add_and_fetch() and __sync_sub_and_fetch().
When the return value is not used (i.e. we only care about the value in memory), x86 does not have to use xadd to implement these. Instead, it can use add, sub, inc, and dec instructions with the "lock" prefix.

This is currently implemented using a bit of an instruction selection trick. The issue is that the target-independent pattern produces one output and a chain, and we want to map it into one that just outputs a chain. The current trick is to select it into a merge_values with the first definition being an implicit_def. The proper solution is to add new ISD opcodes for the no-output variant. The DAG combiner can then transform the node before it gets to target node selection.

Problem #2 is that we are adding a whole bunch of x86 atomic instructions when in fact these instructions are identical to the non-lock versions. We need a way to attach target-specific information to target nodes and have it carried over to machine instructions. The asm printer (or JIT) can use this information to add the "lock" prefix.

llvm-svn: 77582
2009-07-30 08:33:02 +00:00
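
Illustration: the same usage patterns in modern C++, with std::atomic standing in for the __sync builtins. Only the call whose result is used needs lock xadd; the others can become lock add/sub/inc/dec.

#include <atomic>

void retain(std::atomic<long> &refcount) {
  refcount.fetch_add(1);             // result unused: lock incq/addq suffices
}

bool release(std::atomic<long> &refcount) {
  return refcount.fetch_sub(1) == 1; // result used: needs lock xadd
}

int main() {
  std::atomic<long> rc{1};
  retain(rc);
  release(rc);
  return release(rc) ? 0 : 1;
}
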
Eric Christopher f7802a33ce Add support for gcc __builtin_ia32_ptest{z,c,nzc} intrinsics. Lower
to ptest instruction plus setcc. Revamp ptest instruction. Add test.

llvm-svn: 77407
2009-07-29 00:28:05 +00:00
Chris Lattner 5849d22bd1 Copy ExpandInlineAsm to TargetLowering from TargetAsmInfo.
llvm-svn: 76441
2009-07-20 17:51:36 +00:00
Chris Lattner 88765d4831 change a few methods to be static functions.
llvm-svn: 75089
2009-07-09 02:44:11 +00:00
Bill Wendling 512ff7353e Update comments to make it clear that the function alignment is the log2 of the
alignment in bytes, not the raw byte count.

llvm-svn: 74624
2009-07-01 18:50:55 +00:00
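
Illustration: the convention the clarified comments describe, as a one-line check.

#include <cassert>

int main() {
  unsigned logAlign = 4;           // the value stored for the function
  assert((1u << logAlign) == 16);  // it requests 16-byte, not 4-byte, alignment
}
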
Bill Wendling 31ceb1bcba Add an "alignment" field to the MachineFunction object. It makes more sense to
have the alignment be calculated up front, and have the back-ends obey whatever
alignment is decided upon.

This enables future work such as precise no-op placement and the like.

llvm-svn: 74564
2009-06-30 22:38:32 +00:00
Devang Patel d1c7d34924 Add new function attribute - noimplicitfloat
Update code generator to use this attribute and remove NoImplicitFloat target option.
Update llc to set this attribute when -no-implicit-float command line option is used.

llvm-svn: 72959
2009-06-05 21:57:13 +00:00
Dale Johannesen 5234d3795f Revert 72707 and 72709, for the moment.
llvm-svn: 72712
2009-06-02 03:12:52 +00:00
Dale Johannesen 0b8ca79253 Make the implicit inputs and outputs of target-independent
ADDC/ADDE use MVT::i1 (later, whatever it gets legalized to)
instead of MVT::Flag.  Remove CARRY_FALSE in favor of 0; adjust
all target-independent code to use this format.

Most targets will still produce a Flag-setting target-dependent
version when selection is done.  X86 is converted to use i32
instead, which means TableGen needs to produce different code
in xxxGenDAGISel.inc.  This keys off the new supportsHasI1 bit
in xxxInstrInfo, currently set only for X86; in principle this
is temporary and should go away when all other targets have
been converted.  All relevant X86 instruction patterns are
modified to represent setting and using EFLAGS explicitly.  The
same can be done on other targets.

The immediate behavior change is that an ADC/ADD pair is no
longer tightly coupled in the X86 scheduler; they can be
separated by instructions that don't clobber the flags (MOV).
I will soon add some peephole optimizations based on using
other instructions that set the flags to feed into ADC.

llvm-svn: 72707
2009-06-01 23:27:20 +00:00
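
Illustration (not LLVM code): what the ADDC/ADDE pair expresses, a wide add split into a low half that produces an i1 carry and a high half that consumes it.

#include <cassert>
#include <cstdint>

int main() {
  uint64_t aLo = ~0ull, aHi = 1, bLo = 1, bHi = 2;
  uint64_t lo = aLo + bLo;          // ADDC: adds the low halves, produces a carry
  bool carry = lo < aLo;            // the i1 carry value
  uint64_t hi = aHi + bHi + carry;  // ADDE: adds the high halves plus the carry
  assert(lo == 0 && carry && hi == 4);
}
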
Evan Cheng a9cda8abf2 Added an optimization that narrows load / op / store sequences where the 'op' is a bit-twiddling instruction and its second operand is an immediate. If the bits touched by 'op' can be handled with a narrower instruction, reduce the width of the load and store as well. This happens a lot with bitfield manipulation code.
e.g.
orl     $65536, 8(%rax)
=>
orb     $1, 10(%rax)

Since narrowing is not always a win, e.g. i32 -> i16 is a loss on x86, the DAG combiner consults the target before performing the optimization.

llvm-svn: 72507
2009-05-28 00:35:15 +00:00
Eli Friedman dfe4f25355 Make the x86 backend custom-lower UINT_TO_FP and FP_TO_UINT on 32-bit
systems instead of attempting to promote them to a 64-bit SINT_TO_FP or 
FP_TO_SINT.  This is in preparation for removing the type legalization 
code from LegalizeDAG: once type legalization is gone from LegalizeDAG, 
it won't be able to handle the i64 operand/result correctly.

This isn't quite ideal, but I don't think any other operation for any 
target ends up in this situation, so treating this case specially seems 
reasonable.

llvm-svn: 72324
2009-05-23 09:59:16 +00:00
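
For reference, the promotion being abandoned here is sound in itself: every u32 fits in a non-negative i64, so i32 UINT_TO_FP can go through a signed 64-bit convert, but that requires i64 arithmetic that 32-bit x86 must itself custom-lower.

#include <cassert>
#include <cstdint>

int main() {
  uint32_t u = 0xFFFFFFFFu;  // the largest u32
  // zext to i64, then signed int-to-fp: exact for every u32 input.
  double d = static_cast<double>(static_cast<int64_t>(u));
  assert(d == 4294967295.0);
}
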
Nate Begeman 5f829d896d Implement review feedback for vector shuffle work.
llvm-svn: 70372
2009-04-29 05:20:52 +00:00
Nate Begeman 8d6d4b9289 2nd attempt, fixing SSE4.1 issues and implementing feedback from Duncan.
PR2957

ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.

llvm-svn: 70225
2009-04-27 18:41:29 +00:00
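
Illustration: the new mask representation modeled in plain C++ (not the SDNode API), an array of ints with -1 meaning UNDEF; here a <1,1,1,1> splat over a 4-wide vector.

#include <array>
#include <cassert>

int main() {
  std::array<int, 4> mask = {1, 1, 1, 1};   // each lane reads element 1; -1 = UNDEF
  std::array<int, 4> src = {10, 20, 30, 40};
  std::array<int, 4> out{};
  for (int i = 0; i < 4; ++i)
    out[i] = (mask[i] < 0) ? 0 : src[mask[i]];
  assert(out[0] == 20 && out[1] == 20 && out[2] == 20 && out[3] == 20);
}
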
Rafael Espindola b93db668b3 Revert 69952. Causes testsuite failures on Linux x86-64.
llvm-svn: 69967
2009-04-24 12:40:33 +00:00
Nate Begeman bb881d66f4 PR2957
ISD::VECTOR_SHUFFLE now stores an array of integers representing the shuffle
mask internal to the node, rather than taking a BUILD_VECTOR of ConstantSDNodes
as the shuffle mask.  A value of -1 represents UNDEF.

In addition to eliminating the creation of illegal BUILD_VECTORS just to 
represent shuffle masks, we are better about canonicalizing the shuffle mask,
resulting in substantially better code for some classes of shuffles.

A clean up of x86 shuffle code, and some canonicalizing in DAGCombiner is next.

llvm-svn: 69952
2009-04-24 03:42:54 +00:00
Rafael Espindola 3b2df10c9e Re-apply 68552.
Tested by bootstrapping llvm-gcc and using that to build llvm.

llvm-svn: 68645
2009-04-08 21:14:34 +00:00
Dan Gohman ad3e549a53 Implement support for modeling implicit zero-extension on x86-64
with SUBREG_TO_REG, teach SimpleRegisterCoalescing to coalesce
SUBREG_TO_REG instructions (which are similar to INSERT_SUBREG
instructions), and teach the DAGCombiner to take advantage of this on
targets which support it. This eliminates many redundant
zero-extension operations on x86-64.

This adds a new TargetLowering hook, isZExtFree. It's similar to
isTruncateFree, except it only applies to actual definitions, and not
no-op truncates which may not zero the high bits.

Also, this adds a new optimization to SimplifyDemandedBits: transform
operations like x+y into (zext (add (trunc x), (trunc y))) on targets
where all the casts are no-ops. In contexts where the high part of the
add is explicitly masked off, this allows the mask operation to be
eliminated. Fix the DAGCombiner to avoid undoing these transformations
to eliminate casts on targets where the casts are no-ops.

Also, this adds a new two-address lowering heuristic. Since
two-address lowering runs before coalescing, it helps to be able to
look through copies when deciding whether commuting and/or
three-address conversion are profitable.

Also, fix a bug in LiveInterval::MergeInClobberRanges. It didn't handle
the case that a clobber range extended both before and beyond an
existing live range. In that case, multiple live ranges need to be
added. This was exposed by the new subreg coalescing code.

Remove 2008-05-06-SpillerBug.ll. It was bugpoint-reduced, and the
spiller behavior it was looking for no longer occurs with the new
instruction selection.

llvm-svn: 68576
2009-04-08 00:15:30 +00:00
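
The SimplifyDemandedBits identity described above, checked in C++: when only the low 32 bits of a 64-bit add are demanded, the add can be performed at 32 bits and zero-extended.

#include <cassert>
#include <cstdint>

int main() {
  uint64_t x = 0x123456789abcdef0ull, y = 0x0fedcba987654321ull;
  uint64_t masked = (x + y) & 0xffffffffull;  // high part explicitly masked off
  uint64_t narrow = static_cast<uint64_t>(    // zext(add(trunc x, trunc y))
      static_cast<uint32_t>(x) + static_cast<uint32_t>(y));
  assert(masked == narrow);
}
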
Bill Wendling 4aa25b79f9 Temporarily revert r68552. This was causing a failure in the self-hosting LLVM
builds.

--- Reverse-merging (from foreign repository) r68552 into '.':
U    test/CodeGen/X86/tls8.ll
U    test/CodeGen/X86/tls10.ll
U    test/CodeGen/X86/tls2.ll
U    test/CodeGen/X86/tls6.ll
U    lib/Target/X86/X86Instr64bit.td
U    lib/Target/X86/X86InstrSSE.td
U    lib/Target/X86/X86InstrInfo.td
U    lib/Target/X86/X86RegisterInfo.cpp
U    lib/Target/X86/X86ISelLowering.cpp
U    lib/Target/X86/X86CodeEmitter.cpp
U    lib/Target/X86/X86FastISel.cpp
U    lib/Target/X86/X86InstrInfo.h
U    lib/Target/X86/X86ISelDAGToDAG.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.cpp
U    lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.h
U    lib/Target/X86/AsmPrinter/X86IntelAsmPrinter.h
U    lib/Target/X86/X86ISelLowering.h
U    lib/Target/X86/X86InstrInfo.cpp
U    lib/Target/X86/X86InstrBuilder.h
U    lib/Target/X86/X86RegisterInfo.td

llvm-svn: 68560
2009-04-07 22:35:25 +00:00
Rafael Espindola 1edda06792 Reduce code duplication on the TLS implementation.
This introduces a small regression in the generated code
quality in the case where we are just computing addresses, not
loading values.

Will work on it and on X86-64 support.

llvm-svn: 68552
2009-04-07 21:37:46 +00:00
Evan Cheng a84a318873 When optimizing a mul by immediate into two, the resulting muls should get an x86-specific node to keep the DAG combiner from hacking on them further.
llvm-svn: 68066
2009-03-30 21:36:47 +00:00
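
Illustration of the kind of decomposition involved (values are my own): x * 45 as (x * 9) * 5; tagging the intermediate multiplies with a target-specific opcode keeps the DAG combiner from folding them back into one mul.

#include <cassert>
#include <cstdint>

int main() {
  uint64_t x = 7;
  uint64_t once = x * 45;        // a single multiply
  uint64_t twice = (x * 9) * 5;  // two multiplies, each a single x86 lea
  assert(once == twice);
}
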