Commit Graph

276 Commits

Author SHA1 Message Date
Evan Cheng 7f02cfa599 Clean up the sub-register implementation by moving subReg information back to
MachineOperand auxInfo. The previous clunky implementation used an external map
to track sub-register uses. That worked because the register allocator uses
a new virtual register for each spilled use. With interval splitting (coming
soon), we may have multiple uses of the same register, some of which use
different sub-registers than others. It's too fragile to constantly
update the information.

llvm-svn: 44104
2007-11-14 07:59:08 +00:00
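
A rough illustration of what the change above amounts to; the struct below is hypothetical and heavily simplified, not the real MachineOperand layout:

    // Old scheme: sub-register uses tracked in an external side table,
    // which breaks down once the same register can appear with different
    // sub-registers (as interval splitting will allow):
    //   std::map<const MachineInstr *, unsigned> SubRegUseMap;

    // New scheme: the operand itself carries the sub-register index.
    struct OperandSketch {
      unsigned Reg;     // virtual or physical register number
      unsigned SubReg;  // sub-register index, 0 if none
    };
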
Dale Johannesen 7a7085f6d3 Add parameter to getDwarfRegNum to permit targets
to use different mappings for EH and debug info;
no functional change yet.
Fix warning in X86CodeEmitter.

llvm-svn: 44056
2007-11-13 19:13:01 +00:00
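
A hedged sketch of the shape this gives the hook; in-tree it is a member of the register-info class, and the parameter name is an assumption:

    class DwarfInfoSketch {
    public:
      // isEH selects the exception-handling register mapping rather than the
      // debug-info mapping on targets where the two differ.
      int getDwarfRegNum(unsigned RegNum, bool isEH) const;
    };
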
Evan Cheng c891ae92dc Fix x86-64 jit: remove reliance on Dwarf numbers.
llvm-svn: 44048
2007-11-13 17:54:34 +00:00
Anton Korobeynikov 4edfea438a Use TableGen to emit information for dwarf register numbers.
This makes DwarfRegNum accept a list of numbers instead.
Added three different "flavours", but only lightly tested on x86-32/linux.
Please check other subtargets if possible.

llvm-svn: 43997
2007-11-11 19:50:10 +00:00
Dale Johannesen dfb85c7831 Revert previous rewrite per chris's comments.
llvm-svn: 43950
2007-11-09 18:07:11 +00:00
Dale Johannesen 04fd82088e Rewrite Dwarf number handling per review comments.
llvm-svn: 43918
2007-11-09 00:47:10 +00:00
Dale Johannesen 1b9de4dd6f Complete conditionalization of Dwarf reg numbers.
Would somebody not on Darwin please make sure this
doesn't break anything.  Exception handling failures
would be the most likely symptom.

llvm-svn: 43844
2007-11-07 21:48:35 +00:00
Dale Johannesen fbe69d2cd6 Interchange Dwarf numbers of ESP and EBP on x86 Darwin.
Much improvement in exception handling.

llvm-svn: 43794
2007-11-07 00:25:05 +00:00
Evan Cheng 9337929aae Use movups to spill / restore SSE registers on targets where stack alignment is
less than 16. This is a temporary solution until dynamic stack alignment is
implemented.

llvm-svn: 43703
2007-11-05 07:30:01 +00:00
Anton Korobeynikov d07d6a411c Fix off-by-one stack offset computations (dwarf information) for callee-saved
registers in the case when the frame pointer was eliminated. This should fix misc. random
EH-related crashes when code is compiled with -fomit-frame-pointer.
Thanks Duncan for nailing this bug!

llvm-svn: 43381
2007-10-26 09:13:24 +00:00
Evan Cheng c92446af1f Fix an unfolding bug.
llvm-svn: 43212
2007-10-22 03:03:20 +00:00
Evan Cheng 45e096c77e Resolve unfold tables ambiguity.
llvm-svn: 43194
2007-10-19 23:50:58 +00:00
Evan Cheng 35ff79370b Local spiller optimization:
Turn a store folding instruction into a load folding instruction. e.g.
     xorl  %edi, %eax
     movl  %eax, -32(%ebp)
     movl  -36(%ebp), %eax
     orl   %eax, -32(%ebp)
=>
     xorl  %edi, %eax
     orl   -36(%ebp), %eax
     mov   %eax, -32(%ebp)
This enables the unfolding optimization for a subsequent instruction which will
also eliminate the newly introduced store instruction.

llvm-svn: 43192
2007-10-19 21:23:22 +00:00
Evan Cheng 463e2ab0ac - Added getOpcodeAfterMemoryUnfold(). It doesn't unfold an instruction, but only returns the opcode of the instruction post unfolding.
- Fix some copy+paste bugs.

llvm-svn: 43153
2007-10-18 22:40:57 +00:00
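
A sketch of the query-only variant described above; the exact parameter list is an assumption:

    class UnfoldQuerySketch {
    public:
      // Does not rewrite anything; just reports which opcode the instruction
      // would have once its memory operand is unfolded.
      unsigned getOpcodeAfterMemoryUnfold(unsigned Opc, bool UnfoldLoad,
                                          bool UnfoldStore) const;
    };
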
Evan Cheng aa9a225699 Use SmallVectorImpl instead of SmallVector with hardcoded size in MRegister public interface.
llvm-svn: 43150
2007-10-18 21:29:24 +00:00
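
The point of taking SmallVectorImpl in a public interface is that callers can then pass a SmallVector of any inline size. A minimal sketch of the idiom (the helper below is hypothetical, not part of this commit):

    #include "llvm/ADT/SmallVector.h"
    using namespace llvm;

    // Accepts any SmallVector<unsigned, N> without baking N into the interface.
    static void collectRegs(SmallVectorImpl<unsigned> &Out) {
      Out.push_back(0);
    }

    void example() {
      SmallVector<unsigned, 4>  A;
      SmallVector<unsigned, 16> B;
      collectRegs(A);   // both calls compile against the same signature
      collectRegs(B);
    }
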
Evan Cheng 7082dcf605 Change unfoldMemoryOperand(). The user is now responsible for passing in the
register used by the unfolded instructions. The user can also specify whether to
unfold the load, the store, or both.

llvm-svn: 42946
2007-10-13 02:35:06 +00:00
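
A hedged sketch of the reworked interface described above (a member of the target instruction-info class in-tree; parameter names are approximate):

    #include "llvm/CodeGen/MachineFunction.h"
    #include "llvm/CodeGen/MachineInstr.h"
    #include "llvm/ADT/SmallVector.h"
    using namespace llvm;

    class UnfoldSketch {
    public:
      // The caller supplies Reg for the unfolded instructions and chooses
      // whether the load, the store, or both are unfolded; the resulting
      // instructions are returned in NewMIs.
      bool unfoldMemoryOperand(MachineFunction &MF, MachineInstr *MI,
                               unsigned Reg, bool UnfoldLoad, bool UnfoldStore,
                               SmallVectorImpl<MachineInstr *> &NewMIs) const;
    };
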
Evan Cheng 09c0fe0a7f Fold load / store into MOV32to32_ and MOV16to16_.
llvm-svn: 42895
2007-10-12 08:38:01 +00:00
Arnold Schwaighofer 9ccea99165 Added tail call optimization to the x86 back end. It can be
enabled by passing -tailcallopt to llc.  The optimization is
performed if the following conditions are satisfied:
* caller and callee use the fastcc calling convention
* ELF/PIC is disabled, OR
  ELF/PIC is enabled, the callee is defined in the module, and the callee
  has protected or hidden visibility

llvm-svn: 42870
2007-10-11 19:40:01 +00:00
Chris Lattner b20757d578 disable this entirely: it is causing use of invalidated iterators and infinite looping.
llvm-svn: 42739
2007-10-07 22:00:31 +00:00
Chris Lattner 8dd66ab3b2 Fix many regressions on x86 by avoiding dereferencing the end iterator.
llvm-svn: 42738
2007-10-07 21:53:12 +00:00
Anton Korobeynikov 67ac2de8bf Oops, I really wanted to commit this part also :)
llvm-svn: 42700
2007-10-06 16:39:43 +00:00
Anton Korobeynikov c59496f737 Move merge code into new helper function.
llvm-svn: 42699
2007-10-06 16:17:49 +00:00
Evan Cheng a851e2b92e Added storeRegToAddr, loadRegFromAddr, and unfoldMemoryOperand's.
llvm-svn: 42624
2007-10-05 01:34:55 +00:00
Evan Cheng 1f79ba6fe6 Refactor code to add load / store folded instructions -> register only
instructions reverse map.

llvm-svn: 42509
2007-10-01 23:44:33 +00:00
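
Conceptually, the folding tables map a register-form opcode to its memory-form (folded) opcode, and the reverse map added here allows going back the other way for unfolding. A rough illustration with hypothetical opcode names, not the in-tree data structures:

    #include <map>

    // forward: register-form opcode -> load/store-folded opcode
    std::map<unsigned, unsigned> RegToMemOpTable;   // e.g. ADD32rr -> ADD32rm
    // reverse: folded opcode -> original register-form opcode, for unfolding
    std::map<unsigned, unsigned> MemToRegOpTable;

    void buildReverseMap() {
      for (std::map<unsigned, unsigned>::iterator I = RegToMemOpTable.begin(),
           E = RegToMemOpTable.end(); I != E; ++I)
        MemToRegOpTable[I->second] = I->first;
    }
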
Evan Cheng 5fb5a1f389 Enabling new condition code modeling scheme.
llvm-svn: 42459
2007-09-29 00:00:36 +00:00
Dan Gohman a1d46c7d0a TargetAsmInfo::getAddressSize() was incorrect for x86-64 and 64-bit targets
other than PPC64. Instead of fixing it, just remove it and fix all the
places that use it to use TargetData::getPointerSize() instead, as there
aren't very many. Most of the references were in DwarfWriter.cpp.

llvm-svn: 42419
2007-09-27 23:12:31 +00:00
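
A minimal sketch of the replacement pattern described above, under the assumption that a TargetMachine is available; the helper name is hypothetical:

    #include "llvm/Target/TargetMachine.h"
    #include "llvm/Target/TargetData.h"
    using namespace llvm;

    // Query TargetData for the pointer size instead of
    // TargetAsmInfo::getAddressSize().
    unsigned pointerSizeFor(const TargetMachine &TM) {
      return TM.getTargetData()->getPointerSize();
    }
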
Evan Cheng 99dc695da5 Use GR64 in 64-bit mode.
llvm-svn: 42417
2007-09-27 21:50:05 +00:00
Evan Cheng 8728c3376a - Added MRegisterInfo::getCrossCopyRegClass() hook. For register classes where reg to reg copies are not possible, this returns another register class which registers in the specified register class can be copied to (and copy back from).
- X86 copyRegToReg() now supports copying between EFLAGS and GR32 / GR64 registers.

llvm-svn: 42372
2007-09-26 21:31:07 +00:00
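
A hedged sketch of the new hook's shape (a member of MRegisterInfo in-tree; the declaration below is an approximation):

    class TargetRegisterClass;   // defined in LLVM's target headers

    class CrossCopySketch {
    public:
      // For a register class whose registers cannot be copied reg-to-reg
      // (EFLAGS on x86), return a class they can be copied through instead
      // (GR32 / GR64 in this commit).
      const TargetRegisterClass *
      getCrossCopyRegClass(const TargetRegisterClass *RC) const;
    };
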
Evan Cheng c1e4e3743b Allow copyRegToReg to emit cross register classes copies.
Tested with "make check"!

llvm-svn: 42346
2007-09-26 06:25:56 +00:00
Anton Korobeynikov e291f727e3 Correctly restore stack pointer after realignment in main() on Cygwin/Mingw32
llvm-svn: 42332
2007-09-26 00:13:34 +00:00
Evan Cheng 5321fa44f4 Missing load / store folding entries.
llvm-svn: 42323
2007-09-25 22:10:43 +00:00
Evan Cheng e95f391ef1 Added support for new condition code modeling scheme (i.e. physical register dependency). These are a bunch of instructions that are duplicated so the x86 backend can support both the old and new schemes at the same time. They will be deleted after
all the kinks are worked out.

llvm-svn: 42285
2007-09-25 01:57:46 +00:00
Dan Gohman 82dcfd2dab The code that used the StartLabelId label was removed, so remove the
code that creates the label too.

llvm-svn: 42265
2007-09-24 16:44:26 +00:00
Dan Gohman 4dbc582a36 Fix several more entries in the x86 reload/remat folding tables.
llvm-svn: 42162
2007-09-20 14:17:21 +00:00
Evan Cheng 513874cf3c PSHUFDmi, etc. are actually folding a load, not a store.
llvm-svn: 42147
2007-09-19 19:02:47 +00:00
Dan Gohman 8cca8469de Move the entries for 64-bit CMP, IMUL, and a few others into the correct
tables so that they are eligible for reload/remat folding. And add
entries for JMP and CALL.

llvm-svn: 42094
2007-09-18 14:59:14 +00:00
Dale Johannesen ff7e443792 Remove RSTRegClass case from loadRegFromStackSlot
and storeRegToStackSlot.  Evan and I concluded this
should never be needed and it appears to be true.
(If it is needed, adjustment would be needed for
long double to work.)

llvm-svn: 42049
2007-09-17 20:15:38 +00:00
Dale Johannesen 98d3a08d8f Remove the assumption that FP's are either float or
double from some of the many places in the optimizers
it appears, and do something reasonable with x86
long double.
Make APInt::dump() public, remove newline, use it to
dump ConstantSDNode's.
Allow APFloats in FoldingSet.
Expand X86 backend handling of long doubles (conversions
to/from int, mostly).

llvm-svn: 41967
2007-09-14 22:26:36 +00:00
Dan Gohman 9da02f5ee2 Remove isReg, isImm, and isMBB, and change all their users to use
isRegister, isImmediate, and isMachineBasicBlock, which are equivalent,
and more popular.

llvm-svn: 41958
2007-09-14 20:33:02 +00:00
Evan Cheng 637395e6bd It's not safe to rematerialize MOV32r0 etc. by simply cloning the original
instruction. These are implemented with xor which will modify the conditional
code. They should be rematerialized as move instructions.

llvm-svn: 41802
2007-09-10 20:48:53 +00:00
Owen Anderson e2f23a3abf Add lengthof and endof templates that hide a lot of sizeof computations.
Patch by Sterling Stein!

llvm-svn: 41758
2007-09-07 04:06:50 +00:00
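
A minimal standalone sketch of what such templates look like; the actual LLVM definitions may differ in detail:

    #include <cstddef>

    // Number of elements in a fixed-size array, without sizeof arithmetic.
    template <typename T, size_t N>
    inline size_t lengthof(T (&)[N]) { return N; }

    // One-past-the-end pointer of a fixed-size array.
    template <typename T, size_t N>
    inline T *endof(T (&Array)[N]) { return Array + N; }

    static const unsigned Opcodes[] = { 10, 20, 30 };
    // lengthof(Opcodes) == 3, endof(Opcodes) == Opcodes + 3
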
Evan Cheng ebb8540067 Added support to fold X86 load / store instructions. This allow rematerialized loads to be folded into their uses.
llvm-svn: 41599
2007-08-30 05:54:07 +00:00
Duncan Sands 7741427a09 Move getX86RegNum into X86RegisterInfo and use it
in the trampoline lowering.  Lookup the jump and
mov opcodes for the trampoline rather than hard
coding them.

llvm-svn: 41577
2007-08-29 19:01:20 +00:00
Evan Cheng d204d08b97 Make sure epilogue esp adjustment is placed before any terminator and pop instructions.
llvm-svn: 40538
2007-07-26 17:45:41 +00:00
Anton Korobeynikov 0c46451d2b Heal EH handling stuff by emitting correct offsets to callee-saved registers.
Pretty hackish, but the code itself is a dirty mess, so we won't make anything worse. :)

llvm-svn: 40472
2007-07-24 21:07:39 +00:00
Evan Cheng 94b5a80b93 Change instruction description to split OperandList into OutOperandList and
InOperandList. This gives one important piece of information: the number of results
produced by an instruction.
An example of the change:
def ADD32rr  : I<0x01, MRMDestReg, (ops GR32:$dst, GR32:$src1, GR32:$src2),
                 "add{l} {$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;
=>
def ADD32rr  : I<0x01, MRMDestReg, (outs GR32:$dst), (ins GR32:$src1, GR32:$src2),
                 "add{l} {$src2, $dst|$dst, $src2}",
                 [(set GR32:$dst, (add GR32:$src1, GR32:$src2))]>;

llvm-svn: 40033
2007-07-19 01:14:50 +00:00
Evan Cheng 7b5b06805a Only adjust esp around calls in presence of alloca.
llvm-svn: 40028
2007-07-19 00:42:05 +00:00
Evan Cheng 8941071ae1 Use MOV instead of LEA to restore ESP if callee-saved frame size is 0; if previous instruction updates esp, fold it in.
llvm-svn: 40018
2007-07-18 21:26:06 +00:00
Evan Cheng 97b5dc63d7 Fold prologue esp update when possible.
llvm-svn: 39984
2007-07-17 21:26:42 +00:00
Evan Cheng b2bb4b4040 Make sure not to break eh_return.
llvm-svn: 39978
2007-07-17 18:40:47 +00:00