Commit Graph

22763 Commits

Author SHA1 Message Date
Duraid Madina 4698e4f5fe fix storing booleans (grawp missed this one)
llvm-svn: 26120
2006-02-11 07:33:17 +00:00
Duraid Madina 0010a92375 now short immediates will get matched (previously constants were all
triggering fat movl 64-bit-immediate instructions)

llvm-svn: 26119
2006-02-11 07:32:15 +00:00
Chris Lattner 05bf90dddf Make this check stricter. Disallow loop exit blocks from being shared by
loops and their subloops.

llvm-svn: 26118
2006-02-11 02:13:17 +00:00
Evan Cheng a86ba85dc5 Prevent certain nodes that have already been selected from being folded into
X86 addressing mode. Currently we do not allow any node whose target node
produces a chain as well as any node that is at the root of the addressing
mode expression tree.

llvm-svn: 26117
2006-02-11 02:05:36 +00:00
Chris Lattner a6ae101afa remove dead expr
llvm-svn: 26116
2006-02-11 01:43:37 +00:00
Jim Laskey 5995d0160c Reorg for integration with gcc4. Old style debug info will not be passed through
to SelIDAG.

llvm-svn: 26115
2006-02-11 01:01:30 +00:00
Chris Lattner fbadd7e1ee implement unswitching of loops with switch stmts and selects in them
llvm-svn: 26114
2006-02-11 00:43:37 +00:00
Chris Lattner f1b151684d Update PHI nodes in successors of exit blocks.
llvm-svn: 26113
2006-02-10 23:26:14 +00:00
Chris Lattner fe4151efe7 Reform the unswitching code in terms of edge splitting, not block splitting.
llvm-svn: 26112
2006-02-10 23:16:39 +00:00
Evan Cheng 2b6f78b664 Nicer code. :-)
llvm-svn: 26111
2006-02-10 22:46:26 +00:00
Evan Cheng d49cc3634e Added X86 isel debugging stuff.
llvm-svn: 26110
2006-02-10 22:24:32 +00:00
Chris Lattner 975486db5e Remove a level of indirection.
llvm-svn: 26109
2006-02-10 21:32:11 +00:00
Chris Lattner ec6b40a093 Fix a case where UnswitchTrivialCondition broke critical edges with
phi's in the successors

llvm-svn: 26108
2006-02-10 19:08:15 +00:00
Chris Lattner fcb8a3aa76 Use the auto-generated call matcher. Remove a broken impl of the frameaddr/returnaddr
intrinsics.

Autogen frameindex matcher

llvm-svn: 26107
2006-02-10 07:35:42 +00:00
Chris Lattner 0c4dea4cb2 Update to new-style flags usage, simplifying the .td file
llvm-svn: 26106
2006-02-10 06:58:25 +00:00
Evan Cheng 907be3e24c Remove a completed entry; add a new entry about fisttp op
llvm-svn: 26105
2006-02-10 05:48:15 +00:00
Chris Lattner 6e263155a6 add some notes, move some code around. Implement unswitching of loops
with branches on partially invariant computations.

llvm-svn: 26104
2006-02-10 02:30:37 +00:00
Chris Lattner 4935417a84 Move code around to be more logical, no functionality change.
llvm-svn: 26103
2006-02-10 02:01:22 +00:00
Chris Lattner 3fc3148b85 When unswitching a trivial loop, do admit we are doing it! :)
llvm-svn: 26102
2006-02-10 01:36:35 +00:00
Chris Lattner ed7a67b0de Implement unconditional unswitching of 'trivial' loops, those loops that contain
branches in their entry block that control whether or not the loop is a noop.

llvm-svn: 26101
2006-02-10 01:24:09 +00:00
Chris Lattner 4f0e66df6a Simplify control flow a bit, note that unswitch preserves canonical loop form
llvm-svn: 26098
2006-02-09 22:15:42 +00:00
Evan Cheng 101e4b916a Match tblgen change.
llvm-svn: 26096
2006-02-09 22:12:53 +00:00
Evan Cheng 5940bc210e Call InsertISelMapEntry rather than map insertion operator to prevent overly
aggressive inlining. This reduces Select_store frame size from 24k to 10k.

llvm-svn: 26095
2006-02-09 22:12:27 +00:00
Evan Cheng a1ef3ec5b5 Added SelectionDAG::InsertISelMapEntry(). This is used to workaround the gcc
problem where it inlines the map insertion call too aggressively. Before this
change it was producing a frame size of 24k for Select_store(), now it's down
to 10k (by calling this method rather than calling the map insertion operator).

llvm-svn: 26094
2006-02-09 22:11:03 +00:00
Chris Lattner 8976219850 Make the threshold a parameter
llvm-svn: 26093
2006-02-09 20:15:48 +00:00
Chris Lattner 4c0bd5bcdf Done
llvm-svn: 26091
2006-02-09 20:00:19 +00:00
Chris Lattner 5259aa1c86 Enable LSR by default for SPARC: it is a clear win.
llvm-svn: 26090
2006-02-09 19:59:55 +00:00
Chris Lattner 2826e0511b Simplify the loop-unswitch pass, by not even trying to unswitch loops with
uses of loop values outside the loop.  We need loop-closed SSA form to do
this right, or to use SSA rewriting if we really care.

llvm-svn: 26089
2006-02-09 19:14:52 +00:00
Chris Lattner 24cd2fa269 Fix 80-column violations
llvm-svn: 26088
2006-02-09 07:41:14 +00:00
Chris Lattner 4534dd59a3 Enhance MVIZ in three ways:
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.

This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.

llvm-svn: 26087
2006-02-09 07:38:58 +00:00
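A minimal C sketch of item 3 above, assuming an expression whose low bits are statically known (the function name and values are made up for illustration, not taken from the commit):

#include <assert.h>
#include <stdint.h>

/* The low four bits of X are fully known (binary 0101) no matter what Y is,
   so an optimizer tracking known-0/known-1 bits can fold (AND X, 15) to 5. */
uint32_t fold_example(uint32_t Y) {
    uint32_t X = (Y << 4) | 5;
    assert((X & 15u) == 5u);  /* holds for every Y */
    return X & 15u;           /* foldable to the constant 5 */
}
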
Chris Lattner 11f380962b new testcase
llvm-svn: 26086
2006-02-09 07:38:30 +00:00
Evan Cheng d1b82d8db0 Match getTargetNode() changes (now return SDNode* instead of SDOperand).
llvm-svn: 26085
2006-02-09 07:17:49 +00:00
Evan Cheng 9ce762d51f Match getTargetNode() changes (now returns SDNode* instead of SDOperand).
llvm-svn: 26084
2006-02-09 07:16:09 +00:00
Evan Cheng d3f1db93c1 More changes to reduce frame size.
Move all getTargetNode() out of SelectionDAG.h into SelectionDAG.cpp. This
prevents them from being inlined.
Change getTargetNode() so they return SDNode * instead of SDOperand to prevent
copying. It should also help compilation speed.

llvm-svn: 26083
2006-02-09 07:15:23 +00:00
Chris Lattner e19b89f59c this apparently passes on linux
llvm-svn: 26082
2006-02-09 07:12:13 +00:00
Chris Lattner c75d5b093d add an option to turn on LSR.
llvm-svn: 26080
2006-02-09 05:06:36 +00:00
Chris Lattner 729ffe95c4 simplify this code now that each constant pool entry is not separately allocated
llvm-svn: 26079
2006-02-09 04:49:59 +00:00
Chris Lattner f6190821da Adjust to MachineConstantPool interface change: instead of keeping a
value/alignment pair for each constant, keep a value/offset pair.

llvm-svn: 26078
2006-02-09 04:46:04 +00:00
Chris Lattner 8bb44151e1 instead of keeping track of Constant/alignment pairs, actually compute the
offset of each entry from the start of the constant pool.

llvm-svn: 26077
2006-02-09 04:44:32 +00:00
Chris Lattner ba97264e72 rename fields of constant pool entries
llvm-svn: 26076
2006-02-09 04:22:52 +00:00
Chris Lattner 1cd22199dd Use a MachineConstantPoolEntry struct instead of a pair to hold
constant pool entries.

llvm-svn: 26075
2006-02-09 04:21:49 +00:00
Chris Lattner 47f7319f00 Simplify code, alignment must be specified now.
llvm-svn: 26074
2006-02-09 02:26:04 +00:00
Chris Lattner 0ece4214ad Assert invariants
llvm-svn: 26073
2006-02-09 02:25:42 +00:00
Chris Lattner c360656c1d Require an alignment.
llvm-svn: 26072
2006-02-09 02:24:25 +00:00
Chris Lattner 4576bb74d5 Make MachineConstantPool entries alignments explicit
llvm-svn: 26071
2006-02-09 02:23:13 +00:00
Chris Lattner 832d78d981 Always pass in an alignment.
llvm-svn: 26070
2006-02-09 02:19:16 +00:00
Chris Lattner d94a3d2c8a provide an explicit alignment for cp entries
llvm-svn: 26069
2006-02-09 02:15:30 +00:00
Chris Lattner ea25aba382 Add a comment: value is log2
llvm-svn: 26068
2006-02-09 02:10:15 +00:00
Evan Cheng 6dc90ca172 Change Select() from
SDOperand Select(SDOperand N);
to
void Select(SDOperand &Result, SDOperand N);

llvm-svn: 26067
2006-02-09 00:37:58 +00:00
Chris Lattner 2e07d6370a Darwin doesn't support #APP/#NO_APP
llvm-svn: 26066
2006-02-08 23:42:22 +00:00
Chris Lattner ed87dcd45f Add support for assembler directives that wrap inline asm
llvm-svn: 26065
2006-02-08 23:41:56 +00:00
Chris Lattner 26e385a623 Rename BSel -> PPCBSel for the benefit of doxygen users.
Move the methods out of line.
Remove unused Debug.h stuff.
Teach getNumBytesForInstruction to know the size of an inline asm.

llvm-svn: 26064
2006-02-08 19:33:26 +00:00
Jim Laskey 15f9188d73 Disable this test for the time being as debug is brought up to speed.
llvm-svn: 26063
2006-02-08 18:17:06 +00:00
Chris Lattner b4fc050f0f add a simple optimization
llvm-svn: 26062
2006-02-08 17:47:22 +00:00
Chris Lattner 6f5a00fd68 Mention that delta can be used to reduce some Front-end problems.
Patch by Marco Matthies, thanks!

llvm-svn: 26061
2006-02-08 17:01:37 +00:00
Chris Lattner 4999a5c0c7 Add SRoA to the lexicon. Patch by Marco Matthies!
llvm-svn: 26060
2006-02-08 16:59:49 +00:00
Evan Cheng df5ef770b6 Added options -cflags, -cxxflags, and -ldflags to override the default C
compilation, C++ compilation, and linker options.
e.g. These are the options I use for testing on my x86 iMac:
nice ./NightlyTest.pl -release -cflags "-Os -DNDEBUG -fomit-frame-pointer" -cxxflags "-Os -DNDEBUG -finline-functions -felide-constructors -fomit-frame-pointer"

llvm-svn: 26057
2006-02-08 09:08:06 +00:00
Chris Lattner ab2dc4d70d Simplify some code, reducing calls to MaskedValueIsZero. Implement a minor
optimization where we reduce the number of bits in AND masks when possible.

llvm-svn: 26056
2006-02-08 07:34:50 +00:00
Evan Cheng 0ba2808775 Remove -pedantic. It no longer works.
llvm-svn: 26055
2006-02-08 07:28:22 +00:00
Chris Lattner b7e074ab9b more email -> README moving
llvm-svn: 26054
2006-02-08 07:12:07 +00:00
Chris Lattner f7b962d7d7 Emit the 'mr' pseudoop for easier reading.
llvm-svn: 26053
2006-02-08 06:56:40 +00:00
Chris Lattner 45bb34b715 Add some random notes, not high-prio
llvm-svn: 26052
2006-02-08 06:52:06 +00:00
Chris Lattner b97142eec0 Move emails from nate into public places
llvm-svn: 26051
2006-02-08 06:43:51 +00:00
Chris Lattner 5997cf9381 Use EraseInstFromFunction in a few cases to put the uses of the removed
instruction onto the worklist (in case they are now dead).

Add a really trivial local DSE implementation to help out bitfield code.
We now fold this:

struct S {
    unsigned char a : 1, b : 1, c : 1, d : 2, e : 3;
    S();
};

S::S() : a(0), b(0), c(1), d(0), e(6) {}

to this:

void %_ZN1SC1Ev(%struct.S* %this) {
entry:
        %tmp.1 = getelementptr %struct.S* %this, int 0, uint 0
        store ubyte 38, ubyte* %tmp.1
        ret void
}

much earlier (in gccas instead of only in gccld after DSE runs).

llvm-svn: 26050
2006-02-08 03:25:32 +00:00
Chris Lattner 06a0ed1ee0 Implement some more interesting select sccp cases. This implements:
test/Regression/Transforms/SCCP/select.ll

llvm-svn: 26049
2006-02-08 02:38:11 +00:00
Chris Lattner d82b2036e9 new testcase for more interesting select sccp cases
llvm-svn: 26048
2006-02-08 02:37:40 +00:00
Chris Lattner a10e23c19f Compile this:
xori r6, r2, 1
        rlwinm r6, r6, 0, 31, 31
        cmpwi cr0, r6, 0
        bne cr0, LBB1_3 ; endif

to this:

        rlwinm r6, r2, 0, 31, 31
        cmpwi cr0, r6, 0
        beq cr0, LBB1_3 ; endif

llvm-svn: 26047
2006-02-08 02:13:15 +00:00
Chris Lattner 49c0457529 Add some happy helper methods.
llvm-svn: 26046
2006-02-08 02:05:45 +00:00
Chris Lattner ddba3289b5 Fix a problem in my patch yesterday, causing a miscompilation of 176.gcc
llvm-svn: 26045
2006-02-08 01:20:23 +00:00
Evan Cheng adeb8fb5a2 Fixed a local common symbol bug.
llvm-svn: 26044
2006-02-07 23:32:58 +00:00
Evan Cheng ec212fb66d For ELF, .comm takes alignment value as the optional 3rd argument. It must be
specified in bytes.

llvm-svn: 26043
2006-02-07 21:54:08 +00:00
Chris Lattner 203b2f1288 Implement getConstraintType for PPC.
llvm-svn: 26042
2006-02-07 20:16:30 +00:00
Chris Lattner 56a2621ef5 getConstraintType should be virtual.
llvm-svn: 26041
2006-02-07 20:13:44 +00:00
Chris Lattner 44314827d6 Fix Transforms/InstCombine/2006-02-07-SextZextCrash.ll
llvm-svn: 26040
2006-02-07 19:07:40 +00:00
Chris Lattner ee278d9417 new testcase that caused instcombine to crash on 176.gcc last night.
llvm-svn: 26039
2006-02-07 19:07:25 +00:00
Evan Cheng 5a76680de1 Darwin ABI issues: weak, linkonce, etc. dynamic-no-pic support is complete.
Also fixed a function stub bug. Added weak and linkonce support for
x86 Linux.

llvm-svn: 26038
2006-02-07 08:38:37 +00:00
Evan Cheng 227e469c25 Remind myself to add PIC and static asm printer support.
llvm-svn: 26037
2006-02-07 08:35:44 +00:00
Chris Lattner 92a6865321 Generalize MaskedValueIsZero into a ComputeMaskedNonZeroBits function, which
is just as efficient as MVIZ and is also more general.

Fix a few minor bugs introduced in recent patches

llvm-svn: 26036
2006-02-07 08:05:22 +00:00
Chris Lattner c3ebf40031 Make MaskedValueIsZero take a uint64_t instead of a ConstantIntegral as a
mask.  This allows the code to be simpler and more efficient.

Also, generalize some of the cases in MVIZ a bit, making it slightly more aggressive.

llvm-svn: 26035
2006-02-07 07:27:52 +00:00
Chris Lattner 77defbae0a Use Type::getIntegralTypeMask() to simplify some code
llvm-svn: 26034
2006-02-07 07:00:41 +00:00
Chris Lattner 2590e511d8 Implement the beginnings of a facility for simplifying expressions based on
'demanded bits', inspired by Nate's work in the dag combiner.  This isn't
complete, but needs two unrelated instcombiner changes to continue.

llvm-svn: 26033
2006-02-07 06:56:34 +00:00
Chris Lattner f1197c9f21 add a new Type::getIntegralTypeMask() method, which is useful for clients that
want to do bitwise inspection of integer types.

llvm-svn: 26032
2006-02-07 06:17:10 +00:00
Jeff Cohen 2439669c6f The interpreter assumes that the caller of runFunction() must be lli, and
therefore the function being called must be a main() returning an int.  The
consequences when these assumptions are false are not good, so don't assume
them.

llvm-svn: 26031
2006-02-07 05:29:44 +00:00
Jeff Cohen 69e849014c Teach the interpreter to handle global variables that are added to a module after
interpretation has begun.  The JIT already handles this situation correctly, and
the interpreter can already handle new functions being added.

llvm-svn: 26030
2006-02-07 05:11:57 +00:00
Jeff Cohen c89a2a1a83 Fix some truncation warnings.
llvm-svn: 26029
2006-02-07 03:34:35 +00:00
Chris Lattner 49b3414322 fix an error compiling with -pedantic
llvm-svn: 26028
2006-02-07 01:12:49 +00:00
Chris Lattner 15a6c4c444 Add the simple PPC integer constraints
llvm-svn: 26027
2006-02-07 00:47:13 +00:00
Evan Cheng 1630c2848f Hoist all SDOperand declarations within a Select_{opcode}() to the top level
to reduce stack memory usage. This is intended to work around the gcc bug.

llvm-svn: 26026
2006-02-07 00:37:41 +00:00
Chris Lattner d62a3bfa66 Eliminate the printCallOperand method, using a 'call' modifier on
printOperand instead.

llvm-svn: 26025
2006-02-06 23:41:19 +00:00
Chris Lattner 5c76b499aa Add support for modifier strings in machine instr descriptions. This allows
us to avoid creating lots of "Operand" types with different printers, instead
we can fold several together and use modifiers.  For example, we can now use:

${target:call} to say that the operand should be printed like a 'call' operand.

llvm-svn: 26024
2006-02-06 23:40:48 +00:00
Chris Lattner 033e32e5d9 Simplify the variant handling code, no functionality change.
llvm-svn: 26023
2006-02-06 22:43:28 +00:00
Chris Lattner 2bf2c8d7e7 Change prototype
llvm-svn: 26022
2006-02-06 22:18:19 +00:00
Chris Lattner 34f74c180a Add support for modifier characters to operand printers
llvm-svn: 26021
2006-02-06 22:17:23 +00:00
Chris Lattner f1eac134b9 Change the prototype of PrintAsmOperand
llvm-svn: 26020
2006-02-06 22:16:41 +00:00
Jim Laskey 0458fb76fd Goodbye nasty macro.
llvm-svn: 26019
2006-02-06 21:54:05 +00:00
Jim Laskey b643ff5546 Edit requests from Sabre.
llvm-svn: 26018
2006-02-06 19:12:02 +00:00
Andrew Lenharth f5b7f16259 see what this alignment thing will do
llvm-svn: 26017
2006-02-06 17:15:17 +00:00
Jim Laskey 85263234a8 Changing model for the construction of debug information.
llvm-svn: 26016
2006-02-06 15:33:21 +00:00
Jim Laskey 58d48c8118 We seem to have settled on __DWARF for the section name.
llvm-svn: 26015
2006-02-06 14:16:15 +00:00
Evan Cheng ebf05bea1b At the end of isel, select a replacement node for each handle that does not
have one. This can happen if a load's real uses are dead (i.e. they do not
have uses themselves).

llvm-svn: 26014
2006-02-06 08:12:55 +00:00
Evan Cheng 308d0090e6 Name change.
llvm-svn: 26013
2006-02-06 06:03:35 +00:00
Evan Cheng d5f2ba0d6f - Update load folding checks to match those auto-generated by tblgen.
- Manually select SDOperand's returned by TryFoldLoad which make up the
  load address.

llvm-svn: 26012
2006-02-06 06:02:33 +00:00
Evan Cheng aa26555b25 Handle HANDLENODE: just return itself.
llvm-svn: 26011
2006-02-05 08:46:14 +00:00
Evan Cheng bfa4b7cc75 Complex pattern isel code shouldn't select nodes.
llvm-svn: 26010
2006-02-05 08:45:01 +00:00
Chris Lattner 463fa70eaa Fix the Sparc backend with Evan's recent tblgen changes
llvm-svn: 26009
2006-02-05 08:35:50 +00:00
Chris Lattner 806024f7a3 SparcV8 -> Sparc
llvm-svn: 26008
2006-02-05 08:30:45 +00:00
Chris Lattner 8467e5d6af This xform isn't safe
llvm-svn: 26007
2006-02-05 08:26:16 +00:00
Nate Begeman 8c9cd461df Back out previous commit, it isn't safe.
llvm-svn: 26006
2006-02-05 08:23:00 +00:00
Nate Begeman 3dc8b89493 fold c1 << (x + c2) into (c1 << c2) << x. fix a warning.
llvm-svn: 26005
2006-02-05 08:07:24 +00:00
Chris Lattner 4b8fcc229f some stuff is done
llvm-svn: 26004
2006-02-05 07:54:37 +00:00
Chris Lattner 2e90b732fa Turn A % (C << N), where C is 2^k, into A & ((C << N)-1) [urem only].
Turn A / (C1 << N), where C1 is "1<<C2" into A >> (N+C2) [udiv only].

Tested with: rem.ll:test5, div.ll:test10

llvm-svn: 26003
2006-02-05 07:54:04 +00:00
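A minimal C sanity check of the two rewrites above (arbitrary example values; the names A, C2, N and D mirror the commit's notation but are otherwise hypothetical):

#include <assert.h>
#include <stdint.h>

int main(void) {
    uint32_t A  = 0xDEADBEEFu;
    uint32_t C2 = 3, N = 4;
    uint32_t D  = (1u << C2) << N;   /* divisor: a shifted power of two (128) */

    assert(A / D == A >> (N + C2));  /* udiv: A / (C1 << N)  ==  A >> (N + C2)   */
    assert(A % D == (A & (D - 1)));  /* urem: A % (C << N)   ==  A & ((C << N)-1) */
    return 0;
}
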
Chris Lattner b624bb5f0d new testcases
llvm-svn: 26002
2006-02-05 07:52:47 +00:00
Nate Begeman c89fdf1eb3 Handle urem by shifted powers of 2.
llvm-svn: 26001
2006-02-05 07:36:48 +00:00
Nate Begeman 25d178bece handle combining A / (B << N) into A >>u (log2(B)+N) when B is a power of 2
llvm-svn: 26000
2006-02-05 07:20:23 +00:00
Evan Cheng a28b764886 Use SelectRoot() as the entry to any tblgen based isel.
llvm-svn: 25998
2006-02-05 06:51:51 +00:00
Evan Cheng 54cb1833a4 Use SelectRoot() as entry of any tblgen based isel.
llvm-svn: 25997
2006-02-05 06:46:41 +00:00
Chris Lattner 51473c3379 Encourage use of the V8 ABI over the V9 ABI.
llvm-svn: 25996
2006-02-05 06:44:17 +00:00
Evan Cheng 368f20e53d Allow more loads to be folded which were previously prevented from happening
due to an ordering issue, i.e. they were selected for chain use first.
Now at load select time, check if it is being selected for a chain use and if
it has only a single real use. If so, return a HANDLENODE (with the load as
its operand) in its place and record it.
When it is folded or the load is selected for a real use, the isel records it
as the replacement for the HANDLENODE. The replacement is done when all nodes
are selected.
This scheme exposed a couple of problems where cycles can happen. (See comments
in EmitMatchCode() for descriptions of the problems and their workaround /
solutions.) These problems have been resolved with a small compile time
penalty.

llvm-svn: 25995
2006-02-05 06:43:12 +00:00
Chris Lattner a07b2d5b3a This document is out of date. :(
llvm-svn: 25994
2006-02-05 06:40:12 +00:00
Chris Lattner a38fe5fc8e V8 -> Sparc
llvm-svn: 25993
2006-02-05 06:39:36 +00:00
Chris Lattner 25777c8c25 Remove the SparcV8 backend. It has been renamed to be the Sparc backend.
llvm-svn: 25992
2006-02-05 06:33:29 +00:00
Chris Lattner a3e5b2c61c remove V8 reference
llvm-svn: 25991
2006-02-05 06:32:59 +00:00
Evan Cheng d37645c07d * Added SDNode::isOnlyUse().
* Fix hasNUsesOfValue(), it should be const.

llvm-svn: 25990
2006-02-05 06:29:23 +00:00
Chris Lattner 03e6bc6676 SparcV8 -> Sparc
llvm-svn: 25989
2006-02-05 06:26:43 +00:00
Chris Lattner 1b82081e0e SparcV8 -> Sparc
llvm-svn: 25988
2006-02-05 05:56:51 +00:00
Chris Lattner 90263ea903 These were moved to ../SPARC
llvm-svn: 25987
2006-02-05 05:53:48 +00:00
Chris Lattner a590d80e5b move V8 testcases here
llvm-svn: 25986
2006-02-05 05:52:55 +00:00
Chris Lattner 158e1f519c Rename SPARC V8 target to be the LLVM SPARC target.
llvm-svn: 25985
2006-02-05 05:50:24 +00:00
Chris Lattner c0e48c6c58 add a note
llvm-svn: 25984
2006-02-05 05:27:35 +00:00
Evan Cheng d19d51f414 Re-commit the last bit of change that was backed out.
llvm-svn: 25983
2006-02-05 05:25:07 +00:00
Evan Cheng 7ab4579c68 Re-committing the last bit of change. It shouldn't break PPC this time.
llvm-svn: 25982
2006-02-05 05:22:18 +00:00
Chris Lattner cbab28414e make sure that global doubles are aligned to 8 bytes
llvm-svn: 25981
2006-02-05 01:46:49 +00:00
Chris Lattner c070cb685d Use getPreferredAlignmentLog.
llvm-svn: 25980
2006-02-05 01:45:04 +00:00
Chris Lattner 1b1a8731c0 Use the asmprinter to find out what the preferred alignment of a global is.
This patch speeds up 172.mgrid from 31.81s to 11.39s on darwin/ppc.
Many many thanks to Nate for tracking down the root cause of the issue.

llvm-svn: 25979
2006-02-05 01:30:45 +00:00
Chris Lattner a9b2525d3e Implement the AsmPrinter::getPreferredAlignmentLog method.
llvm-svn: 25978
2006-02-05 01:29:18 +00:00
Chris Lattner 9d0b2f7b50 add a new method, getPreferredAlignmentLog.
llvm-svn: 25977
2006-02-05 01:24:06 +00:00
Andrew Lenharth 1fcff15f86 linkage fix for weak functions
llvm-svn: 25976
2006-02-04 19:13:09 +00:00
Jeff Cohen 95ae171d5b Fix VC++ warning.
llvm-svn: 25975
2006-02-04 16:20:31 +00:00
Chris Lattner d30c4991a1 Use SCEVExpander::InsertCastOfTo instead of our own code. This reduces
#LLVM LOC, and auto-cse's cast instructions.

llvm-svn: 25974
2006-02-04 09:52:43 +00:00
Chris Lattner a6da69cab0 Pull the InsertCastOfTo out of the header, implement CSE'ing of arguments.
llvm-svn: 25973
2006-02-04 09:51:53 +00:00
Chris Lattner 5107cdcc73 Refactor a bunch of code into a non-inlined method
llvm-svn: 25972
2006-02-04 09:51:33 +00:00
Chris Lattner 22b4edfb42 Temporarily revert this patch, which probably breaks with the
tblgen patch reverted.

llvm-svn: 25971
2006-02-04 09:24:16 +00:00
Chris Lattner 586b06281c Temporarily revert the last change, which breaks PPC and other targets that
DO select things.

llvm-svn: 25970
2006-02-04 09:23:06 +00:00
Chris Lattner b6a1865bca Value# select instructions, allowing -gcse to remove duplicates
llvm-svn: 25969
2006-02-04 09:15:29 +00:00
Evan Cheng ce87cac555 Complex pattern's custom matcher should not call Select() on any operands.
Select them afterwards if it returns true.

llvm-svn: 25968
2006-02-04 08:50:49 +00:00
Chris Lattner ab146eae38 Custom lower VAARG for the case when we are doing vaarg(double). In this
case, the double being loaded may not be 8-byte aligned, so we have to use
our standard bit_convert game.

llvm-svn: 25967
2006-02-04 08:31:30 +00:00
Chris Lattner a1fa8b1c88 Fix a nasty typo that broke functions with big stack frames.
llvm-svn: 25966
2006-02-04 08:04:21 +00:00
Chris Lattner d096b2f3e0 fix a bug in my last checkin
llvm-svn: 25965
2006-02-04 07:48:46 +00:00
Chris Lattner 2959f0003e Fix two significant bugs in LSR:
1. When rewriting code in outer loops, sometimes we would insert code into
   inner loops that is invariant in that loop.
2. Notice that 4*(2+x) is 8+4*x and use that to simplify expressions.

This is a performance neutral change.

llvm-svn: 25964
2006-02-04 07:36:50 +00:00
Nate Begeman a1e895cf97 Remove some stuff that now works
llvm-svn: 25963
2006-02-04 07:29:35 +00:00
Chris Lattner 32ed2b45c7 add a note
llvm-svn: 25962
2006-02-04 07:07:31 +00:00
Chris Lattner 2c0956bcea Two changes:
1. Treat FMOVD as a copy instruction, to help with coalescing in V9 mode
2. When in V9 mode, insert FMOVD instead of FpMOVD instructions, as we don't
   ever rewrite FpMOVD instructions into FMOVS instructions, thus we just end
   up with commented out copies!
This should fix a bunch of failures in V9 mode on sparc.

llvm-svn: 25961
2006-02-04 06:58:46 +00:00
Evan Cheng f9adce90bf Get rid of some memory leaks identified by Valgrind
llvm-svn: 25960
2006-02-04 06:49:00 +00:00
Chris Lattner ca05e6a837 add a method
llvm-svn: 25959
2006-02-04 05:49:01 +00:00
Chris Lattner 2d2e2e3c0e Let bugpoint work on sparc with v9 instructions enabled.
llvm-svn: 25958
2006-02-04 05:02:27 +00:00
Jeff Cohen 57a004abfe Fix VC++ warning.
llvm-svn: 25957
2006-02-04 03:27:39 +00:00
Jeff Cohen 0e7aee2382 Keep Visual Studio informed.
llvm-svn: 25956
2006-02-04 03:27:04 +00:00
Chris Lattner 3b48431333 Add initial support for immediates. This allows us to compile this:
int %rlwnm(int %A, int %B) {
  %C = call int asm "rlwnm $0, $1, $2, $3, $4", "=r,r,r,n,n"(int %A, int %B, int 4, int 17)
  ret int %C
}

into:

_rlwnm:
        or r2, r3, r3
        or r3, r4, r4
        rlwnm r2, r2, r3, 4, 17    ;; note the immediates :)
        or r3, r2, r2
        blr

llvm-svn: 25955
2006-02-04 02:26:14 +00:00
Evan Cheng 0a977c95aa Remove an unnecessary predicate.
llvm-svn: 25954
2006-02-04 02:23:01 +00:00
Evan Cheng 11613a5219 Separate FILD and FILD_FLAG, the latter is only used for SSE2. It produces a
flag so it can be flagged to a FST.

llvm-svn: 25953
2006-02-04 02:20:30 +00:00
Chris Lattner 65ad53feb3 Initial early support for non-register operands, like immediates
llvm-svn: 25952
2006-02-04 02:16:44 +00:00
Chris Lattner ee1dadbccf implementation of some methods for inlineasm
llvm-svn: 25951
2006-02-04 02:13:02 +00:00
Chris Lattner 3fb81ddda3 Add some methods for inline asm support.
llvm-svn: 25950
2006-02-04 02:12:09 +00:00
Chris Lattner c93403a7fb Handle another case exposed on X86.
llvm-svn: 25949
2006-02-03 23:50:46 +00:00
Chris Lattner 71d20c4e18 Fix a nasty problem on two-address machines in the following situation:
store EAX -> [ss#0]
[ss#0] += 1
...
use(EAX)

In this case, it is not valid to rewrite this as:


store EAX -> [ss#0]
EAX += 1
store EAX -> [ss#0]  ;;; this would also delete the store above
...
use(EAX)

... because EAX is not dead at that point.  Keep track of which registers
we are allowed to clobber, and which ones we aren't, and don't clobber the
ones we're not supposed to.  :)

This should resolve the issues on X86 last night.

llvm-svn: 25948
2006-02-03 23:28:46 +00:00
Chris Lattner 507a3a7bd1 significantly simplify the VirtRegMap code by pulling the SpillSlotsAvailable
and PhysRegsAvailable maps out into a new AvailableSpills struct.  No
functionality change.

This paves the way for a bugfix, coming up next.

llvm-svn: 25947
2006-02-03 23:13:58 +00:00
Nate Begeman 20a894282d Implement some feedback from sabre
llvm-svn: 25946
2006-02-03 22:38:07 +00:00
Nate Begeman dc7bba9ffe Add a framework for eliminating instructions that produce undemanded bits.
llvm-svn: 25945
2006-02-03 22:24:05 +00:00
Chris Lattner 81e66abd1e add a note
llvm-svn: 25944
2006-02-03 22:06:45 +00:00
Chris Lattner d079dbb9b0 another case Nate came up with
llvm-svn: 25943
2006-02-03 22:05:41 +00:00
Chris Lattner 277462e20f add a note
llvm-svn: 25942
2006-02-03 21:25:23 +00:00
Chris Lattner f68fd20286 remove some #ifdef'd out code, which should properly be in the dag combiner anyway.
llvm-svn: 25941
2006-02-03 20:13:59 +00:00
Chris Lattner a1d312c6ea remove an old comment
llvm-svn: 25940
2006-02-03 18:59:39 +00:00
Chris Lattner 23d55f2547 Remove the X86PeepholeOptimizerPass, a truly horrible old hack that is now
obsolete.  yaay :)

llvm-svn: 25939
2006-02-03 18:54:24 +00:00
Chris Lattner c408558638 When rewriting frame instructions, emit the appropriate small-immediate
instruction when possible.

llvm-svn: 25938
2006-02-03 18:20:04 +00:00
Chris Lattner 2bcfc52af6 node predicates add to the complexity of a pattern. This ensures that the
X86 backend attempts to match small-immediate versions of instructions before
the full size immediate versions.

llvm-svn: 25937
2006-02-03 18:06:02 +00:00
Chris Lattner ca76917388 Teach sparc to fold loads/stores into copies.
Remove the dead getRegClassForType method
minor formatting changes.

llvm-svn: 25936
2006-02-03 07:06:25 +00:00
Chris Lattner 6091407783 remove dead fn
llvm-svn: 25935
2006-02-03 06:51:34 +00:00
Nate Begeman 22e251abf1 Add common code for reassociating ops in the dag combiner
llvm-svn: 25934
2006-02-03 06:46:56 +00:00
Evan Cheng d65461c232 Added a (store (op (load ...) ...) ...) folding test case.
llvm-svn: 25933
2006-02-03 06:46:41 +00:00
Chris Lattner d7d98611ca Implement isLoadFromStackSlot and isStoreToStackSlot
llvm-svn: 25932
2006-02-03 06:44:54 +00:00
Evan Cheng 717db484a5 (store (op (load ...))) folding problem. In the generated matching code,
Chain is initially set to the chain operand of the store node; when it reaches the
load, if it matches the load, then Chain is set to the chain operand of the
load.

However, if the matching code that follows this fails, isel moves on to the
next pattern but it does not restore Chain to the chain operand of the store.
So when it tries to match the next store / op / load pattern it would fail on
the Chain == load.getOperand(0) test.

The solution is for each chain operand to get a unique name. e.g. Chain10.

llvm-svn: 25931
2006-02-03 06:22:41 +00:00
Chris Lattner a23b04acdb remove some target-indep and implemented notes
llvm-svn: 25930
2006-02-03 06:22:11 +00:00
Chris Lattner d1aaee03ce target independent notes
llvm-svn: 25929
2006-02-03 06:21:43 +00:00
Nate Begeman fc567d85d5 Flesh out a couple of the items in the README
llvm-svn: 25928
2006-02-03 05:17:06 +00:00
Jeff Cohen 3276ff7ac6 Fix VC++ compilation error caused by using a std::map iterator variable to receive
a std::multimap iterator value.  For some reason, GCC doesn't have a problem with this.

llvm-svn: 25927
2006-02-03 03:48:54 +00:00
Chris Lattner e18ef0d4a6 Remove move copies and dead stuff by not clobbering the result reg of a noop copy.
llvm-svn: 25926
2006-02-03 03:16:14 +00:00
Andrew Lenharth 1318240fd0 isStoreToStackSlot
llvm-svn: 25925
2006-02-03 03:07:37 +00:00
Chris Lattner 774d4a190b Simplify some code
llvm-svn: 25924
2006-02-03 03:06:49 +00:00
Chris Lattner a1eac9b978 the X86 backend no longer needs to delete its own noop copies
llvm-svn: 25923
2006-02-03 02:59:58 +00:00
Chris Lattner 1ef239afb4 Add code that checks for noop copies, which triggers when either:
1. a target doesn't know how to fold load/stores into copies, or
2. the spiller rewrites the input to a copy to the same register as the dest
   instead of to the reloaded reg.

This will be moved/improved in the near future, but allows elimination of
some ancient x86 hacks.  This eliminates 92 copies from SMG2000 on X86 and
163 copies from 252.eon.

llvm-svn: 25922
2006-02-03 02:02:59 +00:00
Chris Lattner f0a2d66d1c Add a note
llvm-svn: 25921
2006-02-03 01:49:49 +00:00
Evan Cheng 02b5b9cdd6 Added case HANDLENODE to getOperationName().
llvm-svn: 25920
2006-02-03 01:33:01 +00:00
Chris Lattner b7f24de4c8 Physregs may hold multiple stack slot values at the same time. Keep track
of this, and use it to our advantage (bwahahah).  This allows us to eliminate another
60 instructions from smg2000 on PPC (probably significantly more on X86).  A common
old-new diff looks like this:

        stw r2, 3304(r1)
-       lwz r2, 3192(r1)
        stw r2, 3300(r1)
-       lwz r2, 3192(r1)
        stw r2, 3296(r1)
-       lwz r2, 3192(r1)
        stw r2, 3200(r1)
-       lwz r2, 3192(r1)
        stw r2, 3196(r1)
-       lwz r2, 3192(r1)
+       or r2, r2, r2
        stw r2, 3188(r1)

and

-       lwz r31, 604(r1)
-       lwz r13, 604(r1)
-       lwz r14, 604(r1)
-       lwz r15, 604(r1)
-       lwz r16, 604(r1)
-       lwz r30, 604(r1)
+       or r31, r30, r30
+       or r13, r30, r30
+       or r14, r30, r30
+       or r15, r30, r30
+       or r16, r30, r30
+       or r30, r30, r30

Removal of the R = R copies is coming next...

llvm-svn: 25919
2006-02-03 00:36:31 +00:00
Chris Lattner 9b178ce225 update a note
llvm-svn: 25918
2006-02-02 23:50:22 +00:00
Chris Lattner f3aef1b004 Fix a deficiency in the spiller that Evan noticed. In particular, consider
this code:

  store [stack slot #0],  R10
    = add R14, [stack slot #0]

The spiller didn't know that the store made the value of [stackslot#0] available
in R10 *IF* the store came from a copy instruction with the store folded into it.

This patch teaches VirtRegMap to look at these stores and recognize the values
they make available.  In one case Evan provided, this code:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
1)      movsd QWORD PTR [%ESP + 48], %XMM1
2)      movsd %XMM1, QWORD PTR [%ESP + 48]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

turns into:

        divsd %XMM0, %XMM1
        movsd %XMM1, QWORD PTR [%ESP + 40]
        addsd %XMM1, %XMM0
3)      movsd QWORD PTR [%ESP + 48], %XMM1
        movsd QWORD PTR [%ESP + 4], %XMM0

In this case, instruction #2 was removed because of the value made
available by #1, and inst #1 was later deleted because it is now
never used before the stack slot is redefined by #3.

This occurs here and there in a lot of code with high spilling, on PPC
most of the removed loads/stores are LSU-reject-causing loads, which is
nice.

On X86, things are much better (because it spills more), where we nuke
about 1% of the instructions from SMG2000 and several hundred from eon.

More improvements to come...

llvm-svn: 25917
2006-02-02 23:29:36 +00:00
Nate Begeman 4efb328926 add 64b gpr store to the possible list of isStoreToStackSlot opcodes.
llvm-svn: 25916
2006-02-02 21:07:50 +00:00
Chris Lattner 5123346708 fix operand numbers
llvm-svn: 25915
2006-02-02 20:38:12 +00:00
Chris Lattner c327d71e06 implement isStoreToStackSlot for PPC
llvm-svn: 25914
2006-02-02 20:16:12 +00:00
Chris Lattner bb53acd03c Move isLoadFrom/StoreToStackSlot from MRegisterInfo to TargetInstrInfo, a far more logical place. Other methods should also be moved if anyone is interested. :)
llvm-svn: 25913
2006-02-02 20:12:32 +00:00