Commit Graph

15152 Commits

Jim Laskey d07be232ba Core antialiasing for load and store.
llvm-svn: 30597
2006-09-25 16:29:54 +00:00
Andrew Lenharth 5e2bacd6be Fix jump tables to match gcc (and the ABI and whatnot)
llvm-svn: 30594
2006-09-24 19:46:56 +00:00
Andrew Lenharth 783a4a9d86 Add support for other relocation bases to jump tables, as well as custom asm directives
llvm-svn: 30593
2006-09-24 19:45:58 +00:00
Andrew Lenharth 68324f8f85 jump table note
llvm-svn: 30591
2006-09-24 13:13:10 +00:00
Evan Cheng 77c0757f8b PIC jump table entries are always 32-bit. This fixes PIC jump table support on X86-64.
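
A hedged sketch of why 32 bits always suffice: a PIC jump table stores each entry as the offset of its target from the table's own base label rather than as an absolute address, so entries stay 32-bit even on a 64-bit target. Illustrative dispatch in C (names mine, not the code this commit touched):

void *jump_target(const int *table, unsigned idx) {
  return (char *)table + table[idx];  /* base + signed 32-bit offset */
}
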
llvm-svn: 30590
2006-09-24 05:22:38 +00:00
Nick Lewycky 059c79264f Style changes only. Remove dead code, fix a comment.
llvm-svn: 30588
2006-09-23 15:13:08 +00:00
Chris Lattner 6bd6da4097 Be far more careful when splitting a loop header, either to form a preheader
or when splitting loops with a common header into multiple loops.  In particular,
the old code would always insert the preheader before the old loop header.  This
is disastrous in cases where the loop hasn't been rotated.  For example, it can
produce code like:

        ... outside the loop ...
        jmp LBB1_2      #bb13.outer
LBB1_1: #bb1
        movsd 8(%esp,%esi,8), %xmm1
        mulsd (%edi), %xmm1
        addsd %xmm0, %xmm1
        addl $24, %edi
        incl %esi
        jmp LBB1_3      #bb13
LBB1_2: #bb13.outer
        leal (%edx,%eax,8), %edi
        pxor %xmm1, %xmm1
        xorl %esi, %esi
LBB1_3: #bb13
        movapd %xmm1, %xmm0
        cmpl $4, %esi
        jl LBB1_1       #bb1

Note that the loop body is actually LBB1_1 + LBB1_3, which means that the
loop now contains an unconditional branch WITHIN it to jump around the inserted
loop header (LBB1_2).  Doh.

This patch changes the preheader insertion code to insert it in the right
spot, producing this code:

        ... outside the loop, fall into the header ...
LBB1_1: #bb13.outer
        leal (%edx,%eax,8), %esi
        pxor %xmm0, %xmm0
        xorl %edi, %edi
        jmp LBB1_3      #bb13
LBB1_2: #bb1
        movsd 8(%esp,%edi,8), %xmm0
        mulsd (%esi), %xmm0
        addsd %xmm1, %xmm0
        addl $24, %esi
        incl %edi
LBB1_3: #bb13
        movapd %xmm0, %xmm1
        cmpl $4, %edi
        jl LBB1_2       #bb1

Totally crazy, no branch in the loop! :)
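
For reference, the assembly above is consistent with a reduction loop of roughly this shape (a hypothetical C reconstruction; the addl $24 is a stride of three doubles):

double f(double *m, double a[4]) {
  double sum = 0.0;
  int i;
  for (i = 0; i < 4; ++i)
    sum += a[i] * m[3 * i];
  return sum;
}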

llvm-svn: 30587
2006-09-23 08:19:21 +00:00
Chris Lattner 608cd05e3f Teach UpdateDomInfoForRevectoredPreds to handle revectored preds that are not
reachable, making it general-purpose enough for use by InsertPreheaderForLoop.
Eliminate custom dominfo updating code in InsertPreheaderForLoop, using
UpdateDomInfoForRevectoredPreds instead.

llvm-svn: 30586
2006-09-23 07:40:52 +00:00
Chris Lattner 4091f4690a add method, correct comment
llvm-svn: 30584
2006-09-23 04:03:45 +00:00
Evan Cheng 1da0ab2f58 Delete dead code; fix 80 col violations.
llvm-svn: 30583
2006-09-22 21:43:59 +00:00
Rafael Espindola 72d4c070c0 add a note
llvm-svn: 30581
2006-09-22 11:36:17 +00:00
Nate Begeman d31efd190f Fold AND and ROTL more often
llvm-svn: 30577
2006-09-22 05:01:56 +00:00
Devang Patel 81c9e42bea remove extra whitespace.
llvm-svn: 30576
2006-09-22 01:07:57 +00:00
Devang Patel 0c4e730c9c Use an iterative algorithm to assign DFS numbers. This reduces
call stack depth.
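
A minimal sketch of the idea (not the LLVM code): assign the numbers with an explicit worklist instead of recursion, so the depth of the tree no longer consumes call stack.

typedef struct Node { struct Node **kids; int nkids; int dfsnum; } Node;

/* assumes a tree (no revisits) and a caller-provided stack with capacity
   of at least the node count */
void assign_dfs_numbers(Node *root, Node **stack) {
  int next = 0, top = 0;
  stack[top++] = root;
  while (top > 0) {
    Node *n = stack[--top];
    n->dfsnum = next++;
    for (int i = n->nkids - 1; i >= 0; --i)  /* reversed to match recursive order */
      stack[top++] = n->kids[i];
  }
}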

llvm-svn: 30575
2006-09-22 01:05:33 +00:00
Evan Cheng 449a0c7e33 Make it work for DAG combine of multi-value nodes.
llvm-svn: 30573
2006-09-21 19:04:05 +00:00
Jim Laskey 35f7eebb49 core corrections
llvm-svn: 30570
2006-09-21 17:35:47 +00:00
Jim Laskey 5d19d59017 Basic "in frame" alias analysis.
llvm-svn: 30568
2006-09-21 16:28:59 +00:00
Rafael Espindola 7b700e517a more condition codes
llvm-svn: 30567
2006-09-21 13:06:26 +00:00
Rafael Espindola 0c71a5adc8 if a constant can't be an immediate, add it to the constant pool
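
Presumably the nascent ARM backend, where a data-processing immediate must be an 8-bit value rotated right by an even amount; a hedged sketch of the decision between encoding inline and spilling to the constant pool:

int fits_rotated_imm8(unsigned v) {
  unsigned r;
  for (r = 0; r < 32; r += 2) {
    /* rotate v left by r; the r == 0 guard avoids a shift by 32 */
    unsigned rot = r ? ((v << r) | (v >> (32 - r))) : v;
    if (rot <= 0xFFu)
      return 1;                 /* encodable as an inline immediate */
  }
  return 0;                     /* emit to the constant pool and load it */
}
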
llvm-svn: 30566
2006-09-21 11:29:52 +00:00
Chris Lattner 082db3f9aa fold (aext (and (trunc x), cst)) -> (and x, cst).
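
In C terms, the AND already discards every bit the truncate/extend pair could disturb, so the pair can be dropped; a sketch of the equivalence using zero-extension (one legal choice for any_extend):

/* both return the same value for every x */
unsigned long long unfolded(unsigned long long x) {
  unsigned char t = (unsigned char)x;         /* trunc */
  return (unsigned long long)(t & 0x3F);      /* and with cst, extend back */
}
unsigned long long folded(unsigned long long x) {
  return x & 0x3F;                            /* and directly on the wide value */
}
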
llvm-svn: 30561
2006-09-21 06:40:43 +00:00
Chris Lattner fa9f92cf65 Check the right value type. This fixes 186.crafty on x86
llvm-svn: 30560
2006-09-21 06:17:39 +00:00
Chris Lattner 08a8ccaaf1 implemented
llvm-svn: 30559
2006-09-21 06:14:54 +00:00
Chris Lattner 8d8a3bf9c9 Compile:
int %test(ulong *%tmp) {
        %tmp = load ulong* %tmp         ; <ulong> [#uses=1]
        %tmp.mask = shr ulong %tmp, ubyte 50            ; <ulong> [#uses=1]
        %tmp.mask = cast ulong %tmp.mask to ubyte
        %tmp2 = and ubyte %tmp.mask, 3          ; <ubyte> [#uses=1]
        %tmp2 = cast ubyte %tmp2 to int         ; <int> [#uses=1]
        ret int %tmp2
}

to:

_test:
        movl 4(%esp), %eax
        movl 4(%eax), %eax
        shrl $18, %eax
        andl $3, %eax
        ret

instead of:

_test:
        movl 4(%esp), %eax
        movl 4(%eax), %eax
        shrl $18, %eax
        # TRUNCATE movb %al, %al
        andb $3, %al
        movzbl %al, %eax
        ret

llvm-svn: 30558
2006-09-21 06:14:31 +00:00
Chris Lattner a31f0a622b Generalize (zext (truncate x)) and (sext (truncate x)) folding to work when
the src/dst are not the same size.  This catches things like "truncate
32-bit X to 8 bits, then zext to 16", which happens a bit on X86.
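
The quoted case in C (example mine); the generalized fold turns the extend-of-truncate into a mask at the destination width:

unsigned short before(unsigned x) { return (unsigned char)x; }          /* trunc to 8, zext to 16 */
unsigned short after(unsigned x)  { return (unsigned short)(x & 0xFF); }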

llvm-svn: 30557
2006-09-21 06:00:20 +00:00
Chris Lattner 1c18c0db79 Fit in 80-cols
llvm-svn: 30556
2006-09-21 05:46:00 +00:00
Chris Lattner 51c95cdd82 Fix Transforms/IndVarsSimplify/2006-09-20-LFTR-Crash.ll
llvm-svn: 30555
2006-09-21 05:12:20 +00:00
Nick Lewycky c68bbef874 Fix compile error.
llvm-svn: 30553
2006-09-21 02:08:31 +00:00
Nick Lewycky fde9c308b2 Don't rewrite ConstantExpr::get.
llvm-svn: 30552
2006-09-21 01:05:35 +00:00
Nick Lewycky d74c55f483 Once we're down to "setcc type constant1, constant2", at least come up
with the right answer.

llvm-svn: 30550
2006-09-20 23:02:24 +00:00
Anton Korobeynikov 3c5b3df6a0 Adding code generation for StdCall & FastCall calling conventions
llvm-svn: 30549
2006-09-20 22:03:51 +00:00
Andrew Lenharth ccdaecc448 Account for pseudo-ops correctly
llvm-svn: 30548
2006-09-20 20:08:52 +00:00
Chris Lattner a81a75c390 The DarwinAsmPrinter need not check for isDarwin. createPPCAsmPrinterPass
should create the right AsmPrinter subclass.

llvm-svn: 30542
2006-09-20 17:12:19 +00:00
Chris Lattner 8597a2fc4e Wrap some darwin'isms with isDarwin checks.
llvm-svn: 30541
2006-09-20 17:07:15 +00:00
Nick Lewycky cfff1c3f86 Use a total ordering to compare instructions.
Fixes infinite loop in resolve().
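
The failure mode, sketched generically (types and fields hypothetical, not the LLVM code): if the "less than" used to orient a pair of values is not a total order, each can compare below the other and canonicalization ping-pongs forever; a strict total order with a unique final tie-break terminates.

typedef struct { int opcode; unsigned id; } Inst;  /* stand-ins */

/* trichotomous and antisymmetric, so repeatedly swapping operands
   "into order" must reach a fixed point */
int inst_less(const Inst *a, const Inst *b) {
  if (a->opcode != b->opcode)
    return a->opcode < b->opcode;
  return a->id < b->id;          /* unique id breaks all remaining ties */
}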

llvm-svn: 30540
2006-09-20 17:04:01 +00:00
Andrew Lenharth 44cb67af5c simplify
llvm-svn: 30535
2006-09-20 15:37:57 +00:00
Andrew Lenharth f007f21c8a catch constants more often
llvm-svn: 30534
2006-09-20 15:05:49 +00:00
Andrew Lenharth 97a4e99aff clarify with test case
llvm-svn: 30531
2006-09-20 14:48:00 +00:00
Andrew Lenharth e2d138a462 Add Note
llvm-svn: 30530
2006-09-20 14:40:01 +00:00
Chris Lattner fba9e8f422 item done
llvm-svn: 30518
2006-09-20 06:41:56 +00:00
Chris Lattner c8cd62d381 Compile:
int test3(int a, int b) { return (a < 0) ? a : 0; }

to:

_test3:
        srawi r2, r3, 31
        and r3, r2, r3
        blr

instead of:

_test3:
        cmpwi cr0, r3, 1
        li r2, 0
        blt cr0, LBB2_2 ;entry
LBB2_1: ;entry
        mr r3, r2
LBB2_2: ;entry
        blr


This implements: PowerPC/select_lt0.ll:seli32_a_a
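
The branch-free form relies on an arithmetic-shift identity, restated in C (assuming >> on a negative int is an arithmetic shift, as on PowerPC):

/* a >> 31 is -1 when a < 0 and 0 otherwise, so the AND yields a or 0 */
int select_lt0(int a) { return a & (a >> 31); }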

llvm-svn: 30517
2006-09-20 06:41:35 +00:00
Chris Lattner 27d8985a71 add a note
llvm-svn: 30515
2006-09-20 06:32:10 +00:00
Chris Lattner 8746e2cd57 Fold the full generality of (any_extend (truncate x))
llvm-svn: 30514
2006-09-20 06:29:17 +00:00
Chris Lattner 8b68decb27 Two things:
1. Teach SimplifySetCC that '(srl (ctlz x), 5) == 0' is really x != 0.
2. Teach visitSELECT_CC to use SimplifySetCC instead of calling it and
   ignoring the result.  This allows us to compile:

bool %test(ulong %x) {
  %tmp = setlt ulong %x, 4294967296
  ret bool %tmp
}

to:

_test:
        cntlzw r2, r3
        cmplwi cr0, r3, 1
        srwi r2, r2, 5
        li r3, 0
        beq cr0, LBB1_2 ;
LBB1_1: ;
        mr r3, r2
LBB1_2: ;
        blr

instead of:

_test:
        addi r2, r3, -1
        cntlzw r2, r2
        cntlzw r3, r3
        srwi r2, r2, 5
        cmplwi cr0, r2, 0
        srwi r2, r3, 5
        li r3, 0
        bne cr0, LBB1_2 ;
LBB1_1: ;
        mr r3, r2
LBB1_2: ;
        blr

This isn't wonderful, but it's an improvement.
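
Why fold 1 is sound for a 32-bit value, written out as a checkable identity (helper names mine):

/* ctlz yields 0..31 for nonzero x and 32 for x == 0; only 32 has bit 5
   set, so (ctlz(x) >> 5) == 0 exactly when x != 0 */
int via_ctlz(unsigned x) {
  unsigned n = x ? (unsigned)__builtin_clz(x) : 32u;  /* clz(0) is undefined in C */
  return (n >> 5) == 0;
}
int direct(unsigned x) { return x != 0; }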

llvm-svn: 30513
2006-09-20 06:19:26 +00:00
Chris Lattner f62f090ea1 This is already done
llvm-svn: 30512
2006-09-20 04:59:33 +00:00
Chris Lattner 380c7e9a59 We went through all that trouble to compute whether it was safe to transform
this comparison, but never checked it.  Whoops, no wonder we miscompiled
177.mesa!

llvm-svn: 30511
2006-09-20 04:44:59 +00:00
Chris Lattner da9b1a9322 Improve PPC64 equality comparisons like PPC32 comparisons.
llvm-svn: 30510
2006-09-20 04:33:27 +00:00
Chris Lattner aa3926b7ea Two improvements:
1. Codegen this comparison:
     if (X == 0x8000)

as:

        cmplwi cr0, r3, 32768
        bne cr0, LBB1_2 ;cond_next

instead of:

        lis r2, 0
        ori r2, r2, 32768
        cmpw cr0, r3, r2
        bne cr0, LBB1_2 ;cond_next


2. Codegen this comparison:
      if (X == 0x12345678)

as:

        xoris r2, r3, 4660
        cmplwi cr0, r2, 22136
        bne cr0, LBB1_2 ;cond_next

instead of:

        lis r2, 4660
        ori r2, r2, 22136
        cmpw cr0, r3, r2
        bne cr0, LBB1_2 ;cond_next
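
The second sequence works because xoris cancels the high halfword when it matches, leaving only the low halfword for the unsigned compare (4660 is 0x1234 and 22136 is 0x5678); as a C identity:

/* for every 32-bit x: x == 0x12345678  <=>  (x ^ 0x12340000) == 0x5678 */
int cmp_direct(unsigned x)    { return x == 0x12345678u; }
int cmp_via_xoris(unsigned x) { return (x ^ 0x12340000u) == 0x5678u; }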

llvm-svn: 30509
2006-09-20 04:25:47 +00:00
Chris Lattner ab33d350a7 Add a note that we should match rlwnm better
llvm-svn: 30508
2006-09-20 03:59:25 +00:00
Chris Lattner 601b86513d Legalize is no longer limited to cleverness with just constant shift amounts.
Allow it to be clever when possible and fall back to the gross code when needed.

This allows us to compile:

long long foo1(long long X, int C) {
  return X << (C|32);
}
long long foo2(long long X, int C) {
  return X << (C&~32);
}

to:
_foo1:
        rlwinm r2, r5, 0, 27, 31
        slw r3, r4, r2
        li r4, 0
        blr


        .globl  _foo2
        .align  4
_foo2:
        rlwinm r2, r5, 0, 27, 25
        subfic r5, r2, 32
        slw r3, r3, r2
        srw r5, r4, r5
        or r3, r3, r5
        slw r4, r4, r2
        blr

instead of:

_foo1:
        ori r2, r5, 32
        subfic r5, r2, 32
        addi r6, r2, -32
        srw r5, r4, r5
        slw r3, r3, r2
        slw r6, r4, r6
        or r3, r3, r5
        slw r4, r4, r2
        or r3, r3, r6
        blr


        .globl  _foo2
        .align  4
_foo2:
        rlwinm r2, r5, 0, 27, 25
        subfic r5, r2, 32
        addi r6, r2, -32
        srw r5, r4, r5
        slw r3, r3, r2
        slw r6, r4, r6
        or r3, r3, r5
        slw r4, r4, r2
        or r3, r3, r6
        blr

llvm-svn: 30507
2006-09-20 03:47:40 +00:00
Chris Lattner 875ea0cdbd Expand 64-bit shifts more optimally if we know that the high bit of the
shift amount is one or zero.  For example, for:

long long foo1(long long X, int C) {
  return X << (C|32);
}

long long foo2(long long X, int C) {
  return X << (C&~32);
}

we get:

_foo1:
        movb $31, %cl
        movl 4(%esp), %edx
        andb 12(%esp), %cl
        shll %cl, %edx
        xorl %eax, %eax
        ret
_foo2:
        movb $223, %cl
        movl 4(%esp), %eax
        movl 8(%esp), %edx
        andb 12(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        ret

instead of:

_foo1:
        subl $4, %esp
        movl %ebx, (%esp)
        movb $32, %bl
        movl 8(%esp), %eax
        movl 12(%esp), %edx
        movb %bl, %cl
        orb 16(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        xorl %ecx, %ecx
        testb %bl, %bl
        cmovne %eax, %edx
        cmovne %ecx, %eax
        movl (%esp), %ebx
        addl $4, %esp
        ret
_foo2:
        subl $4, %esp
        movl %ebx, (%esp)
        movb $223, %cl
        movl 8(%esp), %eax
        movl 12(%esp), %edx
        andb 16(%esp), %cl
        shldl %cl, %eax, %edx
        shll %cl, %eax
        xorl %ecx, %ecx
        xorb %bl, %bl
        testb %bl, %bl
        cmovne %eax, %edx
        cmovne %ecx, %eax
        movl (%esp), %ebx
        addl $4, %esp
        ret

llvm-svn: 30506
2006-09-20 03:38:48 +00:00
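
These last two commits (x86 here, PowerPC just above) exploit the same decomposition; a hedged C sketch of the two statically known cases of a 64-bit left shift on a 32-bit target (the pair type and names are mine):

typedef struct { unsigned lo, hi; } u64pair;

/* bit 5 of the amount known set (C|32): only the low word reaches the
   high word, and the low word of the result is zero */
u64pair shl64_amt_ge_32(u64pair x, unsigned c) {
  u64pair r;
  r.hi = x.lo << (c & 31);
  r.lo = 0;
  return r;
}

/* bit 5 known clear (C&~32): a plain double-word shift, no compare/cmov */
u64pair shl64_amt_lt_32(u64pair x, unsigned c) {
  u64pair r;
  c &= 31;
  r.hi = (x.hi << c) | ((x.lo >> 1) >> (31 - c));  /* avoids a shift by 32 when c == 0 */
  r.lo = x.lo << c;
  return r;
}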