Commit Graph

3005 Commits

Author SHA1 Message Date
Andrew Lenharth 6a5f52d15a better Select on FP
llvm-svn: 20424
2005-03-03 21:47:53 +00:00
Chris Lattner 00ee68c612 Print -X like this:
double test(double l1_X) {
  return (-l1_X);
}

instead of like this:

double test(double l1_X) {
  return (-0x0p+0 - l1_X);
}

llvm-svn: 20423
2005-03-03 21:12:04 +00:00
Andrew Lenharth 00348ce902 LSR cleanup patch
llvm-svn: 20422
2005-03-03 19:03:21 +00:00
Chris Lattner 1a678c67c9 Do not lower malloc's to pass "sizeof" expressions like this:
ltmp_0_7 = malloc(((unsigned )(&(((signed char (*)[784])/*NULL*/0)[1u]))));

Instead, just emit the literal constant, like this:

  ltmp_0_7 = malloc(784u);

This works around a bug in ICC 8.1 compiling the CBE generated code.  :-(

llvm-svn: 20415
2005-03-03 01:04:50 +00:00
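
For reference, the expression being replaced is the classic compute-a-sizeof-from-address-zero idiom. A minimal C sketch (the 784-byte array type is taken from the message above; the variable names are invented) showing that both spellings name the same constant:

#include <stdio.h>

typedef signed char row_t[784];

int main(void) {
  /* What the old CBE output computed: the address of element [1] of a
     row_t array based at address 0, reinterpreted as an integer.  This is
     sizeof(row_t) spelled without sizeof, and is the construct that
     tripped up ICC 8.1. */
  unsigned long via_idiom = (unsigned long)&(((row_t *)0)[1]);
  /* What the CBE emits now: the literal constant. */
  printf("%lu %u %zu\n", via_idiom, 784u, sizeof(row_t));  /* all print 784 */
  return 0;
}
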
Chris Lattner 5b5caaf3cd clean up the CFG after LSR
llvm-svn: 20410
2005-03-02 21:56:00 +00:00
Andrew Lenharth 180a04a4bb remove 32-bit sign extend after 32-bit sextload and handle small negative constants
llvm-svn: 20408
2005-03-02 17:23:03 +00:00
Andrew Lenharth ed4b6488a7 Added LSR as a beta pass for alpha
llvm-svn: 20407
2005-03-02 17:21:38 +00:00
Chris Lattner c8bb99760a Add a temporary option for llc-beta: -enable-lsr-for-ppc, which turns on
Loop Strength Reduction.

llvm-svn: 20399
2005-03-02 06:19:22 +00:00
Chris Lattner 12328e9378 Remove tabs from file.
llvm-svn: 20380
2005-02-28 19:36:15 +00:00
Chris Lattner 811107350a Add support to the C backend for llvm.prefetch. Patch contributed by
Justin Wick!

llvm-svn: 20378
2005-02-28 19:29:46 +00:00
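
On the C side, the natural spelling for a prefetch intrinsic is GCC's __builtin_prefetch. A hedged sketch of the kind of code involved (illustrative only; the exact CBE output is not reproduced here):

/* Hint the hardware to pull a later element into cache: the second
   argument 0 means the access will be a read, the third asks for
   moderate temporal locality. */
void touch_ahead(const double *p) {
  __builtin_prefetch(p + 16, 0, 2);
}
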
Andrew Lenharth 76eff48195 fix integer division and stuff
llvm-svn: 20372
2005-02-28 17:22:18 +00:00
Chris Lattner 0ce80cd542 Fix spelling, patch contributed by Gabor Greif!
llvm-svn: 20343
2005-02-27 06:18:25 +00:00
Andrew Lenharth 10bc4c0ff6 make BB labels be exported for debugging, add fp negation optimization, further pessimize the FP instructions
llvm-svn: 20332
2005-02-25 22:55:15 +00:00
Andrew Lenharth 904650cdd0 fix Allocas. Really. I mean it this time.
llvm-svn: 20306
2005-02-24 18:36:32 +00:00
Tanya Lattner ee47100d44 Only print out machine instructions before modulo scheduling if we are actually doing modulo scheduling! :)
llvm-svn: 20292
2005-02-24 02:14:44 +00:00
Andrew Lenharth 27cf4eb1c7 Ah the problems you have to fix when you stray from the One True Way (TM)
llvm-svn: 20290
2005-02-23 17:33:42 +00:00
Chris Lattner 80c5b97046 Silence some uninit variable warnings.
llvm-svn: 20284
2005-02-23 05:57:21 +00:00
Tanya Lattner a31ad5172e Fixed bug in findAllcircuits. Fixed branch addition to schedule. Added debug information.
llvm-svn: 20280
2005-02-23 02:01:42 +00:00
Andrew Lenharth ccdfdd7aee oops
llvm-svn: 20278
2005-02-22 23:29:25 +00:00
Andrew Lenharth 7ac0143fa6 dynamic stack allocas
llvm-svn: 20273
2005-02-22 21:59:48 +00:00
Andrew Lenharth 5ab3986e49 no longer build as a shared library
llvm-svn: 20264
2005-02-22 04:58:26 +00:00
Misha Brukman 20430321c2 Fix compilation errors with VS 2005, patch contributed by Aaron Gray.
llvm-svn: 20232
2005-02-17 21:40:27 +00:00
Tanya Lattner c28fd0db2e Fixed node deletion bug.
llvm-svn: 20207
2005-02-16 04:00:59 +00:00
Chris Lattner 915fd0de4b Fix a problem where the PPC backend lost track of the fact that it had
to save and restore the LR register on entry and exit of a leaf function
that needed to access globals or the constant pool.  This should hopefully
keep oscar from sending the PPC tester spinning out of control.

llvm-svn: 20197
2005-02-15 20:26:49 +00:00
Chris Lattner 3bef66fcf3 Fix volatile load/store of pointers. Consider this testcase:
void %test(int** %P) {
  %A = volatile load int** %P
  ret void
}

void %test2(int*** %Q) {
  %P = load int*** %Q
  volatile store int** %P, int*** %Q
  ret void
}

instead of emitting:

void test(int **l1_P) {
  int *l2_A;

  l2_A = (int **((volatile int **)l1_P));
  return;
}
void test2(int ***l2_Q) {
  int **l1_P;

  l1_P = *l2_Q;
  *((volatile int ***)l2_Q) = l1_P;
  return;
}

... which is loading/storing volatile pointers, not through volatile pointers,
emit this (which is right):

void test(int **l1_P) {
  int *l3_A;

  l3_A = *((int * volatile*)l1_P);
  return;
}
void test2(int ***l2_Q) {
  int **l1_P;

  l1_P = *l2_Q;
  *((int ** volatile*)l2_Q) = l1_P;
  return;
}

llvm-svn: 20191
2005-02-15 05:52:14 +00:00
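
The fix hinges on where the volatile qualifier sits in the C type: the object actually being loaded or stored (here, a pointer) must itself be volatile-qualified. A minimal sketch of the distinction (function names are illustrative):

/* Volatile access to the pointer object itself: the qualifier is on the
   level being loaded, matching the corrected CBE output above. */
int *load_ptr_volatile(int **P) {
  return *(int * volatile *)P;
}

/* Not a volatile access: this only says the ints eventually reached
   through the loaded pointer are volatile, mirroring the old cast. */
volatile int *load_ptr_plain(int **P) {
  return *(volatile int **)P;
}

/* Volatile store of an (int **) value, as in test2 above. */
void store_ptr_volatile(int ***Q, int **P) {
  *(int ** volatile *)Q = P;
}
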
Misha Brukman 2b1e10031f Write out single characters as chars, not strings.
llvm-svn: 20179
2005-02-14 18:52:35 +00:00
Chris Lattner 7afbdcad15 Implement CodeGen/CBackend/2005-02-14-VolatileOperations.ll
Volatile loads and stores need to emit volatile pointer operations in C.

llvm-svn: 20177
2005-02-14 16:47:52 +00:00
Andrew Lenharth cae2f21e3b fix setcc on floats, fixes singlesource:pi, perhaps others
llvm-svn: 20172
2005-02-14 05:41:43 +00:00
Andrew Lenharth d39febfc66 try to do a better match for i32 adds
llvm-svn: 20143
2005-02-12 21:11:17 +00:00
Andrew Lenharth ab4db0522a make FP conversion more conservative (matches gcc)
llvm-svn: 20142
2005-02-12 21:10:58 +00:00
Andrew Lenharth df5cd0868f oops, I was sure this had already gone through the nightly tester
llvm-svn: 20141
2005-02-12 20:42:09 +00:00
Andrew Lenharth 76c5d97750 added sign extend for boolean
llvm-svn: 20137
2005-02-12 19:35:12 +00:00
Andrew Lenharth b301af712e fix a bunch of regressions due to call behavior
llvm-svn: 20110
2005-02-10 20:10:38 +00:00
Tanya Lattner 56807c6f4a Added new circuit-finding algorithm.
Fixed bug in graph so that phi ite diff edges are added.

llvm-svn: 20108
2005-02-10 17:02:58 +00:00
Tanya Lattner 1137d7c6a1 Allow modsched and local scheduling to both be run.
llvm-svn: 20107
2005-02-10 17:02:06 +00:00
Andrew Lenharth e0b789fdf5 so, if you beat on it, you too can talk emacs into having a sane indenting policy... Also, optimize many function calls with pc-relative calls (partial prologue skipping for that case coming soon), try to fix the random jumps to strange places problem by pessimizing div et al. register usage and fixing up GP before using it, some calling convention tweaks, and make the frame pointer unallocatable (not strictly necessary, but let's go for correctness first)
llvm-svn: 20106
2005-02-10 06:25:22 +00:00
Andrew Lenharth f70ef47ee1 fix fp branch
llvm-svn: 20105
2005-02-10 05:17:38 +00:00
Misha Brukman 06a1d47f96 * Fix spelling of `volatile'
* Align comments with tablegen elements

llvm-svn: 20103
2005-02-10 01:52:22 +00:00
Andrew Lenharth 8ec0a2b13a BranchCC, nifty
llvm-svn: 20067
2005-02-08 00:40:03 +00:00
Andrew Lenharth d4f440de0f fix store issue and an FP conversion (segfault) issue
llvm-svn: 20066
2005-02-07 23:02:23 +00:00
Andrew Lenharth 9d3f7704fd copytoreg fix
llvm-svn: 20063
2005-02-07 06:31:44 +00:00
Andrew Lenharth 57047720ce copyfromreg fix
llvm-svn: 20062
2005-02-07 06:21:37 +00:00
Andrew Lenharth 351df0c2dc fix load bug
llvm-svn: 20061
2005-02-07 05:55:55 +00:00
Andrew Lenharth 5d004edc3d more FP load store fixes and Load store simplifications
llvm-svn: 20060
2005-02-07 05:33:15 +00:00
Andrew Lenharth 5fb9b53060 clean up loads and stores a lot
llvm-svn: 20059
2005-02-07 05:18:02 +00:00
Andrew Lenharth a9e02156ce teach all loads and stores about the stack
llvm-svn: 20058
2005-02-07 05:07:00 +00:00
Andrew Lenharth 0021f55863 prefer FP scratch registers and more checks in LowerArguments
llvm-svn: 20057
2005-02-06 21:07:31 +00:00
Andrew Lenharth eefd410522 fix oopso
llvm-svn: 20056
2005-02-06 16:22:15 +00:00
Andrew Lenharth 6c018f77d1 smarter loads and stores. can now handle base+offset.
llvm-svn: 20055
2005-02-06 15:40:40 +00:00
Andrew Lenharth d9bf7b81eb fix build
llvm-svn: 20053
2005-02-05 19:46:51 +00:00
Andrew Lenharth 7be9854594 clean up
llvm-svn: 20051
2005-02-05 17:41:39 +00:00
Andrew Lenharth ea9224a69a fix f32 setcc, and fp select
llvm-svn: 20050
2005-02-05 16:41:03 +00:00
Andrew Lenharth 060d58b88f added ugly support for fp compares
llvm-svn: 20049
2005-02-05 13:19:12 +00:00
Misha Brukman ffe9968b5a Make the rest of file header comments consistent in format and style
llvm-svn: 20048
2005-02-05 02:24:26 +00:00
Misha Brukman 076b9f4507 Make file header comment consistent: extend the whole 80 cols to fill the line
llvm-svn: 20039
2005-02-04 20:25:52 +00:00
Andrew Lenharth 5152be292a alignment
llvm-svn: 20028
2005-02-04 14:09:38 +00:00
Andrew Lenharth 202011fcc7 get alignment printed correctly and get rid of the __main hack
llvm-svn: 20027
2005-02-04 14:01:21 +00:00
Andrew Lenharth 799479138e FP fixes
llvm-svn: 20019
2005-02-03 21:01:15 +00:00
Andrew Lenharth 75c6225f32 Store fix
llvm-svn: 20004
2005-02-02 17:32:39 +00:00
Andrew Lenharth cf2e21e879 oops
llvm-svn: 20003
2005-02-02 17:01:31 +00:00
Andrew Lenharth fe6e7a30c0 prevent register allocator from using the stack pointer :)
llvm-svn: 20002
2005-02-02 17:00:21 +00:00
Andrew Lenharth c7042c2d8b fix loading of floats
llvm-svn: 19997
2005-02-02 15:05:33 +00:00
Andrew Lenharth 0f42d92ca0 marked mem* as not supported
llvm-svn: 19992
2005-02-02 05:49:42 +00:00
Andrew Lenharth 07c0b0d92b fix Load bug
llvm-svn: 19987
2005-02-02 04:35:44 +00:00
Andrew Lenharth c7e55f430c try to make a bug bugpointable, add yet more constant pool stuff, fixup constant loads for FP
llvm-svn: 19985
2005-02-02 03:36:35 +00:00
Andrew Lenharth ae88b6a8a8 better constant handling, should fix many remaining cases
llvm-svn: 19984
2005-02-02 00:51:15 +00:00
Andrew Lenharth 9df6a764b9 fix FP arg passing bug, Add unsigned to/from int, fix SELECT, fix Constant pool
llvm-svn: 19976
2005-02-01 20:40:27 +00:00
Andrew Lenharth 20d8b2ff71 Print the Constant pool
llvm-svn: 19975
2005-02-01 20:38:53 +00:00
Andrew Lenharth 32124c0a70 Make cmov work right and loads for fp from constant pool
llvm-svn: 19974
2005-02-01 20:36:44 +00:00
Andrew Lenharth c777d4f03d Correct stack stuff for FP
llvm-svn: 19973
2005-02-01 20:35:57 +00:00
Andrew Lenharth 8fb0d5002b try to match alpha pattern
llvm-svn: 19972
2005-02-01 20:35:11 +00:00
Andrew Lenharth 7703d1ab25 fix register names
llvm-svn: 19971
2005-02-01 20:34:29 +00:00
Andrew Lenharth cdc9e33ae5 pessimize loads, put indirect call addr in the right register. Still doesn't fix methcall
llvm-svn: 19963
2005-02-01 01:37:24 +00:00
Misha Brukman 8dfa2e4465 Fix hyphenation in output comment
llvm-svn: 19954
2005-01-31 06:19:57 +00:00
Andrew Lenharth ae25bb1dc5 indirect call fix
llvm-svn: 19945
2005-01-31 03:19:31 +00:00
Andrew Lenharth c40d156dc9 fp to int and back conversion sequences
llvm-svn: 19944
2005-01-31 01:44:26 +00:00
Andrew Lenharth 7141334f98 added fp extend, removed a forgotten assert in more-than-6-arg support (should break somewhere else now :) ), and fixed an incorrect asm sequence for indirect calls
llvm-svn: 19938
2005-01-30 20:42:36 +00:00
Chris Lattner 8e62f434cd This code is really unreachable.
llvm-svn: 19934
2005-01-30 16:33:46 +00:00
Chris Lattner bfa060c5d2 Fix warnings.
llvm-svn: 19933
2005-01-30 16:32:48 +00:00
Andrew Lenharth 918a29fc51 support for larger calls
llvm-svn: 19932
2005-01-30 00:35:27 +00:00
Chris Lattner fdec565f1f Unbreak the build :(
llvm-svn: 19926
2005-01-29 19:27:28 +00:00
Andrew Lenharth 41bc2c2897 first step towards a correct and complete stack. also add some forms for things that were getting stuck in the nightly tester.
llvm-svn: 19914
2005-01-29 15:42:07 +00:00
Chris Lattner 3479f9cca8 Finegrainify namespacification.
Adjust TmpInstruction to work with the new User model.

llvm-svn: 19896
2005-01-29 00:36:59 +00:00
Chris Lattner 68afd89730 add namespace qualifier
llvm-svn: 19895
2005-01-29 00:36:38 +00:00
Andrew Lenharth 4a0d200c13 fix ExprMap, partially teach about add long
llvm-svn: 19882
2005-01-28 23:17:54 +00:00
Andrew Lenharth 579a324137 fix ExprMap and constant check in setcc
llvm-svn: 19870
2005-01-28 14:06:46 +00:00
Andrew Lenharth 479bc61455 move FP into its own select
llvm-svn: 19867
2005-01-28 06:57:18 +00:00
Andrew Lenharth 7c538a6593 stack frame fix and zero FP reg fix
llvm-svn: 19857
2005-01-27 08:31:19 +00:00
Andrew Lenharth 96515adad6 Floating point instructions like Floating point registers
llvm-svn: 19856
2005-01-27 07:58:15 +00:00
Andrew Lenharth 0cceb5165e int to float conversion and another setcc
llvm-svn: 19855
2005-01-27 07:50:35 +00:00
Andrew Lenharth 3c361fd6f7 teach isel about comparison with constants and zero extending bits
llvm-svn: 19853
2005-01-27 03:49:45 +00:00
Andrew Lenharth 5374789198 perhaps this will let me have calls again
llvm-svn: 19851
2005-01-27 01:22:48 +00:00
Andrew Lenharth 9e27e54d70 minor bug fix
llvm-svn: 19850
2005-01-27 00:52:26 +00:00
Andrew Lenharth 9748b623a4 minor bug fix
llvm-svn: 19849
2005-01-27 00:51:05 +00:00
Andrew Lenharth 267908ad47 added instructions for fp to int to fp moves
llvm-svn: 19848
2005-01-26 23:56:48 +00:00
Andrew Lenharth 5ae5f81720 initial fp support
llvm-svn: 19847
2005-01-26 21:54:09 +00:00
Andrew Lenharth 589304de7f hum, writing on one machine, testing on another...
llvm-svn: 19844
2005-01-26 02:53:56 +00:00
Andrew Lenharth 02c5459948 add some operations, fix others. should compile several more tests now
llvm-svn: 19843
2005-01-26 01:24:38 +00:00
Chris Lattner 1b20615173 We can fold promoted and non-promoted loads into divs also!
llvm-svn: 19835
2005-01-25 20:35:10 +00:00
Chris Lattner 30607ec66e Fold promoted loads into binary ops for FP, allowing us to generate m32 forms
of FP ops.

llvm-svn: 19834
2005-01-25 20:03:11 +00:00
Andrew Lenharth ba2bcd867f problems with bools, and their workarounds
llvm-svn: 19833
2005-01-25 19:58:40 +00:00
Andrew Lenharth 122489bcab more load choices, better add with imm
llvm-svn: 19821
2005-01-25 00:35:34 +00:00
Andrew Lenharth 2f0f845534 Clean ups, and taught the instruction selector about immediate forms
llvm-svn: 19816
2005-01-24 19:44:07 +00:00
Andrew Lenharth 6d1a96bccc Alpha JIT prune
llvm-svn: 19815
2005-01-24 18:48:22 +00:00
Andrew Lenharth 3c12772190 include prune and JIT prune
llvm-svn: 19814
2005-01-24 18:45:41 +00:00
Andrew Lenharth 4680f89526 Pruned includes
llvm-svn: 19813
2005-01-24 18:37:48 +00:00
Chris Lattner 39837024ae Fix a spurious warning.
llvm-svn: 19799
2005-01-24 01:40:18 +00:00
Chris Lattner 0e1de101a1 Silence a warning.
llvm-svn: 19798
2005-01-23 23:20:06 +00:00
Chris Lattner debae1e3c3 Allow the FP stackifier to completely ignore functions that do not use FP at
all.  This should speed up the X86 backend fairly significantly on integer
codes.  Now if only we didn't have to compute livevar still... ;-)

llvm-svn: 19796
2005-01-23 23:13:59 +00:00
Chris Lattner 28939a222e Build Alpha by default.
llvm-svn: 19777
2005-01-23 04:34:46 +00:00
Reid Spencer d5d45b8d1a Fix alloca support for Cygwin. On Cygwin it's __alloca, not __builtin_alloca
llvm-svn: 19776
2005-01-23 04:32:47 +00:00
Reid Spencer 30226da5b3 Support Cygwin assembly generation. The Cygwin version of the GNU assembler
doesn't support certain directives, and symbols on Cygwin are prefixed with
an underscore. This patch makes the necessary adjustments to the output.

llvm-svn: 19775
2005-01-23 03:52:14 +00:00
Andrew Lenharth a1b5ca2b9d Let me introduce you to the early stages of the llvm backend for the alpha processor
llvm-svn: 19764
2005-01-22 23:41:55 +00:00
Chris Lattner e70eb9da7d Speed up folding operations into loads.
llvm-svn: 19733
2005-01-21 21:43:02 +00:00
Chris Lattner e1e844c416 The ever-important vanity pass name :)
llvm-svn: 19731
2005-01-21 21:35:14 +00:00
Chris Lattner c78776d209 Fix a FIXME: realize that argument stores are all independent (don't alias)
llvm-svn: 19728
2005-01-21 19:46:38 +00:00
Chris Lattner 2a631fa406 Implement ADD_PARTS/SUB_PARTS so that 64-bit integer add/sub work. This
fixes most of the remaining llc-beta failures.

llvm-svn: 19716
2005-01-20 18:53:00 +00:00
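
ADD_PARTS/SUB_PARTS express a wide add or sub as two register-width halves with a carry or borrow between them. A small sketch of the arithmetic in plain C (not the PPC lowering itself):

#include <stdint.h>

/* 64-bit add built from 32-bit parts: add the low halves, detect the
   carry out, and fold it into the sum of the high halves. */
uint64_t add_parts(uint32_t alo, uint32_t ahi, uint32_t blo, uint32_t bhi) {
  uint32_t lo = alo + blo;
  uint32_t carry = lo < alo;        /* 1 if the low add wrapped */
  uint32_t hi = ahi + bhi + carry;
  return ((uint64_t)hi << 32) | lo;
}
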
Chris Lattner 5b04f33405 Fix a crash compiling 134.perl.
llvm-svn: 19711
2005-01-20 16:50:16 +00:00
Chris Lattner 474aac4da9 Fix a problem where we were literally selecting for INCREASED register
pressure, not decreased register pressure.  Fix a problem where we accidentally
swapped the operands of SHLD, which caused fourinarow to fail.  This fixes
fourinarow.

llvm-svn: 19697
2005-01-19 17:24:34 +00:00
Chris Lattner 25be208e02 When commuting these instructions, make sure to actually swap the operands too.
llvm-svn: 19694
2005-01-19 16:55:52 +00:00
Chris Lattner de87d146ab Implement Regression/CodeGen/X86/rotate.ll: emit rotate instructions (which
typically cost 1 cycle) instead of shld/shrd instructions (which are typically
6 or more cycles).  This also saves code space.

For example, instead of emitting:

rotr:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %CL, BYTE PTR [%ESP + 8]
        shrd %EAX, %EAX, %CL
        ret
rotli:
        mov %EAX, DWORD PTR [%ESP + 4]
        shrd %EAX, %EAX, 27
        ret

Emit:

rotr32:
        mov %CL, BYTE PTR [%ESP + 8]
        mov %EAX, DWORD PTR [%ESP + 4]
        ror %EAX, %CL
        ret
rotli32:
        mov %EAX, DWORD PTR [%ESP + 4]
        ror %EAX, 27
        ret

We also emit byte rotate instructions which do not have a sh[lr]d counterpart
at all.

llvm-svn: 19692
2005-01-19 08:07:05 +00:00
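
For reference, the portable C idiom that a backend with rotate instructions can collapse to a single ror (a generic sketch; the rotate.ll sources themselves are not shown here):

#include <stdint.h>

/* Rotate right by a variable amount.  Masking the counts keeps the
   shifts in range for any n and matches what a single x86 ror does. */
uint32_t rotr32(uint32_t x, unsigned n) {
  n &= 31;
  return (x >> n) | (x << ((32 - n) & 31));
}
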
Chris Lattner 0edf9535b9 Add rotate instructions.
llvm-svn: 19690
2005-01-19 07:50:03 +00:00
Chris Lattner 29f5819158 Match 16-bit shld/shrd instructions as well, implementing shift-double.llx:test5
llvm-svn: 19689
2005-01-19 07:37:26 +00:00
Chris Lattner d54845f530 Improve coverage of the X86 instruction set by adding 16-bit shift doubles.
llvm-svn: 19687
2005-01-19 07:31:24 +00:00
Chris Lattner 2947801735 Teach the code generator that shrd/shld is commutable if it has an immediate.
This allows us to generate this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        shld %EDX, %EDX, 2
        shl %EAX, 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
        mov %EDX, %EAX
        shrd %EDX, %ECX, 30
        shl %EAX, 2
        ret

Note the magically transmogrifying immediate.

llvm-svn: 19686
2005-01-19 07:11:01 +00:00
Chris Lattner f6932b700b Finegrainify namespacification
Add default impl of commuteInstruction
Add notes about ugly V9 code.

llvm-svn: 19684
2005-01-19 06:53:34 +00:00
Chris Lattner 41fe201b61 Codegen long >> 2 to this:
foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %EDX, DWORD PTR [%ESP + 8]
        shrd %EAX, %EDX, 2
        sar %EDX, 2
        ret

instead of this:

test1:
        mov %ECX, DWORD PTR [%ESP + 4]
        shr %ECX, 2
        mov %EDX, DWORD PTR [%ESP + 8]
        mov %EAX, %EDX
        shl %EAX, 30
        or %EAX, %ECX
        sar %EDX, 2
        ret

and long << 2 to this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, DWORD PTR [%ESP + 8]
***     mov %EDX, %EAX
        shrd %EDX, %ECX, 30
        shl %EAX, 2
        ret

instead of this:

foo:
        mov %EAX, DWORD PTR [%ESP + 4]
        mov %ECX, %EAX
        shr %ECX, 30
        mov %EDX, DWORD PTR [%ESP + 8]
        shl %EDX, 2
        or %EDX, %ECX
        shl %EAX, 2
        ret

The extra copy (marked ***) can be eliminated when I teach the code generator
that shrd32rri8 is really commutative.

llvm-svn: 19681
2005-01-19 06:18:43 +00:00
Chris Lattner d8d306601a X86 shifts mask the amount.
llvm-svn: 19678
2005-01-19 03:36:30 +00:00
Chris Lattner a05cd83d2f Add a hook to find out how the target handles shift amounts that are out of
range.  Either they are undefined (the default), they mask the shift amount
to the size of the register (X86, Alpha, etc), or they extend the shift (PPC).

This defaults to undefined, which is conservatively correct.

llvm-svn: 19677
2005-01-19 03:36:14 +00:00
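
The distinction matters because ISO C already leaves shifts by the full bit width or more undefined, so a target known to mask the amount lets the code generator drop explicit clamping. A small illustration in plain C:

#include <stdint.h>

uint32_t shift_by(uint32_t x, unsigned n) {
  /* x << n with n >= 32 is undefined in C, and targets disagree on what
     the hardware does, hence the conservative "undefined" default. */
  /* What an amount-masking target (X86, Alpha) effectively computes:
     the count is reduced modulo the register width. */
  return x << (n & 31);
}
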
Chris Lattner 14947c34cc Code to handle FP_EXTEND is dead now. X86 doesn't support any data types to
FP_EXTEND from!

llvm-svn: 19674
2005-01-18 20:05:56 +00:00
Chris Lattner c6e928cba5 Remove more dead code.
llvm-svn: 19673
2005-01-18 19:50:08 +00:00
Chris Lattner 0616fa6b9b The selection dag code handles the promotions from F32 to F64 for us, so we
don't need to even think about F32 in the X86 code anymore.

llvm-svn: 19672
2005-01-18 19:46:54 +00:00
Chris Lattner 479c7118e4 Fix 124.m88ksim.
llvm-svn: 19667
2005-01-18 17:35:28 +00:00
Chris Lattner ed246ec0d2 Do not emit loads multiple times, potentially in the wrong places.
llvm-svn: 19661
2005-01-18 04:18:32 +00:00
Tanya Lattner c227ad2640 Minor changes.
llvm-svn: 19660
2005-01-18 04:15:41 +00:00
Chris Lattner 28a205e01b Eliminate bad assertions.
llvm-svn: 19659
2005-01-18 04:00:54 +00:00
Chris Lattner 78d3028350 * Eliminate the TokenSet and just use the ExprMap for both tokens and values.
* Insert some really pedantic assertions that will notice when we emit the
  same loads more than one time, exposing bugs.  This turns a miscompilation in
  bzip2 into a compile-fail.  yaay.

llvm-svn: 19658
2005-01-18 03:51:59 +00:00
Chris Lattner d7f93950aa Rely on the code in MatchAddress to do this work. Otherwise we fail to
match (X+Y)+(Z << 1), because we match the X+Y first, consuming the index
register, then there is no place to put the Z.

llvm-svn: 19652
2005-01-18 02:25:52 +00:00
Chris Lattner a7acdda064 Fix a problem where probing for addressing modes caused expressions to be
emitted too early.  In particular, this fixes
Regression/CodeGen/X86/regpressure.ll:regpressure3.

This also improves the 2nd basic block in 164.gzip:flush_block, which went from

.LBBflush_block_1:      # loopentry.1.i
        movzx %EAX, WORD PTR [dyn_ltree + 20]
        movzx %ECX, WORD PTR [dyn_ltree + 16]
        mov DWORD PTR [%ESP + 32], %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 12]
        movzx %EDX, WORD PTR [dyn_ltree + 8]
        movzx %EBX, WORD PTR [dyn_ltree + 4]
        mov DWORD PTR [%ESP + 36], %EBX
        movzx %EBX, WORD PTR [dyn_ltree]
        add DWORD PTR [%ESP + 36], %EBX
        add %EDX, DWORD PTR [%ESP + 36]
        add %ECX, %EDX
        add DWORD PTR [%ESP + 32], %ECX
        add %EAX, DWORD PTR [%ESP + 32]
        movzx %ECX, WORD PTR [dyn_ltree + 24]
        add %EAX, %ECX
        mov %ECX, 0
        mov %EDX, %ECX

to

.LBBflush_block_1:      # loopentry.1.i
        movzx %EAX, WORD PTR [dyn_ltree]
        movzx %ECX, WORD PTR [dyn_ltree + 4]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 8]
        add %EAX, %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 12]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 16]
        add %EAX, %ECX
        movzx %ECX, WORD PTR [dyn_ltree + 20]
        add %ECX, %EAX
        movzx %EAX, WORD PTR [dyn_ltree + 24]
        add %ECX, %EAX
        mov %EAX, 0
        mov %EDX, %EAX

... which results in less spilling in the function.

This change alone speeds up 164.gzip from 37.23s to 36.24s on apoc.  The
default isel takes 37.31s.

llvm-svn: 19650
2005-01-18 01:06:26 +00:00
Chris Lattner b93409f3e2 Fix indentation.
llvm-svn: 19649
2005-01-17 23:25:45 +00:00
Chris Lattner a5d137f471 Don't bother using max here.
llvm-svn: 19647
2005-01-17 23:02:13 +00:00
Chris Lattner ca318edb94 Do not give token factor nodes outrageous weights
llvm-svn: 19645
2005-01-17 22:56:09 +00:00
Chris Lattner e86c933df7 Two changes:
1. Fold  [mem] += (1|-1) into inc [mem]/dec [mem] to save some icache space.
 2. Do not let token factor nodes prevent forming '[mem] op= val' folds.

llvm-svn: 19643
2005-01-17 22:10:42 +00:00
Chris Lattner 96113fd08f Refactor load/op/store folding into its own method, no functionality changes.
llvm-svn: 19641
2005-01-17 19:25:26 +00:00
Chris Lattner 9098879472 Fix a major regression from last night that prevented us from producing [mem] op= reg
operations.

The body of the if is less indented but unmodified in this patch.

llvm-svn: 19638
2005-01-17 17:49:14 +00:00
Chris Lattner b72ea1b719 Codegen this:
int %foo(int %X) {
        %T = add int %X, 13
        %S = mul int %T, 3
        ret int %S
}

as this:

        mov %ECX, DWORD PTR [%ESP + 4]
        lea %EAX, DWORD PTR [%ECX + 2*%ECX + 39]
        ret

instead of this:

        mov %ECX, DWORD PTR [%ESP + 4]
        mov %EAX, %ECX
        add %EAX, 13
        imul %EAX, %EAX, 3
        ret

llvm-svn: 19633
2005-01-17 06:48:02 +00:00
Tanya Lattner a8b2929f45 Added tmp instructions to preserve ssa.
llvm-svn: 19632
2005-01-17 06:47:26 +00:00
Chris Lattner a56d29d517 Fix test/Regression/CodeGen/X86/2005-01-17-CycleInDAG.ll and 132.ijpeg.
Do not fold a load into an operation if it will induce a cycle in the DAG.

Repeat after me: dAg.

llvm-svn: 19631
2005-01-17 06:26:58 +00:00
Chris Lattner 3be6cd57c9 Do not fold a load into a comparison that is used by more than one place.
The comparison will probably be folded, so this is not ok to do.
This fixed 197.parser.

llvm-svn: 19624
2005-01-17 01:34:14 +00:00
Chris Lattner 0cd6b9ae1e Do not codegen 'xor bool, true' as 'not reg'. not reg inverts the upper bits
of the bytereg.  This fixes yacr2, 300.twolf and probably others.

llvm-svn: 19622
2005-01-17 00:23:16 +00:00
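
The issue is that a byte-wide not is not a boolean negation: it flips all eight bits, so true (1) becomes 0xFE, which still tests as nonzero, while xor with 1 flips only bit 0. A two-line check in C:

#include <stdio.h>

int main(void) {
  unsigned char t = 1;                          /* an i1 'true' living in a byte register */
  printf("0x%02X\n", (unsigned char)~t);        /* 0xFE: upper bits flipped, still nonzero */
  printf("0x%02X\n", (unsigned char)(t ^ 1));   /* 0x00: the correct negation */
  return 0;
}
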
Chris Lattner c1f386c7b8 Set up the shift and setcc types.
If we emit a load because we followed a token chain to get to it, try to
fold it into its single user if possible.

llvm-svn: 19620
2005-01-17 00:00:33 +00:00
Chris Lattner 5f180e4645 Shift and setcc types default to the pointer type.
llvm-svn: 19619
2005-01-16 23:59:48 +00:00
Tanya Lattner e4872342db Added parameters to a few functions in order to allow me to change the functions to preserve SSA
llvm-svn: 19615
2005-01-16 08:51:10 +00:00
Chris Lattner b14a63aa1c * Adjust to changes in TargetLowering interfaces.
* Remove custom promotion for bool and byte select ops.  Legalize now
  promotes them for us.
* Allow folding ConstantPoolIndexes into EXTLOAD's, useful for float immediates.
* Declare which operations are not supported better.
* Add some hacky code for TRUNCSTORE to pretend that we have truncstore
  for i16 types.  This is useful for testing promotion code because I can
  just remove 16-bit registers all together and verify that programs work.

llvm-svn: 19614
2005-01-16 07:34:08 +00:00
Chris Lattner 6f8097951c Use enums, move virtual dtor out of line.
llvm-svn: 19610
2005-01-16 07:28:11 +00:00
Chris Lattner d0a65013ab cycles_t -> CycleCount_t
llvm-svn: 19604
2005-01-16 04:20:30 +00:00
Reid Spencer 0e48bf8a19 Rename BUILD_* to PROJ_*
llvm-svn: 19592
2005-01-16 02:21:29 +00:00
Tanya Lattner 7462d192b8 Fixed a couple of instructions that broke SSA.
llvm-svn: 19587
2005-01-16 02:14:17 +00:00
Chris Lattner cebb964fef Improve compatibility with HPUX on Itanium, patch by Duraid Madina
llvm-svn: 19586
2005-01-16 01:31:31 +00:00
Chris Lattner 8ec1dc5fc0 Set up identity transforms.
llvm-svn: 19584
2005-01-16 01:20:18 +00:00
Chris Lattner 1bc93bac69 Move some information out of LegalizeDAG into the generic Target interface.
llvm-svn: 19581
2005-01-16 01:10:58 +00:00
Chris Lattner 9de5890211 Add a new target-independent code generator flag.
llvm-svn: 19567
2005-01-15 06:00:32 +00:00
Chris Lattner e18a4c4c19 Add support for truncstore and *extload.
llvm-svn: 19566
2005-01-15 05:22:24 +00:00
Chris Lattner 720a62e8c7 Adjust to CopyFromReg changes.
llvm-svn: 19561
2005-01-14 22:37:41 +00:00
Chris Lattner d7bffad559 Fix Regression/CodeGen/PowerPC/2005-01-14-UndefLong.ll
llvm-svn: 19557
2005-01-14 20:22:02 +00:00
Chris Lattner c3ed31f837 Fix: Regression/CodeGen/PowerPC/2005-01-14-SetSelectCrash.ll
llvm-svn: 19555
2005-01-14 19:31:00 +00:00
Chris Lattner e727af06c8 Add new ImplicitDef node, rename CopyRegSDNode class to RegSDNode.
llvm-svn: 19535
2005-01-13 20:50:02 +00:00
Chris Lattner 15bd19dd76 Codegen factor nodes more intelligently according to perceived register pressure.
llvm-svn: 19532
2005-01-13 19:56:00 +00:00
Chris Lattner c251fb6441 Initial trivial (but stupid) codegen for this node.
llvm-svn: 19529
2005-01-13 18:01:36 +00:00
Chris Lattner 3676cd6fe2 Add some really pedantic assertions to the load folding code. Fix a bunch
of cases where we accidentally emitted a load folded once and unfolded
elsewhere.

llvm-svn: 19522
2005-01-13 05:53:16 +00:00
Chris Lattner 7b1dae8032 We can only fold a load into an op if there is exactly one use of the value.
Checking to see if the load has two uses is not equivalent, as the chain
value may have zero uses.

llvm-svn: 19518
2005-01-12 18:38:26 +00:00
Chris Lattner 1755360db6 Try both ways to fold an add together. This allows us to generate this code:

        imul %EAX, %EAX, 400
        add %ECX, %EAX
        add %ESI, DWORD PTR [%ECX + 4*%EDX]
        inc %EDX
        cmp %EDX, 100

instead of this:

        imul %EAX, %EAX, 400
        add %ECX, %EAX
        mov %EAX, %EDX
        shl %EAX, 2
        add %ECX, %EAX
        add %ESI, DWORD PTR [%ECX]
        inc %EDX
        cmp %EDX, 100

llvm-svn: 19513
2005-01-12 18:08:53 +00:00
Chris Lattner 42e7e98908 Fix a major miscompilation where we were overwriting the scale reg.
llvm-svn: 19511
2005-01-12 07:33:20 +00:00
Chris Lattner db4b67c81e Do not use the type of the RHS constant to determine the type of the operation.
This fails for shifts because the constant is always 8 bits.

llvm-svn: 19508
2005-01-12 05:22:07 +00:00
Chris Lattner bdb2e9dabc Do not lose the offset from the global when peephole optimizing instructions.
This fixes FreeBench/pcompress

llvm-svn: 19507
2005-01-12 05:17:28 +00:00
Jeff Cohen 407aa0198c Fix more C++ compilation errors
llvm-svn: 19504
2005-01-12 04:29:05 +00:00
Chris Lattner efe90209ef Fix a compile error with VC++, which thinks that static const arrays need
to be dynamically initialized. :(

llvm-svn: 19503
2005-01-12 04:23:22 +00:00
Chris Lattner 6fba62d6ec Fix a bug that caused us to crash on povray. We weren't emitting an FP_REG_KILL into a block that had a successor with an FP PHI node.
llvm-svn: 19502
2005-01-12 04:21:28 +00:00
Chris Lattner bb4c14f270 Print a load of a null pointer (in intel mode) like this:
        mov %AX, WORD PTR [0]

instead of like this:

        mov %AX, WORD PTR []

llvm-svn: 19501
2005-01-12 04:07:11 +00:00
Chris Lattner b372fab2be Print a load of a null pointer like this:
        movw 0, %ax

instead of like this:

        movw , %ax

llvm-svn: 19500
2005-01-12 04:05:19 +00:00
Chris Lattner f8f79c4192 Fix a crash compiling povray on UINT_TO_FP from i16.
llvm-svn: 19499
2005-01-12 04:00:00 +00:00
Chris Lattner 3278ce8871 There are no [mem] op= reg instructions for FP, so remove their entries.
llvm-svn: 19496
2005-01-12 03:16:09 +00:00
Chris Lattner e49a335797 Fix a bug where we didn't insert FP_REG_KILL instructions into MBB's that
contain FP PHI nodes but no other FP defining instructions.  This fixes
183.equake

llvm-svn: 19495
2005-01-12 02:57:10 +00:00
Chris Lattner b7fe57a0f1 Fold TRUNCATE (LOAD P) into a smaller load from P.
llvm-svn: 19494
2005-01-12 02:19:06 +00:00
Chris Lattner 2cfce6853b Be more careful about order of arg evaluation for CopyToReg nodes. This shrinks
256.bzip2 from 7142 to 7103 lines of .s file.

Second, add initial support for folding loads into compares, though this code
is dynamically dead for now. :(

llvm-svn: 19493
2005-01-12 02:02:48 +00:00
Chris Lattner 021cfd2a80 Fold some more [mem] op= val operators. This allows us to do things like this
several times in 256.bzip2:

        mov %EAX, DWORD PTR [%ESP + 204]
-       mov %EAX, DWORD PTR [%EAX]
-       or %EAX, 2097152
-       mov %ECX, DWORD PTR [%ESP + 204]
-       mov DWORD PTR [%ECX], %EAX
+       or DWORD PTR [%EAX], 2097152

llvm-svn: 19492
2005-01-12 01:28:00 +00:00
Chris Lattner b0eef82b82 Fold loads into sign/zero extends. Instead of:

  mov %AL, BYTE PTR [%EDX + l18_length_code]
  movzx %EAX, %AL

Emit:

  movzx %EAX, BYTE PTR [%EDX + l18_length_code]

llvm-svn: 19489
2005-01-11 23:33:00 +00:00
Chris Lattner 75bac9f786 Comment out debug code :)
Select [mem] += Val operations.  For constants, we used to get:

  mov %ECX, -32768
  add %ECX, DWORD PTR [l4_match_start]
  mov DWORD PTR [l4_match_start], %ECX

Now we get:

  add DWORD PTR [l4_match_start], -32768

For other values we used to get:

  mov %EBP, %EDI   ;; because the add destroys the value
  add %EBP, DWORD PTR [l4_input_len]
  mov DWORD PTR [l4_input_len], %EBP

now we get:

  add DWORD PTR [l4_input_len], %EDI

Both of these use less registers than the alternative, are faster and smaller.

llvm-svn: 19488
2005-01-11 23:21:30 +00:00
Chris Lattner d28ae12168 Handle the global address case here, not just the offset case.
llvm-svn: 19487
2005-01-11 22:58:43 +00:00
Chris Lattner 8aa10fcd1b Treat int constants as not requiring a register, since they are almost always
folded into an instruction.

llvm-svn: 19486
2005-01-11 22:29:12 +00:00
Chris Lattner 62b22420be * Factor a bunch of binary operator cases into shared code.
* Fold loads into Add, sub, and, or, xor and mul when possible.
* Codegen shl X, 1 as add X, X

llvm-svn: 19483
2005-01-11 21:19:59 +00:00
Chris Lattner 4cbf1f0038 Clear the whole array, always.
llvm-svn: 19482
2005-01-11 20:25:26 +00:00
Chris Lattner 8cf9cdae3d Fold multiplies by 3,5,9 into addressing modes when possible.
llvm-svn: 19480
2005-01-11 19:37:02 +00:00
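
These are exactly the factors that fit x86's scaled-index addressing, base + {1,2,4,8}*index, so the multiply can become a single lea instead of an imul. A tiny sketch of the kind of source that benefits (illustrative function name):

/* x*5 == x + 4*x, which is exactly lea's [base + 4*index] form; x*3 and
   x*9 work the same way with scales 2 and 8. */
int times5(int x) { return x * 5; }
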
Chris Lattner b74ec4cd24 Instead of generating stuff like this:
        mov %ECX, %EAX
        add %ECX, 32768
        mov %SI, WORD PTR [2*%ECX + l13_prev]

Generate this:

        mov %SI, WORD PTR [2*%ECX + l13_prev + 65536]

This occurs when you have a GEP instruction where an index is
"something + imm".

llvm-svn: 19472
2005-01-11 06:36:20 +00:00
Chris Lattner c07164e909 Implement MEMCPY natively in terms of rep movs*
llvm-svn: 19468
2005-01-11 06:19:26 +00:00
Chris Lattner 36f7848b26 Implement memset -> rep stos*
llvm-svn: 19467
2005-01-11 06:14:36 +00:00
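
rep stosb is x86's string store: AL holds the byte, EDI the destination, ECX the count. A hedged sketch of the same operation written as GCC inline assembly (illustrative only, not what the instruction selector emits internally):

#include <stddef.h>

/* memset(dst, c, n) via the x86 string-store instruction.  rep stosb
   advances DI and decrements CX as it goes, so both are read-write. */
static void rep_stos_memset(void *dst, int c, size_t n) {
  __asm__ volatile("rep stosb"
                   : "+D"(dst), "+c"(n)
                   : "a"(c)
                   : "memory");
}
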
Chris Lattner a19f84240f Announce that we don't support mem ops yet.
llvm-svn: 19466
2005-01-11 05:57:36 +00:00
Chris Lattner 378262d33b Teach the address selector to make 'reg+reg' addressing modes.
llvm-svn: 19457
2005-01-11 04:40:19 +00:00
Chris Lattner 9d7cf998ca Emit NOT instructions.
llvm-svn: 19455
2005-01-11 04:31:30 +00:00
Chris Lattner 37ed28558f Fix a bug emitting branches that broke a lot of programs.
llvm-svn: 19452
2005-01-11 04:06:27 +00:00