Commit Graph

19 Commits

Author SHA1 Message Date
Eli Friedman 5abfd79900 Chris fixed this README entry a while back by changing how clang generates code for structs like the one in its example.
llvm-svn: 132815
2011-06-09 23:02:19 +00:00
Chris Lattner 0ab5e2cded Fix a ton of comment typos found by codespell. Patch by
Luis Felipe Strano Moraes!

llvm-svn: 129558
2011-04-15 05:18:47 +00:00
Benjamin Kramer 25e6e06e42 Try to reuse the value when lowering memset.
This allows us to compile:
  void test(char *s, int a) {
    __builtin_memset(s, a, 15);
  }
into 1 mul + 3 stores instead of 3 muls + 3 stores.

llvm-svn: 122710
2011-01-02 19:57:05 +00:00
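
A rough C-level sketch of the byte-splat trick the commit above describes (illustrative only: the real lowering happens in SelectionDAG, and the exact store offsets below are an assumption):

  #include <stdint.h>
  #include <string.h>

  /* Widen the memset value once with a single multiply, then reuse the
     splatted word for every (possibly overlapping) store. Offsets 0/4/7
     are one way to cover 15 bytes with three 8-byte stores. */
  static void memset15_sketch(unsigned char *s, int a) {
    uint64_t splat = (uint8_t)a * 0x0101010101010101ULL;  /* the one mul */
    memcpy(s + 0, &splat, 8);   /* bytes 0..7  */
    memcpy(s + 4, &splat, 8);   /* bytes 4..11 */
    memcpy(s + 7, &splat, 8);   /* bytes 7..14 */
  }
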
Eli Friedman ba1f1fcae5 Add back some possible optimizations for va_arg, with wording that makes it
clearer exactly what is missing.

llvm-svn: 105934
2010-06-14 07:03:30 +00:00
Eli Friedman ab44d1281a A few new x86-64-specific README entries.
llvm-svn: 105674
2010-06-09 02:43:17 +00:00
Eli Friedman 6382c9c681 Remove outdated README entries.
llvm-svn: 105303
2010-06-02 00:10:36 +00:00
Chris Lattner 3219d85f16 add a note about dead zero extends.
llvm-svn: 78511
2009-08-08 22:46:59 +00:00
Dan Gohman 29705333e5 The x86-64 red zone is now being used.
llvm-svn: 64535
2009-02-14 03:30:05 +00:00
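
The red zone is the 128 bytes below %rsp that the x86-64 SysV ABI guarantees signal handlers will not clobber, so leaf functions can keep small locals there without adjusting %rsp. A minimal sketch of the kind of function that benefits (an illustration, not code from this commit; whether the locals survive optimization at all depends on the compiler):

  /* Leaf function: with the red zone in use, tmp can live at negative
     offsets from %rsp with no sub/add of %rsp in the prologue/epilogue. */
  int sum4(const int *p) {
    int tmp[4];
    for (int i = 0; i < 4; ++i)
      tmp[i] = p[i];
    return tmp[0] + tmp[1] + tmp[2] + tmp[3];
  }
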
Dan Gohman fd18d630bc i128 and f80 are implemented for x86-64 now.
llvm-svn: 55920
2008-09-08 16:42:56 +00:00
Dan Gohman f166d2d0d6 Implement an x86-64 ABI detail of passing structs by hidden first
argument. The x86-64 ABI requires the incoming value of %rdi to
be copied to %rax on exit from a function that is returning a
large C struct.

Also, add a README-X86-64 entry detailing the missed optimization
opportunity and proposing an alternative approach.

llvm-svn: 50075
2008-04-21 23:59:07 +00:00
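
A minimal C sketch of the situation the commit above describes (the struct and function names are made up): a struct too large to return in registers is returned through a hidden pointer passed in %rdi, and that same pointer must be in %rax when the function returns.

  struct Big { long v[8]; };   /* too large to come back in registers */

  struct Big make_big(long x) {
    struct Big b = { { x, x, x, x, x, x, x, x } };
    return b;   /* result written through the hidden %rdi pointer;
                   %rdi must be copied to %rax before the ret */
  }
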
Chris Lattner 83263b8cfb Make X86TargetLowering::LowerSINT_TO_FP return without creating a dead
stack slot and store when the SINT_TO_FP is actually legal. This allows
us to compile:

double a(double b) {return (unsigned)b;}

to:

_a:
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	ret

instead of:

_a:
	subq	$8, %rsp
	cvttsd2siq	%xmm0, %rax
	movl	%eax, %eax
	cvtsi2sdq	%rax, %xmm0
	addq	$8, %rsp
	ret

The subq/addq pair just creates and tears down a dead stack slot.

llvm-svn: 47660
2008-02-27 05:57:41 +00:00
Chris Lattner 5fe95a04f5 this code is correct but strange-looking ;-)
llvm-svn: 47659
2008-02-27 05:48:44 +00:00
Chris Lattner 3c7d3d5700 Compile x86-64-and-mask.ll into:
_test:
	movl	%edi, %eax
	ret

instead of:

_test:
        movl    $4294967295, %ecx
        movq    %rdi, %rax
        andq    %rcx, %rax
        ret

It would be great to write this as a Pat pattern that uses subregs
instead of a 'pseudo' instruction, but I don't know how to do that
in .td files.

llvm-svn: 47658
2008-02-27 05:47:54 +00:00
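
The test file itself is not reproduced here, but presumably x86-64-and-mask.ll exercises a pattern along these lines (a hedged C equivalent): masking an i64 with 0xFFFFFFFF. A 32-bit mov already zero-extends into the full 64-bit register, so movl %edi, %eax is all that is needed.

  unsigned long long test(unsigned long long x) {
    return x & 0xFFFFFFFFULL;   /* expected: movl %edi, %eax ; ret */
  }
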
Chris Lattner 3f86109fd1 add a note
llvm-svn: 47652
2008-02-27 01:17:20 +00:00
Evan Cheng e32e923a6a divb / mulb output to AH. Under x86-64 it's not legal to read AH if the
instruction requires a REX prefix (e.g. one that writes to r8b). So instead,
shift AX right by 8 and then truncate the result to 8 bits.
llvm-svn: 40972
2007-08-09 21:59:35 +00:00
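
A small C example of where the commit above matters (the function name is illustrative): the remainder of an 8-bit unsigned divide comes back in AH, and "movb %ah, %r8b" cannot be encoded once a REX prefix is involved, so codegen shifts AX right by 8 and uses the low byte instead.

  unsigned char urem8(unsigned char a, unsigned char b) {
    return a % b;   /* divb leaves the quotient in AL, remainder in AH */
  }
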
Chris Lattner a6527d6a61 Dan pointed out that this is done, remove it!
llvm-svn: 35430
2007-03-28 17:26:52 +00:00
Evan Cheng dd60ca029c - Switch X86-64 JIT to large code size model.
- Re-enable some codegen niceties for X86-64 static relocation model codegen.
- Clean ups, etc.

llvm-svn: 32238
2006-12-05 19:50:18 +00:00
Evan Cheng 830f224bf5 Update
llvm-svn: 32214
2006-12-05 03:58:23 +00:00
Evan Cheng 11b0a5dbd4 Committing X86-64 support.
llvm-svn: 30177
2006-09-08 06:48:29 +00:00