Commit Graph

103 Commits

Author SHA1 Message Date
Anton Korobeynikov 383a324735 Long live exception handling!
This patch fills in the last bits necessary to enable exception
handling in LLVM. Currently it works only on x86-32/linux.

In fact, this patch adds the necessary intrinsics (and their lowering), which
represent some really weird target-specific gcc builtins used inside the unwinder.

After the corresponding llvm-gcc patch lands (easy), exceptions should be
more or less workable. However, exception handling support should not be
thought of as 'finished': I expect many small and not-so-small glitches
everywhere.

llvm-svn: 39855
2007-07-14 14:06:15 +00:00
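For context, a hedged sketch of the kind of target-specific gcc builtin the new
intrinsics represent; the libgcc unwinder uses __builtin_eh_return to install a
landing-pad context (the actual intrinsic names added by the patch are not
shown here):

/* Illustrative only: adjust the stack by `offset` and jump to `handler`. */
void install_context(long offset, void *handler) {
  __builtin_eh_return(offset, handler);  /* does not return */
}
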
Dan Gohman 57111e7a60 Define non-intrinsic instructions for vector min, max, sqrt, rsqrt, and rcp,
in addition to the intrinsic forms. Add spill-folding entries for these new
instructions, and for the scalar min and max intrinsic instructions, which
were missing. Also add some preliminary ISelLowering code for using the new
non-intrinsic vector sqrt instruction, and for fneg and fabs.

llvm-svn: 38478
2007-07-10 00:05:58 +00:00
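As context, a hedged sketch of code that can now be selected without
intrinsics; fneg and fabs are plain sign-bit arithmetic, so they can be lowered
to xorps/andps against a sign-mask constant (the instruction choices in the
comments are assumptions, not taken from the patch):

/* Illustrative only. */
float negate(float x)   { return -x; }                   /* xorps sign-mask, %xmm0 */
float absolute(float x) { return __builtin_fabsf(x); }   /* andps inv-sign-mask, %xmm0 */
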
Dan Gohman 309d3d51b3 Move ComputeMaskedBits, MaskedValueIsZero, and ComputeNumSignBits from
TargetLowering to SelectionDAG so that they have more convenient
access to the current DAG, in preparation for the ValueType routines
being changed from standalone functions to members of SelectionDAG for
the pre-legalize vector type changes.

llvm-svn: 37704
2007-06-22 14:59:07 +00:00
Bill Wendling 591eab8844 Support for the special case of a vector with the canonical form:
vector_shuffle v1, v2, <2, 6, 3, 7>

I.e., when both operands are the same vector v:

         vector_shuffle v, undef, <2, 2, 3, 3>

MMX only has a shuffle for v4i16 vectors, so it needs to use unpackh for
this type of operation.

llvm-svn: 36403
2007-04-24 21:16:55 +00:00
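A hedged illustration via the MMX intrinsics: unpacking a v4i16 value against
itself yields exactly the <2, 2, 3, 3> pattern above.

#include <mmintrin.h>

/* punpckhwd of v with itself: <v2, v2, v3, v3> */
__m64 dup_high_halves(__m64 v) {
  return _mm_unpackhi_pi16(v, v);
}
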
Lauro Ramos Venancio 2518889872 Implement "general dynamic", "initial exec" and "local exec" TLS models for
32-bit X86.

llvm-svn: 36283
2007-04-20 21:38:10 +00:00
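A hedged sketch of the three models on x86-32; the sequences below follow the
standard ELF TLS ABI, and the exact registers may differ from what the backend
emits:

__thread int x;

int get(void) { return x; }

/* general dynamic (works from any shared object):
        leal    x@TLSGD(,%ebx,1), %eax
        call    ___tls_get_addr@PLT
   initial exec (module known to be loaded at startup):
        movl    x@INDNTPOFF, %ecx
        movl    %gs:(%ecx), %eax
   local exec (variable lives in the executable itself):
        movl    %gs:0, %eax
        movl    x@NTPOFF(%eax), %eax                      */
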
Anton Korobeynikov 8b7aab009e Implemented correct stack probing on mingw/cygwin for dynamic allocas.
Also fixed the static case in the presence of a live-in eax. This fixes PR331.

PS: Why don't we still have push/pop instructions? :)
llvm-svn: 36195
2007-04-17 09:20:00 +00:00
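A hedged example of the dynamic case: a variable-sized allocation on
mingw/cygwin must grow the stack a page at a time through the probe routine
rather than adjusting %esp in one step.

/* Illustrative only: a large or unknown n is lowered to a probed allocation. */
void f(unsigned n) {
  char *buf = __builtin_alloca(n);
  __builtin_memset(buf, 0, n);
}
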
Chris Lattner 808ac93f68 remove some dead hooks
llvm-svn: 35845
2007-04-09 23:31:19 +00:00
Chris Lattner 39f65335d5 remove some dead target hooks, subsumed by isLegalAddressingMode
llvm-svn: 35840
2007-04-09 22:27:04 +00:00
Chris Lattner 1eb94d973a implement the new addressing mode description hook.
llvm-svn: 35521
2007-03-30 23:15:24 +00:00
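For context, a hedged example of what the hook describes: x86 allows
base + scale*index + displacement with scale in {1, 2, 4, 8}, so an address
like the one below can stay folded into a single memory operand.

/* Illustrative only: the whole address computation folds into one operand. */
int load(int *base, int i) {
  return base[2*i + 3];   /* e.g. movl 12(%eax,%ecx,8), %eax */
}
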
Chris Lattner d685514e2e switch TargetLowering::getConstraintType to take the entire constraint,
not just the first letter.  No functionality change.

llvm-svn: 35322
2007-03-25 02:14:49 +00:00
Dale Johannesen 0c6bb5eab7 repair x86 performance and dejagnu problems from the previous change
llvm-svn: 35245
2007-03-21 21:51:52 +00:00
Evan Cheng 3ab7ea7965 More flexible TargetLowering LSR hooks for testing whether an immediate is
a legal target address immediate or scale.

llvm-svn: 35073
2007-03-12 23:28:50 +00:00
Evan Cheng deaea25eb9 X86-64 VACOPY needs custom expansion. va_list is a struct { i32, i32, i8*, i8* }.
llvm-svn: 34857
2007-03-02 23:16:35 +00:00
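For reference, a hedged sketch of that structure as the x86-64 psABI defines
it; va_copy must therefore copy the whole 24-byte record, not just a pointer:

/* Field names follow the psABI; the typedef name here is made up. */
typedef struct {
  unsigned int gp_offset;    /* i32: offset into reg_save_area for GP regs */
  unsigned int fp_offset;    /* i32: offset into reg_save_area for FP regs */
  void *overflow_arg_area;   /* i8*: next argument passed on the stack */
  void *reg_save_area;       /* i8*: where argument registers were spilled */
} va_list_sketch[1];
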
Chris Lattner 3ed3be3b4a remove fastcc (not fastcall) support
llvm-svn: 34730
2007-02-28 06:05:16 +00:00
Chris Lattner 74f5bcf8eb add an accessor.
llvm-svn: 34625
2007-02-26 04:01:25 +00:00
Chris Lattner 7802f3e2ea pass the calling convention into Lower*CallTo, instead of using ad-hoc flags.
llvm-svn: 34587
2007-02-25 09:06:15 +00:00
Chris Lattner 0cd9960fe7 factor a bunch of code out of LowerCCCCallTo into a new LowerCallResult
function.  This function now uses GetRetValueLocs to determine *where*
the result values are located and concerns itself with *how* to pull the
values out.

llvm-svn: 34586
2007-02-25 08:59:22 +00:00
Chris Lattner dfda38f7dc simplify result value lowering by splitting the selection of *where* to return
registers out from the logic of *how* to return them.

This changes X86-64 to mark EAX live out when returning a 32-bit value,
where before it marked RAX liveout.

llvm-svn: 34582
2007-02-25 08:15:11 +00:00
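A hedged illustration of the liveness change:

/* Returning an i32 on X86-64:
       movl $42, %eax
       ret
   EAX, not RAX, is what the return defines. */
int f(void) { return 42; }
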
Nate Begeman eda5997cc8 Finish off bug 680, allowing targets to custom lower frame and return
address nodes.

llvm-svn: 33636
2007-01-29 22:58:52 +00:00
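The nodes in question come from the corresponding builtins; a hedged example:

/* Targets can now custom-lower the nodes behind these builtins. */
void *return_addr(void) { return __builtin_return_address(0); }
void *frame_addr(void)  { return __builtin_frame_address(0); }
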
Anton Korobeynikov 037c867b54 Propagate changes from my local tree. This patch includes:
1. A new parameter attribute called 'inreg'. It means "place this
parameter in registers, if possible". It is a generalization of
gcc's regparm(n) attribute. It's currently used only in the X86-32 backend.
2. Completely rewritten CC handling/lowering code inside the X86 backend.
Merged the stdcall + c CCs and the fastcall + fast CCs.
3. Dropped the CSRET CC. We cannot add a struct-return variant for each
target-specific CC (e.g. stdcall + csretcc and so on).
4. Instead of the CSRET CC, introduced an 'sret' parameter attribute.
Setting it on the first parameter means 'this is a hidden pointer to the
structure return value; handle it gently'.
5. Fixed a small bug in llvm-extract and added a new feature to the
FunctionExtraction pass, which relinks all internal-linkage callees of a
deleted function to external linkage. This allows everything to be linked
together afterwards.

NOTEs: 1. Documentation will be updated soon.
       2. llvm-upgrade should be improved to translate csret => sret.
          Before that, there will be some unexpected test failures.
llvm-svn: 33597
2007-01-28 13:31:35 +00:00
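A hedged sketch of the source-level features the two new attributes model:
gcc's regparm for 'inreg', and hidden-pointer struct return for 'sret'.

struct Big { int a, b, c; };

/* regparm: the first two integer arguments arrive in registers ('inreg'). */
int __attribute__((regparm(2))) add(int x, int y) { return x + y; }

/* Struct return: the caller passes a hidden pointer for the result; that
   hidden parameter is what 'sret' marks. */
struct Big make(void) { struct Big r = {1, 2, 3}; return r; }
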
Evan Cheng 82241c86e9 - Fix a FCOPYSIGN custom lowering bug: clear the sign bit of operand 0 first before
or'ing in the sign bit of operand 1.
- Tweak: rather than left-shifting the sign bit, fp_extend operand 1 first
  before taking its sign bit if its type is smaller than that of operand 0.

llvm-svn: 32932
2007-01-05 21:37:56 +00:00
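A hedged sketch of the corrected bit arithmetic, written on the integer view
of the float bits (the SSE expansion in the next entry performs the same
and/or dance with constant-pool masks):

#include <stdint.h>
#include <string.h>

float copysign_sketch(float x, float y) {
  uint32_t xb, yb;
  memcpy(&xb, &x, sizeof xb);
  memcpy(&yb, &y, sizeof yb);
  xb = (xb & 0x7fffffffu)    /* clear the sign bit of operand 0 first */
     | (yb & 0x80000000u);   /* then or in the sign bit of operand 1 */
  memcpy(&x, &xb, sizeof x);
  return x;
}
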
Evan Cheng 4363e884c0 With SSE2, expand FCOPYSIGN to a series of SSE bitwise operations.
llvm-svn: 32900
2007-01-05 07:55:56 +00:00
Evan Cheng ae1cd75af7 - Use a different wrapper node for RIP-relative GV, etc.
- Proper support for both small static and PIC modes under X86-64
- Some (non-optimal) support for medium modes.

llvm-svn: 32046
2006-11-30 21:55:46 +00:00
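A hedged example of the difference the wrapper node captures for a global load:

int g;
int get(void) { return g; }
/*  small static:  movl g(%rip), %eax
    PIC:           movq g@GOTPCREL(%rip), %rax
                   movl (%rax), %eax             */
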
Evan Cheng 49683ba236 Don't dag-combine floating-point select to the max and min intrinsics. Those
take v4f32 / v2f64 operands and may end up causing larger spills / restores.
Added X86-specific nodes X86ISD::FMAX and X86ISD::FMIN instead.

This fixes PR996.

llvm-svn: 31645
2006-11-10 21:43:37 +00:00
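A hedged example of the pattern: a scalar floating-point select like this can
now be selected as minss/maxss through the new nodes, without pretending the
value is a full v4f32 (NaN and signed-zero subtleties are glossed over here).

float fmin2(float a, float b) { return a < b ? a : b; }   /* minss %xmm1, %xmm0 */
float fmax2(float a, float b) { return a > b ? a : b; }   /* maxss %xmm1, %xmm0 */
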
Evan Cheng 922e191116 Fixed a bug which caused x86 to incorrectly match
shuffle v, undef, <2, ?, 3, ?>
to movhlps
It should match to unpckhps instead.

Added proper matching code for
shuffle v, undef, <2, 3, 2, 3>

llvm-svn: 31519
2006-11-07 22:14:24 +00:00
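A hedged illustration of why, writing each result in terms of the source
elements when both operands are the same vector:

#include <xmmintrin.h>

/* unpckhps v, v -> <2, 2, 3, 3>: matches shuffle v, undef, <2, ?, 3, ?> */
__m128 via_unpckhps(__m128 v) { return _mm_unpackhi_ps(v, v); }

/* movhlps v, v -> <2, 3, 2, 3>: matches shuffle v, undef, <2, 3, 2, 3> */
__m128 via_movhlps(__m128 v)  { return _mm_movehl_ps(v, v); }
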
Chris Lattner 44daa50bed allow the address of a global to be used with the "i" constraint when in
-static mode.  This implements PR882.

llvm-svn: 31326
2006-10-31 20:13:11 +00:00
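A hedged example: under -static the address of a global is a link-time
constant, so it can satisfy the immediate constraint.

int g;

/* %1 prints as the immediate $g when linking statically. */
void *addr_of_g(void) {
  void *p;
  __asm__("movl %1, %0" : "=r"(p) : "i"(&g));
  return p;
}
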
Evan Cheng e056dd5928 Fixed a significant bug where unpcklpd was incorrectly used to extract element 1 from a v2f64 value.
llvm-svn: 31228
2006-10-27 21:08:32 +00:00
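A hedged illustration of the distinction: the low unpack duplicates element 0,
so element 1 must come from the high unpack.

#include <emmintrin.h>

/* unpckhpd v, v -> <1, 1>; its low element is element 1 of v. */
double extract_elt1(__m128d v) {
  return _mm_cvtsd_f64(_mm_unpackhi_pd(v, v));
}
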
Chris Lattner c0fb567e23 Implement branch analysis/xform hooks required by the branch folding pass.
llvm-svn: 31065
2006-10-20 17:42:20 +00:00
Chris Lattner f4aeff00c2 fit in 80 cols
llvm-svn: 31039
2006-10-18 18:26:48 +00:00
Chris Lattner d9e4bf5285 update comments
llvm-svn: 30663
2006-09-28 23:33:12 +00:00
Anton Korobeynikov 3c5b3df6a0 Adding code generation for the StdCall & FastCall calling conventions
llvm-svn: 30549
2006-09-20 22:03:51 +00:00
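A hedged sketch of the two conventions on x86-32: stdcall passes everything on
the stack and the callee pops its arguments; fastcall passes the first two
integer arguments in ecx/edx.

int __attribute__((stdcall))  f(int a, int b) { return a + b; }   /* ends in ret $8 */
int __attribute__((fastcall)) g(int a, int b) { return a + b; }   /* a in %ecx, b in %edx */
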
Evan Cheng 4259a0f654 X86ISD::CMP now produces a chain as well as a flag. Make that the chain
operand of a conditional branch to allow load folding into CMP / TEST
instructions.

llvm-svn: 30241
2006-09-11 02:19:56 +00:00
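A hedged example of the folding this enables: the load feeds the compare
directly as a memory operand instead of needing a separate move.

int is_zero(int *p) {
  return *p == 0;   /* cmpl $0, (%eax): the load is folded into the compare */
}
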
Evan Cheng 11b0a5dbd4 Committing X86-64 support.
llvm-svn: 30177
2006-09-08 06:48:29 +00:00
Chris Lattner 524129dd64 Fix PR850 and CodeGen/X86/2006-07-31-SingleRegClass.ll.
The CFE refers to all single-register constraints (like "A") by their 16-bit
name, even though the 8- or 32-bit version of the register may be needed.
The X86 backend should realize what is going on and redecode the name back
to its proper form.

llvm-svn: 29420
2006-07-31 23:26:50 +00:00
Chris Lattner 298ef37e02 Implement the inline asm 'A' constraint. This implements PR825 and
CodeGen/X86/2006-07-10-InlineAsmAConstraint.ll

llvm-svn: 29101
2006-07-11 02:54:03 +00:00
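A hedged example of the 'A' constraint (the edx:eax register pair), as in the
classic rdtsc idiom:

unsigned long long read_tsc(void) {
  unsigned long long t;
  __asm__ __volatile__("rdtsc" : "=A"(t));
  return t;
}
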
Evan Cheng 5987cfb7b1 X86 target specific DAG combine: turn build_vector (load x), (load x+4),
(load x+8), (load x+12), <0, 1, 2, 3> into a single 128-bit load (aligned or
unaligned).

e.g.

__m128 test(float a, float b, float c, float d) {
  return _mm_set_ps(d, c, b, a);
}

_test:
        movups 4(%esp), %xmm0
        ret

llvm-svn: 29042
2006-07-07 08:33:52 +00:00
Evan Cheng 38c5aee959 Simplify X86CompilationCallback: always align to 16-byte boundary; don't save EAX/EDX if unnecessary.
llvm-svn: 28910
2006-06-24 08:36:10 +00:00
Evan Cheng 2a33094284 Switch X86 over to a call-selection model where the lowering code creates
the copyto/fromregs instead of making the X86ISD::CALL selection code create
them.

llvm-svn: 28463
2006-05-25 00:59:30 +00:00
Chris Lattner aa2372562e Patches to make the LLVM sources more -pedantic clean. Patch provided
by Anton Korobeynikov!  This is a step towards closing PR786.

llvm-svn: 28447
2006-05-24 17:04:05 +00:00
Evan Cheng 17e734f0a6 Remove PreprocessCCCArguments and PreprocessFastCCArguments now that
FORMAL_ARGUMENTS nodes include a token operand.

llvm-svn: 28439
2006-05-23 21:06:34 +00:00
Chris Lattner 8be5be817c Implement an annoying part of the Darwin/X86 ABI: for a function with a
hidden struct-return argument, the callee pops the hidden struct pointer,
not the caller.

For example, in this testcase:

struct X { int D, E, F, G; };
struct X bar() {
  struct X a;
  a.D = 0;
  a.E = 1;
  a.F = 2;
  a.G = 3;
  return a;
}
void foo(struct X *P) {
  *P = bar();
}

We used to emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
        addl $28, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
        ret

This is correct on Linux/X86 but not Darwin/X86.  With this patch, we now
emit:

_foo:
        subl $28, %esp
        movl 32(%esp), %eax
        movl %eax, (%esp)
        call _bar
***     addl $24, %esp
        ret
_bar:
        movl 4(%esp), %eax
        movl $0, (%eax)
        movl $1, 4(%eax)
        movl $2, 8(%eax)
        movl $3, 12(%eax)
***     ret $4

For the record, GCC emits (which is functionally equivalent to our new code):

_bar:
        movl    4(%esp), %eax
        movl    $3, 12(%eax)
        movl    $2, 8(%eax)
        movl    $1, 4(%eax)
        movl    $0, (%eax)
        ret     $4
_foo:
        pushl   %esi
        subl    $40, %esp
        movl    48(%esp), %esi
        leal    16(%esp), %eax
        movl    %eax, (%esp)
        call    _bar
        subl    $4, %esp
        movl    16(%esp), %eax
        movl    %eax, (%esi)
        movl    20(%esp), %eax
        movl    %eax, 4(%esi)
        movl    24(%esp), %eax
        movl    %eax, 8(%esi)
        movl    28(%esp), %eax
        movl    %eax, 12(%esi)
        addl    $40, %esp
        popl    %esi
        ret

This fixes SingleSource/Benchmarks/CoyoteBench/fftbench with LLC and the
JIT, and fixes the X86-backend portion of PR729.  The CBE still needs to
be updated.

llvm-svn: 28438
2006-05-23 18:50:38 +00:00
Evan Cheng 8c6b234ce8 Should pass by reference.
llvm-svn: 28357
2006-05-17 19:07:40 +00:00
Evan Cheng 48940d16b2 - Clean up formal argument lowering code. Prepare for vector pass-by-value work.
- Fixed vararg support.

llvm-svn: 27985
2006-04-27 01:32:22 +00:00
Evan Cheng e0bcfbe811 Switching over to the FORMAL_ARGUMENTS mechanism to lower call arguments.
llvm-svn: 27975
2006-04-26 01:20:17 +00:00
Evan Cheng a9467aab0a Separate LowerOperation() into multiple functions, one per opcode.
llvm-svn: 27972
2006-04-25 20:13:52 +00:00
Evan Cheng e8b5180044 Now generating perfect (I think) code for "vector set" with a single non-zero
scalar value.

e.g.
        _mm_set_epi32(0, a, 0, 0);
==>
	movd 4(%esp), %xmm0
	pshufd $69, %xmm0, %xmm0

        _mm_set_epi8(0, 0, 0, 0, 0, a, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0);
==>
	movzbw 4(%esp), %ax
	movzwl %ax, %eax
	pxor %xmm0, %xmm0
	pinsrw $5, %eax, %xmm0

llvm-svn: 27923
2006-04-21 01:05:10 +00:00
Evan Cheng 60f0b8998e - Added support to turn "vector clear elements", e.g. pand V, <-1, -1, 0, -1>,
into a vector shuffle.
- VECTOR_SHUFFLE lowering change in preparation for more efficient codegen
of vector shuffles with a zero (or any splat) vector.

llvm-svn: 27875
2006-04-20 08:58:49 +00:00
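A hedged example of the input pattern: masking one lane to zero with pand,
which can now be matched as a shuffle with a zero vector instead.

#include <emmintrin.h>

/* Clears element 2; note that _mm_set_epi32 lists elements e3..e0. */
__m128i clear_lane2(__m128i v) {
  return _mm_and_si128(v, _mm_set_epi32(-1, 0, -1, -1));
}
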
Evan Cheng 7855e4d032 Commute vector_shuffle to match more movlhps, movlp{s|d} cases.
llvm-svn: 27840
2006-04-19 20:35:22 +00:00
Evan Cheng 5d247f81c1 Last few SSE3 intrinsics.
llvm-svn: 27711
2006-04-14 21:59:03 +00:00
Evan Cheng 12ba3e23d0 Added support for _mm_move_ss and _mm_move_sd.
llvm-svn: 27575
2006-04-11 00:19:04 +00:00
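
A hedged example of the first of these:

#include <xmmintrin.h>

/* movss: take the low element from b and the rest from a. */
__m128 low_from_b(__m128 a, __m128 b) { return _mm_move_ss(a, b); }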