Commit Graph

3132 Commits

Author SHA1 Message Date
Ulrich Weigand ca25643a05 [SystemZ] Add support for vecintrin.h vector built-in functions
This patch adds support for the System Z vector built-in functions.
The API-defined header file has the name vecintrin.h.

The user-level functions are defined in the same style as the clang
version of altivec.h, making heavy use of the __overloadable__ and
__always_inline__ attributes.  Where possible, the functions expand to
generic operations rather than specific built-in functions, in the hope
that this form can be optimised better.

Where a built-in routine is specified to require an immediate integer
argument, the __enable_if__ attribute is used to verify the argument is
in fact constant and in the appropriate range.
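
As a rough sketch of this style (the names vec_example and vec_example_imm
are invented for illustration and are not actual vecintrin.h entry points;
assumes -fzvector):

static inline vector signed int
__attribute__((__overloadable__, __always_inline__))
vec_example(vector signed int __a, vector signed int __b) {
  /* Expands to a generic vector operation rather than a target builtin. */
  return __a + __b;
}

static inline vector signed int
__attribute__((__overloadable__, __always_inline__))
vec_example_imm(vector signed int __a, int __imm)
  __attribute__((__enable_if__(__imm >= 0 && __imm <= 15,
                               "argument must be a constant integer in the range 0..15")));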

Based on a patch by Richard Sandiford.

llvm-svn: 243643
2015-07-30 14:10:43 +00:00
Ulrich Weigand 3c5038a535 Add support for System z vector language extensions
The z13 vector facility has an associated language extension,
closely modeled on AltiVec/VSX.  The main differences are:

- vector long, vector float and vector pixel are not supported

- vector long long and vector double are supported (like VSX)

- comparison operators return a vector rather than a scalar integer

- shift operators behave like the OpenCL shift operators

- vector bool is only supported as an argument to certain operators;
  some operators allow mixing a bool with a non-bool vector

This patch adds clang support for the extension.  It is closely modelled
on the AltiVec support.  Similarly to the -faltivec option, there's a
new -fzvector option to enable the extensions (as well as an -mzvector
alias for compatibility with GCC).  There's also a separate LangOpt.

The extension as implemented here is intended to be compatible with
the -mzvector extension recently implemented by GCC.
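
A hedged sketch of code using the extension (built with -fzvector or
-mzvector; the comparison result type is assumed here to be the
corresponding vector bool type):

vector bool long long compare_gt(vector signed long long a,
                                 vector signed long long b) {
  /* Comparison operators return a vector rather than a scalar integer. */
  return a > b;
}

vector unsigned long long shift_left(vector unsigned long long v,
                                     vector unsigned long long n) {
  /* Shift operators behave like the OpenCL shift operators (element-wise). */
  return v << n;
}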

Based on a patch by Richard Sandiford.

Differential Revision: http://reviews.llvm.org/D11001

llvm-svn: 243642
2015-07-30 14:08:36 +00:00
Michael Kuperstein 2229ca71fd [X86] Recognize "flags" as an identifier, not a register in Intel-syntax inline asm
This contains the test-case for r243630.

Patch by: marina.yatsina@intel.com
Differential Revision: http://reviews.llvm.org/D11513

llvm-svn: 243632
2015-07-30 10:10:47 +00:00
Asaf Badouh 1998eb2077 [X86][AVX512BW] add convert i16 to i8 and unpack intrinsics
Differential Revision: http://reviews.llvm.org/D11564

llvm-svn: 243514
2015-07-29 12:34:20 +00:00
Chih-Hung Hsieh 2c656c9417 Add -femulated-tls flag to select the emulated TLS model.
This will be used for old targets like Android that do not
support ELF TLS models.
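
A hedged usage sketch (the target triple below is only illustrative):

/* tls_example.c */
__thread int counter;

int bump(void) { return ++counter; }

/* Build with emulated TLS, e.g.:
     clang --target=armv7-linux-androideabi -femulated-tls -c tls_example.c */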

Differential Revision: http://reviews.llvm.org/D10524

llvm-svn: 243441
2015-07-28 16:27:56 +00:00
Kristof Beyls 918f8ab7c6 RegParmMax must be 0 for AArch64, as the regparm function attribute is not supported on AArch64.
llvm-svn: 243417
2015-07-28 14:23:47 +00:00
Adhemerval Zanella 3916c910d1 [AArch64] Implement __builtin_thread_pointer
This patch adds the AArch64 __builtin_thread_pointer support.  It will be
lowered to llvm.aarch64.thread.pointer.
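
A minimal usage sketch:

void *get_thread_pointer(void) {
  /* Lowered to llvm.aarch64.thread.pointer, i.e. a read of the thread register. */
  return __builtin_thread_pointer();
}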

llvm-svn: 243413
2015-07-28 13:10:10 +00:00
Asaf Badouh 93aa4c808a [X86][AVX512VL] add AVX512VL intrinsics 4 out of 4
Differential Revision: http://reviews.llvm.org/D11526

llvm-svn: 243409
2015-07-28 12:04:40 +00:00
Asaf Badouh b7cf71b63d [X86][AVX512VL] add AVX512VL intrinsics 3 out of 4
http://reviews.llvm.org/D11526

llvm-svn: 243406
2015-07-28 11:14:09 +00:00
Asaf Badouh 78ee5cc8e1 [X86][AVX512VL] add AVX512VL intrinsics 2 out of 4
http://reviews.llvm.org/D11526

llvm-svn: 243402
2015-07-28 10:30:56 +00:00
Asaf Badouh 74da38706e [X86][AVX512VL] add AVX512VL intrinsics 1 out of 4
http://reviews.llvm.org/D11526

llvm-svn: 243394
2015-07-28 08:26:14 +00:00
Simon Pilgrim 917c4d41ea Fixed test in rL243305
llvm-svn: 243314
2015-07-27 19:49:54 +00:00
Simon Pilgrim f81966d04b [X86] Add missing _m_prefetch intrinsic
The 3DNOW/PRFCHW CPU targets define both the PREFETCHW (set cache line modified) and PREFETCH (set cache line exclusive) instructions, but only the _m_prefetchw (PREFETCHW) intrinsic is included in the header. This patch adds the missing _m_prefetch intrinsic.

I'm basing this off the AMD documentation - the Intel docs aren't clear on whether Silvermont/Broadwell properly support PREFETCH, but given that the intrinsic implementation is a default __builtin_prefetch call, it is safe either way.
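
A hedged usage sketch (assumes the PRFCHW/3DNOW feature is enabled, e.g. via
-mprfchw or -m3dnow, so that x86intrin.h exposes these intrinsics):

#include <x86intrin.h>

void warm_line(void *p) {
  _m_prefetch(p);   /* PREFETCH: prepare the cache line for a read. */
  _m_prefetchw(p);  /* PREFETCHW: prepare the cache line for a write. */
}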

Fix for PR23648

Differential Revision: http://reviews.llvm.org/D11338

llvm-svn: 243305
2015-07-27 19:01:52 +00:00
David Majnemer 72f8c5e0c9 Try to make the buildbots happy
This test was missing a triple, causing it to error out on Windows
targets, which accept a much smaller alignment value.

llvm-svn: 243234
2015-07-26 02:16:35 +00:00
David Majnemer 6cd35912c0 [CodeGen] Don't UBSan-ize the argument to __builtin_frame_address
__builtin_frame_address requires its argument to be a constant
expression which already implies that it cannot have undefined behavior.
However, we used EmitScalarExpr to emit the argument causing UBSan to
try to check for overflow.

Instead, use the constant expression emission system.
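
For reference, a minimal use of the builtin; the argument must be an integer
constant expression:

void *current_frame_address(void) {
  /* 0 selects the current frame; the level is a constant expression, so no
     runtime overflow check is needed (or, after this fix, emitted). */
  return __builtin_frame_address(0);
}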

This fixes PR24256.

llvm-svn: 243206
2015-07-25 05:57:24 +00:00
Chih-Hung Hsieh 0b0eeaaaf6 Correct x86_64 Android fp128 mangled name
These changes are for Android x86_64 targets to be compatible with current Android g++.
https://llvm.org/bugs/show_bug.cgi?id=23897
Use 'g' and 'Cg' for "long double" and "long double _Complex" mangled type names.

Differential Revision: http://reviews.llvm.org/D11466

llvm-svn: 243133
2015-07-24 18:12:54 +00:00
Asaf Badouh f6a58b6dff [X86][AVX512F] Add FP scalar intrinsics
intrinsics for: add/sub/mul/div/min/max in their FP scalar versions

Differential Revision: http://reviews.llvm.org/D11418

llvm-svn: 243009
2015-07-23 12:13:32 +00:00
Asaf Badouh 7d99966e91 [X86][AVX512BW] add madd and maddubs intrinsics
Differential Revision: http://reviews.llvm.org/D11420

llvm-svn: 242986
2015-07-23 07:07:25 +00:00
Asaf Badouh ffeb624483 [X86][AVX512F] add FP arithmetic intrinsics
add/div/mul/sub, including rounding versions

Differential Revision: http://reviews.llvm.org/D11354

llvm-svn: 242790
2015-07-21 15:27:28 +00:00
David Majnemer 1bf0f8ede6 [MS Compat] Add support for __declspec(noalias)
The attribute '__declspec(noalias)' communicates that the function only
accesses memory pointed to by its pointer-typed arguments.
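
A hedged sketch of a declaration using the attribute (MS-compatibility mode,
e.g. clang-cl; the function name is illustrative):

/* The callee promises to access memory only through its pointer-typed
   arguments, which helps alias analysis at call sites. */
__declspec(noalias) void saxpy(float *dst, const float *x, float a, int n);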

llvm-svn: 242728
2015-07-20 22:51:52 +00:00
Yunzhong Gao d65200cbfd Fix quoting of #pragma comment for PS4.
This is the PS4 counterpart to r229376, which quotes the library name if the
name contains a space. It was discovered that if a library name contains both
double-quote and space characters, quoting the name might produce unexpected
results, but we are mostly concerned with a Windows host environment, which
does not allow double-quote or slashes in file/folder names.

Differential Revision: http://reviews.llvm.org/D11275

llvm-svn: 242689
2015-07-20 17:46:56 +00:00
Benjamin Kramer b596056413 [CodeGen] Flip lanes when lowering __builtin_palignr with one lane
Otherwise we'd pick the wrong lane for the resulting shuffle and
miscompile code. PR24187.

llvm-svn: 242678
2015-07-20 15:31:17 +00:00
Alexey Bataev 91e5860fad [X86, inlineasm] Improve analysis of x,Y0,Yi,Ym,Yt,L,e,Z,s asm constraints (patch by Alexey Frolov)
Improve Sema checking of 9 existing inline asm constraints (‘x’, ‘Y*’, ‘L’, ‘e’, ‘Z’, ‘s’).
Differential Revision: http://reviews.llvm.org/D10536

llvm-svn: 242665
2015-07-20 12:08:00 +00:00
Asaf Badouh d4419ca657 [X86][AVX512BW] add clang intrinsics for pmulhrsw / pmulhuw / pmulhw
Also made a minor fix in "test_mm512_maskz_permutex2var_epi16".

Differential Revision: http://reviews.llvm.org/D11336

llvm-svn: 242635
2015-07-19 08:47:31 +00:00
Steven Wu 3db51cbc21 Fix test case in r242565
llvm-svn: 242571
2015-07-17 20:49:01 +00:00
Steven Wu 546a19628b Fix -save-temps when using objc-arc, sanitizer and profiling
Currently, -save-temps will cause the ObjCARC optimization to be dropped,
the sanitizer pass to run early in the pipeline, and profiling
instrumentation to run twice.
Fix the issue by properly disabling all passes in the optimization
pipeline when generating bitcode output and by parsing some of the
Language Options even when the input is bitcode so the passes can be
set up correctly.

llvm-svn: 242565
2015-07-17 20:09:56 +00:00
Aaron Ballman 7572e58b66 Disable #pragma redefine_extname for C++ code as it does not make sense in such a context.
Patch by Andrey Bokhanko!
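
For context, a hedged C-only example of the pragma (it is now disabled in
C++ translation units):

/* References to open64 below are emitted against the symbol "open". */
#pragma redefine_extname open64 open
extern int open64(const char *path, int flags, ...);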

llvm-svn: 242420
2015-07-16 17:06:53 +00:00
Akira Hatanaka 580efb2475 [ARM] Pass subtarget feature "+no-movt" instead of passing backend option
"-arm-use-movt=0".
        
This change is needed since backend options do not make it to the backend
when doing LTO and are not capable of changing the behavior of code-gen
passes on a per-function basis.

rdar://problem/21529937

Differential Revision: http://reviews.llvm.org/D11025

llvm-svn: 242368
2015-07-16 00:43:00 +00:00
Bill Schmidt f4aa8fe4aa [PPC64] Update tests for vec_sld
Revision 224297 modified the behavior of vec_sld for little endian so
that LLVM will generate the correct corresponding vsldoi instruction.
I neglected to update the existing tests, which continued to pass
because they were not specific enough.  This patch adds enough
specificity to the tests to make them useful for BE and LE testing of
vec_sld.

llvm-svn: 242313
2015-07-15 18:55:02 +00:00
Nemanja Ivanovic 6c363ed67a Add missing builtins to altivec.h for ABI compliance (vol. 4)
This patch corresponds to review:
http://reviews.llvm.org/D11184

A number of new interfaces for altivec.h (as mandated by the ABI):
vector float vec_cpsgn(vector float, vector float)
vector double vec_cpsgn(vector double, vector double)
vector double vec_or(vector bool long long, vector double)
vector double vec_or(vector double, vector bool long long)
vector double vec_re(vector double)
vector signed char vec_cntlz(vector signed char)
vector unsigned char vec_cntlz(vector unsigned char)
vector short vec_cntlz(vector short)
vector unsigned short vec_cntlz(vector unsigned short)
vector int vec_cntlz(vector int)
vector unsigned int vec_cntlz(vector unsigned int)
vector signed long long vec_cntlz(vector signed long long)
vector unsigned long long vec_cntlz(vector unsigned long long)
vector signed char vec_nand(vector bool signed char, vector signed char)
vector signed char vec_nand(vector signed char, vector bool signed char)
vector signed char vec_nand(vector signed char, vector signed char)
vector unsigned char vec_nand(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_nand(vector unsigned char, vector unsigned char)
vector short vec_nand(vector bool short, vector short)
vector short vec_nand(vector short, vector bool short)
vector short vec_nand(vector short, vector short)
vector unsigned short vec_nand(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_nand(vector unsigned short, vector unsigned short)
vector int vec_nand(vector bool int, vector int)
vector int vec_nand(vector int, vector bool int)
vector int vec_nand(vector int, vector int)
vector unsigned int vec_nand(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_nand(vector unsigned int, vector unsigned int)
vector signed long long vec_nand(vector bool long long, vector signed long long)
vector signed long long vec_nand(vector signed long long, vector bool long long)
vector signed long long vec_nand(vector signed long long, vector signed long long)
vector unsigned long long vec_nand(vector bool long long, vector unsigned long long)
vector unsigned long long vec_nand(vector unsigned long long, vector bool long long)
vector unsigned long long vec_nand(vector unsigned long long, vector unsigned long long)
vector signed char vec_orc(vector bool signed char, vector signed char)
vector signed char vec_orc(vector signed char, vector bool signed char)
vector signed char vec_orc(vector signed char, vector signed char)
vector unsigned char vec_orc(vector bool unsigned char, vector unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector bool unsigned char)
vector unsigned char vec_orc(vector unsigned char, vector unsigned char)
vector short vec_orc(vector bool short, vector short)
vector short vec_orc(vector short, vector bool short)
vector short vec_orc(vector short, vector short)
vector unsigned short vec_orc(vector bool unsigned short, vector unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector bool unsigned short)
vector unsigned short vec_orc(vector unsigned short, vector unsigned short)
vector int vec_orc(vector bool int, vector int)
vector int vec_orc(vector int, vector bool int)
vector int vec_orc(vector int, vector int)
vector unsigned int vec_orc(vector bool unsigned int, vector unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector bool unsigned int)
vector unsigned int vec_orc(vector unsigned int, vector unsigned int)
vector signed long long vec_orc(vector bool long long, vector signed long long)
vector signed long long vec_orc(vector signed long long, vector bool long long)
vector signed long long vec_orc(vector signed long long, vector signed long long)
vector unsigned long long vec_orc(vector bool long long, vector unsigned long long)
vector unsigned long long vec_orc(vector unsigned long long, vector bool long long)
vector unsigned long long vec_orc(vector unsigned long long, vector unsigned long long)
vector signed char vec_div(vector signed char, vector signed char)
vector unsigned char vec_div(vector unsigned char, vector unsigned char)
vector signed short vec_div(vector signed short, vector signed short)
vector unsigned short vec_div(vector unsigned short, vector unsigned short)
vector signed int vec_div(vector signed int, vector signed int)
vector unsigned int vec_div(vector unsigned int, vector unsigned int)
vector signed long long vec_div(vector signed long long, vector signed long long)
vector unsigned long long vec_div(vector unsigned long long, vector unsigned long long)
vector unsigned char vec_mul(vector unsigned char, vector unsigned char)
vector unsigned int vec_mul(vector unsigned int, vector unsigned int)
vector unsigned long long vec_mul(vector unsigned long long, vector unsigned long long)
vector unsigned short vec_mul(vector unsigned short, vector unsigned short)
vector signed char vec_mul(vector signed char, vector signed char)
vector signed int vec_mul(vector signed int, vector signed int)
vector signed long long vec_mul(vector signed long long, vector signed long long)
vector signed short vec_mul(vector signed short, vector signed short)
vector signed long long vec_mergeh(vector signed long long, vector signed long long)
vector signed long long vec_mergeh(vector signed long long, vector bool long long)
vector signed long long vec_mergeh(vector bool long long, vector signed long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergeh(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergeh(vector bool long long, vector unsigned long long)
vector double vec_mergeh(vector double, vector double)
vector double vec_mergeh(vector double, vector bool long long)
vector double vec_mergeh(vector bool long long, vector double)
vector signed long long vec_mergel(vector signed long long, vector signed long long)
vector signed long long vec_mergel(vector signed long long, vector bool long long)
vector signed long long vec_mergel(vector bool long long, vector signed long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_mergel(vector unsigned long long, vector bool long long)
vector unsigned long long vec_mergel(vector bool long long, vector unsigned long long)
vector double vec_mergel(vector double, vector double)
vector double vec_mergel(vector double, vector bool long long)
vector double vec_mergel(vector bool long long, vector double)
vector signed int vec_pack(vector signed long long, vector signed long long)
vector unsigned int vec_pack(vector unsigned long long, vector unsigned long long)
vector bool int vec_pack(vector bool long long, vector bool long long)
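
A hedged usage sketch of a few of the interfaces above (assumes a
POWER8/VSX-capable target, e.g. -mcpu=pwr8 -maltivec):

#include <altivec.h>

vector unsigned int count_leading_zeros(vector unsigned int v) {
  return vec_cntlz(v);
}

vector signed int bitwise_nand(vector signed int a, vector signed int b) {
  return vec_nand(a, b);
}

vector double merge_high(vector double a, vector double b) {
  return vec_mergeh(a, b);
}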

llvm-svn: 242171
2015-07-14 17:50:27 +00:00
Asaf Badouh 1626545667 [x86] add 2 bits to ObjCOrBuiltinID and new intrinsics
Add 2 bits to ObjCOrBuiltinID (changed from 11 bits to 13 bits); see the discussion in the review linked below.
Add support for new intrinsics that are already covered by the backend.
All the intrinsics are covered by tests.

Differential Revision: http://reviews.llvm.org/D10893

llvm-svn: 242144
2015-07-14 14:02:45 +00:00
David Majnemer c0c42f3dea [MS ABI] Don't generate code for unreferenced inline definitions of library builtins
We should only consider declarations that were written; implicit
declarations shouldn't be considered.

This fixes PR24084.

llvm-svn: 241941
2015-07-10 20:55:38 +00:00
Akira Hatanaka 10bdb2b144 [inlineasm] Attach readonly and readnone to inline-asm instructions.
Previously, clang/llvm treated inline-asm instructions conservatively,
choosing not to eliminate the instructions or hoisting them out of a loop
even when it was safe to do so. This commit makes changes to attach a
readonly or readnone attribute to an inline-asm instruction, which enables
passes such as LICM and EarlyCSE to move or optimize away the instruction.
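
A hedged x86 illustration: a non-volatile asm statement with register-only
operands and no "memory" clobber is a candidate for readnone, so LICM or
EarlyCSE may hoist it out of a loop or reuse a previous result:

static inline unsigned byte_swap(unsigned x) {
  unsigned r;
  __asm__("bswap %0" : "=r"(r) : "0"(x));
  return r;
}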

rdar://problem/11358192

Differential Revision: http://reviews.llvm.org/D10546

llvm-svn: 241930
2015-07-10 18:44:40 +00:00
Ulrich Weigand 03ce2a16bf Respect alignment of nested bitfields
tools/clang/test/CodeGen/packed-nest-unpacked.c contains this test:

struct XBitfield {
  unsigned b1 : 10;
  unsigned b2 : 12;
  unsigned b3 : 10;
};
struct YBitfield {
  char x;
  struct XBitfield y;
} __attribute((packed));
struct YBitfield gbitfield;

unsigned test7() {
  // CHECK: @test7
  // CHECK: load i32, i32* getelementptr inbounds (%struct.YBitfield, %struct.YBitfield* @gbitfield, i32 0, i32 1, i32 0), align 4
  return gbitfield.y.b2;
}

The "align 4" is actually wrong.  Accessing all of "gbitfield.y" as a single
i32 is of course possible, but that still doesn't make it 4-byte aligned as
it remains packed at offset 1 in the surrounding gbitfield object.

This alignment was changed by commit r169489, which also introduced changes
to bitfield access code in CGExpr.cpp.  Code before that change used to take
into account *both* the alignment of the field to be accessed within the
current struct, *and* the alignment of that outer struct itself; this logic
was removed by the above commit.

Neglecting to consider both values can cause incorrect code to be generated
(I've seen an unaligned access crash on SystemZ due to this bug).

In order to always use the best known alignment value, this patch removes
the CGBitFieldInfo::StorageAlignment member and replaces it with a
StorageOffset member specifying the offset from the start of the surrounding
struct to the bitfield's underlying storage.  This offset can then be combined
with the best-known alignment for a bitfield access lvalue to determine the
alignment to use when accessing the bitfield's storage.

Differential Revision: http://reviews.llvm.org/D11034

llvm-svn: 241916
2015-07-10 17:30:00 +00:00
Nemanja Ivanovic 26c3534b84 Add missing builtins to altivec.h for ABI compliance (vol. 3)
This patch corresponds to review:
http://reviews.llvm.org/D10972

Fix for the handling of dependent features that are enabled by default
on some CPUs (such as -mvsx, -mpower8-vector).

Also provides a number of new interfaces in altivec.h and fixes
existing ones.

Changed signatures to conform to ABI:
vector short vec_perm(vector signed short, vector signed short, vector unsigned char)
vector int vec_perm(vector signed int, vector signed int, vector unsigned char)
vector long long vec_perm(vector signed long long, vector signed long long, vector unsigned char)
vector signed char vec_sld(vector signed char, vector signed char, const int)
vector unsigned char vec_sld(vector unsigned char, vector unsigned char, const int)
vector bool char vec_sld(vector bool char, vector bool char, const int)
vector unsigned short vec_sld(vector unsigned short, vector unsigned short, const int)
vector signed short vec_sld(vector signed short, vector signed short, const int)
vector signed int vec_sld(vector signed int, vector signed int, const int)
vector unsigned int vec_sld(vector unsigned int, vector unsigned int, const int)
vector float vec_sld(vector float, vector float, const int)
vector signed char vec_splat(vector signed char, const int)
vector unsigned char vec_splat(vector unsigned char, const int)
vector bool char vec_splat(vector bool char, const int)
vector signed short vec_splat(vector signed short, const int)
vector unsigned short vec_splat(vector unsigned short, const int)
vector bool short vec_splat(vector bool short, const int)
vector pixel vec_splat(vector pixel, const int)
vector signed int vec_splat(vector signed int, const int)
vector unsigned int vec_splat(vector unsigned int, const int)
vector bool int vec_splat(vector bool int, const int)
vector float vec_splat(vector float, const int)

Added a VSX path to:
vector float vec_round(vector float)

Added interfaces:
vector signed char vec_eqv(vector signed char, vector signed char)
vector signed char vec_eqv(vector bool char, vector signed char)
vector signed char vec_eqv(vector signed char, vector bool char)
vector unsigned char vec_eqv(vector unsigned char, vector unsigned char)
vector unsigned char vec_eqv(vector bool char, vector unsigned char)
vector unsigned char vec_eqv(vector unsigned char, vector bool char)
vector signed short vec_eqv(vector signed short, vector signed short)
vector signed short vec_eqv(vector bool short, vector signed short)
vector signed short vec_eqv(vector signed short, vector bool short)
vector unsigned short vec_eqv(vector unsigned short, vector unsigned short)
vector unsigned short vec_eqv(vector bool short, vector unsigned short)
vector unsigned short vec_eqv(vector unsigned short, vector bool short)
vector signed int vec_eqv(vector signed int, vector signed int)
vector signed int vec_eqv(vector bool int, vector signed int)
vector signed int vec_eqv(vector signed int, vector bool int)
vector unsigned int vec_eqv(vector unsigned int, vector unsigned int)
vector unsigned int vec_eqv(vector bool int, vector unsigned int)
vector unsigned int vec_eqv(vector unsigned int, vector bool int)
vector signed long long vec_eqv(vector signed long long, vector signed long long)
vector signed long long vec_eqv(vector bool long long, vector signed long long)
vector signed long long vec_eqv(vector signed long long, vector bool long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector bool long long, vector unsigned long long)
vector unsigned long long vec_eqv(vector unsigned long long, vector bool long long)
vector float vec_eqv(vector float, vector float)
vector float vec_eqv(vector bool int, vector float)
vector float vec_eqv(vector float, vector bool int)
vector double vec_eqv(vector double, vector double)
vector double vec_eqv(vector bool long long, vector double)
vector double vec_eqv(vector double, vector bool long long)
vector bool long long vec_perm(vector bool long long, vector bool long long, vector unsigned char)
vector double vec_round(vector double)
vector double vec_splat(vector double, const int)
vector bool long long vec_splat(vector bool long long, const int)
vector signed long long vec_splat(vector signed long long, const int)
vector unsigned long long vec_splat(vector unsigned long long, const int)
vector bool int vec_sld(vector bool int, vector bool int, const int)
vector bool short vec_sld(vector bool short, vector bool short, const int)
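
A hedged usage sketch of a couple of the interfaces above (assumes a
VSX-capable target, e.g. -mcpu=pwr8 -maltivec; vec_splat's index operand
must be a constant):

#include <altivec.h>

vector double splat_lane0(vector double v) {
  return vec_splat(v, 0);
}

vector unsigned int equivalence(vector unsigned int a, vector unsigned int b) {
  return vec_eqv(a, b);
}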

llvm-svn: 241904
2015-07-10 13:11:34 +00:00
Ulrich Weigand 6e2cea6f0c Respect alignment when loading up a coerced function argument
Code in CGCall.cpp that loads up function arguments that need to be
coerced to a different type may in some cases ignore the fact that
the source of the argument is not naturally aligned. This may cause
incorrect code to be generated. In some places in CreateCoercedLoad,
we already have setAlignment calls to address this, but I ran into one
where it was missing, causing wrong code generation on SystemZ.

However, in that location, we do not actually know what alignment of
the source location we can rely on; the callers do not pass anything
to this routine. This is already an issue in other places in
CreateCoercedLoad; and the same problem exists for CreateCoercedStore.

To avoid pessimising code, and to fix the FIXMEs already in place,
this patch also adds an alignment argument to the CreateCoerced*
routines and uses it instead of forcing an alignment of 1. The
callers are changed to pass in the best information they have.

This actually requires changes in a number of existing test cases
since we now get better alignment in many places.

Differential Revision: http://reviews.llvm.org/D11033

llvm-svn: 241898
2015-07-10 11:31:43 +00:00
Reid Kleckner 8819a4065f Re-enable 32-bit SEH after the alignment fix
llvm-svn: 241878
2015-07-10 00:16:25 +00:00
Reid Kleckner e7844ea7f8 Disable 32-bit SEH, again
Move the diagnostic back to codegen so that we can compile ATL on the
self-host bot. We don't actually end up emitting code for the __try, so
the diagnostic won't be hit.

llvm-svn: 241761
2015-07-08 23:57:03 +00:00
Reid Kleckner 338635389f [SEH] Re-enable SEH on x86 Windows after r241699
llvm-svn: 241704
2015-07-08 18:27:10 +00:00
Adrian Prantl bc068586ac Revert "Revert r241620 and follow-up commits" and move the initialization
of the llvm targets from clang/CodeGen into ClangCheck.cpp and CIndex.cpp.

llvm-svn: 241653
2015-07-08 01:00:30 +00:00
Reid Kleckner 15d152d3ac [SEH] Switch from frameaddress(0) to localaddress
This should do the right thing for stack realignment prologues.

llvm-svn: 241644
2015-07-07 23:23:31 +00:00
Adrian Prantl 142ec39739 Revert r241620 and follow-up commits while investigating linux buildbot failures.
llvm-svn: 241642
2015-07-07 23:19:46 +00:00
Reid Kleckner 98cb8ba64c Update clang for intrinsic rename of framerecover to localrecover
llvm-svn: 241634
2015-07-07 22:26:07 +00:00
Adrian Prantl e50371b948 Wrap clang modules and pch files in an object file container.
This patch adds ObjectFilePCHContainerOperations, which uses the LLVM backend
to put the contents of a PCH into a __clangast section inside a COFF, ELF,
or Mach-O object file container.

This is done to facilitate module debugging by making it possible to
store the debug info for the types defined by a module alongside the AST.

rdar://problem/20091852

llvm-svn: 241620
2015-07-07 20:11:29 +00:00
Akira Hatanaka 3fb33a5d18 [ARM] Pass subtarget feature "+long-calls" instead of passing backend option
"-arm-long-calls".

This change allows using -mlong-calls/-mno-long-calls for LTO and enabling or
disabling long calls on a per-function basis.

rdar://problem/21529937

Differential Revision: http://reviews.llvm.org/D9414

llvm-svn: 241565
2015-07-07 06:42:05 +00:00
Adrian Prantl 3d2c051cf6 Debug info: Emit distinct __block_literal_generic types for blocks with
different function signatures. (Previously clang would emit all block
pointer types with the type of the first block pointer in the compile
unit.)

rdar://problem/21602473

llvm-svn: 241534
2015-07-07 00:49:35 +00:00
Reid Kleckner 9fe7f2396b Revert "Revert 241171, 241187, 241199 (32-bit SEH)."
This reverts commit r241244, but restricts SEH support to Win64.

This way, Chromium builds will still fall back on TUs with SEH, and
Clang developers can work on this incrementally upstream while patching
this small predicate locally. It'll also make it easier to review small
fixes.

llvm-svn: 241533
2015-07-07 00:36:30 +00:00
Eric Christopher af4d608d13 Handle arbitrary whitespace in the target attribute support.
This allows us to deal a bit more gracefully with inclusions done
by macros, token pasting, or just code layout/formatting.
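
A hedged example of the attribute whose feature string this affects
(whitespace such as that introduced by a macro expansion is now tolerated):

__attribute__((target("sse4.2, popcnt")))
static int popcount(unsigned x) { return __builtin_popcount(x); }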

llvm-svn: 241525
2015-07-06 23:51:59 +00:00
Adrian Prantl 498fff661d Debug info: Don't emit a bogus location for the global block pointer type
(__block_literal_generic).

The arbitrary nature of the location confuses lldb and prevents type
uniquing.

rdar://problem/21602473

llvm-svn: 241511
2015-07-06 21:31:35 +00:00
Teresa Johnson 8749d80431 Resubmit "Pass down the -flto option to the -cc1 job" (r239481)
The patch is the same except for the addition of a new test for the
issue that required reverting the dependent llvm commit.

--Original Commit Message--

Pass down the -flto option to the -cc1 job, and from there into the
CodeGenOptions and onto the PassManagerBuilder. This enables gating
the new EliminateAvailableExternally module pass on whether we are
preparing for LTO.

If we are preparing for LTO (e.g. a -flto -c compile), the new pass is not
included as we want to preserve available externally functions for possible
link time inlining.

llvm-svn: 241467
2015-07-06 16:23:00 +00:00