Commit Graph

2828 Commits

Author SHA1 Message Date
Kostya Serebryany 68c29da4c5 Do not insert asan paddings after fields that have flexible arrays.
Summary:
We should avoid a tail padding not only if the last field
has zero size but also if the last field is a struct with a flexible array.

If/when http://reviews.llvm.org/D5478 is committed,
this will also handle the case of structs with zero-sized arrays.
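
For illustration, a minimal C sketch (hypothetical, and relying on the GNU extension that permits a struct with a flexible array member as a field) of the layout this check now handles:

  #include <stdlib.h>

  struct tail { int n; int data[]; };           /* flexible array member */
  struct outer { int header; struct tail t; };  /* last field is a struct with a flexible array */

  int main(void) {
    struct outer *o = malloc(sizeof(struct outer) + 4 * sizeof(int));
    if (!o) return 1;
    o->t.n = 4;
    o->t.data[3] = 42;  /* valid write past sizeof(struct outer); a tail padding here would be poisoned */
    free(o);
    return 0;
  }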

Reviewers: majnemer, rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D5924

llvm-svn: 220708
2014-10-27 19:34:10 +00:00
NAKAMURA Takumi 729be14435 Prune CRLF.
llvm-svn: 220678
2014-10-27 12:37:26 +00:00
Rafael Espindola 5a1106f8fc Make this test a bit stricter by checking clang's output too.
llvm-svn: 220604
2014-10-25 01:51:19 +00:00
Reid Kleckner d7857f05f4 Add frontend support for __vectorcall
Wire it through everywhere we have support for fastcall, essentially.

This allows us to parse the MSVC "14" CTP headers, but we will
miscompile them because LLVM doesn't support __vectorcall yet.
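
For illustration only, a hypothetical declaration of the kind that now parses (on x86 with MS extensions enabled):

  typedef float v4f __attribute__((vector_size(16)));

  v4f __vectorcall scale(v4f a, float s);  /* accepted by the frontend; correct codegen still needs LLVM support */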

Reviewed By: Aaron Ballman

Differential Revision: http://reviews.llvm.org/D5808

llvm-svn: 220573
2014-10-24 17:42:17 +00:00
Daniel Sanders aa1b35590f [mips] Mark aggregate arguments passed in registers with the inreg attribute
Summary:
This allows us to easily identify them in the backend which in turn allows us
to handle them correctly for big-endian targets (where they must be shifted
into the upper bits of the register).

Depends on D5961

Reviewers: atanasyan

Reviewed By: atanasyan

Subscribers: cfe-commits, theraven

Differential Revision: http://reviews.llvm.org/D5962

llvm-svn: 220566
2014-10-24 15:30:16 +00:00
Daniel Sanders 5b445b3844 [mips] Promote all integral/enumeration types to the GPR width
Summary:
Ensure all integral/enumeration types are appropriately annotated with
signext/zeroext. In particular, i32 now has these attributes when using the
N32/N64 ABI. This paves the way for accurately representing the way the
N32/N64 ABI's promotes integer arguments to i64.

Reviewers: atanasyan

Reviewed By: atanasyan

Subscribers: cfe-commits, theraven

Differential Revision: http://reviews.llvm.org/D5961

llvm-svn: 220563
2014-10-24 14:42:42 +00:00
David Blaikie 60a877b5b9 DebugInfo: Omit scopes in -gmlt to reduce metadata size (on disk and in memory)
I haven't done any actual impact analysis of this change as it's a
strict improvement, but I'd be curious to know how much it helps.

llvm-svn: 220408
2014-10-22 19:34:33 +00:00
Alexey Samsonov 6d87ce8bd5 Fixup for r220403: Use getFileLoc() instead of getSpellingLoc() in SanitizerBlacklist.
This also handles the case where the function name (not its body)
is obtained from macro expansion.

llvm-svn: 220407
2014-10-22 19:34:25 +00:00
Alexey Samsonov fa7a8569bb SanitizerBlacklist: Use spelling location for blacklisting purposes.
When SanitizerBlacklist decides if the SourceLocation is blacklisted,
we need to first turn it into a SpellingLoc before fetching the filename
and scanning "src:" entries. Otherwise we will fail to fetch the
correct filename for function definitions coming from macro expansion.
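
A hypothetical single-file sketch of the situation this fixes, where only the spelling location identifies the blacklisted file:

  /* blacklisted.c -- matched by a "src:blacklisted.c" entry */
  #define DEFINE_GETTER(name) int name(void) { return 0; }
  DEFINE_GETTER(get_zero)  /* the definition's location is a macro expansion */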

llvm-svn: 220403
2014-10-22 18:26:07 +00:00
Jiangning Liu 2bafc2d5ae Remove including <complex.h> in test case, and change to use _Complex instead.
llvm-svn: 220258
2014-10-21 02:19:58 +00:00
Jiangning Liu 444822bbcf Lower compound assignment for the missing type llvm::Type::FP128TyID.
llvm-svn: 220257
2014-10-21 01:34:34 +00:00
David Majnemer 8e133965c8 CodeGen: ConstStructBuilder must verify packed constraints after padding
This reverts commit r220169 which reverted r220153.  However, it also
contains additional changes:
- We may need to add padding *after* we've packed the struct.  This
  occurs when the aligned next field offset is greater than the new
  field's offset.  When this occurs, we make the struct packed.
  *However*, once packed the next field offset might be less than the
new field's offset.  It is in this case that we might further pad the
  struct.
- We would pad structs which were perfectly sized!  This behavior is
  immensely old.  This behavior came from blindly subtracting
  NextFieldOffsetInChars from RecordSize.  This doesn't take into
  account the fact that the struct might have a greater overall
  alignment than the last field.

llvm-svn: 220175
2014-10-19 23:40:06 +00:00
Chandler Carruth bf972bb2e0 Revert r220153: "CodeGen: ConstStructBuilder must verify packed constraints after padding"
This commit caused two tests in LNT to regress. I'm able to reproduce on
any platform and will send reproduction steps to the original commit
log. This should restore the LNT bots that have been failing.

llvm-svn: 220169
2014-10-19 19:41:46 +00:00
Chandler Carruth 0c4b230b32 [complex] Teach the complex math IR gen to emit direct math and
a NaN-test prior to the call to the library function.

This should automatically make fastmath (including just non-NaNs) able to avoid
the expensive libcalls and also open the door to more advanced folding in LLVM
based on the rules for complex math.
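
For example (a sketch; __mulsc3 is the compiler-rt helper for float complex multiplication), a plain complex multiply now takes the direct-math path and only reaches the libcall when the fast path produces NaN:

  #include <complex.h>

  float _Complex mul(float _Complex a, float _Complex b) {
    return a * b;  /* direct math plus a NaN test; the libcall only runs on the slow path */
  }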

Two important notes to remember: first is that this isn't yet a proper
limited range mode, it's still just improving the unlimited range mode.
Also, it isn't really perfect w.r.t. what an unlimited range mode
should be doing because it isn't quite handling the flags produced by
all the operations in the way desirable for that mode, but then neither
is compiler-rt's libcall. When the compiler-rt libcall is improved to
carefully manage flags, the code emitted here should be improved
correspondingly. And it is still a long-term desirable thing to add
a limited range mode to Clang that would be able to use direct math
without library calls here.

Special thanks to Steve Canon for the careful review on this patch and
teaching me about these issues. =D

Differential Revision: http://reviews.llvm.org/D5756

llvm-svn: 220167
2014-10-19 19:13:49 +00:00
David Majnemer afefe97e1c CodeGen: ConstStructBuilder must verify packed constraints after padding
Before, ConstStructBuilder::AppendBytes would check packed constraints
prior to padding being added before the field's offset.  However, adding
this padding might force our struct to be packed.  Because we wouldn't
check *after* adding padding, ConstStructBuilder would be in an
inconsistent state leading to a crash.

This fixes PR21300.

llvm-svn: 220153
2014-10-19 00:03:10 +00:00
Alexey Samsonov a0ac3c2bf0 [ASan] Improve blacklisting of global variables.
This commit changes the way we blacklist global variables in ASan.
Now the global is excluded from instrumentation (either regular
bounds checking, or initialization-order checking) if:

1) Global is explicitly blacklisted by its mangled name.
This part is left unchanged.

2) SourceLocation of a global is in blacklisted source file.
This changes the old behavior, where instead of looking at the
SourceLocation of a variable we simply considered llvm::Module
identifier. This was wrong, as identifier may not correspond to
the file name, and we incorrectly disabled instrumentation
for globals coming from #include'd files.

3) Global is blacklisted by type.
Now we build the type of a global variable using Clang machinery
(QualType::getAsString()), instead of llvm::StructType::getName().

After this commit, the active users of ASan blacklist files
may have to revisit them (this is a backwards-incompatible change).

llvm-svn: 220097
2014-10-17 22:37:33 +00:00
Kostya Serebryany 644492139f fix -fsanitize-address-field-padding for the cases with virtual base classes
Summary: Correctly compute the non-virtual size of a class.

Test Plan: Build SPEC 2006 with -fsanitize-address-field-padding

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D5848

llvm-svn: 220089
2014-10-17 21:02:13 +00:00
Hans Wennborg 0b603cc4e9 Move test/CodeGen/sections.c to CodeGenCXX/sections.cpp
The test was running with -xc++. Seems it wants to be a C++ file.

llvm-svn: 220069
2014-10-17 18:13:21 +00:00
NAKAMURA Takumi e316722f4d Add explicit triple to clang/test/CodeGen/sanitize-address-field-padding.cpp, for now. It's incompatible with MS mangling.
llvm-svn: 220037
2014-10-17 12:48:01 +00:00
Joerg Sonnenberger aa3e9f5a0f complex long double support for PowerPC
llvm-svn: 220034
2014-10-17 11:51:19 +00:00
Renato Golin 031e817630 Use c-tor name to fix the sanitizer test
llvm-svn: 220030
2014-10-17 10:09:25 +00:00
Renato Golin de44aec0e6 Trying to fix failing Clang sanitizer test on ARM bots
llvm-svn: 220029
2014-10-17 09:40:21 +00:00
Kostya Serebryany 23387754f8 trying to fix the new test again, this time for the clang-cmake-armv7-a15 bot
llvm-svn: 220002
2014-10-17 00:47:30 +00:00
Alexey Samsonov 1444bb9fc8 SanitizerBlacklist: blacklist functions by their source location.
This commit changes the way we blacklist functions in ASan, TSan,
MSan and UBSan. We used to treat function as "blacklisted"
and turned off instrumentation in it in two cases:

1) Function is explicitly blacklisted by its mangled name.
This part is not changed.

2) Function is located in llvm::Module, whose identifier is
contained in the list of blacklisted sources. This is completely
wrong, as llvm::Module may not correspond to the actual source
file the function is defined in. Also, a function can be defined in
a header, in which case the user had to blacklist the .cpp file
this header was #include'd into, not the header itself.
Such functions could cause other problems - for instance, if the
header was included in multiple source files, compiled
separately and linked into a single executable, we could end up
with both instrumented and non-instrumented versions of the same
function participating in the same link.

After this change we will make blacklisting decision based on
the SourceLocation of a function definition. If a function is
not explicitly defined in the source file (for example, the
function is compiler-generated and responsible for
initialization/destruction of a global variable), then it will
be blacklisted if the corresponding global variable is defined
in blacklisted source file, and will be instrumented otherwise.

After this commit, the active users of blacklist files may have
to revisit them. This is a backwards-incompatible change, but
I don't think it's possible or makes sense to support the
old incorrect behavior.

I plan to make similar change for blacklisting GlobalVariables
(which is ASan-specific).

llvm-svn: 219997
2014-10-17 00:20:19 +00:00
Hans Wennborg 528c926b3c test/CodeGen/sections.c: add triple
llvm-svn: 219969
2014-10-16 21:36:23 +00:00
Kostya Serebryany 330e9f6c5f trying to fix the new test on hexagon-build
llvm-svn: 219965
2014-10-16 21:22:40 +00:00
Kostya Serebryany 293dc9be6e Insert poisoned paddings between fields in C++ classes so that AddressSanitizer can find intra-object-overflow bugs
Summary:
The general approach is to add extra paddings after every field
in AST/RecordLayoutBuilder.cpp, then add code to CTORs/DTORs that poisons the paddings
(CodeGen/CGClass.cpp).

Everything is done under the flag -fsanitize-address-field-padding.
The blacklist file (-fsanitize-blacklist) makes it possible to avoid the transformation
for given classes or source files.

See also https://code.google.com/p/address-sanitizer/wiki/IntraObjectOverflow

Test Plan: run SPEC2006 and some of the Chromium tests with  -fsanitize-address-field-padding

Reviewers: samsonov, rnk, rsmith

Reviewed By: rsmith

Subscribers: majnemer, cfe-commits

Differential Revision: http://reviews.llvm.org/D5687

llvm-svn: 219961
2014-10-16 20:54:52 +00:00
Hans Wennborg 899ded9cdf MS Compat: mark globals emitted in read-only sections const
They cannot be written to, so marking them const makes sense and may improve
optimisation.

As a side-effect, SectionInfos has to be moved from Sema to ASTContext.

It also fixes this problem, that occurs when compiling ATL:

  warning LNK4254: section 'ATL' (C0000040) merged into '.rdata' (40000040) with different attributes

The ATL headers are putting variables in a special section that's marked
read-only. However, Clang currently can't model that read-onlyness in the IR.
But, by making the variables const, the section does become read-only, and
the linker warning is avoided.
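
A minimal sketch (hypothetical names, MSVC mode assumed) of the ATL-style pattern, a variable placed in a custom read-only section that clang now also marks const:

  #pragma section("ATL", read)
  __declspec(allocate("ATL")) int table_entry = 7;  /* lives in a read-only section */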

Differential Revision: http://reviews.llvm.org/D5812

llvm-svn: 219960
2014-10-16 20:52:46 +00:00
Rafael Espindola c55172ecbc Update for llvm change.
llvm-svn: 219952
2014-10-16 20:00:22 +00:00
Bradley Smith 04ee8aa1fc [AArch64] Enable A53 erratum workaround (835769) by default for Android targets
llvm-svn: 219933
2014-10-16 16:35:14 +00:00
Alexander Eremin 670c62770e specify dwarf version for Solaris
llvm-svn: 219901
2014-10-16 05:55:24 +00:00
David Majnemer bb525f7c20 CodeGen: Don't drop thread_local when emitting __thread aliases
CodeGen wouldn't mark the aliasee as thread_local if the aliasee was a
tentative definition.

Even if the definition was already emitted, it would never mark the
alias as thread_local.

This fixes PR21288.

llvm-svn: 219859
2014-10-15 22:38:23 +00:00
Saleem Abdulrasool 4c879bed5b test: simplify test further
Remove the use of an unnecessary function.  NFC.

llvm-svn: 219850
2014-10-15 21:37:52 +00:00
Tim Northover 147cd2f6e5 ARM: remove ARM/Thumb distinction for preferred alignment.
Thumb1 has legitimate reasons for preferring 32-bit alignment of types
i1/i8/i16, since the 16-bit encoding of "add rD, sp, #imm" requires #imm to be
a multiple of 4. However, this is a trade-off between code size and RAM usage;
the DataLayout string is not the best place to represent it even if desired.

So this patch removes the extra Thumb requirements, hopefully making ARM and
Thumb completely compatible in this respect.

llvm-svn: 219735
2014-10-14 22:12:21 +00:00
Tim Northover b98dc4b015 ARM: set preferred aggregate alignment to 32 universally.
Before, ARM and Thumb mode code had different preferred alignments, which could
lead to some rather unexpected results. There's justification for reducing it
from the default 64-bits (wasted space), but I don't think there is for going
below 32-bits.

There's no actual ABI change here, just to reassure people.

llvm-svn: 219720
2014-10-14 20:57:29 +00:00
Saleem Abdulrasool 64ab4de443 CodeGen: correct mangling for blocks
This addresses a regression introduced with SVN r219393.  A block may be
contained within another block.  In such a scenario, we would end up within a
BlockDecl, which is not a NamedDecl (as the names are synthesised).  The cast to
a NamedDecl of the DeclContext would then assert as the types are unrelated.

Restore the mangling behaviour to that prior to SVN r219393.  If the current
block is contained within a BlockDecl, walk up to the parent DeclContext,
recursively, until we have a non-BlockDecl.  This is expected to be a NamedDecl.
Add in a couple of asserts to ensure that the assumption that we only encounter
a block within a NamedDecl or a BlockDecl holds.
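
A minimal sketch (requires -fblocks) of the nested-block shape that triggered the assertion:

  void run_blocks(void) {
    void (^outer)(void) = ^{
      void (^inner)(void) = ^{ };  /* a BlockDecl nested inside another BlockDecl */
      inner();
    };
    outer();
  }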

llvm-svn: 219696
2014-10-14 17:20:14 +00:00
Tyler Nowicki c724a83e20 Allow constant expressions in pragma loop hints.
Previously, loop hints such as #pragma loop vectorize_width(#) required a constant. This patch allows a constant expression to be used as well, such as a non-type template parameter or an expression (2 * c + 1).
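
A minimal sketch using the pragma's full spelling (#pragma clang loop) with a constant expression as the width:

  enum { W = 2 };

  void scale(float *a, int n) {
  #pragma clang loop vectorize_width(2 * W + 1)
    for (int i = 0; i < n; ++i)
      a[i] *= 2.0f;
  }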

Reviewed by Richard Smith

llvm-svn: 219589
2014-10-12 20:46:07 +00:00
Chandler Carruth b29a743891 [complex] Teach the other two binary operators on complex numbers (==
and !=) to support mixed complex and real operand types.

This requires removing an assert from SemaChecking, and adding support
both to the constant evaluator and the code generator to synthesize the
imaginary part when needed. This seemed somewhat cleaner than having
just the comparison operators force real-to-complex conversions.
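
For example, a mixed comparison of this form is now handled by both the constant evaluator and IR generation:

  #include <complex.h>

  int equals_real(double _Complex z, double x) {
    return z == x;  /* the real operand's imaginary part is synthesized when needed */
  }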

I've added test cases for these operations. I'm really terrified that
there were *no* tests in-tree which exercised this.

This turned up when trying to build R after my change to the complex
type lowering.

llvm-svn: 219570
2014-10-11 11:03:30 +00:00
Chandler Carruth 686de24128 [complex] Use the much more powerful EmitCall routine to call libcalls
for complex math.

This should fix the windows build bots that started having trouble here
and generally fix complex libcall emission on targets which use sret for
complex data types. It also makes the code a bit simpler (despite
calling into a much more complex bucket of code).

llvm-svn: 219565
2014-10-11 09:24:41 +00:00
Chandler Carruth a216cad0fc [complex] Teach Clang to preserve different-type operands to arithmetic
operators where one type is a C complex type, and to emit both the
efficient and correct implementation for complex arithmetic according to
C11 Annex G using this extra information.

For both multiply and divide the old code was writing a long-hand
reduced version of the math without any of the special handling of inf
and NaN recommended by the standard here. Instead of putting more
complexity here, this change does what GCC does which is to emit
a libcall for the fully general case.

However, the old code also failed to do the proper minimization of the
set of operations when there was a mixed complex and real operation. In
those cases, C provides a spec for much more minimal operations that are
valid. Clang now emits the exact suggested operations. This change isn't
*just* about performance though, without minimizing these operations, we
again lose the correct handling of infinities and NaNs. It is critical
that this happen in the frontend based on asymmetric type operands to
complex math operations.
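
For instance (a sketch), a real-by-complex multiply keeps its asymmetric operand types through the AST and is emitted in the minimal form Annex G permits rather than as a full complex-complex expansion or a libcall:

  #include <complex.h>

  float _Complex scale_by(float s, float _Complex z) {
    return s * z;  /* two multiplies, preserving inf/NaN behavior */
  }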

The performance implications of this change aren't trivial either. I've
run a set of benchmarks in Eigen, an open source mathematics library
that makes heavy use of complex. While a few have slowed down due to the
libcall being introduced, most sped up and some by a huge amount: up to
100% and 140%.

In order to make all of this work, also match the algorithm in the
constant evaluator to the one in the runtime library. Currently it is
a broken port of the simplifications from C's Annex G to the long-hand
formulation of the algorithm.

Splitting this patch up is very hard because none of this works without
the AST change to preserve non-complex operands. Sorry for the enormous
change.

Follow-up changes will include support for sinking the libcalls onto
cold paths in common cases and fastmath improvements to allow more
aggressive backend folding.

Differential Revision: http://reviews.llvm.org/D5698

llvm-svn: 219557
2014-10-11 00:57:18 +00:00
Reid Kleckner 79b0fd7a48 Promote null pointer constants used as arguments to variadic functions
Make it possible to pass NULL through variadic functions on 64-bit
Windows targets. The Visual C++ headers define NULL to 0, when they
should define it to 0LL on Win64 so that NULL is a pointer-sized
integer.
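
A sketch of the pattern this enables (hypothetical function; NULL expands to a plain 0 in the MSVC headers):

  #include <stdarg.h>
  #include <stddef.h>

  void log_ptrs(int count, ...);  /* reads 'count' pointer arguments via va_arg */

  void caller(void) {
    log_ptrs(1, NULL);  /* the int-sized 0 is now promoted to pointer width on Win64 */
  }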

Fixes PR20949.

Reviewers: thakis, rsmith

Differential Revision: http://reviews.llvm.org/D5480

llvm-svn: 219456
2014-10-10 00:05:45 +00:00
Alexey Bataev 9b280eab66 Fix compatibility issues in tests for PredefinedExpr with MSVC.
llvm-svn: 219405
2014-10-09 11:58:26 +00:00
Robert Khasanov b9f3a911c9 [AVX512] Added VPCMPEQ intrinisics to headers.
Added tests.

Patch by Maxim Blumenthal <maxim.blumenthal@intel.com>

llvm-svn: 219319
2014-10-08 17:18:13 +00:00
Hal Finkel 64567a80d2 Emit @llvm.assume for non-parameter lvalue align_value-attribute loads
We already add the align parameter attribute for function parameters that have
the align_value attribute (or those with a typedef type having that attribute),
which is an important special case, but does not handle pointers with value
alignment assumptions that come into scope in any other way. To handle the
general case, emit an @llvm.assume-based alignment assumption whenever we load
the pointer-typed lvalue of an align_value-attributed variable (except for
function parameters, which we already deal with at entry).
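
A minimal sketch (hypothetical names) of the non-parameter case now covered, a load through a pointer-typed lvalue whose type carries align_value:

  typedef double *aligned_dp __attribute__((align_value(64)));

  double first_elem(aligned_dp *pp) {
    aligned_dp p = *pp;  /* loading this lvalue now emits an @llvm.assume alignment assumption */
    return p[0];
  }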

I'll also note that this is more general than Intel's described support in:
  https://software.intel.com/en-us/articles/data-alignment-to-assist-vectorization
which states that the compiler inserts __assume_aligned directives in response
to align_value-attributed variables only for function parameters and for the
initializers of local variables. I think that we can make the optimizer deal
with this more-general scheme (which could lead to a lot of calls to
@llvm.assume inside of loop bodies, for example), but if not, I'll rework this
to be less aggressive.

llvm-svn: 219052
2014-10-04 15:26:49 +00:00
Duncan P. N. Exon Smith 3c51fa6aae Revert "Revert "DI: LLVM schema change: fold constants into string""
This reverts commit r218917, effectively reapplying r218913.  Original
commit message follows.

--

Update debug info testcases for an LLVM metadata schema change to fold
metadata constant operands into a single `MDString`.

Part of PR17891.

llvm-svn: 219011
2014-10-03 20:01:52 +00:00
Hal Finkel 189c699cad Make test/CodeGen/atomic-ops.c free-standing
This test includes stdint.h (via stdatomic.h), which might include system
headers (and that might not work, depending on the system configuration).
Attempting to fix llvm-clang-lld-x86_64-debian-fast.

llvm-svn: 218960
2014-10-03 05:04:49 +00:00
Hal Finkel 6970ac8b0a Add an implementation of C11's stdatomic.h
Adds a Clang-specific implementation of C11's stdatomic.h header. On systems,
such as FreeBSD, where a stdatomic.h header is already provided, we defer to
that header instead (using our __has_include_next technology). Otherwise, we
provide an implementation in terms of our __c11_atomic_* intrinsics (that were
created for this purpose).
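
Minimal usage of the header (a sketch; it defers to a system stdatomic.h when one is available):

  #include <stdatomic.h>

  _Atomic int counter = ATOMIC_VAR_INIT(0);

  int bump(void) {
    return atomic_fetch_add(&counter, 1);
  }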

C11 7.1.4p1 requires function declarations for atomic_thread_fence,
atomic_signal_fence, atomic_flag_test_and_set,
atomic_flag_test_and_set_explicit, and atomic_flag_clear, and requires that
they have external linkage. Accordingly, we provide these declarations, but if
a user elides the shadowing macros and uses them, then they must have a libc
(or similar) that actually provides definitions.

atomic_flag is implemented using _Bool as the underlying type. This is
consistent with the implementation provided by FreeBSD and also GCC 4.9 (at
least when __GCC_ATOMIC_TEST_AND_SET_TRUEVAL == 1).

Patch by Richard Smith (rebased and slightly edited by me -- Richard said I
should drive at this point).

llvm-svn: 218957
2014-10-03 04:29:40 +00:00
Duncan P. N. Exon Smith 834c265e85 Revert "DI: LLVM schema change: fold constants into string"
This reverts commit r218913 while I investigate some bots.

llvm-svn: 218917
2014-10-02 22:15:09 +00:00
Duncan P. N. Exon Smith 02b418a875 DI: LLVM schema change: fold constants into string
Update debug info testcases for an LLVM metadata schema change to fold
metadata constant operands into a single `MDString`.

Part of PR17891.

llvm-svn: 218913
2014-10-02 21:56:07 +00:00
Hal Finkel 1b0d24e03a Initial support for the align_value attribute
This adds support for the align_value attribute. This attribute is supported by
Intel's compiler (versions 14.0+), and several of my HPC users have requested
support in Clang. It specifies an alignment assumption on the values to which a
pointer points, and is used by numerical libraries to encourage efficient
generation of vector code.

Of course, we already have an aligned attribute that can specify enhanced
alignment for a type, so why is this additional attribute important? The
problem is that if you want to specify that an input array of T is, say,
64-byte aligned, you could try this:

  typedef double aligned_double __attribute__((aligned(64)));
  void foo(aligned_double *P) {
    double x = P[0]; // This is fine.
    double y = P[1]; // What alignment did those doubles have again?
  }

the access here to P[1] causes problems. P was specified as a pointer to type
aligned_double, and any object of type aligned_double must be 64-byte aligned.
But if P[0] is 64-byte aligned, then P[1] cannot be, and this access causes
undefined behavior. Getting round this problem requires a lot of awkward
casting and hand-unrolling of loops, all of which is bad.

With the align_value attribute, we can accomplish what we'd like in a well
defined way:

  typedef double *aligned_double_ptr __attribute__((align_value(64)));
  void foo(aligned_double_ptr P) {
    double x = P[0]; // This is fine.
    double y = P[1]; // This is fine too.
  }

This attribute does not create a new type (and so is not part of the type
system), and so will only "propagate" through templates, auto, etc. by
optimizer deduction after inlining. This seems consistent with Intel's
implementation (thanks to Alexey for confirming the various Intel-compiler
behaviors).

As a final note, I would have chosen to call this aligned_value, not
align_value, for better naming consistency with the aligned attribute, but I
think it would be more useful to users to adopt Intel's name.

llvm-svn: 218910
2014-10-02 21:21:25 +00:00
Hal Finkel d2208b59cf Add __sync_fetch_and_nand (again)
Prior to GCC 4.4, __sync_fetch_and_nand was implemented as:

  { tmp = *ptr; *ptr = ~tmp & value; return tmp; }

but this was changed in GCC 4.4 to be:

  { tmp = *ptr; *ptr = ~(tmp & value); return tmp; }

In response to this change, support for __sync_fetch_and_nand (and
__sync_nand_and_fetch) was removed in r99522 in order to avoid miscompiling code
depending on the old semantics. However, at this point:

  1. Many years have passed, and the amount of code relying on the old
     semantics is likely smaller.

  2. Through the work of many contributors, all LLVM backends have been updated
     such that "atomicrmw nand" provides the newer GCC 4.4+ semantics (this process
     was complete July of 2014 (added to the release notes in r212635).

  3. The lack of this intrinsic is now a needless impediment to porting codes
     from GCC to Clang (I've now seen several examples of this).

It is true, however, that we still set __GNUC_MINOR__ to 2 (corresponding to GCC
4.2). To compensate for this, and to address the original concern regarding
code relying on the old semantics, I've added a warning that specifically
details the fact that the semantics have changed and that we provide the newer
semantics.
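
A minimal use of the re-added builtin, which follows the GCC 4.4+ semantics and triggers the new warning described above:

  int fetch_nand(int *p, int v) {
    return __sync_fetch_and_nand(p, v);  /* *p = ~(*p & v); the old value is returned */
  }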

Fixes PR8842.

llvm-svn: 218905
2014-10-02 20:53:50 +00:00
Job Noorman ac95cd5c22 Make sure aggregates are properly aligned on MSP430.
llvm-svn: 218666
2014-09-30 11:19:13 +00:00
NAKAMURA Takumi 6ed6ef7ac2 clang/test/CodeGen/builtin-assume-aligned.c: Fix for -Asserts.
llvm-svn: 218507
2014-09-26 09:37:15 +00:00
Hal Finkel ee90a223ea Support the assume_aligned function attribute
In addition to __builtin_assume_aligned, GCC also supports an assume_aligned
attribute which specifies the alignment (and optional offset) of a function's
return value. Here we implement support for the assume_aligned attribute by making
use of the @llvm.assume intrinsic.
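
A sketch (hypothetical allocator) of the attribute in use:

  void *my_alloc(unsigned n) __attribute__((assume_aligned(64)));

  double head(unsigned n) {
    double *p = (double *)my_alloc(n);
    return p[0];  /* the return value may now be assumed to be 64-byte aligned */
  }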

llvm-svn: 218500
2014-09-26 05:04:30 +00:00
Jan Vesely b4379f9c2c CGBuiltin: Use frem instruction rather than libcall to implement fmod
AFAICT the semantics of frem match libm's fmod.

Signed-off-by: Jan Vesely <jan.vesely@rutgers.edu>
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 218488
2014-09-26 01:19:41 +00:00
Nico Weber 8f63ae1d4c Simplify tests.
This reverts bits of r218166 that are no longer necessary now that r218394 made
-Wmissing-prototype-for-cc a regular warning.

llvm-svn: 218400
2014-09-24 18:25:54 +00:00
Reid Kleckner 2e0717e129 Downgrade error about stdcall decls with no prototype to a warning
Fixes PR21027.  The MIDL compiler produces code that does this.

If we wanted to improve the warning, I think we could do this:
  void __stdcall f(); // Don't warn without -Wstrict-prototypes.
  void g() {
    f(); // Might warn, the user probably meant for f to take no args.
    f(1, 2, 3); // Warn, we have no idea what args f takes.
    f(1); // Error, this is insane, one of these calls is broken.
  }

Reviewers: thakis

Differential Revision: http://reviews.llvm.org/D5481

llvm-svn: 218394
2014-09-24 17:49:24 +00:00
Robert Khasanov ea13042cf2 [x86] Fixed argument types in intrinsics:
_addcarryx_u64
_addcarry_u64
_subborrow_u64

Thanks Pasi Parviainen for notice.

llvm-svn: 218376
2014-09-24 06:45:23 +00:00
Daniel Sanders caf534ef96 [mips] Fix r218248's testcase to use -O1 instead of -O3.
llvm-svn: 218298
2014-09-23 08:58:04 +00:00
Ehsan Akhgari 3e2db26efc ms-inline-asm: Add a test case for the usage of labels in bracket expressions
Summary: This is a test for this patch: http://reviews.llvm.org/D5445.

Reviewers: rnk

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D5446

llvm-svn: 218271
2014-09-22 20:41:39 +00:00
Kaelyn Takata a1e18cc5b9 Fix test/CodeGen/mips-varargs.c to use %clang_cc1
Only tests under test/Driver should use %clang, and test/CodeGen in
particular must always use %clang_cc1.

llvm-svn: 218260
2014-09-22 18:06:01 +00:00
NAKAMURA Takumi 22a0fd416c clang/test/CodeGen/mips-varargs.c: Fixup for -Asserts.
llvm-svn: 218256
2014-09-22 16:40:05 +00:00
Daniel Sanders 8d36a61f52 [mips] Correct alignment of vectors passed in varargs for the O32 ABI.
Summary:
Vectors are normally 16-byte aligned, however the O32 ABI enforces a
maximum alignment of 8-bytes since the base of the stack is 8-byte aligned.
Previously, this was enforced on the caller side, but not on the callee side.

This fixes the output of OpenCL's printf when given vectors.

Reviewers: atanasyan

Reviewed By: atanasyan

Subscribers: llvm-commits, pekka.jaaskelainen

Differential Revision: http://reviews.llvm.org/D5433

llvm-svn: 218248
2014-09-22 13:27:06 +00:00
Ehsan Akhgari 31097581aa ms-inline-asm: Scope inline asm labels to functions
Summary:
This fixes PR20023.  In order to implement this scoping rule, we piggy
back on the existing LabelDecl machinery, by creating LabelDecl's that
will carry the "internal" name of the inline assembly label, which we
will rewrite the asm label to.

Reviewers: rnk

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D4589

llvm-svn: 218230
2014-09-22 02:21:54 +00:00
Nico Weber d191063c6c Follow-up to r214408: Warn on other callee-cleanup functions without prototype too.
According to lore, we used to verifier-fail on:

  void __thiscall f();
  int main() { f(1); }

So that's fixed now. System headers use prototype-less __stdcall functions,
so make that a warning that's DefaultError -- then it fires on regular code
but is suppressed in system headers.

Since it's used in system headers, we have codegen tests for this; massage
them slightly so that they still compile.

llvm-svn: 218166
2014-09-19 23:07:12 +00:00
Robert Khasanov 2c589bcc5e [x86] Add _addcarry_u{32|64} and _subborrow_u{32|64}.
They are added to adxintrin.h but outside __ADX__ block.
These intrinsics generate adc and sbb, respectively, which were available before ADX.

llvm-svn: 218118
2014-09-19 10:29:22 +00:00
Robert Khasanov 83c419b349 [x86] Added _addcarryx_u32, _addcarryx_u64 intrinsics
llvm-svn: 218117
2014-09-19 10:17:06 +00:00
Akira Hatanaka e867e422e2 [X86, inlineasm] Do not allow using constraint 'x' for a variable larger than
128-bit unless the target CPU supports AVX.

rdar://problem/11846140

llvm-svn: 218082
2014-09-18 21:58:54 +00:00
Hans Wennborg 3c619a43d5 [X86, inline-asm] Allow 256-bit wide operands for the 'x' constraints
The 'x' constraint is for "any SSE register", and GCC seems to include the
256-bit ymm registers in that concept.
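
A sketch of a 256-bit operand tied to the 'x' constraint (accepted when compiling with AVX enabled):

  typedef float v8f __attribute__((vector_size(32)));

  v8f pass_through(v8f a) {
    __asm__("" : "+x"(a));  /* 256-bit operand in a ymm register */
    return a;
  }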

llvm-svn: 218073
2014-09-18 20:24:04 +00:00
Akira Hatanaka 974131ea88 [X86, inlineasm] Check that the output size is correct for the given constraint.
llvm-svn: 218064
2014-09-18 18:17:18 +00:00
Akira Hatanaka 3ab9ada59c Fix test case.
This is another follow-up patch to r217996.

llvm-svn: 218003
2014-09-18 00:29:04 +00:00
Akira Hatanaka d7e375d4b3 Fix test case.
This is a follow-up to r217994.

llvm-svn: 217996
2014-09-18 00:04:10 +00:00
Akira Hatanaka 31c6d3b71e [X86, inline-asm] Check that the input size is correct for constraints R, q, Q,
S, D, A, y, x, f, t, and u.

This is a follow-up patch for r167717.

rdar://problem/11846140
rdar://problem/17476970

llvm-svn: 217994
2014-09-17 23:35:14 +00:00
Alexey Samsonov 8e1162c71d Implement nonnull-attribute sanitizer
Summary:
This patch implements a new UBSan check, which verifies
that function arguments declared to be nonnull with __attribute__((nonnull))
are actually nonnull in runtime.

To implement this check, we pass FunctionDecl to CodeGenFunction::EmitCallArgs
(where applicable) and if function declaration has nonnull attribute specified
for a certain formal parameter, we compare the corresponding RValue to null as
soon as it's calculated.
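
A minimal sketch (hypothetical names) of what the check catches when the corresponding -fsanitize option is enabled:

  void consume(void *p) __attribute__((nonnull(1)));

  void caller(void *maybe_null) {
    consume(maybe_null);  /* the argument is compared to null at the call site */
  }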

Test Plan: regression test suite

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits, rnk

Differential Revision: http://reviews.llvm.org/D5082

llvm-svn: 217389
2014-09-08 17:22:45 +00:00
NAKAMURA Takumi 4b04c11d00 clang/test/CodeGen/builtin-assume*.c: Fixup for -Asserts.
llvm-svn: 217352
2014-09-08 01:12:55 +00:00
Hal Finkel bcc06085a8 Add __builtin_assume and __builtin_assume_aligned using @llvm.assume.
This makes use of the recently-added @llvm.assume intrinsic to implement a
__builtin_assume(bool) intrinsic (to provide additional information to the
optimizer). This hooks up __assume in MS-compatibility mode to mirror
__builtin_assume (the semantics have been intentionally kept compatible), and
implements GCC's __builtin_assume_aligned as assume((p - o) & mask == 0). LLVM
now contains special logic to deal with assumptions of this form.
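
A minimal sketch combining both builtins:

  int load_elem(int *p, int i) {
    __builtin_assume(i >= 0 && i < 16);               /* extra information for the optimizer */
    int *q = (int *)__builtin_assume_aligned(p, 64);  /* GCC-compatible alignment assumption */
    return q[i];
  }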

llvm-svn: 217349
2014-09-07 22:58:14 +00:00
Chandler Carruth 2949e548f4 [x86] Clean up the x86 builtin specs to reflect r217310 in LLVM which
made the 8-bit masks actually 8-bit arguments to these intrinsics.

These builtins are a mess. Many were missing the I qualifier which
I added where obviously correct. Most aren't tested, but I've updated
the relevant tests. I've tried to catch all the things that should
become 'c' in this round.

It's also frustrating because the set of these is really ad-hoc and
doesn't really map that cleanly to the set supported by either GCC or
LLVM. Oh well...

llvm-svn: 217311
2014-09-06 10:30:51 +00:00
James Molloy 163b1ba471 [ARMv8] Add support for 32-bit MIN/MAXNM and directed rounding.
This patch adds support for the 32-bit numeric max/min and directed round-to-integral NEON intrinsics that were added as part of v8, along with unit tests.

Patch by Graham Hunter!

llvm-svn: 217242
2014-09-05 13:50:34 +00:00
Hans Wennborg d71907dd07 Don't emit prologues or epilogues for naked functions (PR18791, PR20028)
For naked functions with parameters, Clang would still emit stores in the prologue
that would clobber the stack, because LLVM doesn't set up a stack frame. (This
shows up in -O0 compiles, because the stores are optimized away otherwise.)

For example:

  __attribute__((naked)) int f(int x) {
    asm("movl $42, %eax");
    asm("retl");
  }

Would result in:

  _Z1fi:
  movl    12(%esp), %eax
  movl    %eax, (%esp)    <--- Oops.
  movl    $42, %eax
  retl

Differential Revision: http://reviews.llvm.org/D5183

llvm-svn: 217198
2014-09-04 22:16:33 +00:00
Reid Kleckner 9b3e3dfc54 MS inline asm: Allow __asm blocks to set a return value
If control falls off the end of a function after an __asm block, MSVC
assumes that the inline assembly filled the EAX and possibly EDX
registers with an appropriate return value. This functionality is used
in inline functions returning 64-bit integers in system headers, so we
need some amount of compatibility.

This is implemented in Clang by adding extra output constraints to every
inline asm block, and storing the resulting output registers into the
return value slot. If we see an asm block somewhere in the function
body, we emit a normal epilogue instead of marking the end of the
function with an 'unreachable' instruction.

Normal returns in functions not using this functionality will overwrite
the return value slot, and in most cases LLVM should be able to
eliminate the dead stores.

Fixes PR17201.

Reviewed By: majnemer

Differential Revision: http://reviews.llvm.org/D5177

llvm-svn: 217187
2014-09-04 20:04:38 +00:00
Reid Kleckner a4ab03ec21 MS inline asm: Add a test for xgetbv clobbers
llvm-svn: 217174
2014-09-04 16:58:47 +00:00
Daniel Sanders e5018b6c00 [mips] Mark aggregates returned in registers with the 'inreg' attribute.
Summary:
This allows us to easily find them in the backend after the aggregates have
been lowered to other types. This is important on big-endian targets using
the N32/N64 ABI's since these ABI's must shift small structures into the
upper bits of the register.

Reviewers: atanasyan

Reviewed By: atanasyan

Subscribers: llvm-commits

Differential Revision: http://reviews.llvm.org/D5005

llvm-svn: 217160
2014-09-04 15:05:39 +00:00
Daniel Sanders ed39f58390 [mips] Zero-sized structs cannot be ignored in MipsABIInfo::classifyReturnType() for O32
Summary:
They are returned indirectly which causes the other arguments to move to
the next argument slot.

With this, utils/ABITest does not discover any failing cases in the first
500 attempts on big/little endian for O32. Previously some of these failed.
Also tested N32/N64 little endian (big endian has other known issues) with
no issues.

Reviewers: atanasyan

Reviewed By: atanasyan

Subscribers: atanasyan, cfe-commits

Differential Revision: http://reviews.llvm.org/D4811

llvm-svn: 217147
2014-09-04 13:28:14 +00:00
Tom Stellard c4e0c1075b CGBuiltin: Use @llvm.fabs rather than fabs libcall when emitting builtins
Using the intrinsic allows the SelectionDAGBuilder to turn this call
into the FABS Node and also the intrinsic is something the vectorizer knows
how to vectorize.

This patch also sets the readnone attribute on this call, which should
enable additional optmizations.

llvm-svn: 217042
2014-09-03 15:24:29 +00:00
Hans Wennborg 2029991d74 Check in a test case for the problem with late-dropped dllimport (PR20803)
llvm-svn: 216749
2014-08-29 17:36:11 +00:00
James Molloy 90d6101410 Use store size instead of alloc size when coercing.
Previously, EnterStructPointerForCoercedAccess used Alloc size when determining how to convert. This was problematic, because there were situations where the alloc size was larger than the store size. For example, if the first element of a structure were i24 and the destination type were i32, the old code would generate a GEP and a load i24. The code should compare store sizes to ensure the whole object is loaded. I have attached a test case.

This patch modifies the output of arm64-be-bitfield.c test case, but the new IR seems to be equivalent, and after -O3, the compiler generates identical ARM assembly. (asr x0, x0, #54)

Patch by Thomas Jablin!

llvm-svn: 216722
2014-08-29 10:17:52 +00:00
David Majnemer 0392cf892f CodeGen: Don't completely mess-up optimized atomic libcalls
Summary:
We did a great job getting this wrong:
- We messed up which LLVM IR types to use for arguments and return values.
  The optimized libcalls use integer types for values.

  Clang attempted to use the IR type which corresponds to the value
  passed in instead of using an appropriately sized integer type.  This
  would result in violations of the ABI for, as an example, floating
  point types.
- We didn't bother recording the result of the atomic libcall in the
  destination memory.

Instead, call the functions with arguments matching the type of the
libcall prototype's parameters.

This fixes PR20780.

Differential Revision: http://reviews.llvm.org/D5098

llvm-svn: 216714
2014-08-29 07:27:49 +00:00
Kostya Serebryany 4a9187a810 call __asan_load_cxx_array_cookie when loading array cookie in asan mode.
Summary:
The current implementation of asan cookie is incorrect:
we add nosanitize metadata to the cookie load, but the metadata may be lost
and we will instrument the load from poisoned memory.
This change replaces the load with a call to __asan_load_cxx_array_cookie (r216692)

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D5111

llvm-svn: 216702
2014-08-29 01:01:32 +00:00
Hans Wennborg 0a20f5417c Better codegen support for DLL attributes being dropped after the first declaration (PR20792)
For the following code:

  __declspec(dllimport) int f(int x);
  int user(int x) {
    return f(x);
  }
  int f(int x) { return 1; }

Clang will drop the dllimport attribute in the AST, but CodeGen would have
already put it on the LLVM::Function, and that would never get updated.
(The same thing happens for global variables.)

This makes Clang check for the dropped DLL attribute case each time the LLVM object
is referenced.

This isn't perfect, because we will still get it wrong if the function is
never referenced by codegen after the attribute is dropped, but this handles
the common cases and makes us not fail in the verifier.

llvm-svn: 216699
2014-08-29 00:16:06 +00:00
Yi Kong 623393f31e arm_acle: Implement data processing intrinsics
Summary:
ACLE 2.0 section 9.2 defines the following "miscellaneous data processing intrinsics": `__clz`, `__cls`, `__ror`, `__rev`, `__rev16`, `__revsh` and `__rbit`.

`__clz` has already been implemented in the arm_acle.h header file. The rest are not supported yet. This patch completes ACLE data processing intrinsics.

Reviewers: t.p.northover, rengolin

Reviewed By: rengolin

Subscribers: aemerson, mroth, llvm-commits

Differential Revision: http://reviews.llvm.org/D4983

llvm-svn: 216658
2014-08-28 09:44:07 +00:00
Alexey Samsonov 9fc9bf83a8 Properly handle multiple nonnull attributes in CodeGen
llvm-svn: 216638
2014-08-28 00:53:20 +00:00
Richard Smith 00cc1c09c3 Fix regression in r216520: don't apply nonnull to non-pointer function
parameters in the IR.

llvm-svn: 216574
2014-08-27 18:56:18 +00:00
Oliver Stannard ed8ecc8429 Allow __fp16 as a function arg or return type for AArch64
ACLE 2.0 allows __fp16 to be used as a function argument or return
type. This enables this for AArch64.

This also fixes an existing bug that causes clang to not allow
homogeneous floating-point aggregates with a base type of __fp16. This
is valid for AAPCS64, but not for AAPCS-VFP.

llvm-svn: 216558
2014-08-27 16:31:57 +00:00
NAKAMURA Takumi 6107a8f4db Quick fix to test/CodeGen/2007-06-18-SextAttrAggregate.c for x86_64-mingw32, corresponding to r216507.
FIXME: Explicit triplets might be given here.
llvm-svn: 216557
2014-08-27 16:22:26 +00:00
Oliver Stannard 2bfdc5b517 Move some ARM-specific code from CGCall.cpp to TargetInfo.cpp
This tidies up some ARM-specific code added by r208417 to move it out
of the target-independent parts of clang into TargetInfo.cpp. This
also has the advantage that we can now flatten struct arguments to
variadic AAPCS functions.

llvm-svn: 216535
2014-08-27 10:43:15 +00:00
Julien Lerouge 10dcff81be Re-apply r216491 (Win64 ABI shouldn't extend integer type arguments.)
This time though, preserve the extension for bool types since that's compatible
with what MSVC expects.

See http://reviews.llvm.org/D4380

llvm-svn: 216507
2014-08-27 00:36:55 +00:00
Julien Lerouge e8d34fa172 Revert 216491, it breaks CodeGenCXX/microsoft-abi-member-pointers.cpp
llvm-svn: 216496
2014-08-26 22:11:53 +00:00
Julien Lerouge 0056256b55 Win64 ABI shouldn't extend integer type arguments.
Summary:
MSVC doesn't extend integer types smaller than 64bit, so to preserve
binary compatibility, clang shouldn't either.

For example, the following C code built with MSVC:

unsigned test(unsigned v);
unsigned foobar(unsigned short);
int main() { return test(0xffffffff) + foobar(28); }

Produces the following:

  0000000000000004: B9 FF FF FF FF     mov         ecx,0FFFFFFFFh
  0000000000000009: E8 00 00 00 00     call        test
  000000000000000E: 89 44 24 20        mov         dword ptr [rsp+20h],eax
  0000000000000012: 66 B9 1C 00        mov         cx,1Ch
  0000000000000016: E8 00 00 00 00     call        foobar

And as you can see, when setting up the call to foobar, only cx is overwritten.

If foobar is compiled with clang, then the zero extension added by clang means
the rest of the register, which contains garbage, could be used.

For example if foobar is:

unsigned foobar(unsigned short v) {
    return v;
}

Compiled with clang -fomit-frame-pointer -O3 gives the following assembly:

foobar:
  0000000000000000: 89 C8              mov         eax,ecx
  0000000000000002: C3                 ret

And that function would return garbage because the 16 most significant bits of
ecx still contain garbage from the first call.

With this change, the code for that function is now:

foobar:
  0000000000000000: 0F B7 C1           movzx       eax,cx
  0000000000000003: C3                 ret

Reviewers: chapuni, rnk

Reviewed By: rnk

Subscribers: majnemer, cfe-commits

Differential Revision: http://reviews.llvm.org/D4380

llvm-svn: 216491
2014-08-26 21:52:27 +00:00
Fariborz Jahanian ffc120a900 revert patch r216469.
llvm-svn: 216485
2014-08-26 21:10:47 +00:00
Quentin Colombet bb9a858b25 [test/CodeGen/ARM] Update arm_neon_intrinsics test case to actually test the
lowering of the intrinsics.
Prior to this commit, most of the copy-related intrinsics could be optimized
away. The situation is still not ideal as there are several possibilities to
lower a given intrinsic. Currently, we match LLVM behavior.

llvm-svn: 216474
2014-08-26 18:43:31 +00:00
Fariborz Jahanian 840438bb06 c11 - Check for the c11 language option, as the documentation says
the feature that nested struct declarations must have a
struct-declarator-list is c11. Without this change, code
which was meant for c99 breaks. rdar://18125536

llvm-svn: 216469
2014-08-26 18:13:47 +00:00
Yi Kong 6891746cd8 arm_acle: Add mappings for dbg intrinsic
This completes all ACLE hint intrinsics.

llvm-svn: 216453
2014-08-26 12:48:11 +00:00
Yi Kong 1d268af094 ARM: Add dbg builtin intrinsic
llvm-svn: 216452
2014-08-26 12:48:06 +00:00
Yi Kong 0705e0065e arm_acle: Implement swap intrinsic
Insert the LDREX/STREX instruction sequence specified in ARM ACLE 2.0,
as the SWP instruction has been deprecated since ARMv6.

llvm-svn: 216446
2014-08-26 09:50:54 +00:00
Kostya Serebryany 4ee6904288 [clang/asan] call __asan_poison_cxx_array_cookie after operator new[]
Summary:
PR19838
When operator new[] is called and an array cookie is created
we want asan to detect buffer overflow bugs that touch the cookie.
For that we need to
  a) poison the shadow for the array cookie (call __asan_poison_cxx_array_cookie).
  b) ignore the legal accesses to the cookie generated by clang (add 'nosanitize' metadata)

Reviewers: timurrrr, samsonov, rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D4774

llvm-svn: 216434
2014-08-26 02:29:59 +00:00
Hal Finkel 6208251923 Implement __builtin_signbitl for PowerPC
PowerPC uses the special PPC_FP128 type for long double on Linux, which is
composed of two 64-bit doubles. The higher-order double (which contains the
overall sign) comes first, and so the __builtin_signbitl implementation
requires special handling to extract the sign bit.

Fixes PR20691.

llvm-svn: 216341
2014-08-24 03:47:06 +00:00
David Majnemer 58e4ea904b CodeGen: Skip unnamed bitfields when handling designated initializers
We would accidentally initialize unnamed bitfields instead of the
following field.
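
For example, with this fix the designator below initializes b rather than the storage of the unnamed bitfield:

  struct S { int a : 4; int : 4; int b : 4; };
  struct S s = { .b = 3 };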

llvm-svn: 216313
2014-08-23 01:48:50 +00:00
Quentin Colombet a1c34d3560 [test/CodeGen/ARM] Adapt test to match new codegen after r216274.
Moreover, rework some patterns to actually check the emitted instructions
instead of matching unrelated strings!

E.g.,
some of the "// CHECK: vmov" were matching stuff like ".globl
funcname_with_vmov" instead of actual instructions.

llvm-svn: 216275
2014-08-22 18:08:37 +00:00
Quentin Colombet ffe5e5a42d [test/CodeGen/ARM] Adapt test to match new codegen after r216236.
llvm-svn: 216249
2014-08-22 00:27:52 +00:00
Fariborz Jahanian 91b2fa2a9a ext_vector IRGen. Patch to allow indexing into
ext_vector_type's 'hi/lo' components when
used as lvalue. rdar://18031917 pr20697

llvm-svn: 215991
2014-08-19 17:17:40 +00:00
Adam Nemet 2278fcbf0c [AVX512] Add FMA intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 215666
2014-08-14 17:17:57 +00:00
Justin Bogner 085c4b294b Revert "CodeGen: When bitfields fall on natural boundaries, split them up"
It fits better with LLVM's memory model to try to do this in the
backend. Specifically, narrowing wide loads in the backends should be
relatively straightforward and is generally valuable, whereas widening
loads tends to be very constrained.

Discussion here:

  http://lists.cs.uiuc.edu/pipermail/cfe-commits/Week-of-Mon-20140811/112581.html

This reverts commit r215614.

llvm-svn: 215648
2014-08-14 15:44:29 +00:00
Rafael Espindola 764837431a Delete support for AuroraUX.
auroraux.org is not resolving.

llvm-svn: 215644
2014-08-14 15:14:51 +00:00
Pekka Jaaskelainen ab751a8f71 Fix a crash when compiling blocks in OpenCL with multiple
address spaces.

llvm-svn: 215629
2014-08-14 09:37:50 +00:00
Justin Bogner caf1c6e3dd CodeGen: When bitfields fall on natural boundaries, split them up
Currently when laying out bitfields that don't need any padding, we
represent them as a wide enough int to contain all of the bits. This
can be hard on the backend since we'll do things like represent stores
to a few bits as loading an i144, masking it with a large constant,
and storing it back.

This turns up in less pathological cases where we load and mask a 64 bit
word on a 32 bit platform when we actually only need to access 32 bits.
This leads to bad code being generated in most of our 32 bit backends.

In practice, there are often natural breaks in bitfields, and it's a
fairly simple and effective heuristic to split these fields into legal
integer sized chunks when it will be equivalent (ie, it won't force us
to add any extra padding).
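
A sketch of a layout with such a natural break; the two 32-bit runs below can now be accessed as two separate i32s instead of one wide integer:

  struct flags {
    unsigned a : 20;
    unsigned b : 12;  /* a and b fill exactly 32 bits ...              */
    unsigned c : 20;
    unsigned d : 12;  /* ... c and d fill the next 32: a natural break */
  };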

llvm-svn: 215614
2014-08-14 02:42:10 +00:00
Yi Kong 45a09319bf ARM: Add mappings for ACLE prefetch intrinsics
Implement __pld, __pldx, __pli and __plix builtin intrinsics as specified in
ARM ACLE 2.0.

llvm-svn: 215599
2014-08-13 23:20:15 +00:00
Justin Bogner 5ea05aed15 test/CodeGen: Don't rely on a value's number in check lines
The tests in r215568 hard code a value as %0 in their checks. This
isn't correct in asserts builds.

llvm-svn: 215585
2014-08-13 21:54:06 +00:00
Yi Kong a5548431a5 AArch64: Prefetch intrinsic
llvm-svn: 215569
2014-08-13 19:18:20 +00:00
Yi Kong 26d104a9ec ARM: Prefetch intrinsics
llvm-svn: 215568
2014-08-13 19:18:14 +00:00
Adam Nemet 4abc07cb75 [AVX512] Add intrinsics for FP scalar broadcasts
A similar approach to the set1 intrinsics is used: implement in terms of vector
initializers and then ensure with an LLVM test that a broadcast is generated
at the end.

Part of <rdar://problem/17688758>

llvm-svn: 215486
2014-08-13 00:29:01 +00:00
Alexey Samsonov de443c5002 [UBSan] Add returns-nonnull sanitizer.
Summary:
This patch adds a runtime check verifying that functions
annotated with "returns_nonnull" attribute do in fact return nonnull pointers.
It is based on suggestion by Jakub Jelinek:
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20140623/223693.html.

Test Plan: regression test suite

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D4849

llvm-svn: 215485
2014-08-13 00:26:40 +00:00
David Blaikie 77bbb5fd0b DebugInfo: Blocks: Do not depend on LLVM argument numbering when choosing the debug info argument numbering.
Due to the possible presence of return-by-out parameters, using the LLVM
argument number count when numbering debug info arguments can end up
off-by-one. This could produce two arguments with the same number, which
would in turn cause LLVM to emit only one of those arguments (whichever
it found last) or assert (r215157).

llvm-svn: 215227
2014-08-08 17:10:14 +00:00
Adam Nemet 5bf7baa938 [AVX512] Add intrinsic for valignd/q
Note that, similar to palignr, we could further optimize these to emit
shufflevector when the shift count is <=64.  This however does not
change the overall design that unlike palignr we would still need the LLVM
intrinsic corresponding to this instruction to handle the >64 cases.  (palignr
uses the psrldq intrinsic in this case.)

llvm-svn: 214891
2014-08-05 17:28:23 +00:00
David Majnemer c017d3613e MS ABI: Aligned tentative definitions don't have CommonLinkage
int __declspec(align(16)) foo; is a tentative definition but the storage
for that variable should not have CommonLinkage.

llvm-svn: 214828
2014-08-05 00:01:13 +00:00
Bill Schmidt ccbe0a8022 [PPC64LE] Fix wrong IR for vec_sld and vec_vsldoi
My original LE implementation of the vsldoi instruction, with its
altivec.h interfaces vec_sld and vec_vsldoi, produces incorrect
shufflevector operations in the LLVM IR.  Correct code is generated
because the back end handles the incorrect shufflevector in a
consistent manner.

This patch and a companion patch for LLVM correct this problem by
removing the fixup from altivec.h and the corresponding fixup from the
PowerPC back end.  Several test cases are also modified to reflect the
now-correct LLVM IR.

The vec_sums and vec_vsumsws interfaces in altivec.h are also fixed,
because they used vec_perm calls intended to be recognized as vsldoi
instructions.  These vec_perm calls are now replaced with code that
more clearly shows the intent of the transformation.

llvm-svn: 214801
2014-08-04 23:21:26 +00:00
Joerg Sonnenberger 466a31eb65 vcfsx and dss instructions require immediates; variables are not valid.
llvm-svn: 214635
2014-08-02 15:07:21 +00:00
Alexey Samsonov d9ad5cec0c [ASan] Use metadata to pass source-level information from Clang to ASan.
Instead of creating global variables for source locations and global names,
just create metadata nodes and strings. They will be transformed into actual
globals in the instrumentation pass (if necessary). This approach is more
flexible:
1) we don't have to ensure that our custom globals survive all the optimizations
2) if globals are discarded for some reason, we will simply ignore metadata for them
   and won't have to erase corresponding globals
3) metadata for source locations can be reused for other purposes: e.g. we may
   attach source location metadata to alloca instructions and provide better descriptions
   for stack variables in ASan error reports.

No functionality change.

llvm-svn: 214604
2014-08-02 00:35:50 +00:00
Reid Kleckner e2d6429493 MS inline asm: Tests for r214550
These tests seem like an exception to the rule against assembly emitting
tests in clang.  I made an LLVM side change that can only be tested by
setting up the inline assembly machinery that is only implemented by
Clang.

llvm-svn: 214552
2014-08-01 20:23:29 +00:00
Daniel Sanders 2ef3cdd3d5 Revert r214497: [mips] Defer va_arg expansion to the backend.
It appears that the backend does not handle all cases that were handled by clang.
In particular, it does not handle structs as used in
SingleSource/UnitTests/2003-05-07-VarArgs.

llvm-svn: 214512
2014-08-01 13:26:28 +00:00
Daniel Sanders cd8ba86990 [mips] Defer va_arg expansion to the backend.
Summary:
This patch causes clang to emit va_arg instructions to the backend instead of
expanding them into an implementation itself. The backend already implements
va_arg since this is necessary for NaCl so this patch is removing redundant
code.

Together with the llvm patch (D4556) that accounts for the effect of endianness
on the expansion of va_arg, this fixes PR19612.

Depends on D4556

Reviewers: sstankovic, dsanders

Reviewed By: dsanders

Subscribers: rnk, cfe-commits

Differential Revision: http://reviews.llvm.org/D4742

llvm-svn: 214497
2014-08-01 10:29:21 +00:00
Hans Wennborg f51dc3b5d4 Local extern redeclarations of dllimport variables stay dllimport even if they don't specify the attribute
llvm-svn: 214425
2014-07-31 19:29:39 +00:00
Ehsan Akhgari 9f507382dd ms-inline-asm: Add a test to ensure that call doesn't clobber eax.
Note that it's not clear whether this is the right behavior, please see
the review for the discussion.

Reviewers: rnk

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D4577

llvm-svn: 214401
2014-07-31 13:43:17 +00:00
Richard Smith 77be48ac47 PR18097: Support initializing an _Atomic(T) from an object of C++ class type T
or a class derived from T. We already supported this when initializing
_Atomic(T) from T for most (and maybe all) other reasonable values of T.

llvm-svn: 214390
2014-07-31 06:31:19 +00:00
Adam Nemet da82bcc4dd [AVX512] Add unaligned FP load intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214380
2014-07-31 04:00:39 +00:00
Rafael Espindola 42affc2db6 Update for llvm change.
llvm-svn: 214356
2014-07-30 22:52:16 +00:00
Adam Nemet 2db1d2fb32 [AVX512] Add intrinsic for knot
Part of <rdar://problem/17688758>

llvm-svn: 214316
2014-07-30 16:51:27 +00:00
Adam Nemet c871ff95f3 [AVX512] Add some of the FP cast intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214315
2014-07-30 16:51:24 +00:00
Adam Nemet f42e7a274a [AVX512] Add set1 intrinsics
(Dropped the byte and word variants from the patch.  Turns out these are not
part of AVX512F but only AVX512BW/VL.)

Part of <rdar://problem/17688758>

llvm-svn: 214314
2014-07-30 16:51:22 +00:00
Richard Smith 00db2f13f8 PR20473: Don't "deduplicate" string literals with the same value but different
lengths! In passing, simplify string literal deduplication by relying on LLVM
to deduplicate the underlying constant values.

llvm-svn: 214222
2014-07-29 21:20:12 +00:00
Tobias Grosser b3af390087 Revert "Emit column debug information for loads"
This broke the following gdb tests:

gdb.base__annota1.exp
gdb.base__consecutive.exp
gdb.python__py-symtab.exp
gdb.reverse__consecutive-precsave.exp
gdb.reverse__consecutive-reverse.exp

I will look into this.

This reverts commit 214162.

llvm-svn: 214163
2014-07-29 06:53:14 +00:00
Tobias Grosser 01b923d55b Emit column debug information for loads
This allows us to give more precise diagnostics.

Diego kindly tested the impact on debug info size: "The increase on average
debug sizes is 0.1%. The total file size increase is ~0%."

llvm-svn: 214162
2014-07-29 06:10:47 +00:00
Adam Nemet fce1ad0b99 [AVX512] Add non-masking FP store intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214099
2014-07-28 17:14:45 +00:00
Adam Nemet a3ebe6214b [AVX512] Add FP add/sub/mul intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214098
2014-07-28 17:14:42 +00:00
Adam Nemet 062ba618f5 [AVX512] Add CHECK-LABELs to test/CodeGen/avx512f-builtins.c
llvm-svn: 214095
2014-07-28 17:14:36 +00:00
Ulrich Weigand 8afad61a93 [PowerPC] Support ELFv1/ELFv2 ABI selection via -mabi= option
While Clang now supports both ELFv1 and ELFv2 ABIs, their use is currently
hard-coded via the target triple: powerpc64-linux is always ELFv1, while
powerpc64le-linux is always ELFv2.

These are of course the most common scenarios, but in principle it is
possible to support the ELFv2 ABI on big-endian or the ELFv1 ABI on
little-endian systems (and GCC does support that), and there are some
special use cases for that (e.g. certain Linux kernel versions could
only be built using ELFv1 on LE).

This patch implements the Clang side of supporting this, based on the
LLVM commit 214072.  The command line options -mabi=elfv1 or -mabi=elfv2
select the desired ABI if present.  (If not, Clang uses the same default
rules as now.)

Specifically, the patch implements the following changes based on the
presence of the -mabi= option:

In the driver:
- Pass the appropriate -target-abi flag to the back-end
- Select the correct dynamic loader version (/lib64/ld64.so.[12])

In the preprocessor:
- Define _CALL_ELF to the appropriate value (1 or 2)

In the compiler back-end:
- Select the correct ABI in TargetInfo.cpp
- Select the desired ABI for LLVM via feature (elfv1/elfv2)

llvm-svn: 214074
2014-07-28 13:17:52 +00:00
Ehsan Akhgari 755597c83d Fix test/CodeGen/ms-inline-asm.c from r213916.
llvm-svn: 213919
2014-07-25 02:39:33 +00:00
Ehsan Akhgari fa2d9aa798 Fix test/CodeGen/ms-inline-asm.cpp from r213916.
llvm-svn: 213918
2014-07-25 02:35:50 +00:00
Ehsan Akhgari 2f93b448a8 clang-cl: Merge adjacent single-line __asm blocks
Summary:
This patch extends the __asm parser to make it keep parsing input tokens
as inline assembly if a single-line __asm line is followed by another line
starting with __asm too.  It also makes sure that we correctly keep
matching braces in such situations by separating the notions of how many
braces we are matching and whether we are in single-line asm block mode.

Reviewers: rnk

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D4598

llvm-svn: 213916
2014-07-25 02:27:14 +00:00
Mark Heffernan c888e41c0c Add support for #pragma nounroll.
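
A minimal sketch of the pragma in use:

  void clear(int *a, int n) {
  #pragma nounroll
    for (int i = 0; i < n; ++i)
      a[i] = 0;
  }
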
llvm-svn: 213885
2014-07-24 18:09:38 +00:00
Mark Heffernan 44ca416a64 Rename metadata in test which was missed when renaming loop unroll metadata in r213771.
llvm-svn: 213775
2014-07-23 17:59:07 +00:00