Commit Graph

550 Commits

Author SHA1 Message Date
Adam Nemet 5bf7baa938 [AVX512] Add intrinsic for valignd/q
Note that, similar to palignr, we could further optimize these to emit
shufflevector when the shift count is <=64.  This, however, does not
change the overall design: unlike palignr, we would still need the LLVM
intrinsic corresponding to this instruction to handle the >64 cases.  (palignr
uses the psrldq intrinsic in this case.)
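
A minimal usage sketch, assuming the public intrinsic added for valignd is
named _mm512_alignr_epi32 (AVX512F; the count must be a compile-time constant):

  #include <immintrin.h>

  /* Concatenate __hi:__lo and shift right by 3 doubleword (32-bit) elements. */
  static __inline__ __m512i concat_shift_by_3(__m512i __hi, __m512i __lo) {
    return _mm512_alignr_epi32(__hi, __lo, 3);
  }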

llvm-svn: 214891
2014-08-05 17:28:23 +00:00
Bill Schmidt ccbe0a8022 [PPC64LE] Fix wrong IR for vec_sld and vec_vsldoi
My original LE implementation of the vsldoi instruction, with its
altivec.h interfaces vec_sld and vec_vsldoi, produces incorrect
shufflevector operations in the LLVM IR.  Correct code is generated
because the back end handles the incorrect shufflevector in a
consistent manner.

This patch and a companion patch for LLVM correct this problem by
removing the fixup from altivec.h and the corresponding fixup from the
PowerPC back end.  Several test cases are also modified to reflect the
now-correct LLVM IR.

The vec_sums and vec_vsumsws interfaces in altivec.h are also fixed,
because they used vec_perm calls intended to be recognized as vsldoi
instructions.  These vec_perm calls are now replaced with code that
more clearly shows the intent of the transformation.

llvm-svn: 214801
2014-08-04 23:21:26 +00:00
Adam Nemet da82bcc4dd [AVX512] Add unaligned FP load intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214380
2014-07-31 04:00:39 +00:00
Adam Nemet 2db1d2fb32 [AVX512] Add intrinsic for knot
Part of <rdar://problem/17688758>

llvm-svn: 214316
2014-07-30 16:51:27 +00:00
Adam Nemet c871ff95f3 [AVX512] Add some of the FP cast intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214315
2014-07-30 16:51:24 +00:00
Adam Nemet f42e7a274a [AVX512] Add set1 intrinsics
(Dropped the byte and word variants from the patch.  Turns out these are not
part of AVX512F but only AVX512BW/VL.)

Part of <rdar://problem/17688758>

llvm-svn: 214314
2014-07-30 16:51:22 +00:00
Joerg Sonnenberger 3d9478cf3a Change __INTx_TYPE__ to be always signed. This changes the value for
char-based types from "char" to "signed char". Adjust stdint.h to use
__INTx_TYPE__ directly without prefixing it with signed and to use
__UINTx_TYPE__ for unsigned ones.

The value of __INTx_TYPE__ now matches GCC.
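
A minimal sketch of what stdint.h can now do with these macros (illustrative
typedefs, not the verbatim header contents):

  /* __INT8_TYPE__ now expands to "signed char", so no extra keyword is needed;
     the unsigned counterparts come from __UINTx_TYPE__. */
  typedef __INT8_TYPE__   int8_t;
  typedef __UINT8_TYPE__  uint8_t;
  typedef __INT64_TYPE__  int64_t;
  typedef __UINT64_TYPE__ uint64_t;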

llvm-svn: 214119
2014-07-28 21:06:22 +00:00
Adam Nemet fce1ad0b99 [AVX512] Add non-masking FP store intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214099
2014-07-28 17:14:45 +00:00
Adam Nemet a3ebe6214b [AVX512] Add FP add/sub/mul intrinsics
Part of <rdar://problem/17688758>

llvm-svn: 214098
2014-07-28 17:14:42 +00:00
Adam Nemet 0d5bb5530d [AVX512] Reorder functions in avx512fintrin.h
There is no functional change here.

The idea is to have a similar order and categories of functions that we have
in avxintrin.h.

llvm-svn: 214097
2014-07-28 17:14:40 +00:00
Adam Nemet 9a3ea60a2c [AVX512] Bring the formatting of avx512fintrin.h closer to avxintrin.h
llvm-svn: 214096
2014-07-28 17:14:38 +00:00
Yi Kong cd08139865 Add module map entry for ARM ACLE header file
llvm-svn: 213731
2014-07-23 09:00:21 +00:00
Elena Demikhovsky bd1a49bf81 AVX-512: I added the new headers to the makefiles. This should resolve the test failures.
If it does not, I'll revert both commits.

llvm-svn: 213645
2014-07-22 12:08:25 +00:00
Elena Demikhovsky fcc6df310d AVX-512: Added intrinsics to clang.
The set is small; it is what I have right now.
Everybody is welcome to add more.

llvm-svn: 213641
2014-07-22 11:31:39 +00:00
Viktor Kutuzov 99400a5a34 Revert D3908 due to issues on Mac platforms
llvm-svn: 213450
2014-07-19 05:58:38 +00:00
Yi Kong 28d7b02687 ARM: Add ACLE memory barrier intrinsic mapping
llvm-svn: 213261
2014-07-17 12:45:17 +00:00
Yi Kong 472e521cec ARM: Add NOP intrinsic mapping in arm_acle.h
llvm-svn: 212950
2014-07-14 15:32:29 +00:00
Saleem Abdulrasool 07257fe14e Headers: add hint intrinsics to arm_acle.h
This adds the ARM ACLE hint intrinsic wrappers to arm_acle.h.  These need to be
protected with a !defined(_MSC_VER) since MSVC (and thus clang in compatibility
mode) provide these wrappers as proper builtin intrinsics.
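
A hedged sketch of one such guarded wrapper (illustrative, assuming the
__builtin_arm_wfi builtin; not the verbatim arm_acle.h code):

  #if !defined(_MSC_VER)
  /* Wait For Interrupt hint; MSVC-compatible compilers already provide this
     as a proper builtin, so the wrapper must not be redefined there. */
  static __inline__ void __attribute__((__always_inline__, __nodebug__))
  __wfi(void) {
    __builtin_arm_wfi();
  }
  #endif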

llvm-svn: 212891
2014-07-12 23:27:26 +00:00
Yi Kong 4e00ce7d0c Improve comments of ARM ACLE header file and tests
Include section number in ARM ACLE specification for easier navigation.

llvm-svn: 212887
2014-07-12 22:48:13 +00:00
Viktor Kutuzov 63537656c6 Add clang headers that fix machine-dependent definitions on FreeBSD 9.2
Differential Revision: http://reviews.llvm.org/D3908

llvm-svn: 212689
2014-07-10 08:43:39 +00:00
Nico Weber a62cffae52 Don't pull in setjmp.h in -ffreestanding compiles.
Also provide _setjmpex(). r200243 put in _setjmp() and _setjmpex() behind a
comment since jmp_buf wasn't available. r200344 added jmp_buf and put in
_setjmp(), but missed _setjmpex().

llvm-svn: 212557
2014-07-08 18:34:46 +00:00
Nico Weber 1287091373 Replace a few // comments with /**/ comments in headers, for consistency.
llvm-svn: 212556
2014-07-08 18:29:27 +00:00
Saleem Abdulrasool c4ebb129b7 Headers: conditionalise more declarations
Protect MMX specific declarations under a __MMX__ guard.  This header can be
included on non-x86 architectures (e.g. ARM) which do not support the MMX ISA.
Use the preprocessor to prevent these declarations from being processed.
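
A minimal sketch of the guard (illustrative only, with a hypothetical wrapper
name):

  #ifdef __MMX__
  /* MMX-only definition; skipped entirely when targeting e.g. ARM. */
  static __inline__ void __attribute__((__always_inline__, __nodebug__))
  my_empty_mmx_state(void) {
    __builtin_ia32_emms();
  }
  #endif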

llvm-svn: 212512
2014-07-08 05:46:04 +00:00
Saleem Abdulrasool 60df0615b6 Headers: mark arm_acle.h with extern "C"
Although the functions are marked as always_inline, the compiler with which they
are used may not honour the extended attributes and may emit them as real functions.  In
such a case, indicate that they should have extern "C" linkage and should not be
mangled in C++ style if used within C++.
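
A minimal sketch of the wrapping (the standard idiom, not the verbatim
arm_acle.h layout):

  #if defined(__cplusplus)
  extern "C" {
  #endif

  /* ...intrinsic wrappers keep C linkage even when included from C++... */

  #if defined(__cplusplus)
  }
  #endif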

llvm-svn: 212511
2014-07-08 05:46:00 +00:00
Renato Golin 47843efcf6 Add the __qdbl intrinsic to the arm_acle.h header
Patch by: Moritz Roth

llvm-svn: 212264
2014-07-03 10:14:52 +00:00
Yaron Keren 672efea2e9 Added a standard macro guard. If __GNUC_VA_LIST was not
previously defined, or was defined identically, there is no
change in functionality.

MinGW-w64 defines __GNUC_VA_LIST as

  #define __GNUC_VA_LIST

which is different from the definition here, causing
a warning without the guard.
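
A hedged sketch of the guarded definition (illustrative, not necessarily the
verbatim stdarg.h lines):

  #ifndef __GNUC_VA_LIST
  #define __GNUC_VA_LIST 1
  typedef __builtin_va_list __gnuc_va_list;
  #endif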

llvm-svn: 212183
2014-07-02 15:25:03 +00:00
Andrea Di Biagio eb606a3c27 [x86] Add Clang support for intrinsic __rdpmc.
This patch adds intrinsic __rdpmc to header file 'ia32intrin.h'.
Intrinsic __rdpmc can be used to read performance monitoring counters. It is
implemented as a direct call to __builtin_ia32_rdpmc.

It takes as input a value representing the index of the performance counter to
read. The value of the performance counter is then returned as an unsigned
64-bit quantity.
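
A hedged sketch of such a wrapper (close to, but not necessarily verbatim,
the ia32intrin.h definition):

  static __inline__ unsigned long long __attribute__((__always_inline__, __nodebug__))
  __rdpmc(int __A) {
    /* __A selects which performance monitoring counter to read. */
    return __builtin_ia32_rdpmc(__A);
  }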

llvm-svn: 212053
2014-06-30 18:23:58 +00:00
Yi Kong a44c4d7173 Introduce arm_acle.h supporting existing LLVM builtin intrinsics
Summary: This patch introduces the ACLE header file, implementing extensions that can be directly mapped to existing Clang intrinsics. It covers both AArch32 and AArch64.

Reviewers: t.p.northover, compnerd, rengolin

Reviewed By: compnerd, rengolin

Subscribers: rnk, echristo, compnerd, aemerson, mroth, cfe-commits

Differential Revision: http://reviews.llvm.org/D4296
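
A hedged example of the kind of mapping the header provides (illustrative;
assumes the ACLE __rev byte-reversal intrinsic maps to Clang's
__builtin_bswap32):

  #include <stdint.h>

  /* ACLE byte reversal, expressible with an existing Clang builtin. */
  static __inline__ uint32_t __attribute__((__always_inline__, __nodebug__))
  __rev(uint32_t __t) {
    return __builtin_bswap32(__t);
  }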

llvm-svn: 211962
2014-06-27 21:25:42 +00:00
Saleem Abdulrasool 702eefed9a Headers: be a bit more careful about inline asm
Conditionally include x86intrin.h if we are building for x86 or x86_64.
Conditionalise definition of inline assembly routines which use x86 or x86_64
inline assembly. This is needed as clang can target Windows on ARM where these
definitions may be included into user code.
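
A minimal sketch of the conditional include (illustrative):

  #if defined(__i386__) || defined(__x86_64__)
  #include <x86intrin.h>
  #endif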

llvm-svn: 211716
2014-06-25 16:48:40 +00:00
Saleem Abdulrasool 114efe0dc8 CodeGen: improve MS intrinsics support
Add support for _InterlockedCompareExchangePointer, _InterlockedExchangePointer,
_InterlockedExchange.  These are available as compiler intrinsics on ARM and x86.
These are used directly by the Windows SDK headers without use of the intrin
header.
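
A hedged usage sketch (assumes the MSVC-style prototype: a volatile pointer
slot, an exchange value, and a comparand, returning the prior value; the
try_claim helper is hypothetical):

  #include <intrin.h>

  /* Publish new_value only if *slot still holds NULL; returns the previous
     contents of the slot. */
  void *try_claim(void *volatile *slot, void *new_value) {
    return _InterlockedCompareExchangePointer(slot, new_value, 0);
  }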

llvm-svn: 211216
2014-06-18 20:51:10 +00:00
Bill Schmidt 1cf7c64fa5 [PPC64LE] Run some existing Altivec tests on powerpc64le as well
There are several Altivec tests that formerly ran only on big-endian
targets (and in some cases only on 32-bit targets).  It is useful to
verify these on little-endian targets as well.

While testing these, I discovered a typo in <altivec.h>.  This is also
fixed by this patch.

llvm-svn: 210928
2014-06-13 18:30:06 +00:00
Bill Schmidt 56a6967000 [PPC64LE] Fix vec_sld and vec_vsldoi for little endian
The vec_sld and vec_vsldoi interfaces perform a left-shift on vector
arguments for both big and little endian.  However, because they rely
on the vec_perm interface which is endian-dependent, the permutation
vector needs to be reversed for LE to get the proper shift direction.

I've added some extra testing for these interfaces for LE in the
builtins-ppc-altivec.c.

llvm-svn: 210657
2014-06-11 15:48:46 +00:00
Bill Schmidt 7f6596bb13 [PPC64LE] Implement little-endian semantics for vec_sums
The PowerPC vsumsws instruction, accessed via vec_sums, is defined
architecturally with a big-endian bias, in that the second input vector
and the result always reference big-endian element 3 (little-endian
element 0).  For ease of porting, the programmer wants element 3 in
both cases.

To provide this semantics, for little endian we generate a permute for
the second input vector prior to the vsumsws instruction, and generate
a permute for the result vector following the vsumsws instruction.

The correctness of this code is tested by the new sums.c test added in
a previous patch, as well as the modifications to
builtins-ppc-altivec.c in the present patch.

llvm-svn: 210449
2014-06-09 03:31:47 +00:00
Bill Schmidt d7c53a91df [PPC64LE] Implement little-endian semantics for vec_unpack[hl]
The PowerPC vector-unpack-high and vector-unpack-low instructions
are defined architecturally with a big-endian bias, in that the vector
element numbering is assumed to be "left to right" regardless of
whether the processor is in big-endian or little-endian mode.  This
effectively reverses the meaning of "high" and "low."  Such a
definition is unnatural for little-endian code generation.

To facilitate ease of porting, the vec_unpackh and vec_unpackl
interfaces are designed to use natural element ordering, so that
elements are numbered according to little-endian design principles
when code is generated for a little-endian target.  The desired
semantics can be achieved by using the opposite instruction for
little-endian mode.  That is, when a call to vec_unpackh appears in
the code, a vector-unpack-low is generated, and when a call to
vec_unpackl appears in the code, a vector-unpack-high is generated.

The correctness of this code is tested by the new unpack.c test
added in a previous patch, as well as the modifications to
builtins-ppc-altivec.c in the present patch.

Note that these interfaces were originally incorrectly implemented
when they take a vector pixel argument.  This patch corrects this
implementation for both big- and little-endian code generation.

llvm-svn: 210391
2014-06-07 02:20:52 +00:00
Bill Schmidt 7f0a5c5141 [PPC64LE] Update builtins-ppc-altivec.c for PPC64 and PPC64LE
The Altivec builtin test case test/CodeGen/builtins-ppc-altivec.c has
always been executed only for 32-bit PowerPC.  These tests are equally
valid for 64-bit PowerPC.  This patch updates the test to be run for
three targets:  powerpc-unknown-unknown, powerpc64-unknown-unknown,
and powerpc64le-unknown-unknown.  The expected code generation changes
for some of the Altivec builtins for little endian, so this patch adds
new CHECK-LE variants to the test for the powerpc64le target.

These tests satisfy the testing requirements for some previous patches
committed over the last couple of days for lib/Headers/altivec.h:
r210279 for vec_perm, r210337 for vec_mul[eo], and r210340 for
vec_pack.

llvm-svn: 210384
2014-06-06 23:12:00 +00:00
Bill Schmidt 8a7b4f18bd [PPC64LE] Implement little-endian semantics for vec_pack family
The PowerPC vector-pack instructions are defined architecturally with
a big-endian bias, in that the vector element numbering is assumed to
be "left to right" regardless of whether the processor is in
big-endian or little-endian mode.  This definition is unnatural for
little-endian code generation.

To facilitate ease of porting, the vec_pack and related interfaces are
designed to use natural element ordering, so that elements are
numbered according to little-endian design principles when code is
generated for a little-endian target.  The vec_pack calls are
implemented as calls to vec_perm, specifying selection of the
odd-numbered vector elements.  For little endian, this means the
odd-numbered elements counting from the right end of the register.
Since the underlying instructions count from the left end, we must
instead select the even-numbered vector elements for little endian to
achieve the desired semantics.

The correctness of this code is tested by the new pack.c test added in
a previous patch.  I plan to later make the existing ppc32 Altivec
compile-time tests work for ppc64 and ppc64le as well.

llvm-svn: 210340
2014-06-06 15:10:47 +00:00
Bill Schmidt 7c0114f6e3 [PPC64LE] Implement little-endian semantics for vec_mul[eo]
The PowerPC vector-multiply-even and vector-multiply-odd instructions
are defined architecturally with a big-endian bias, in that the vector
element numbering is assumed to be "left to right" regardless of
whether the processor is in big-endian or little-endian mode.  This
definition is unnatural for little-endian code generation.

To facilitate ease of porting, the vec_mule and vec_mulo interfaces are
designed to use natural element ordering, so that elements are
numbered according to little-endian design principles when code is
generated for a little-endian target.  The desired semantics can be
achieved by using the opposite instruction for little-endian mode.
That is, when a call to vec_mule appears in the code, a
vector-multiply-odd is generated, and when a call to vec_mulo appears
in the code, a vector-multiply-even is generated.
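
A hedged sketch of the idea for one overload (assumes the Altivec builtins
__builtin_altivec_vmulesh/__builtin_altivec_vmulosh and -maltivec;
illustrative, not the verbatim altivec.h code):

  static __inline__ vector int __attribute__((__always_inline__))
  my_vec_mule(vector short __a, vector short __b) {
  #ifdef __LITTLE_ENDIAN__
    return __builtin_altivec_vmulosh(__a, __b);  /* swap: emit multiply-odd */
  #else
    return __builtin_altivec_vmulesh(__a, __b);  /* native multiply-even */
  #endif
  }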

The correctness of this code is tested by the new mult-even-odd.c test
added in a previous patch.  I plan to later make the existing ppc32
Altivec compile-time tests work for ppc64 and ppc64le as well.

llvm-svn: 210337
2014-06-06 14:45:06 +00:00
Bill Schmidt f7e289c0f2 [PPC64LE] Implement little-endian semantics for vec_perm
The PowerPC vperm (vector permute) instruction is defined
architecturally with a big-endian bias, in that the two input vectors
are assumed to be concatenated "left to right" and the elements of the
combined input vector are assumed to be numbered from "left to right"
(i.e., with element 0 referencing the high-order element).  This
definition is unnatural for little-endian code generation.

To facilitate ease of porting, the vec_perm interface is designed to
use natural element ordering, so that elements are numbered according
to little-endian design principles when code is generated for a
little-endian target.  The desired semantics can be achieved with the
vperm instruction provided that the two input vector registers are
reversed, and the permute control vector is complemented.  The
complementing is performed using an xor with a vector containing all
one bits.

Only the rightmost 5 bits of each element of the permute control
vector are relevant, so it would be possible to complement the vector
with respect to a <16xi8> vector containing all 31s.  However, when
the permute control vector is not a constant, using 255 instead has
the advantage that the vec_xor can be recognized during code
generation as a vnor instruction.  (Power8 introduces a vnand
instruction which could alternatively be generated.)
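
A hedged sketch of the little-endian adjustment for one vec_perm overload
(assumes the raw __builtin_altivec_vperm_4si builtin; illustrative, not the
verbatim altivec.h code):

  #include <altivec.h>

  static __inline__ vector unsigned char __attribute__((__always_inline__))
  my_vec_perm(vector unsigned char __a, vector unsigned char __b,
              vector unsigned char __c) {
  #ifdef __LITTLE_ENDIAN__
    vector unsigned char __d = {255,255,255,255,255,255,255,255,
                                255,255,255,255,255,255,255,255};
    __c = vec_xor(__c, __d);                     /* complement the control vector */
    return (vector unsigned char)__builtin_altivec_vperm_4si(
        (vector int)__b, (vector int)__a, __c);  /* inputs reversed */
  #else
    return (vector unsigned char)__builtin_altivec_vperm_4si(
        (vector int)__a, (vector int)__b, __c);
  #endif
  }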

The correctness of this code is tested by the new perm.c test added in
a previous patch.  I plan to later make the existing ppc32 Altivec
compile-time tests work for ppc64 and ppc64le as well.

llvm-svn: 210279
2014-06-05 19:07:40 +00:00
Adam Nemet 286ae08e7d Implement AVX1 vbroadcast intrinsics with vector initializers
These intrinsics are special because they directly take a memory operand (AVX2
adds the register counterparts).  Typically, other non-memop intrinsics take
registers and then it's left to isel to fold memory operands.

In order to LICM intrinsics directly reading memory, we require that no stores
are in the loop (LICM) or that the folded load accesses constant memory
(MachineLICM).  When neither is the case we fail to hoist a loop-invariant
broadcast.

We can work around this limitation if we expose the load as a regular load and
then just implement the broadcast using the vector initializer syntax.  This
exposes the load to LICM and other optimizations.
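
A hedged sketch of the new shape of one such intrinsic (close to, but not
necessarily verbatim, the avxintrin.h change; requires AVX):

  #include <immintrin.h>

  static __inline __m256d __attribute__((__always_inline__, __nodebug__))
  my_broadcast_sd(double const *__a) {
    double __d = *__a;                       /* plain load, visible to LICM */
    return (__m256d){ __d, __d, __d, __d };  /* vector initializer; isel still
                                                forms the broadcast */
  }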

At the IR level this is translated into a series of insertelements.  The
sequence is already recognized as a broadcast so there is no impact on the
quality of codegen.

_mm256_broadcast_pd and _mm256_broadcast_ps are not updated by this patch
because right now we lack the DAG-combiner smartness to recover the broadcast
instructions.  This will be tackled in a follow-on.

There will be corresponding changes on the LLVM side to remove the LLVM
intrinsics and to auto-upgrade bitcode files.

Fixes <rdar://problem/16494520>

llvm-svn: 209846
2014-05-29 20:47:29 +00:00
Sanjay Patel 1585fb94ab added Intel's BMI intrinsic variants
(fixes PR19431 - http://llvm.org/bugs/show_bug.cgi?id=19431)

llvm-svn: 209769
2014-05-28 20:26:57 +00:00
Akira Hatanaka 5d28ea1451 Fix a bug in xmmintrin.h.
The last step of _mm_cvtps_pi16 should use _mm_packs_pi32, which is a function
that reads two __m64 values and packs four 32-bit values into four 16-bit
values.  
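
A hedged sketch of the corrected sequence (illustrative, not the verbatim
xmmintrin.h body):

  #include <xmmintrin.h>

  static __inline__ __m64 my_cvtps_pi16(__m128 __a) {
    __m64 __b = _mm_cvtps_pi32(__a);                      /* low two floats -> two i32 */
    __m64 __c = _mm_cvtps_pi32(_mm_movehl_ps(__a, __a));  /* high two floats -> two i32 */
    return _mm_packs_pi32(__b, __c);                      /* pack four i32 into four i16 */
  }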

<rdar://problem/16873717>

llvm-svn: 209489
2014-05-23 00:38:07 +00:00
Timur Iskhodzhanov a27b044166 Define the InterlockedCompareExchange64 intrinsic on 32-bits too
llvm-svn: 208699
2014-05-13 13:59:05 +00:00
Filipe Cabecinhas 5d289b48b1 Patched clang to emit x86 blends as shufflevectors.
Summary:
Most of the clang header patch by Simon Pilgrim @ SCEE.
Also fixed (or added) clang tests for these intrinsics.

LLVM tests to make sure we get the blend instruction out of these
shufflevectors are at http://reviews.llvm.org/D3600
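
A hedged illustration of the idea using Clang's __builtin_shufflevector
(hypothetical helper, not one of the actual header definitions):

  #include <xmmintrin.h>

  /* A constant 4-wide blend: lanes 0 and 2 come from __a, lanes 1 and 3 from
     __b; indices 4..7 refer to the second operand. */
  static __inline__ __m128 my_blend(__m128 __a, __m128 __b) {
    return __builtin_shufflevector(__a, __b, 0, 5, 2, 7);
  }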

Reviewers: eli.friedman, craig.topper, rafael

Subscribers: cfe-commits

Differential Revision: http://reviews.llvm.org/D3601

llvm-svn: 208664
2014-05-13 02:37:02 +00:00
Nico Weber 272bcf6768 Let stddef.h respect __need_{wchar_t, size_t, NULL, ptrdiff_t, wint_t}.
glibc expects that stddef.h only defines a single thing when any of these
__need_* macros is set.  For example, before this change, a C file containing

  #include <stdlib.h>
  int ptrdiff_t = 0;

would compile with gcc but not with clang. Now it compiles with clang too.
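
A hedged sketch of how such a __need_ macro can be honoured (illustrative,
not the verbatim stddef.h logic):

  #if defined(__need_size_t)
  typedef __SIZE_TYPE__ size_t;   /* provide only size_t ... */
  #undef __need_size_t            /* ... and consume the request */
  #endif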

This also fixes PR12997, where older versions of the Linux headers would define
NULL incorrectly, and glibc would define __need_NULL and expect stddef.h to
redefine NULL with the correct definition.

llvm-svn: 207606
2014-04-30 04:35:09 +00:00
Nico Weber f077c51a70 Revert r207482; I fail at reading IRC.
llvm-svn: 207483
2014-04-29 01:25:49 +00:00
Nico Weber 8af28c1e61 Let stddef.h redefine NULL if __need_NULL is set, as needed by glibc, PR12997.
See the bug and the cfe-commits thread "[patch] Let stddef.h redefine NULL if
__need_NULL is set" for discussion.

Fixes PR12997 and is similar to the __need_wint_t bits already in this file.

llvm-svn: 207482
2014-04-29 01:19:21 +00:00
Hans Wennborg ac156e2225 Intrin.h: remove __rdtsc and __rdtscp declarations
Since r207132, these are defined in ia32intrin.h.

llvm-svn: 207134
2014-04-24 18:40:06 +00:00
Andrea Di Biagio 7ceec07cf6 [X86] Add Clang support for intrinsics __rdtsc and __rdtscp.
This patch:
 1. Adds a definition for two new GCCBuiltins in BuiltinsX86.def:
   __builtin_ia32_rdtsc;
   __builtin_ia32_rdtscp;

 2. Replaces the already existing definition of intrinsic __rdtsc in
    ia32intrin.h with a simple call to the new GCC builtin __builtin_ia32_rdtsc.

 3. Adds a definition for the new intrinsic __rdtscp in ia32intrin.h
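
A hedged sketch of the resulting ia32intrin.h wrappers (close to, but not
necessarily verbatim, the actual definitions):

  static __inline__ unsigned long long __attribute__((__always_inline__, __nodebug__))
  __rdtsc(void) {
    return __builtin_ia32_rdtsc();
  }

  static __inline__ unsigned long long __attribute__((__always_inline__, __nodebug__))
  __rdtscp(unsigned int *__A) {
    /* The processor ID (IA32_TSC_AUX) is stored through __A. */
    return __builtin_ia32_rdtscp(__A);
  }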

llvm-svn: 207132
2014-04-24 18:26:35 +00:00
Ben Langmuir 47d1ca4838 Rename lib/Headers/module.map to module.modulemap
Don't install a file using the legacy spelling.

llvm-svn: 206431
2014-04-17 00:52:48 +00:00
Reid Kleckner 6df5254d6f intrin.h: Fix up bugs in the cr3 and msr intrinsics
Don't include input and output regs in clobbers.  Prefix some
identifiers with __.  Add a memory constraint to __readcr3 to prevent
reordering.  This constraint is heavy handed, but conservatively
correct.
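
A hedged sketch of the kind of definition this describes (illustrative 32-bit
form, not the verbatim intrin.h code):

  static __inline__ unsigned long __readcr3(void) {
    unsigned long __cr3_val;
    /* The "memory" clobber keeps surrounding loads/stores from being
       reordered across the read; the output register is not listed as a
       clobber. */
    __asm__ __volatile__("mov %%cr3, %0" : "=q"(__cr3_val) : : "memory");
    return __cr3_val;
  }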

Thanks to PaX Team for the suggestions.

llvm-svn: 205778
2014-04-08 17:49:16 +00:00