Commit Graph

80 Commits

Author SHA1 Message Date
Albert Gutowski 727ab8a803 Add some MS aliases for existing intrinsics
Reviewers: thakis, compnerd, majnemer, rsmith, rnk

Subscribers: alexshap, cfe-commits

Differential Revision: https://reviews.llvm.org/D24330

llvm-svn: 281540
2016-09-14 21:19:43 +00:00
Albert Gutowski 9918cb6573 Revert commit 281375 (breaks building Chromium)
llvm-svn: 281399
2016-09-13 21:24:51 +00:00
Albert Gutowski ae3fb3113f Add some MS aliases for existing intrinsics
Reviewers: thakis, compnerd, majnemer, rsmith, rnk

Subscribers: cfe-commits

Differential Revision: https://reviews.llvm.org/D24330

llvm-svn: 281375
2016-09-13 19:26:42 +00:00
Craig Topper 45db56c375 [X86] Add missing __x86_64__ qualifiers on a bunch of intrinsics that assume 64-bit GPRs are available.
Using these intrinsics in a 32-bit build results in assertions in the backend.

llvm-svn: 276249
2016-07-21 07:38:39 +00:00
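
A sketch of the guard pattern from the commit above, assuming the usual xmmintrin.h surroundings (__DEFAULT_FN_ATTRS, the __v4sf typedef); illustrative, not the exact diff:

    #ifdef __x86_64__
    /* Only meaningful when 64-bit GPRs exist; hidden from 32-bit builds. */
    static __inline__ long long __DEFAULT_FN_ATTRS
    _mm_cvtss_si64(__m128 __a)
    {
      return __builtin_ia32_cvtss2si64((__v4sf)__a);
    }
    #endif /* __x86_64__ */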
Simon Pilgrim e3b9ee0645 [X86][SSE] Reimplement SSE fp2si conversion intrinsics instead of using generic IR
D20859 and D20860 attempted to replace the SSE (V)CVTTPS2DQ and VCVTTPD2DQ truncating conversions with generic IR instead.

It turns out that the behaviour of these intrinsics is different enough from generic IR that this will cause problems: INF/NAN/out-of-range values are guaranteed to result in a 0x80000000 value, which plays havoc with constant folding (which converts them to either zero or UNDEF). This is also an issue with the scalar implementations (which were already generic IR and what I was trying to match).

This patch changes both scalar and packed versions back to using x86-specific builtins.

It also deals with the other scalar conversion cases that are runtime rounding mode dependent and can have similar issues with constant folding.

Differential Revision: https://reviews.llvm.org/D22105

llvm-svn: 276102
2016-07-20 10:18:01 +00:00
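
A hedged sketch of the builtin-based form described in the commit above (packed case shown; the scalar variants follow the same shape, header context assumed):

    /* Goes through the x86 builtin, so INF/NaN/out-of-range inputs give the
       documented 0x80000000 result instead of whatever fptosi would fold to. */
    static __inline__ __m128i __DEFAULT_FN_ATTRS
    _mm_cvttps_epi32(__m128 __a)
    {
      return (__m128i)__builtin_ia32_cvttps2dq((__v4sf)__a);
    }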
Craig Topper 95b61b0544 [X86] Use __builtin_ia32_vec_ext_v4hi and __builtin_ia32_vec_set_v4hi to implement pextrw/pinsertw MMX intrinsics instead of trying to use native IR.
Without this we end up generating code that doesn't use mmx registers and probably doesn't work well with other mmx intrinsics.

llvm-svn: 274968
2016-07-09 05:30:41 +00:00
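
A sketch of the vec_ext form for the extract case in the commit above (names as they appear in xmmintrin.h; treat as illustrative):

    static __inline__ int __DEFAULT_FN_ATTRS
    _mm_extract_pi16(__m64 __a, int const __n)
    {
      /* Element extraction stays in the MMX domain instead of generic IR. */
      return __builtin_ia32_vec_ext_v4hi((__v4hi)__a, __n);
    }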
Craig Topper 2a383c9273 [X86] Use undefined instead of setzero in shufflevector based intrinsics when the second source is unused. Rewrite immediate extractions in shuffle intrinsics to be in ((c >> x) & y) form instead of ((c & z) >> x). This way only x varies between each use instead of having to vary x and z.
llvm-svn: 274525
2016-07-04 22:18:01 +00:00
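
A hedged sketch of both changes from the commit above on one macro (assumes _mm_undefined_si128 and the __v4si typedef from the headers):

    #define _mm_shuffle_epi32(a, imm) __extension__ ({ \
      (__m128i)__builtin_shufflevector( \
          (__v4si)(__m128i)(a), \
          (__v4si)_mm_undefined_si128(),      /* unused second source */ \
          ((imm) >> 0) & 0x3, ((imm) >> 2) & 0x3, \
          ((imm) >> 4) & 0x3, ((imm) >> 6) & 0x3); })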
Zvi Rackover 453d734201 [X86] _MM_ALIGN16 attribute support for non-windows targets
Summary:
This patch adds support for the _MM_ALIGN16 attribute on non-windows targets. This aligns Clang with ICC which supports the attribute on all targets.

Fixes PR28056

Reviewers: aaboud, echristo, cfe-commits, mkuper

Subscribers: zvi, mehdi_amini

Projects: #clang-c

Differential Revision: http://reviews.llvm.org/D21173

llvm-svn: 273095
2016-06-18 20:01:07 +00:00
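
A sketch of what the attribute can expand to once it is no longer Windows-only (the exact spelling in the header may differ):

    #ifdef _MSC_VER
    #define _MM_ALIGN16 __declspec(align(16))
    #else
    #define _MM_ALIGN16 __attribute__((aligned(16)))
    #endif

    _MM_ALIGN16 float __coeffs[4];   /* 16-byte aligned on every target */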
Simon Pilgrim beca5f295c [Clang][X86] Convert non-temporal store builtins to generic __builtin_nontemporal_store in headers
We can now use __builtin_nontemporal_store instead of target specific builtins for naturally aligned nontemporal stores which avoids the need for handling in CGBuiltin.cpp

The scalar integer nontemporal (unaligned) store builtins will have to wait as __builtin_nontemporal_store currently assumes natural alignment and doesn't accept the 'packed struct' trick that we use for normal unaligned load/stores.

The nontemporal loads require further backend support before we can safely convert them to __builtin_nontemporal_load.

Differential Revision: http://reviews.llvm.org/D21272

llvm-svn: 272540
2016-06-13 09:57:52 +00:00
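
A sketch of the converted form from the commit above (xmmintrin.h context assumed):

    static __inline__ void __DEFAULT_FN_ATTRS
    _mm_stream_ps(float *__p, __m128 __a)
    {
      /* Naturally aligned pointer, so the generic builtin applies. */
      __builtin_nontemporal_store((__v4sf)__a, (__v4sf*)__p);
    }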
Craig Topper 3a0c7260f4 [X86] Add void to the argument list of intrinsics that don't take arguments since an empty argument list means something else in C.
llvm-svn: 272244
2016-06-09 05:14:28 +00:00
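
An example of the fix from the commit above; in C a bare () means "unspecified arguments", so the zero-argument intrinsics spell out (void):

    static __inline__ void __DEFAULT_FN_ATTRS
    _mm_sfence(void)          /* was: _mm_sfence() */
    {
      __builtin_ia32_sfence();
    }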
Ekaterina Romanova 50e94a3b34 Add doxygen comments to xmmintrin.h's intrinsics.
Only half of the intrinsics in this file are documented here; the patch for the other half will be sent out later.

The doxygen comments are automatically generated based on Sony's intrinsics document.

I got an OK from Eric Christopher to commit doxygen comments without prior code review upstream.

llvm-svn: 272121
2016-06-08 07:34:31 +00:00
Craig Topper 6a77b62640 [X86] Use unsigned types for vector arithmetic in intrinsics to avoid undefined behavior for signed integer overflow.
This is really only needed for addition, subtraction, and multiplication, but I did the bitwise ops too for overall consistency. Clang currently doesn't set NSW for signed vector operations so the undefined behavior shouldn't happen today.

llvm-svn: 271778
2016-06-04 05:43:41 +00:00
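
A sketch of the unsigned-arithmetic pattern from the commit above, assuming the headers' __v4su (4 x unsigned int) typedef alongside __v4si:

    static __inline__ __m128i __DEFAULT_FN_ATTRS
    _mm_add_epi32(__m128i __a, __m128i __b)
    {
      /* Unsigned element type: wraparound is defined, no signed-overflow UB. */
      return (__m128i)((__v4su)__a + (__v4su)__b);
    }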
Simon Pilgrim 645e1ad33a [X86][SSE] _mm_store1_ps/_mm_store1_pd should require an aligned pointer
According to the gcc headers, the Intel intrinsics docs and MSDN codegen, _mm_store1_pd (and its _mm_store_pd1 equivalent) should use an aligned pointer - the clang headers are the only implementation I can find that assumes unaligned stores (by storing with _mm_storeu_pd).

Additionally, according to the Intel intrinsics docs and MSDN codegen, _mm_store1_ps (_mm_store_ps1) requires a similarly aligned pointer.

This patch raises the alignment requirements to match the other implementations by calling _mm_store_ps/_mm_store_pd instead.

I've also added the missing _mm_store_pd1 intrinsic (which maps to _mm_store1_pd like _mm_store_ps1 does to _mm_store1_ps).

As a followup I'll update the llvm fast-isel tests to match this codegen.

Differential Revision: http://reviews.llvm.org/D20617

llvm-svn: 271218
2016-05-30 17:55:25 +00:00
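
A sketch of the raised alignment requirement described above: splat the low element, then use the aligned store (header context assumed):

    static __inline__ void __DEFAULT_FN_ATTRS
    _mm_store1_ps(float *__p, __m128 __a)
    {
      __a = __builtin_shufflevector((__v4sf)__a, (__v4sf)__a, 0, 0, 0, 0);
      _mm_store_ps(__p, __a);   /* aligned store, unlike the old _mm_storeu_ps */
    }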
Craig Topper 09175dab31 [X86] Replace unaligned store builtins in SSE/AVX intrinsic files with code that will compile to a native unaligned store. Remove the builtins since they are no longer used.
Intrinsics will be removed from llvm in a future commit.

llvm-svn: 271214
2016-05-30 17:10:30 +00:00
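
A sketch of the builtin-free unaligned store from the commit above; the packed, may_alias wrapper struct gives the access alignment 1:

    static __inline__ void __DEFAULT_FN_ATTRS
    _mm_storeu_ps(float *__p, __m128 __a)
    {
      struct __storeu_ps {
        __m128 __v;
      } __attribute__((__packed__, __may_alias__));
      ((struct __storeu_ps*)__p)->__v = __a;   /* lowers to an unaligned store */
    }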
Craig Topper 1aa231e3aa [X86] Add typecasts to remove most assumptions about what __m128i/__m256i is defined as. Add similar typecasts for the fp types as well.
llvm-svn: 269632
2016-05-16 06:38:42 +00:00
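
A sketch of the cast style from the commit above: builtin arguments and results go through explicit element-typed vectors, so nothing depends on how __m128i itself is declared:

    static __inline__ __m128i __DEFAULT_FN_ATTRS
    _mm_packs_epi16(__m128i __a, __m128i __b)
    {
      return (__m128i)__builtin_ia32_packsswb128((__v8hi)__a, (__v8hi)__b);
    }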
Richard Smith e0fa4c83b2 [modules] Make the tweak to avoid circular inclusion of emmintrin.h and
xmmintrin.h a bit more directed. If for whatever reason modules are enabled but
we textually include one of these headers, don't deploy the special case for
modules. To make this work cleanly, extend __building_module to be defined
even when modules are disabled.

llvm-svn: 266945
2016-04-21 01:46:37 +00:00
Ekaterina Romanova e2961f71d2 Add doxygen comments to xmmintrin.h's intrinsics.
Only half of the intrinsics in this file are documented here; the patch for the other half will be sent out later.

The doxygen comments are automatically generated based on Sony's intrinsics document.

I got an OK from Eric Christopher to commit doxygen comments without prior code review upstream.

llvm-svn: 263098
2016-03-10 09:37:04 +00:00
Craig Topper 7148166785 [X86] Remove temporary variables from macros in x86 intrinsic headers. Prevents duplicate names appearing from multiple macro expansions. NFC
llvm-svn: 252586
2015-11-10 05:08:05 +00:00
Sean Silva e4c3760a9f Clean up trailing whitespace in the builtin headers
llvm-svn: 247498
2015-09-12 02:55:19 +00:00
Simon Pilgrim 5aba9925c0 [X86][SSE] Add _mm_undefined_* intrinsics
Added missing SSE/AVX 'undefined' intrinsics (PR24040):

_mm_undefined_pd, _mm_undefined_ps + _mm_undefined_si128
_mm256_undefined_pd, _mm256_undefined_ps + _mm256_undefined_si256
_mm512_undefined, _mm512_undefined_ps, _mm512_undefined_pd + _mm512_undefined_epi32

Added builtin intrinsics:

__builtin_ia32_undef128, __builtin_ia32_undef256 + __builtin_ia32_undef512

Differential Revision: http://reviews.llvm.org/D12052

llvm-svn: 246083
2015-08-26 21:17:12 +00:00
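
A sketch of one of the added intrinsics from the commit above, layered on the new undef builtin:

    static __inline__ __m128 __DEFAULT_FN_ATTRS
    _mm_undefined_ps(void)
    {
      return (__m128)__builtin_ia32_undef128();
    }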
Michael Kuperstein d7b9392f59 [X86] Add support for _MM_ALIGN16
Differential Revision: http://reviews.llvm.org/D11753

llvm-svn: 244201
2015-08-06 08:24:38 +00:00
Michael Kuperstein e45af54cdb [X86] Rename DEFAULT_FN_ATTR macro to __DEFAULT_FN_ATTR
llvm-svn: 241065
2015-06-30 13:36:19 +00:00
Eric Christopher 9fc7fb274e Update the intel intrinsic headers to use the target attribute support.
This involved removing the conditional inclusions and replacing them
with target attributes matching the original conditional inclusions
and checks. The testcase update removes the macro checks for each
file and replaces them with usage of the __target__ attribute, e.g.:

int __attribute__((__target__(("sse3")))) foo(int a) {
  _mm_mwait(0, 0);
  return 4;
}

This usage does require that the enclosing function have the requisite
__target__ attribute for inlining and code generation - likewise for
any macro intrinsic uses in the enclosing function. There's no change
for existing uses of the intrinsic headers.

llvm-svn: 239883
2015-06-17 07:09:32 +00:00
Eric Christopher 4d185168e9 Use a define for per-file function attributes for the Intel intrinsic headers.
This is a precursor to changing them to use the new target attribute
code.

llvm-svn: 239882
2015-06-17 07:09:20 +00:00
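
A sketch of the per-file define from the commit above (the attribute list shown here already includes the __target__ piece added by the follow-up commit; the exact spelling has changed over time):

    #define __DEFAULT_FN_ATTRS \
      __attribute__((__always_inline__, __nodebug__, __target__("sse")))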
Richard Smith 23d8d0338e [modules] Fix a #include cycle when building a module for our builtin headers.
xmmintrin.h includes emmintrin.h and vice versa if SSE2 is enabled. We break
this cycle for a modules build, and instead make the xmmintrin.h module
re-export the immintrin.h module. Also included is a fix for an assert in the
serialization code if a module exports another module that was declared later
in the same module map.

llvm-svn: 237321
2015-05-14 00:45:20 +00:00
Craig Topper 2094d8fe88 [x86] Add the (v)cmpps/pd/ss/sd builtins to match gcc. Use them in the sse intrinsic files.
These still lower to the same intrinsics as before.

This is preparation for bounds checking the immediate on the AVX version of the builtin so we don't pass illegal immediates into the backend. Since SSE uses a smaller immediate, it's not possible to bounds check when using a shared builtin. Rather than creating a clang-specific builtin for the different immediate, I decided (after consulting with Chandler) that it was better to match gcc.

llvm-svn: 224879
2014-12-27 06:59:57 +00:00
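
A hedged sketch of the shared-builtin form from the commit above; the predicate is the immediate argument:

    static __inline__ __m128 __DEFAULT_FN_ATTRS
    _mm_cmplt_ps(__m128 __a, __m128 __b)
    {
      return (__m128)__builtin_ia32_cmpps((__v4sf)__a, (__v4sf)__b, 1); /* LT */
    }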
Nico Weber 1287091373 Replace a few // comments with /**/ comments in headers, for consistency.
llvm-svn: 212556
2014-07-08 18:29:27 +00:00
Akira Hatanaka 5d28ea1451 Fix a bug in xmmintrin.h.
The last step of _mm_cvtps_pi16 should use _mm_packs_pi32, which is a function
that reads two __m64 values and packs four 32-bit values into four 16-bit
values.  

<rdar://problem/16873717>

llvm-svn: 209489
2014-05-23 00:38:07 +00:00
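
A sketch of the corrected sequence from the commit above (xmmintrin.h context assumed):

    static __inline__ __m64 __DEFAULT_FN_ATTRS
    _mm_cvtps_pi16(__m128 __a)
    {
      __m64 __b, __c;
      __b = _mm_cvtps_pi32(__a);        /* low two floats  -> two int32  */
      __a = _mm_movehl_ps(__a, __a);    /* bring the high half down      */
      __c = _mm_cvtps_pi32(__a);        /* high two floats -> two int32  */
      return _mm_packs_pi32(__b, __c);  /* pack four int32 -> four int16 */
    }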
Warren Hunt 0dc28ea301 [_mm_prefetch] Returning previously deleted comment.
No functional change.  It's unclear if the word FIXME is relevant given 
that the macro behaves as intended.

llvm-svn: 201920
2014-02-22 00:47:24 +00:00
Warren Hunt 20e4a5d2af Reapply 201734 but with appropriate gcc compatibility
Because GCC incorrectly defines _mm_prefetch to take anything that casts 
to void*, people have started using that behavior.  The previous patch 
that made _mm_prefetch actually take a const char * broke compatibility 
with existing code.  This update to the patch leaves the macro that 
defines _mm_prefetch with the (void*) cast when _MSC_VER is not defined.

llvm-svn: 201901
2014-02-21 23:08:53 +00:00
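
A hedged sketch of the compatibility shim described in the commit above:

    #ifndef _MSC_VER
    /* Keep the permissive form: anything that casts to void* is accepted. */
    #define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))
    #endif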
Daniel Jasper 2f0f297bdb Revert r201734 and r201742.
This breaks backwards compatibility with existing code. Previously, this
was defined as

  #define _mm_prefetch(a, sel) (__builtin_prefetch((void *)(a), 0, (sel)))

Which basically accepts any pointer. Changing this to char* simply
breaks a lot of existing code. I have tried changing char* to
"const void*", which seems to be the right thing as per Intel
specification this should work on basically any pointer. However,
apparently this breaks windows compatibility (because of a conflicting
declaration in windows.h).

So, we probably need to #ifdef this based on whether clang is compiling
for windows. According to Chandler, this might be done by introducing an
additional symbol to a fake type in BuiltinsX86.def and then condition
the type expansion on the platform.

llvm-svn: 201775
2014-02-20 11:10:48 +00:00
Warren Hunt 40d6f29ad8 Add _mm_prefetch and some others as MS builtins
This patch adds several built-ins that are required for ms 
compatibility. _mm_prefetch must be a built-in because it takes a 
compile-time constant argument and our prior approach of using a #define 
to the current built-in doesn't work in the presence of re-declaration 
of _mm_prefetch. The others can be obtained by including the windows 
system headers. If a user includes the windows system headers but not 
intrin.h they still need to work and therefore must be built-in because 
we don't get a chance to implement them in intrin.h in this case.

llvm-svn: 201734
2014-02-19 23:20:20 +00:00
Manman Ren 9bb34d66b3 X86 intrinsics: cmpge|gt|nge|ngt_ss|_sd
These intrinsics should return the comparison result in the low bits and keep
the high bits of the first source operand.

When calling the builtin functions, the source operands are swapped and the high
bits of the second source operand are kept. To fix the issue, an extra
shufflevector is used.

rdar://14153896

llvm-svn: 184110
2013-06-17 19:42:49 +00:00
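
A sketch of the fix for the ge case described above: call the swapped-operand builtin, then shuffle so the upper lanes come from the first source (xmmintrin.h context assumed):

    static __inline__ __m128 __DEFAULT_FN_ATTRS
    _mm_cmpge_ss(__m128 __a, __m128 __b)
    {
      __m128 __c = __builtin_ia32_cmpless((__v4sf)__b, (__v4sf)__a);
      /* lane 0 from the compare result, lanes 1-3 from __a */
      return (__m128)__builtin_shufflevector((__v4sf)__a, (__v4sf)__c, 4, 1, 2, 3);
    }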
Douglas Gregor ae3a4dfac0 Even in a modules world, people will depend on the weird xmmintrin.h -> emmintrin.h forwarding.
llvm-svn: 183585
2013-06-07 22:49:44 +00:00
Argyrios Kyrtzidis fff55a028b [lib/Headers] Break the module import cycle between _Builtin_intrinsics.sse and _Builtin_intrinsics.sse2
Module "sse" implicitly exports module "sse2".
This is bad because we also have module "sse2" export module "sse" (as intended) so we end up with a cycle
in the module import graph:
1. sse2 -> (also imports) sse
2. sse -> (also imports) sse2

To eliminate the cycle, remove 2.; importing module "sse2" will also import module "sse", but just importing
module "sse" will not also import module "sse2".

rdar://13240552

llvm-svn: 178117
2013-03-27 05:12:34 +00:00
David Blaikie 3302f2bd46 PR14964: intrinsic headers using non-reserved identifiers
Several of the intrinsic headers were using plain non-reserved identifiers.
C++11 17.6.4.3.2 [global.names] p1 reserves names containing a double
underscore or beginning with an underscore followed by an uppercase letter for any use.

I think I got them all, but open to being corrected. For the most part I
didn't bother updating function-like macro parameter names because I don't
believe they're subject to any such collission - though some function-like
macros already follow this convention (I didn't update them in part because
the churn was more significant as several function-like macros use the double
underscore prefixed version of the same name as a parameter in their
implementation)

llvm-svn: 172666
2013-01-16 23:08:36 +00:00
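
An example of the renaming from the commit above (parameters and locals get the implementation-reserved double-underscore spelling; header context assumed):

    static __inline__ __m128 __DEFAULT_FN_ATTRS
    _mm_add_ss(__m128 __a, __m128 __b)   /* was: (__m128 a, __m128 b) */
    {
      __a[0] += __b[0];
      return __a;
    }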
Manman Ren 5750c1c07e X86 SSE Intrinsics: update header for sqrt_ss, rsqrt_ss and rcp_ss.
These intrinsics pass through the upper FP values from the input.
rdar://12558838

llvm-svn: 166743
2012-10-26 00:25:10 +00:00
Bob Wilson 51897ec79b Fix a typo: _MM_FLUSH_ZERO_OFF has the wrong value. rdar://10716672
llvm-svn: 148711
2012-01-23 18:27:24 +00:00
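
After the fix above the flush-to-zero control values would read like this (OFF must be zero rather than a copy of the mask; values as in today's header):

    #define _MM_FLUSH_ZERO_MASK   (0x8000)
    #define _MM_FLUSH_ZERO_ON     (0x8000)
    #define _MM_FLUSH_ZERO_OFF    (0x0000)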
Craig Topper 9f00948a82 Add AVX2 permute intrinsics. Also add parentheses on some macro arguments in other intrinsic headers.
llvm-svn: 147241
2011-12-24 07:55:14 +00:00
Bob Wilson c9b97cc1da Fix vector macros to correctly check argument types. <rdar://problem/10261670>
llvm-svn: 143792
2011-11-05 06:08:06 +00:00
Eli Friedman 9bb51adcce Tweak *mmintrin.h so that they don't make any bad assumptions about alignment (which probably has little effect in practice, but better to get it right). Make the load in _mm_loadh_pi and _mm_loadl_pi a single LLVM IR instruction to make optimizing easier for CodeGen.
rdar://10054986

llvm-svn: 139874
2011-09-15 23:15:27 +00:00
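
A sketch of the single-load form for _mm_loadl_pi described above (packed, may_alias struct for the 8-byte access; the helper type names here are illustrative):

    static __inline__ __m128 __DEFAULT_FN_ATTRS
    _mm_loadl_pi(__m128 __a, const __m64 *__p)
    {
      typedef float __v2f32 __attribute__((__vector_size__(8)));
      struct __loadl_pi_struct {
        __v2f32 __u;
      } __attribute__((__packed__, __may_alias__));
      __v2f32 __b = ((struct __loadl_pi_struct*)__p)->__u;    /* one 8-byte load */
      __m128 __bb = __builtin_shufflevector(__b, __b, 0, 1, 0, 1);
      return __builtin_shufflevector(__a, __bb, 4, 5, 2, 3);  /* low half from __p */
    }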
Bill Wendling 03e7e430c3 Add 'may_alias' attribute. Noticed by Eli.
llvm-svn: 131278
2011-05-13 01:24:00 +00:00
Bill Wendling 502931fad9 Represent the unaligned loads natively. These are converted into a call to the
correct unaligned load.

llvm-svn: 131268
2011-05-13 00:11:39 +00:00
Bill Wendling e106c34817 LLVM doesn't always optimize away the four loads from this:
(__m128){ p[0], p[1], p[2], p[3] }

which produces really bad code. This could be done in instcombine, but it's
probably better to do it in the front-end instead.
<rdar://problem/9424836>

llvm-svn: 131237
2011-05-12 19:02:15 +00:00
Bill Wendling 2c1c33552d Remove comment that snuck in there.
llvm-svn: 129434
2011-04-13 10:05:14 +00:00
Bill Wendling b9c9e34cb3 Just use a native "load" instead of translating the builtin later. Clang can
take it!

I wasn't able to get __builtin_ia32_loaddqu to transform into an unaligned
load...I'll have to look into it further.

llvm-svn: 129427
2011-04-13 05:58:17 +00:00
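
A sketch of the native-load form for the aligned case from the commit above:

    static __inline__ __m128 __DEFAULT_FN_ATTRS
    _mm_load_ps(const float *__p)
    {
      return *(const __m128*)__p;   /* plain IR load; no ia32 builtin needed */
    }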
Chandler Carruth 45c2fb1e69 Undo part of my previous commit to mm_malloc.h, going back to the use of
stdlib.h. There were numerous problems with forward declaring 'malloc' and
'free', but the most important is that these are reserved by POSIX and may be
implemented via a function-like macro.

As suggested by Dale Johannesen, I'm instead guarding the only include of this
in our builtin headers with __STDC_HOSTED__, and I've removed the include of
the header from the test suite. I'll discuss with folks whether we want to have
a hosted section of the test suite or not, and add it (and perhaps other tests)
back there if that's the direction.

llvm-svn: 119958
2010-11-22 08:06:31 +00:00
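
A sketch of the hosted-only include guard described above:

    #if __STDC_HOSTED__
    #include <mm_malloc.h>
    #endif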
Chris Lattner 07704f1d7e the mmx intrinsic for pshufw should map to the IR intrinsic, not
to a shufflevector.  Otherwise it doesn't turn into a pshufw.
This bug was introduced in the mmx rewrite.

llvm-svn: 115423
2010-10-02 21:32:59 +00:00
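
A hedged sketch of the builtin-based macro from the commit above:

    /* Maps directly to the MMX pshufw builtin instead of a generic shuffle. */
    #define _mm_shuffle_pi16(a, n) \
      ((__m64)__builtin_ia32_pshufw((__v4hi)(__m64)(a), (n)))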
Chris Lattner 212a492063 fix incorrect MM_HINT_ definitions, PR8011
llvm-svn: 112283
2010-08-27 20:10:06 +00:00
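
After the PR8011 fix above the hint constants would read like this (they feed the locality argument of __builtin_prefetch, so T0 is the highest locality; values as in today's header):

    #define _MM_HINT_NTA 0
    #define _MM_HINT_T0  3
    #define _MM_HINT_T1  2
    #define _MM_HINT_T2  1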
Chandler Carruth 7579c008ec Fix some typos I made when adding alternate intrinsic names.
llvm-svn: 110545
2010-08-08 08:30:05 +00:00