[CUDA][HIP] Do not diagnose use of _Float16

r352221 caused regressions in CUDA/HIP since device functions may use _Float16 even when the host target does not support it.
In this case, host compilation should not diagnose the use of _Float16 in device functions or variables.

For now, just do not diagnose _Float16 for CUDA/HIP. In the future we should have a more precise check.

Differential Revision: https://reviews.llvm.org/D57369

llvm-svn: 352488
Yaxun Liu, 2019-01-29 13:20:23 +00:00
commit d442500f5d (parent b0d7cf5df4)
3 changed files with 16 additions and 3 deletions
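For context, a minimal sketch (not part of the commit) of the kind of device code the host-side pass should now accept; it exercises both touched paths, the f16 literal suffix in the lexer and the _Float16 type spec in Sema, and assumes the usual __device__ macro setup from the test's Inputs/cuda.h. The names gHalf and pass are illustrative only.

// Illustration only: host compilation with a device aux-triple (as in the new test below)
// should no longer reject this, even if the host target lacks _Float16.
#include "Inputs/cuda.h"

__device__ _Float16 gHalf = 1.0f16;      // f16 literal suffix accepted even when the host lacks _Float16
__device__ _Float16 pass(_Float16 x) {   // _Float16 type spec no longer diagnosed during host compilation
  return x;
}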


@@ -616,8 +616,11 @@ NumericLiteralParser::NumericLiteralParser(StringRef TokSpelling,
       if (isHalf || isFloat || isLong || isFloat128)
         break; // HF, FF, LF, QF invalid.
 
-      if (PP.getTargetInfo().hasFloat16Type() && s + 2 < ThisTokEnd &&
-          s[1] == '1' && s[2] == '6') {
+      // CUDA host and device may have different _Float16 support, therefore
+      // allows f16 literals to avoid false alarm.
+      // ToDo: more precise check for CUDA.
+      if ((PP.getTargetInfo().hasFloat16Type() || PP.getLangOpts().CUDA) &&
+          s + 2 < ThisTokEnd && s[1] == '1' && s[2] == '6') {
         s += 2; // success, eat up 2 characters.
         isFloat16 = true;
         continue;


@@ -1442,7 +1442,10 @@ static QualType ConvertDeclSpecToType(TypeProcessingState &state) {
       Result = Context.Int128Ty;
     break;
   case DeclSpec::TST_float16:
-    if (!S.Context.getTargetInfo().hasFloat16Type())
+    // CUDA host and device may have different _Float16 support, therefore
+    // do not diagnose _Float16 usage to avoid false alarm.
+    // ToDo: more precise diagnostics for CUDA.
+    if (!S.Context.getTargetInfo().hasFloat16Type() && !S.getLangOpts().CUDA)
       S.Diag(DS.getTypeSpecTypeLoc(), diag::err_type_unsupported)
         << "_Float16";
     Result = Context.Float16Ty;


@@ -0,0 +1,7 @@
+// RUN: %clang_cc1 -fsyntax-only -triple x86_64 -aux-triple amdgcn -verify %s
+// expected-no-diagnostics
+#include "Inputs/cuda.h"
+
+__device__ void f(_Float16 x);
+
+__device__ _Float16 x = 1.0f16;