case in GRExprEngine::Visit (in r101129). Instead, enumerate all Stmt cases and have
no 'default' case in the switch statement. When we encounter a Stmt we don't handle,
we should explicitly add it to the switch statement.
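A minimal standalone analogue of the pattern (hypothetical enum and names, not the actual GRExprEngine code): with no 'default' case, the compiler's -Wswitch warning flags any enumerator the switch fails to cover.
#include <cstdio>

enum class StmtKind { Binary, Decl, Return };

// No 'default' case: if a new StmtKind enumerator is added, -Wswitch
// reports the unhandled case here instead of silently falling through.
static void visit(StmtKind K) {
  switch (K) {
  case StmtKind::Binary: std::puts("binary"); break;
  case StmtKind::Decl:   std::puts("decl");   break;
  case StmtKind::Return: std::puts("return"); break;
  }
}

int main() { visit(StmtKind::Decl); }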
llvm-svn: 101378
ASTContext::getTypeSize() rather than ASTContext::getIntWidth() for
the width of an integral type. The former includes padding for bools
(to the target's size) while the latter does not, so we would end up
zero-extending bools to the target width when we shouldn't. Fixes a
crash-on-valid in the included test.
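A standalone analogue of the distinction (plain C++, not the Clang API itself): the storage size of bool includes padding, while its value width is a single bit.
#include <climits>
#include <cstdio>

int main() {
  // Analogue of ASTContext::getTypeSize(): storage size, padded to the
  // target's bool size (typically 8 bits).
  unsigned StorageBits = sizeof(bool) * CHAR_BIT;
  // Analogue of ASTContext::getIntWidth(): a bool carries one value bit,
  // so zero-extending it to StorageBits treats padding as value bits.
  unsigned ValueBits = 1;
  std::printf("storage = %u bits, value = %u bit\n", StorageBits, ValueBits);
}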
llvm-svn: 101372
of the operand array
The motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101364
- Used to determine whether the alignment of the type in a bit-field is
respected when laying out structures. The default is true; targets can
override this as needed (see the layout sketch after this list).
- This is designed to correspond to the PCC_BITFIELD_TYPE_MATTERS macro in
gcc. The AST/Sema implementation only affects one line, unless I have
forgotten something. I'd appreciate further review.
- IRgen still needs to be updated to fully support this (which is effectively
PR5591).
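A small example of the layout question (a sketch; the exact numbers depend on the target): when the bit-field's declared type matters, an int-typed field imposes int alignment on the struct even though it uses a single bit.
#include <cstdio>

struct S {
  char c;
  int bf : 1; // declared type is int, but only one bit is used
};

int main() {
  // On a typical target where bit-field type alignment is respected
  // (PCC_BITFIELD_TYPE_MATTERS), this prints 4 and 4; a target that
  // ignores the declared type could lay S out more tightly.
  std::printf("sizeof(S) = %zu, alignof(S) = %zu\n", sizeof(S), alignof(S));
}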
llvm-svn: 101356
- Note that this is a behavior change: previously, -mllvm at the driver level forwarded to clang -cc1. The driver does a little magic to make sure that '-mllvm -disable-llvm-optzns' works correctly, but other users will need to be updated to use -Xclang.
llvm-svn: 101354
This doesn't occur much at all; it only seems to be formed when the
trunc optimization kicks in due to phase ordering. In that case it
saves a few bytes on x86-32.
llvm-svn: 101350
a load/or/and/store sequence into a narrower store when it is
safe. Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll
into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used
to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
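A sketch of the kind of source that produces this pattern (not the original test case): replacing the high half of a 64-bit word, which the combine now lowers to one 32-bit store at offset 4.
#include <cstdint>

// Before the combine: 64-bit load, mask with 0xFFFFFFFF, shift, or, and
// 64-bit store. After: a single 32-bit store of 'v' into the high half.
void set_high_half(uint64_t *p, uint32_t v) {
  *p = (*p & 0xFFFFFFFFULL) | (uint64_t(v) << 32);
}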
llvm-svn: 101343
- Sadly, this doesn't seem to give any .ll size win so far. It is possible to make this routine significantly smarter & avoid various shifting, masking, and zext/sext, but I'm not really convinced it is worth it. It is tricky, and this is really instcombine's job.
- No intended functionality change; the test case just increases coverage and serves as a demo file, and it worked before this commit.
The new fixes from r101222 are:
1. The shift to the target position needs to occur after the value is extended to the correct size (illustrated below). This broke Clang bootstrap, among other things, no doubt.
2. Swap the order of arguments to OR, to get a tad more constant folding.
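A standalone illustration of fix 1 (plain C++, not the DAG code itself): shifting in the narrow type first loses the bits, so the value must be widened before it is shifted into position.
#include <cstdint>
#include <cstdio>

int main() {
  uint16_t v = 0x1234;
  // Wrong order: shift within 16 bits first (every bit is discarded),
  // then extend -- the result is 0.
  uint32_t shiftThenExtend = uint32_t(uint16_t(v << 16));
  // Right order: extend to the destination width first, then shift the
  // value into its target position -- the result is 0x12340000.
  uint32_t extendThenShift = uint32_t(v) << 16;
  std::printf("%#x vs %#x\n", shiftThenExtend, extendThenShift);
}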
llvm-svn: 101339
Stop multiplying the constant by 8 accordingly in the header, and change
the intrinsic definition to match the types we expect.
Extend the existing palignr test to check that we're emitting the correct things.
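An illustrative use of the user-level intrinsic (compile with SSSE3 enabled; this shows the byte-count interface, not the changed builtin itself):
#include <tmmintrin.h>

// _mm_alignr_epi8 takes its shift count in bytes; the header no longer
// needs to multiply the immediate by 8 before reaching the builtin.
__m128i shift_concat(__m128i hi, __m128i lo) {
  return _mm_alignr_epi8(hi, lo, 4);
}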
llvm-svn: 101332