Commit Graph

Matt Arsenault b60a2ae40e AMDGPU/GlobalISel: Support arguments with multiple registers
Handles structs used directly in argument lists.
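
Illustrative sketch (not from the patch): a small aggregate passed directly by
value, which a calling convention may split across two 32-bit argument registers:

  // Hypothetical C++ source; 'p' may be lowered into multiple argument
  // registers rather than a single register or a stack slot.
  struct Pair { int a; int b; };

  int sum(Pair p) { return p.a + p.b; }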

llvm-svn: 366584
2019-07-19 14:29:30 +00:00
Matt Arsenault fecf43eba3 AMDGPU/GlobalISel: Rewrite lowerFormalArguments
This should now handle everything except structs passed as multiple
registers.

I think most of the packing logic should be handled by
handleAssignments, but I'm unclear on what the contract is for
multiple registers. This is copying how x86 handles this.

This does change the behavior of the test_sgpr_alignment0 amdgpu_vs
test. I don't think shader arguments should try to follow the
alignment, and registers need to be repacked. I also don't think it
matters, since I think the pointers are packed to the beginning of the
argument list anyway.

llvm-svn: 366582
2019-07-19 14:15:18 +00:00
Joseph Tremoulet 3fd917d886 Support Linux signal return trampolines in frame initialization
Summary:
Add __kernel_rt_sigreturn to the list of trap handlers for Linux (it's
used as such on aarch64 at least), and __restore_rt as well (used on
x86_64).

Skip decrement-and-recompute for trap handlers in
InitializeNonZerothFrame, as signal dispatch may point the child frame's
return address to the start of the return trampoline.

Parse the 'S' flag for signal handlers from eh_frame augmentation, and
propagate it to the unwind plan.

Reviewers: labath, jankratochvil, compnerd, jfb, jasonmolenda

Reviewed By: jasonmolenda

Subscribers: clayborg, MaskRay, wuzish, nemanjai, kbarton, jrtc27, atanasyan, jsji, javed.absar, kristof.beyls, lldb-commits

Tags: #lldb

Differential Revision: https://reviews.llvm.org/D63667

llvm-svn: 366580
2019-07-19 14:05:55 +00:00
Louis Dionne 9e6a42a185 [libc++] Add missing %link_flags to .sh.cpp test
Without the link flags, the test always fails on Linux. For some reason,
however, it works on Darwin -- which is why it wasn't caught at first.

llvm-svn: 366579
2019-07-19 14:01:48 +00:00
Matt Arsenault 1022c0dfde AMDGPU: Decompose all values to 32-bit pieces for calling conventions
This is the more natural lowering, and presents more opportunities to
reduce 64-bit ops to 32-bit.

This should also help avoid issues graphics shaders have had with
64-bit values, and simplify argument lowering in globalisel.
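
For illustration only (an assumption about the effect of the lowering, not code
from the patch), decomposing a 64-bit value into 32-bit pieces looks like:

  #include <cstdint>

  // The 64-bit value is carried as two 32-bit pieces; later 64-bit ops can
  // then often be rewritten in terms of the pieces.
  struct Split64 { uint32_t lo, hi; };

  Split64 decompose(uint64_t v) {
    return { static_cast<uint32_t>(v), static_cast<uint32_t>(v >> 32) };
  }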

llvm-svn: 366578
2019-07-19 13:57:44 +00:00
Ilya Biryukov 8bb8915d43 [clangd] Provide a way to publish highlightings in non-racy manner
Summary:
By exposing a callback that can guard the code publishing results of the
'onMainAST' callback, in the same manner that we guard diagnostics.

Reviewers: sammccall

Reviewed By: sammccall

Subscribers: javed.absar, MaskRay, jkorous, arphaman, kadircet, hokein, jvikstrom, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D64985

llvm-svn: 366577
2019-07-19 13:51:01 +00:00
Nico Weber c35dd05a7c gn build: Set +x on symlink_or_copy.py
llvm-svn: 366576
2019-07-19 13:40:54 +00:00
Kadir Cetinkaya 9dc0160d26 [clangd] Disable background-index on lit-tests by default
Summary:
Since background-index can perform disk writes, we don't want to turn
it on in tests that won't clean up after it.

Reviewers: sammccall

Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D64990

llvm-svn: 366575
2019-07-19 13:40:30 +00:00
Matt Arsenault 5905aae169 DAG: Handle dbg_value for arguments split into multiple subregs
This was handled previously for arguments split due to not fitting in
an MVT. This was dropping the register for argument registers split
due to TLI::getRegisterTypeForCallingConv.

llvm-svn: 366574
2019-07-19 13:36:46 +00:00
Nico Weber cb2c50028d lld-link: Demangle symbols from archives in diagnostics
Also add test coverage for thin archives (which are the only way I could
come up with to test at least some of the diagnostic changes).

Differential Revision: https://reviews.llvm.org/D64927

llvm-svn: 366573
2019-07-19 13:29:10 +00:00
Than McIntosh b288d90b39 [NFC] include cstdint/string prior to using uint8_t/string
Summary: Include the proper headers prior to using the uint8_t typedef
and std::string.
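
A minimal sketch of the fix (the declaration below is hypothetical; only the
includes reflect the change):

  #include <cstdint>  // for the uint8_t typedef
  #include <string>   // for std::string

  std::string describe(uint8_t tag);  // no longer relies on transitive includes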

Subscribers: llvm-commits

Reviewers: cherry

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64937

llvm-svn: 366572
2019-07-19 13:13:54 +00:00
Dmitry Preobrazhensky 4ccb7f8c45 [AMDGPU][MC] Corrected parsing of branch offsets
See bug 40820: https://bugs.llvm.org/show_bug.cgi?id=40820

Reviewers: artem.tamazov, arsenm

Differential Revision: https://reviews.llvm.org/D64629

llvm-svn: 366571
2019-07-19 13:12:47 +00:00
Kai Luo dec624682e [MachineCSE][MachinePRE] Avoid hoisting code from code regions into hot BBs.
Summary:
Currently, PRE hoists common computations into
CMBB = DT->findNearestCommonDominator(MBB, MBB1).
However, if CMBB is inside a hot loop body, we might get a performance
degradation.
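
Illustrative C++ sketch of the problem (not from the patch): the two cold uses
below share a nearest common dominator inside the hot loop body, so hoisting
the computation there would execute it on every iteration:

  // 'rare1'/'rare2' are cold; the loop body is hot. Hoisting 'a * b' into the
  // common dominator (the top of the loop body) pessimizes the hot path.
  long run(long a, long b, bool rare1, bool rare2, int n) {
    long acc = 0;
    for (int i = 0; i < n; ++i) {  // hot loop body == CMBB
      if (rare1) acc += a * b;     // cold block 1
      if (rare2) acc -= a * b;     // cold block 2
    }
    return acc;
  }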

Differential Revision: https://reviews.llvm.org/D64394

llvm-svn: 366570
2019-07-19 12:58:16 +00:00
Than McIntosh e238a4c757 [X86] For split stack, don't save/restore the nested arg if unused
Summary:
For split-stack, if the nested argument (i.e. R10) is not used, there is no need to save/restore it in the prologue.

Reviewers: thanm

Reviewed By: thanm

Subscribers: mstorsjo, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64673

llvm-svn: 366569
2019-07-19 12:54:44 +00:00
Shaurya Gupta 20a0e7caaf [Clangd] Fixed ExtractVariable test
llvm-svn: 366568
2019-07-19 12:11:04 +00:00
Louis Dionne e068c7463f [libc++] Fix link error with _LIBCPP_HIDE_FROM_ABI_PER_TU and std::string
Summary:
This is effectively a revert of r344616, which was a partial fix for
PR38964 (compilation of <string> with GCC in C++03 mode). However, that
configuration is explicitly not supported anymore and that partial fix
breaks compilation with Clang when per-TU insulation is provided.

PR42676
rdar://52899715

Reviewers: mclow.lists, EricWF

Subscribers: christof, jkorous, dexonsmith, libcxx-commits

Tags: #libc

Differential Revision: https://reviews.llvm.org/D64941

llvm-svn: 366567
2019-07-19 11:52:55 +00:00
Shaurya Gupta 06841eab00 [Clangd] Fixed SelectionTree bug for macros
Summary:
Fixed SelectionTree bug for macros
- Fixed SelectionTree claimRange for macros and template instantiations
- Fixed SelectionTree unit tests
- Changed a breaking test in TweakTests

Reviewers: sammccall, kadircet

Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D64329

llvm-svn: 366566
2019-07-19 11:41:02 +00:00
Roman Lebedev 9998585c47 [NFC][InstCombine] Tests for 'rem' formation from sub-of-mul-by-'div' (PR42673)
https://rise4fun.com/Alive/8Rp
https://bugs.llvm.org/show_bug.cgi?id=42673
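
The shape of the pattern, as a C++ sketch (not the actual test contents):

  // For any x and non-zero y, integer division and remainder satisfy
  //   (x / y) * y + (x % y) == x
  // so the subtraction below is exactly a remainder and can be folded to 'rem'.
  int rem_like(int x, int y) {
    return x - (x / y) * y;  // equivalent to x % y
  }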

llvm-svn: 366565
2019-07-19 11:29:18 +00:00
Roman Lebedev 882bf2a844 [NFC][InstCombine] Redundant masking before left-shift: tests with assume
If the legality check is `(shiftNbits-maskNbits) s>= 0`,
then we can simplify it to `shiftNbits u>= maskNbits`,
which is easier to check for.

However, currently switching `dropRedundantMaskingOfLeftShiftInput()`
over to `SimplifyICmpInst()` does not catch these cases and regresses
currently-handled cases, so I'll leave it as-is for now.

https://rise4fun.com/Alive/25P

llvm-svn: 366564
2019-07-19 11:29:04 +00:00
Simon Pilgrim 2e435ef3ed Fix MSVC "result of 32-bit shift implicitly converted to 64 bits" warning. NFCI.
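
The warning typically flags code of the following shape (illustrative, not the
actual diff):

  #include <cstdint>

  // Warned form: the shift is performed in 32 bits and only then widened
  // (and it is undefined for n >= 32).
  uint64_t widened_after_shift(unsigned n) { return 1u << n; }

  // Intended form: perform the shift in 64 bits.
  uint64_t shifted_in_64_bits(unsigned n) { return static_cast<uint64_t>(1) << n; }
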
llvm-svn: 366563
2019-07-19 11:18:46 +00:00
Oliver Stannard 8780c0dda2 Don't update NoTrappingFPMath and FPDenormalMode in resetTargetOptions
We'd like to remove this whole function, because these are properties of
functions, not the target as a whole. These two are easy to remove
because they are only used for emitting ARM build attributes, which
expects them to represent the defaults for the whole module, not just
the last function generated.

This is needed to get correct build attributes when using IPRA on ARM,
because IPRA causes resetTargetOptions to get called before
ARMAsmPrinter::emitAttributes.

Differential revision: https://reviews.llvm.org/D64929

llvm-svn: 366562
2019-07-19 10:37:37 +00:00
Raphael Isemann cf2aca0aae [lldb][NFC] Tablegenify target
llvm-svn: 366561
2019-07-19 10:23:22 +00:00
Stefan Granitz f44d7c3f9f [NFC] Remove indent after r366433
llvm-svn: 366560
2019-07-19 10:20:35 +00:00
Kadir Cetinkaya 91e5f4b46b Revert "Revert r366458, r366467 and r366468"
This reverts commit 9c377105da.

[clangd][BackgroundIndexLoader] Directly store DependentTU while loading shard

Summary:
We were deferring the population of the DependentTU field in LoadedShard
until BackgroundIndexLoader was consumed. This actually triggers a use-after-free,
since the shards that FileToTU was pointing at could have been moved while
consuming the Loader.
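
The bug class, as a generic C++ sketch (names and containers are illustrative,
not clangd's actual types):

  #include <string>
  #include <vector>

  struct Shard { const std::string *DependentTU = nullptr; };

  int main() {
    std::vector<std::string> TUs;
    TUs.push_back("a.cpp");
    Shard S;
    S.DependentTU = &TUs.front();  // pointer into storage owned elsewhere
    TUs.push_back("b.cpp");        // reallocation may move the strings: the pointer dangles
    // Storing the value eagerly while loading, as this patch does, avoids the dangling pointer.
    return 0;
  }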

Reviewers: sammccall

Subscribers: ilya-biryukov, MaskRay, jkorous, arphaman, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D64980

llvm-svn: 366559
2019-07-19 10:18:52 +00:00
George Rimar ce2ef288b2 [llvm-readelf] - A fix for: "--hash-symbols asserts for 64-bit ELFs"
Fixes https://bugs.llvm.org/show_bug.cgi?id=42622.
(The --hash-symbols switch is currently broken for 64-bit ELF files due to r352630.)

Differential revision: https://reviews.llvm.org/D64788

llvm-svn: 366558
2019-07-19 10:15:03 +00:00
Oliver Stannard 0ed7732671 [IPRA] Don't rely on non-exact function definitions
If a function definition is not exact, then the linker could select a
differently-compiled version of it, which could use different registers.

https://reviews.llvm.org/D64909

llvm-svn: 366557
2019-07-19 09:59:26 +00:00
Mikhail Maltsev 0b001f94a5 [ARM] Add <saturate> operand to SQRSHRL and UQRSHLL
Summary:
According to the new Armv8-M specification
https://static.docs.arm.com/ddi0553/bh/DDI0553B_h_armv8m_arm.pdf the
instructions SQRSHRL and UQRSHLL now have an additional immediate
operand <saturate>. The new assembly syntax is:

SQRSHRL<c> RdaLo, RdaHi, #<saturate>, Rm
UQRSHLL<c> RdaLo, RdaHi, #<saturate>, Rm

where <saturate> can be either 64 (the existing behavior) or 48, in
which case the result is saturated to 48 bits.

The new operand is encoded as follows:
  #64 Encoded as sat = 0
  #48 Encoded as sat = 1
sat is bit 7 of the instruction bit pattern.
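
A hedged sketch of the encoding rule described above (not the actual MC code):

  #include <cassert>

  // #64 -> sat bit clear, #48 -> sat bit set; sat is bit 7 of the encoding.
  unsigned encodeSaturate(unsigned Saturate) {
    assert(Saturate == 64 || Saturate == 48);
    return Saturate == 48 ? (1u << 7) : 0u;
  }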

This patch adds a new assembler operand class MveSaturateOperand which
implements parsing and encoding. Decoding is implemented in
DecodeMVEOverlappingLongShift.

Reviewers: ostannard, simon_tatham, t.p.northover, samparker, dmgreen, SjoerdMeijer

Reviewed By: simon_tatham

Subscribers: javed.absar, kristof.beyls, hiraditya, pbarrio, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64810

llvm-svn: 366555
2019-07-19 09:46:28 +00:00
Azharuddin Mohammed 9c377105da Revert r366458, r366467 and r366468
r366458 is causing test failures. r366467 and r366468 had to be reverted as
they were causing conflicts while reverting r366458.

r366468 [clangd] Remove dead code from BackgroundIndex
r366467 [clangd] BackgroundIndex stores shards to the closest project
r366458 [clangd] Refactor background-index shard loading

llvm-svn: 366551
2019-07-19 09:26:33 +00:00
Sven van Haastregt e9e59ad79f [OpenCL] Define CLK_NULL_EVENT without cast
Defining CLK_NULL_EVENT with a `(void*)` cast has the (unintended?)
side-effect that the address space will be fixed (as generic in OpenCL
2.0 mode).  The consequence is that any target specific address space
for the clk_event_t type will not be applied.

It is not clear why the void pointer cast was needed in the first
place, and it seems we can do without it.

Differential Revision: https://reviews.llvm.org/D63876

llvm-svn: 366546
2019-07-19 09:11:48 +00:00
Kadir Cetinkaya f3ae501d36 [clangd] Handle windows line endings in QueryDriver
Summary:
The previous patch did not fix the end marker. D64789
fixes the second case of https://github.com/clangd/clangd/issues/93
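
Illustrative handling of the issue (a sketch, not the actual patch): lines read
from a driver's output on Windows may end in "\r\n", so a trailing '\r' has to
be trimmed before comparing against the end marker:

  #include <string>

  // Strip a trailing carriage return so "END\r" still matches the marker "END".
  std::string trimCR(std::string Line) {
    if (!Line.empty() && Line.back() == '\r')
      Line.pop_back();
    return Line;
  }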

Patch by @lh123 !

Reviewers: sammccall, kadircet

Reviewed By: kadircet

Subscribers: MaskRay, ilya-biryukov, jkorous, arphaman, cfe-commits

Tags: #clang-tools-extra, #clang

Differential Revision: https://reviews.llvm.org/D64970

llvm-svn: 366545
2019-07-19 09:08:22 +00:00
Hubert Tong 2711e16b35 [sanitizers] Use covering ObjectFormatType switches
Summary:
This patch removes the `default` case from some switches on
`llvm::Triple::ObjectFormatType`, and cases for the missing enumerators
(`UnknownObjectFormat`, `Wasm`, and `XCOFF`) are then added.

For `UnknownObjectFormat`, the effect of the action for the `default`
case is maintained; otherwise, where `llvm_unreachable` is called,
`report_fatal_error` is used instead.

Where the `default` case returns a default value, `report_fatal_error`
is used for XCOFF as a placeholder. For `Wasm`, the effect of the action
for the `default` case is maintained.

The code is structured to avoid strongly implying that the `Wasm` case
is present for any reason other than to make the switch cover all
`ObjectFormatType` enumerator values.
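
A sketch of the covering-switch style (the enumerator list is assumed from
around this revision, as is the header path):

  #include "llvm/ADT/Triple.h"
  #include "llvm/Support/ErrorHandling.h"

  // No 'default': adding a new ObjectFormatType triggers a -Wswitch warning
  // here instead of silently taking a default action.
  const char *objectFormatName(llvm::Triple::ObjectFormatType OF) {
    switch (OF) {
    case llvm::Triple::UnknownObjectFormat: return "unknown";
    case llvm::Triple::COFF:  return "coff";
    case llvm::Triple::ELF:   return "elf";
    case llvm::Triple::MachO: return "macho";
    case llvm::Triple::Wasm:  return "wasm";
    case llvm::Triple::XCOFF: llvm::report_fatal_error("XCOFF is not yet supported");
    }
    llvm_unreachable("fully covered switch");
  }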

Reviewers: sfertile, jasonliu, daltenty

Reviewed By: sfertile

Subscribers: hiraditya, aheejin, sunfish, llvm-commits, cfe-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D64222

llvm-svn: 366544
2019-07-19 08:46:18 +00:00
Jay Foad 7d06ffff46 [AMDGPU] Simplify the exclusive scan used for optimized atomics
Summary:
Change the scan algorithm to use only power-of-two shifts (1, 2, 4, 8,
16, 32) instead of starting off shifting by 1, 2 and 3 and then doing
a 3-way ADD, because:

1. It simplifies the compiler a little.
2. It minimizes vgpr pressure because each instruction is now of the
   form vn = vn + vn << c.
3. It is more friendly to the DPP combiner, which currently can't
   combine into an ADD3 instruction.

Because of #2 and #3 the end result is improved from this:

  v_add_u32_dpp v4, v3, v3  row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0
  v_mov_b32_dpp v5, v3  row_shr:2 row_mask:0xf bank_mask:0xf
  v_mov_b32_dpp v1, v3  row_shr:3 row_mask:0xf bank_mask:0xf
  v_add3_u32 v1, v4, v5, v1
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_shr:4 row_mask:0xf bank_mask:0xe
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_shr:8 row_mask:0xf bank_mask:0xc
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_bcast:15 row_mask:0xa bank_mask:0xf
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_bcast:31 row_mask:0xc bank_mask:0xf

To this:

  v_add_u32_dpp v1, v1, v1  row_shr:1 row_mask:0xf bank_mask:0xf bound_ctrl:0
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_shr:2 row_mask:0xf bank_mask:0xf bound_ctrl:0
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_shr:4 row_mask:0xf bank_mask:0xe
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_shr:8 row_mask:0xf bank_mask:0xc
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_bcast:15 row_mask:0xa bank_mask:0xf
  s_nop 1
  v_add_u32_dpp v1, v1, v1  row_bcast:31 row_mask:0xc bank_mask:0xf

I.e., two fewer computational instructions and one extra nop where we could
schedule something else.
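
A CPU-side C++ sketch of the scan structure (illustrative only; the real code
is an exclusive scan across wavefront lanes using DPP row shifts):

  #include <cstdint>
  #include <vector>

  // After the loop, v[i] holds the inclusive prefix sum of the original values,
  // built from power-of-two offsets only (1, 2, 4, 8, ...).
  void scan(std::vector<uint32_t> &v) {
    for (size_t shift = 1; shift < v.size(); shift <<= 1)
      for (size_t i = v.size(); i-- > shift;)  // walk downwards so operands are unmodified
        v[i] += v[i - shift];
  }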

Reviewers: arsenm, sheredom, critson, rampitec, vpykhtin

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64411

llvm-svn: 366543
2019-07-19 08:40:37 +00:00
Serguei Katkov bde33af85a [Loop Peeling] Enable peeling of multiple exits by default.
By default, enable loop peeling with multiple exits where all non-latch exits
end up with deopt.

Reviewers: reames, fhahn
Reviewed By: reames
Subscribers: xbolva00, hiraditya, zzheng, llvm-commits
Differential Revision: https://reviews.llvm.org/D64619

llvm-svn: 366542
2019-07-19 08:35:45 +00:00
Haojian Wu 6ae86ea275 [clangd] cleanup: unify the implementation of checking whether a location is inside the main file.
Summary: We have several variant implementations in the codebase; this patch unifies them.
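
A sketch of the kind of helper being unified (the exact signature and location
are assumptions):

  #include "clang/Basic/SourceLocation.h"
  #include "clang/Basic/SourceManager.h"

  // One shared helper instead of several ad-hoc checks: a location is inside
  // the main file if its expansion location is written in the main file.
  bool isInsideMainFile(clang::SourceLocation Loc, const clang::SourceManager &SM) {
    return Loc.isValid() && SM.isWrittenInMainFile(SM.getExpansionLoc(Loc));
  }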

Reviewers: ilya-biryukov, kadircet

Subscribers: MaskRay, jkorous, arphaman, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D64915

llvm-svn: 366541
2019-07-19 08:33:39 +00:00
Roman Lebedev f2eb403144 [InstCombine] Dropping redundant masking before left-shift [5/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
f. `((x << MaskShAmt) a>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
f. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

Normally, the inner pattern is sign-extend,
but for our purposes it's no different to other patterns:

alive proofs:
f: https://rise4fun.com/Alive/7U3

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64524

llvm-svn: 366540
2019-07-19 08:26:58 +00:00
Roman Lebedev 441c9d6ca8 [InstCombine] Dropping redundant masking before left-shift [4/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
e. `((x << MaskShAmt) l>> MaskShAmt) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
e. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

alive proofs:
e: https://rise4fun.com/Alive/0FT

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64521

llvm-svn: 366539
2019-07-19 08:26:47 +00:00
Roman Lebedev 3c212ce305 [InstCombine] Dropping redundant masking before left-shift [3/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
d. `(x & ((-1 << MaskShAmt) >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
d. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

alive proofs:
d: https://rise4fun.com/Alive/I5Y

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64519

llvm-svn: 366538
2019-07-19 08:26:37 +00:00
Roman Lebedev 2ebe57386d [InstCombine] Dropping redundant masking before left-shift [2/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
c. `(x & (-1 >> MaskShAmt)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
c. `(ShiftShAmt-MaskShAmt) s>= 0` (i.e. `ShiftShAmt u>= MaskShAmt`)

alive proofs:
c: https://rise4fun.com/Alive/RgJh

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64517

llvm-svn: 366537
2019-07-19 08:26:25 +00:00
Roman Lebedev 4422a1657c [InstCombine] Dropping redundant masking before left-shift [1/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
b. `(x & (~(-1 << maskNbits))) << shiftNbits`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
b. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`

alive proof:
b: https://rise4fun.com/Alive/y8M

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Differential Revision: https://reviews.llvm.org/D64514

llvm-svn: 366536
2019-07-19 08:26:13 +00:00
Roman Lebedev a5f0824eb5 [InstCombine] Dropping redundant masking before left-shift [0/5] (PR42563)
Summary:
If we have some pattern that leaves only some low bits set, and then performs
left-shift of those bits, if none of the bits that are left after the final
shift are modified by the mask, we can omit the mask.

There are many variants to this pattern:
a. `(x & ((1 << MaskShAmt) - 1)) << ShiftShAmt`
All these patterns can be simplified to just:
`x << ShiftShAmt`
iff:
a. `(MaskShAmt+ShiftShAmt) u>= bitwidth(x)`

alive proof:
a: https://rise4fun.com/Alive/wi9
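
A concrete C++ instance of variant (a), as a sanity check of the fold (a
sketch; the actual transform operates on IR):

  #include <cstdint>

  // (x & ((1 << MaskShAmt) - 1)) << ShiftShAmt  ==  x << ShiftShAmt
  // whenever MaskShAmt + ShiftShAmt >= 32: every bit cleared by the mask is
  // shifted out anyway. (Both shift amounts are assumed < 32 so the C++
  // shifts are well defined.)
  uint32_t masked_then_shifted(uint32_t x, unsigned MaskShAmt, unsigned ShiftShAmt) {
    return (x & ((1u << MaskShAmt) - 1)) << ShiftShAmt;
  }

  uint32_t just_shifted(uint32_t x, unsigned ShiftShAmt) {
    return x << ShiftShAmt;
  }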

Indeed, not all of these patterns are canonical.
But since this fold will only produce a single instruction
I'm really interested in handling even non-canonical patterns,
since I have this general kind of pattern in hot paths,
and it is not totally outlandish for bit-twiddling code.

For now let's start with patterns where both shift amounts are variable,
with trivial constant "offset" between them, since I believe this is
both simplest to handle and I think this is most common.
But again, there are likely other variants where we could use
ValueTracking/ConstantRange to handle more cases.

https://bugs.llvm.org/show_bug.cgi?id=42563

Reviewers: spatel, nikic, huihuiz, xbolva00

Reviewed By: xbolva00

Subscribers: efriedma, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64512

llvm-svn: 366535
2019-07-19 08:25:43 +00:00
Fangrui Song 3628d948f5 [ELF][test] Fix aarch64-condb-reloc.s
llvm-svn: 366534
2019-07-19 08:00:22 +00:00
Hubert Tong ea98f15c43 [NFC] Fix an indentation issue in llvm/Support/TargetRegistry.h
llvm-svn: 366533
2019-07-19 07:21:59 +00:00
Fangrui Song c2a5459d52 [ELF][AArch64] Improve some aarch64-*.s tests
* Delete aarch64-tls-static.s: it is covered by aarch64-tlsdesc.c
* Add --no-show-raw-insn to llvm-objdump -d tests
* When linking an executable with %t.so, the path %t.so will be recorded in the DT_NEEDED entry if %t.so doesn't have DT_SONAME. The DT_NEEDED entry has varying lengths on different systems.
  Add -soname to make tests more robust. This issue would become more prominent if we allowed overlapping PT_LOAD (D64930).

llvm-svn: 366532
2019-07-19 06:33:36 +00:00
Hsiangkai Wang c5ecdd3c5a [DebugInfo] Some fields do not need relocations even relax is enabled.
In debug frame information, some fields, e.g., the Length in a CIE/FDE and
the Offset in an FDE, are attributes that describe the structure of the
CIE/FDE. They are not related to the relaxed code. However, these attributes
are symbol differences, so in the current design they are filled with zero
and LLVM generates relocations for them.

We only need to generate relocations for symbols in executable sections, so
if the symbols are not located in executable sections, we can still evaluate
their values under relaxation.

Differential Revision: https://reviews.llvm.org/D61584

llvm-svn: 366531
2019-07-19 06:10:36 +00:00
Chris Lattner f688226bc9 unbreak links
llvm-svn: 366530
2019-07-19 05:49:11 +00:00
Chris Lattner 2e418e16dd replace the old kaleidoscope tutorial files with orphaned pages that forward to the new copy.
llvm-svn: 366529
2019-07-19 05:23:17 +00:00
Chris Lattner 8ef8e5686e Point to the dusted off version of the kaleidoscope tutorial.
llvm-svn: 366528
2019-07-19 05:15:57 +00:00
Alex Brachet 553c29faa2 [test] [llvm-objcopy] Fix broken test case
Summary: The test case added in D62718 did not work unless the user was root, because write bits were not set for the output file. This change uses only permissions with user write (0200) to ensure tests pass regardless of the user's permissions.

Reviewers: jhenderson, rupprecht, MaskRay, espindola, alexshap

Reviewed By: MaskRay

Subscribers: emaste, arichardson, jakehehrlich, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D64302

llvm-svn: 366527
2019-07-19 02:31:21 +00:00
Kang Zhang ca9f68e55e [NFC][PowerPC] Modify the test case add_cmp.ll
llvm-svn: 366526
2019-07-19 02:23:26 +00:00
Yi Kong c12f29948d [libFuzzer] Set Android specific ALL_FUZZER_SUPPORTED_ARCH
Build libFuzzer for all Android supported architectures.

llvm-svn: 366525
2019-07-19 02:07:46 +00:00