Separate volatility and atomicity/ordering in SelectionDAG

At the moment, we mark every atomic memory access as also being volatile. This is unnecessarily conservative and prohibits many legal transforms (DCE, folding, etc.).

This patch removes MOVolatile from the MachineMemOperands of atomic, but not volatile, instructions. This should be strictly NFC after a series of previous patches which have gone in to ensure backend code is conservative about handling of isAtomic MMOs. Once it's in and baked for a bit, we'll start working through removing unnecessary bailouts one by one. We applied this same strategy to the middle end a few years ago, with good success.
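
For reference, the conservative pattern those prior patches establish looks roughly like the sketch below. This is illustrative only, not a verbatim excerpt from any of them; the MachineMemOperand calls are the real API:

    #include "llvm/CodeGen/MachineMemOperand.h"
    using namespace llvm;

    // A backend transform that used to bail out only on volatile accesses
    // now also bails out on anything with an atomic ordering, so removing
    // MOVolatile from atomic MMOs cannot change its behavior (yet).
    static bool isSafeToTransform(const MachineMemOperand &MMO) {
      if (MMO.isVolatile() ||
          MMO.getOrdering() != AtomicOrdering::NotAtomic)
        return false;
      return true;
    }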

To make sure this patch itself is NFC, it is built on top of a series of other patches which adjust code to (for the moment) be as conservative for an atomic access as for a volatile access, and which build up a test corpus (mostly in test/CodeGen/X86/atomics-unordered.ll).

Previously landed:

    D57593 Fix a bug in the definition of isUnordered on MachineMemOperand
    D57596 [CodeGen] Be conservative about atomic accesses as for volatile
    D57802 Be conservative about unordered accesses for the moment
    rL353959: [Tests] First batch of cornercase tests for unordered atomics.
    rL353966: [Tests] RMW folding tests w/unordered atomic operations.
    rL353972: [Tests] More unordered atomic lowering tests.
    rL353989: [SelectionDAG] Inline a single use helper function, and remove last non-MMO interface
    rL354740: [Hexagon, SystemZ] Be super conservative about atomics
    rL354800: [Lanai] Be super conservative about atomics
    rL354845: [ARM] Be super conservative about atomics

Attention out-of-tree backend owners: this patch may break you. If it does, you can use the TLI getMMOFlags hook to restore MOVolatile on any instruction you need to. (See the llvm-dev thread titled "PSA: Changes to how atomics are handled in backends", started Feb 27, 2019.)
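
The override looks roughly like the following sketch, where MyTargetLowering is a stand-in for your target's TargetLowering subclass; the SystemZ and XCore overrides added in this commit follow the same pattern:

    // Declared in the target's ISelLowering header as:
    //   MachineMemOperand::Flags getMMOFlags(const Instruction &I) const override;
    #include "llvm/CodeGen/MachineMemOperand.h"
    #include "llvm/IR/Instructions.h"
    using namespace llvm;

    MachineMemOperand::Flags
    MyTargetLowering::getMMOFlags(const Instruction &I) const {
      // Restore the pre-patch behavior: mark every atomic access volatile
      // so the rest of the backend keeps treating it maximally conservatively.
      if (auto *LI = dyn_cast<LoadInst>(&I))
        if (LI->isAtomic())
          return MachineMemOperand::MOVolatile;
      if (auto *SI = dyn_cast<StoreInst>(&I))
        if (SI->isAtomic())
          return MachineMemOperand::MOVolatile;
      // atomicrmw and cmpxchg instructions are atomic by definition.
      if (isa<AtomicRMWInst>(I) || isa<AtomicCmpXchgInst>(I))
        return MachineMemOperand::MOVolatile;
      return MachineMemOperand::MONone;
    }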

Differential Revision: https://reviews.llvm.org/D57601

llvm-svn: 355025
commit 288a95fc8c (parent c526e02668)
Author: Philip Reames
Date: 2019-02-27 20:20:08 +00:00

6 changed files with 66 additions and 21 deletions

lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp

@@ -4417,10 +4417,10 @@ void SelectionDAGBuilder::visitAtomicCmpXchg(const AtomicCmpXchgInst &I) {
   auto Alignment = DAG.getEVTAlignment(MemVT);
 
-  // FIXME: Volatile isn't really correct; we should keep track of atomic
-  // orderings in the memoperand.
-  auto Flags = MachineMemOperand::MOVolatile | MachineMemOperand::MOLoad |
-               MachineMemOperand::MOStore;
+  auto Flags = MachineMemOperand::MOLoad | MachineMemOperand::MOStore;
+  if (I.isVolatile())
+    Flags |= MachineMemOperand::MOVolatile;
+  Flags |= DAG.getTargetLoweringInfo().getMMOFlags(I);
 
   MachineFunction &MF = DAG.getMachineFunction();
   MachineMemOperand *MMO =
@@ -4468,12 +4468,10 @@ void SelectionDAGBuilder::visitAtomicRMW(const AtomicRMWInst &I) {
   auto MemVT = getValue(I.getValOperand()).getSimpleValueType();
   auto Alignment = DAG.getEVTAlignment(MemVT);
 
-  // For now, atomics are considered to be volatile always, and they are
-  // chained as such.
-  // FIXME: Volatile isn't really correct; we should keep track of atomic
-  // orderings in the memoperand.
-  auto Flags = MachineMemOperand::MOVolatile |
-               MachineMemOperand::MOLoad | MachineMemOperand::MOStore;
+  auto Flags = MachineMemOperand::MOLoad | MachineMemOperand::MOStore;
+  if (I.isVolatile())
+    Flags |= MachineMemOperand::MOVolatile;
+  Flags |= DAG.getTargetLoweringInfo().getMMOFlags(I);
 
   MachineFunction &MF = DAG.getMachineFunction();
   MachineMemOperand *MMO =
@@ -4518,12 +4516,15 @@ void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
       I.getAlignment() < VT.getStoreSize())
     report_fatal_error("Cannot generate unaligned atomic load");
 
+  auto Flags = MachineMemOperand::MOLoad;
+  if (I.isVolatile())
+    Flags |= MachineMemOperand::MOVolatile;
+  Flags |= TLI.getMMOFlags(I);
+
   MachineMemOperand *MMO =
       DAG.getMachineFunction().
               getMachineMemOperand(MachinePointerInfo(I.getPointerOperand()),
-                                   MachineMemOperand::MOVolatile |
-                                   MachineMemOperand::MOLoad,
-                                   VT.getStoreSize(),
+                                   Flags, VT.getStoreSize(),
                                    I.getAlignment() ? I.getAlignment() :
                                                       DAG.getEVTAlignment(VT),
                                    AAMDNodes(), nullptr, SSID, Order);
@@ -4554,11 +4555,10 @@ void SelectionDAGBuilder::visitAtomicStore(const StoreInst &I) {
   if (I.getAlignment() < VT.getStoreSize())
     report_fatal_error("Cannot generate unaligned atomic store");
 
-  // For now, atomics are considered to be volatile always, and they are
-  // chained as such.
-  // FIXME: Volatile isn't really correct; we should keep track of atomic
-  // orderings in the memoperand.
-  auto Flags = MachineMemOperand::MOVolatile | MachineMemOperand::MOStore;
+  auto Flags = MachineMemOperand::MOStore;
+  if (I.isVolatile())
+    Flags |= MachineMemOperand::MOVolatile;
+  Flags |= TLI.getMMOFlags(I);
 
   MachineFunction &MF = DAG.getMachineFunction();
   MachineMemOperand *MMO =

lib/Target/SystemZ/SystemZISelLowering.cpp

@@ -3718,6 +3718,27 @@ SDValue SystemZTargetLowering::lowerATOMIC_CMP_SWAP(SDValue Op,
   return SDValue();
 }
 
+MachineMemOperand::Flags
+SystemZTargetLowering::getMMOFlags(const Instruction &I) const {
+  // Because of how we convert atomic_load and atomic_store to normal loads and
+  // stores in the DAG, we need to ensure that the MMOs are marked volatile
+  // since DAGCombine hasn't been updated to account for atomic, but
+  // non-volatile, loads.  (See D57601)
+  if (auto *SI = dyn_cast<StoreInst>(&I))
+    if (SI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  if (auto *LI = dyn_cast<LoadInst>(&I))
+    if (LI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  if (auto *AI = dyn_cast<AtomicRMWInst>(&I))
+    if (AI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  if (auto *AI = dyn_cast<AtomicCmpXchgInst>(&I))
+    if (AI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  return MachineMemOperand::MONone;
+}
+
 SDValue SystemZTargetLowering::lowerSTACKSAVE(SDValue Op,
                                               SelectionDAG &DAG) const {
   MachineFunction &MF = DAG.getMachineFunction();

lib/Target/SystemZ/SystemZISelLowering.h

@@ -642,6 +642,7 @@ private:
                                   MachineBasicBlock *MBB,
                                   unsigned Opcode) const;
 
+  MachineMemOperand::Flags getMMOFlags(const Instruction &I) const override;
   const TargetRegisterClass *getRepRegClassFor(MVT VT) const override;
 };

lib/Target/XCore/XCoreISelLowering.cpp

@@ -1008,6 +1008,27 @@ LowerATOMIC_STORE(SDValue Op, SelectionDAG &DAG) const {
   return SDValue();
 }
 
+MachineMemOperand::Flags
+XCoreTargetLowering::getMMOFlags(const Instruction &I) const {
+  // Because of how we convert atomic_load and atomic_store to normal loads and
+  // stores in the DAG, we need to ensure that the MMOs are marked volatile
+  // since DAGCombine hasn't been updated to account for atomic, but
+  // non-volatile, loads.  (See D57601)
+  if (auto *SI = dyn_cast<StoreInst>(&I))
+    if (SI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  if (auto *LI = dyn_cast<LoadInst>(&I))
+    if (LI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  if (auto *AI = dyn_cast<AtomicRMWInst>(&I))
+    if (AI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  if (auto *AI = dyn_cast<AtomicCmpXchgInst>(&I))
+    if (AI->isAtomic())
+      return MachineMemOperand::MOVolatile;
+  return MachineMemOperand::MONone;
+}
+
 //===----------------------------------------------------------------------===//
 //                      Calling Convention Implementation
 //===----------------------------------------------------------------------===//

lib/Target/XCore/XCoreISelLowering.h

@@ -188,6 +188,8 @@ namespace llvm {
    SDValue LowerATOMIC_LOAD(SDValue Op, SelectionDAG &DAG) const;
    SDValue LowerATOMIC_STORE(SDValue Op, SelectionDAG &DAG) const;
 
+    MachineMemOperand::Flags getMMOFlags(const Instruction &I) const override;
+
    // Inline asm support
    std::pair<unsigned, const TargetRegisterClass *>
    getRegForInlineAsmConstraint(const TargetRegisterInfo *TRI,

test/CodeGen/AMDGPU/syncscopes.ll

@@ -1,9 +1,9 @@
 ; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx803 -stop-after=si-insert-skips < %s | FileCheck --check-prefix=GCN %s
 
 ; GCN-LABEL: name: syncscopes
-; GCN: FLAT_STORE_DWORD killed renamable $vgpr1_vgpr2, killed renamable $vgpr0, 0, 0, 0, implicit $exec, implicit $flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
-; GCN: FLAT_STORE_DWORD killed renamable $vgpr4_vgpr5, killed renamable $vgpr3, 0, 0, 0, implicit $exec, implicit $flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
-; GCN: FLAT_STORE_DWORD killed renamable $vgpr7_vgpr8, killed renamable $vgpr6, 0, 0, 0, implicit $exec, implicit $flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
+; GCN: FLAT_STORE_DWORD killed renamable $vgpr1_vgpr2, killed renamable $vgpr0, 0, 0, 0, implicit $exec, implicit $flat_scr :: (store syncscope("agent") seq_cst 4 into %ir.agent_out)
+; GCN: FLAT_STORE_DWORD killed renamable $vgpr4_vgpr5, killed renamable $vgpr3, 0, 0, 0, implicit $exec, implicit $flat_scr :: (store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
+; GCN: FLAT_STORE_DWORD killed renamable $vgpr7_vgpr8, killed renamable $vgpr6, 0, 0, 0, implicit $exec, implicit $flat_scr :: (store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
 define void @syncscopes(
   i32 %agent,
   i32* %agent_out,