
Enhance synchscope representation

OpenCL 2.0 introduces the notion of memory scopes in atomic operations to
global and local memory. These scopes restrict how synchronization is
achieved, which can result in improved performance.

This change extends the existing notion of synchronization scopes in LLVM to
support arbitrary scopes expressed as target-specific strings, in addition to
the already defined scopes (single thread, system).

The LLVM IR and MIR syntax for expressing synchronization scopes has changed
to use *syncscope("<scope>")*, where <scope> can be "singlethread" (this
replaces the *singlethread* keyword) or a target-specific name. As before, if
the scope is not specified, it defaults to CrossThread/System scope.

Implementation details:
  - Mapping from synchronization scope name/string to synchronization scope ID
    is stored in the LLVM context;
  - CrossThread/System and SingleThread scopes are pre-defined so known scopes
    can be checked efficiently without comparing strings;
  - Synchronization scope names are stored in SYNC_SCOPE_NAMES_BLOCK in
    the bitcode.
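
For illustration, a minimal sketch of how a frontend might use the new C++ API
described above; the scope name "agent" is a hypothetical target-specific
scope, and the function is not part of this commit:

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    void emitScopedAtomics(Module &M, IRBuilder<> &Builder,
                           Value *Ptr, Value *Val) {
      LLVMContext &Ctx = M.getContext();

      // Map a target-specific scope name to its ID; repeated calls with the
      // same name return the same ID.
      SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");

      // fence syncscope("agent") seq_cst
      Builder.CreateFence(AtomicOrdering::SequentiallyConsistent, AgentSSID);

      // atomicrmw add ... syncscope("agent") monotonic
      Builder.CreateAtomicRMW(AtomicRMWInst::Add, Ptr, Val,
                              AtomicOrdering::Monotonic, AgentSSID);

      // Omitting the scope keeps the default SyncScope::System (cross thread).
      Builder.CreateFence(AtomicOrdering::SequentiallyConsistent);
    }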

Differential Revision: https://reviews.llvm.org/D21723

llvm-svn: 307722
Konstantin Zhuravlyov 2017-07-11 22:23:00 +00:00
parent 0e47762e3a
commit d382d6f3fc
69 changed files with 1227 additions and 791 deletions


@@ -2209,12 +2209,21 @@ For a simpler introduction to the ordering constraints, see the
 same address in this global order. This corresponds to the C++0x/C1x
 ``memory_order_seq_cst`` and Java volatile.
 
-.. _singlethread:
+.. _syncscope:
 
-If an atomic operation is marked ``singlethread``, it only *synchronizes
-with* or participates in modification and seq\_cst total orderings with
-other operations running in the same thread (for example, in signal
-handlers).
+If an atomic operation is marked ``syncscope("singlethread")``, it only
+*synchronizes with* and only participates in the seq\_cst total orderings of
+other operations running in the same thread (for example, in signal handlers).
+
+If an atomic operation is marked ``syncscope("<target-scope>")``, where
+``<target-scope>`` is a target specific synchronization scope, then it is target
+dependent if it *synchronizes with* and participates in the seq\_cst total
+orderings of other operations.
+
+Otherwise, an atomic operation that is not marked ``syncscope("singlethread")``
+or ``syncscope("<target-scope>")`` *synchronizes with* and participates in the
+seq\_cst total orderings of other operations that are not marked
+``syncscope("singlethread")`` or ``syncscope("<target-scope>")``.
 
 .. _fastmath:
@@ -7380,7 +7389,7 @@ Syntax:
 ::
 
       <result> = load [volatile] <ty>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.load !<index>][, !invariant.group !<index>][, !nonnull !<index>][, !dereferenceable !<deref_bytes_node>][, !dereferenceable_or_null !<deref_bytes_node>][, !align !<align_node>]
-      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>]
+      <result> = load atomic [volatile] <ty>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>]
       !<index> = !{ i32 1 }
       !<deref_bytes_node> = !{i64 <dereferenceable_bytes>}
      !<align_node> = !{ i64 <value_alignment> }
@@ -7401,14 +7410,14 @@ modify the number or order of execution of this ``load`` with other
 :ref:`volatile operations <volatile>`.
 
 If the ``load`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``release`` and
-``acq_rel`` orderings are not valid on ``load`` instructions. Atomic loads
-produce :ref:`defined <memmodel>` results when they may see multiple atomic
-stores. The type of the pointee must be an integer, pointer, or floating-point
-type whose bit width is a power of two greater than or equal to eight and less
-than or equal to a target-specific size limit. ``align`` must be explicitly
-specified on atomic loads, and the load has undefined behavior if the alignment
-is not set to a value which is at least the size in bytes of the
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The
+``release`` and ``acq_rel`` orderings are not valid on ``load`` instructions.
+Atomic loads produce :ref:`defined <memmodel>` results when they may see
+multiple atomic stores. The type of the pointee must be an integer, pointer, or
+floating-point type whose bit width is a power of two greater than or equal to
+eight and less than or equal to a target-specific size limit. ``align`` must be
+explicitly specified on atomic loads, and the load has undefined behavior if the
+alignment is not set to a value which is at least the size in bytes of the
 pointee. ``!nontemporal`` does not have any defined semantics for atomic loads.
 
 The optional constant ``align`` argument specifies the alignment of the
@@ -7509,7 +7518,7 @@ Syntax:
 ::
 
       store [volatile] <ty> <value>, <ty>* <pointer>[, align <alignment>][, !nontemporal !<index>][, !invariant.group !<index>]        ; yields void
-      store atomic [volatile] <ty> <value>, <ty>* <pointer> [singlethread] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
+      store atomic [volatile] <ty> <value>, <ty>* <pointer> [syncscope("<target-scope>")] <ordering>, align <alignment> [, !invariant.group !<index>] ; yields void
 
 Overview:
 """""""""
@@ -7529,14 +7538,14 @@ allowed to modify the number or order of execution of this ``store`` with other
 structural type <t_opaque>`) can be stored.
 
 If the ``store`` is marked as ``atomic``, it takes an extra :ref:`ordering
-<ordering>` and optional ``singlethread`` argument. The ``acquire`` and
-``acq_rel`` orderings aren't valid on ``store`` instructions. Atomic loads
-produce :ref:`defined <memmodel>` results when they may see multiple atomic
-stores. The type of the pointee must be an integer, pointer, or floating-point
-type whose bit width is a power of two greater than or equal to eight and less
-than or equal to a target-specific size limit. ``align`` must be explicitly
-specified on atomic stores, and the store has undefined behavior if the
-alignment is not set to a value which is at least the size in bytes of the
+<ordering>` and optional ``syncscope("<target-scope>")`` argument. The
+``acquire`` and ``acq_rel`` orderings aren't valid on ``store`` instructions.
+Atomic loads produce :ref:`defined <memmodel>` results when they may see
+multiple atomic stores. The type of the pointee must be an integer, pointer, or
+floating-point type whose bit width is a power of two greater than or equal to
+eight and less than or equal to a target-specific size limit. ``align`` must be
+explicitly specified on atomic stores, and the store has undefined behavior if
+the alignment is not set to a value which is at least the size in bytes of the
 pointee. ``!nontemporal`` does not have any defined semantics for atomic stores.
 
 The optional constant ``align`` argument specifies the alignment of the
@@ -7597,7 +7606,7 @@ Syntax:
 ::
 
-      fence [singlethread] <ordering>                                 ; yields void
+      fence [syncscope("<target-scope>")] <ordering>                  ; yields void
 
 Overview:
 """""""""
@@ -7631,17 +7640,17 @@ A ``fence`` which has ``seq_cst`` ordering, in addition to having both
 ``acquire`` and ``release`` semantics specified above, participates in
 the global program order of other ``seq_cst`` operations and/or fences.
 
-The optional ":ref:`singlethread <singlethread>`" argument specifies
-that the fence only synchronizes with other fences in the same thread.
-(This is useful for interacting with signal handlers.)
+A ``fence`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 Example:
 """"""""
 
 .. code-block:: llvm
 
       fence acquire                                        ; yields void
-      fence singlethread seq_cst                           ; yields void
+      fence syncscope("singlethread") seq_cst              ; yields void
+      fence syncscope("agent") seq_cst                     ; yields void
 
 .. _i_cmpxchg:
@@ -7653,7 +7662,7 @@ Syntax:
 ::
 
-      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [singlethread] <success ordering> <failure ordering> ; yields { ty, i1 }
+      cmpxchg [weak] [volatile] <ty>* <pointer>, <ty> <cmp>, <ty> <new> [syncscope("<target-scope>")] <success ordering> <failure ordering> ; yields { ty, i1 }
 
 Overview:
 """""""""
@@ -7682,10 +7691,8 @@ must be at least ``monotonic``, the ordering constraint on failure must be no
 stronger than that on success, and the failure ordering cannot be either
 ``release`` or ``acq_rel``.
 
-The optional "``singlethread``" argument declares that the ``cmpxchg``
-is only atomic with respect to code (usually signal handlers) running in
-the same thread as the ``cmpxchg``. Otherwise the cmpxchg is atomic with
-respect to all other code in the system.
+A ``cmpxchg`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
 
 The pointer passed into cmpxchg must have alignment greater than or
 equal to the size in memory of the operand.
@@ -7739,7 +7746,7 @@ Syntax:
 ::
 
-      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [singlethread] <ordering>                   ; yields ty
+      atomicrmw [volatile] <operation> <ty>* <pointer>, <ty> <value> [syncscope("<target-scope>")] <ordering>    ; yields ty
 
 Overview:
 """""""""
@@ -7773,6 +7780,9 @@ be a pointer to that type. If the ``atomicrmw`` is marked as
 order of execution of this ``atomicrmw`` with other :ref:`volatile
 operations <volatile>`.
 
+A ``atomicrmw`` instruction can also take an optional
+":ref:`syncscope <syncscope>`" argument.
+
 Semantics:
 """"""""""


@@ -59,6 +59,8 @@ enum BlockIDs {
   FULL_LTO_GLOBALVAL_SUMMARY_BLOCK_ID,
 
   SYMTAB_BLOCK_ID,
+
+  SYNC_SCOPE_NAMES_BLOCK_ID,
 };
 
 /// Identification block contains a string that describes the producer details,
@@ -172,6 +174,10 @@ enum OperandBundleTagCode {
   OPERAND_BUNDLE_TAG = 1, // TAG: [strchr x N]
 };
 
+enum SyncScopeNameCode {
+  SYNC_SCOPE_NAME = 1,
+};
+
 // Value symbol table codes.
 enum ValueSymtabCodes {
   VST_CODE_ENTRY = 1, // VST_ENTRY: [valueid, namechar x N]
@@ -404,12 +410,6 @@ enum AtomicOrderingCodes {
   ORDERING_SEQCST = 6
 };
 
-/// Encoded SynchronizationScope values.
-enum AtomicSynchScopeCodes {
-  SYNCHSCOPE_SINGLETHREAD = 0,
-  SYNCHSCOPE_CROSSTHREAD = 1
-};
-
 /// Markers and flags for call instruction.
 enum CallMarkersFlags {
   CALL_TAIL = 0,


@@ -650,7 +650,7 @@ public:
       MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
       unsigned base_alignment, const AAMDNodes &AAInfo = AAMDNodes(),
       const MDNode *Ranges = nullptr,
-      SynchronizationScope SynchScope = CrossThread,
+      SyncScope::ID SSID = SyncScope::System,
       AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
       AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);


@@ -124,8 +124,8 @@ public:
 private:
   /// Atomic information for this memory operation.
   struct MachineAtomicInfo {
-    /// Synchronization scope for this memory operation.
-    unsigned SynchScope : 1;      // enum SynchronizationScope
+    /// Synchronization scope ID for this memory operation.
+    unsigned SSID : 8;            // SyncScope::ID
     /// Atomic ordering requirements for this memory operation. For cmpxchg
     /// atomic operations, atomic ordering requirements when store occurs.
     unsigned Ordering : 4;        // enum AtomicOrdering
@@ -152,7 +152,7 @@ public:
                     unsigned base_alignment,
                     const AAMDNodes &AAInfo = AAMDNodes(),
                     const MDNode *Ranges = nullptr,
-                    SynchronizationScope SynchScope = CrossThread,
+                    SyncScope::ID SSID = SyncScope::System,
                     AtomicOrdering Ordering = AtomicOrdering::NotAtomic,
                     AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic);
@@ -202,9 +202,9 @@ public:
   /// Return the range tag for the memory reference.
   const MDNode *getRanges() const { return Ranges; }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const {
-    return static_cast<SynchronizationScope>(AtomicInfo.SynchScope);
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const {
+    return static_cast<SyncScope::ID>(AtomicInfo.SSID);
   }
 
   /// Return the atomic ordering requirements for this memory operation. For


@@ -927,7 +927,7 @@ public:
                            SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
                            unsigned Alignment, AtomicOrdering SuccessOrdering,
                            AtomicOrdering FailureOrdering,
-                           SynchronizationScope SynchScope);
+                           SyncScope::ID SSID);
   SDValue getAtomicCmpSwap(unsigned Opcode, const SDLoc &dl, EVT MemVT,
                            SDVTList VTs, SDValue Chain, SDValue Ptr,
                            SDValue Cmp, SDValue Swp, MachineMemOperand *MMO);
@@ -937,7 +937,7 @@ public:
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, const Value *PtrVal,
                     unsigned Alignment, AtomicOrdering Ordering,
-                    SynchronizationScope SynchScope);
+                    SyncScope::ID SSID);
   SDValue getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT, SDValue Chain,
                     SDValue Ptr, SDValue Val, MachineMemOperand *MMO);


@@ -1213,8 +1213,8 @@ public:
   /// Returns the Ranges that describes the dereference.
   const MDNode *getRanges() const { return MMO->getRanges(); }
 
-  /// Return the synchronization scope for this memory operation.
-  SynchronizationScope getSynchScope() const { return MMO->getSynchScope(); }
+  /// Returns the synchronization scope ID for this memory operation.
+  SyncScope::ID getSyncScopeID() const { return MMO->getSyncScopeID(); }
 
   /// Return the atomic ordering requirements for this memory operation. For
   /// cmpxchg atomic operations, return the atomic ordering requirements when


@@ -1203,22 +1203,22 @@ public:
     return SI;
   }
 
   FenceInst *CreateFence(AtomicOrdering Ordering,
-                         SynchronizationScope SynchScope = CrossThread,
+                         SyncScope::ID SSID = SyncScope::System,
                          const Twine &Name = "") {
-    return Insert(new FenceInst(Context, Ordering, SynchScope), Name);
+    return Insert(new FenceInst(Context, Ordering, SSID), Name);
   }
 
   AtomicCmpXchgInst *
   CreateAtomicCmpXchg(Value *Ptr, Value *Cmp, Value *New,
                       AtomicOrdering SuccessOrdering,
                       AtomicOrdering FailureOrdering,
-                      SynchronizationScope SynchScope = CrossThread) {
+                      SyncScope::ID SSID = SyncScope::System) {
     return Insert(new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering,
-                                        FailureOrdering, SynchScope));
+                                        FailureOrdering, SSID));
   }
 
   AtomicRMWInst *CreateAtomicRMW(AtomicRMWInst::BinOp Op, Value *Ptr, Value *Val,
                                  AtomicOrdering Ordering,
-                                 SynchronizationScope SynchScope = CrossThread) {
-    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SynchScope));
+                                 SyncScope::ID SSID = SyncScope::System) {
+    return Insert(new AtomicRMWInst(Op, Ptr, Val, Ordering, SSID));
   }
 
   Value *CreateGEP(Value *Ptr, ArrayRef<Value *> IdxList,
                    const Twine &Name = "") {


@@ -52,11 +52,6 @@ class ConstantInt;
 class DataLayout;
 class LLVMContext;
 
-enum SynchronizationScope {
-  SingleThread = 0,
-  CrossThread = 1
-};
-
 //===----------------------------------------------------------------------===//
 //                                AllocaInst Class
 //===----------------------------------------------------------------------===//
@@ -195,17 +190,16 @@ public:
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile, unsigned Align,
-           AtomicOrdering Order, SynchronizationScope SynchScope = CrossThread,
+           AtomicOrdering Order, SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr)
       : LoadInst(cast<PointerType>(Ptr->getType())->getElementType(), Ptr,
-                 NameStr, isVolatile, Align, Order, SynchScope, InsertBefore) {}
+                 NameStr, isVolatile, Align, Order, SSID, InsertBefore) {}
   LoadInst(Type *Ty, Value *Ptr, const Twine &NameStr, bool isVolatile,
            unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope = CrossThread,
+           SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr);
   LoadInst(Value *Ptr, const Twine &NameStr, bool isVolatile,
-           unsigned Align, AtomicOrdering Order,
-           SynchronizationScope SynchScope,
+           unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
            BasicBlock *InsertAtEnd);
   LoadInst(Value *Ptr, const char *NameStr, Instruction *InsertBefore);
   LoadInst(Value *Ptr, const char *NameStr, BasicBlock *InsertAtEnd);
@@ -235,34 +229,34 @@ public:
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this load instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this load. May not be Release or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this load instruction. May not be Release
+  /// or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this load instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this load is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this load instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this load
+  /// instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -297,6 +291,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this load instruction. Not quite enough
+  /// room in SubclassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
@@ -325,11 +324,10 @@ public:
             unsigned Align, BasicBlock *InsertAtEnd);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
             unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
             Instruction *InsertBefore = nullptr);
   StoreInst(Value *Val, Value *Ptr, bool isVolatile,
-            unsigned Align, AtomicOrdering Order,
-            SynchronizationScope SynchScope,
+            unsigned Align, AtomicOrdering Order, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
@@ -356,34 +354,34 @@ public:
   void setAlignment(unsigned Align);
 
-  /// Returns the ordering effect of this store.
+  /// Returns the ordering constraint of this store instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering((getSubclassDataFromInstruction() >> 7) & 7);
   }
 
-  /// Set the ordering constraint on this store. May not be Acquire or
-  /// AcquireRelease.
+  /// Sets the ordering constraint of this store instruction. May not be
+  /// Acquire or AcquireRelease.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & ~(7 << 7)) |
                                ((unsigned)Ordering << 7));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() >> 6) & 1);
+  /// Returns the synchronization scope ID of this store instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this store instruction is ordered with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~(1 << 6)) |
-                               (xthread << 6));
+  /// Sets the synchronization scope ID of this store instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
+  /// Sets the ordering constraint and the synchronization scope ID of this
+  /// store instruction.
   void setAtomic(AtomicOrdering Ordering,
-                 SynchronizationScope SynchScope = CrossThread) {
+                 SyncScope::ID SSID = SyncScope::System) {
     setOrdering(Ordering);
-    setSynchScope(SynchScope);
+    setSyncScopeID(SSID);
   }
 
   bool isSimple() const { return !isAtomic() && !isVolatile(); }
@@ -421,6 +419,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this store instruction. Not quite enough
+  /// room in SubclassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>
@@ -435,7 +438,7 @@ DEFINE_TRANSPARENT_OPERAND_ACCESSORS(StoreInst, Value)
 
 /// An instruction for ordering other memory operations.
 class FenceInst : public Instruction {
-  void Init(AtomicOrdering Ordering, SynchronizationScope SynchScope);
+  void Init(AtomicOrdering Ordering, SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
@@ -447,10 +450,9 @@ public:
   // Ordering may only be Acquire, Release, AcquireRelease, or
   // SequentiallyConsistent.
   FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope = CrossThread,
+            SyncScope::ID SSID = SyncScope::System,
            Instruction *InsertBefore = nullptr);
-  FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-            SynchronizationScope SynchScope,
+  FenceInst(LLVMContext &C, AtomicOrdering Ordering, SyncScope::ID SSID,
             BasicBlock *InsertAtEnd);
 
   // allocate space for exactly zero operands
@@ -458,28 +460,26 @@ public:
     return User::operator new(s, 0);
   }
 
-  /// Returns the ordering effect of this fence.
+  /// Returns the ordering constraint of this fence instruction.
   AtomicOrdering getOrdering() const {
     return AtomicOrdering(getSubclassDataFromInstruction() >> 1);
   }
 
-  /// Set the ordering constraint on this fence. May only be Acquire, Release,
-  /// AcquireRelease, or SequentiallyConsistent.
+  /// Sets the ordering constraint of this fence instruction. May only be
+  /// Acquire, Release, AcquireRelease, or SequentiallyConsistent.
   void setOrdering(AtomicOrdering Ordering) {
     setInstructionSubclassData((getSubclassDataFromInstruction() & 1) |
                                ((unsigned)Ordering << 1));
   }
 
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope(getSubclassDataFromInstruction() & 1);
+  /// Returns the synchronization scope ID of this fence instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Specify whether this fence orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope xthread) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~1) |
-                               xthread);
+  /// Sets the synchronization scope ID of this fence instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
  }
 
   // Methods for support type inquiry through isa, cast, and dyn_cast:
@@ -496,6 +496,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this fence instruction. Not quite enough
+  /// room in SubclassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 //===----------------------------------------------------------------------===//
@@ -509,7 +514,7 @@ private:
 class AtomicCmpXchgInst : public Instruction {
   void Init(Value *Ptr, Value *Cmp, Value *NewVal,
             AtomicOrdering SuccessOrdering, AtomicOrdering FailureOrdering,
-            SynchronizationScope SynchScope);
+            SyncScope::ID SSID);
 
 protected:
   // Note: Instruction needs to be a friend here to call cloneImpl.
@@ -521,13 +526,11 @@ public:
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    Instruction *InsertBefore = nullptr);
+                    SyncScope::ID SSID, Instruction *InsertBefore = nullptr);
   AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                     AtomicOrdering SuccessOrdering,
                     AtomicOrdering FailureOrdering,
-                    SynchronizationScope SynchScope,
-                    BasicBlock *InsertAtEnd);
+                    SyncScope::ID SSID, BasicBlock *InsertAtEnd);
 
   // allocate space for exactly three operands
   void *operator new(size_t s) {
@@ -561,7 +564,12 @@ public:
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this cmpxchg.
+  /// Returns the success ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getSuccessOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the success ordering constraint of this cmpxchg instruction.
   void setSuccessOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
@@ -569,6 +577,12 @@ public:
                                ((unsigned)Ordering << 2));
   }
 
+  /// Returns the failure ordering constraint of this cmpxchg instruction.
+  AtomicOrdering getFailureOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+  }
+
+  /// Sets the failure ordering constraint of this cmpxchg instruction.
   void setFailureOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "CmpXchg instructions can only be atomic.");
@@ -576,28 +590,14 @@ public:
                                ((unsigned)Ordering << 5));
   }
 
-  /// Specify whether this cmpxchg is atomic and orders other operations with
-  /// respect to all concurrently executing threads, or only with respect to
-  /// signal handlers executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
-  }
-
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getSuccessOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
-  }
-
-  /// Returns the ordering constraint on this cmpxchg.
-  AtomicOrdering getFailureOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 5) & 7);
+  /// Returns the synchronization scope ID of this cmpxchg instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Returns whether this cmpxchg is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Sets the synchronization scope ID of this cmpxchg instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   Value *getPointerOperand() { return getOperand(0); }
@@ -652,6 +652,11 @@ private:
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this cmpxchg instruction. Not quite
+  /// enough room in SubclassData for everything, so synchronization scope ID
+  /// gets its own field.
+  SyncScope::ID SSID;
 };
 
 template <>
@@ -711,10 +716,10 @@ public:
   };
 
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 Instruction *InsertBefore = nullptr);
   AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
-                AtomicOrdering Ordering, SynchronizationScope SynchScope,
+                AtomicOrdering Ordering, SyncScope::ID SSID,
                 BasicBlock *InsertAtEnd);
 
   // allocate space for exactly two operands
@@ -748,7 +753,12 @@ public:
   /// Transparently provide more efficient getOperand methods.
   DECLARE_TRANSPARENT_OPERAND_ACCESSORS(Value);
 
-  /// Set the ordering constraint on this RMW.
+  /// Returns the ordering constraint of this rmw instruction.
+  AtomicOrdering getOrdering() const {
+    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  }
+
+  /// Sets the ordering constraint of this rmw instruction.
   void setOrdering(AtomicOrdering Ordering) {
     assert(Ordering != AtomicOrdering::NotAtomic &&
            "atomicrmw instructions can only be atomic.");
@@ -756,23 +766,14 @@ public:
                                ((unsigned)Ordering << 2));
   }
 
-  /// Specify whether this RMW orders other operations with respect to all
-  /// concurrently executing threads, or only with respect to signal handlers
-  /// executing in the same thread.
-  void setSynchScope(SynchronizationScope SynchScope) {
-    setInstructionSubclassData((getSubclassDataFromInstruction() & ~2) |
-                               (SynchScope << 1));
-  }
-
-  /// Returns the ordering constraint on this RMW.
-  AtomicOrdering getOrdering() const {
-    return AtomicOrdering((getSubclassDataFromInstruction() >> 2) & 7);
+  /// Returns the synchronization scope ID of this rmw instruction.
+  SyncScope::ID getSyncScopeID() const {
+    return SSID;
   }
 
-  /// Returns whether this RMW is atomic between threads or only within a
-  /// single thread.
-  SynchronizationScope getSynchScope() const {
-    return SynchronizationScope((getSubclassDataFromInstruction() & 2) >> 1);
+  /// Sets the synchronization scope ID of this rmw instruction.
+  void setSyncScopeID(SyncScope::ID SSID) {
+    this->SSID = SSID;
   }
 
   Value *getPointerOperand() { return getOperand(0); }
@@ -797,13 +798,18 @@ public:
 private:
   void Init(BinOp Operation, Value *Ptr, Value *Val,
-            AtomicOrdering Ordering, SynchronizationScope SynchScope);
+            AtomicOrdering Ordering, SyncScope::ID SSID);
 
   // Shadow Instruction::setInstructionSubclassData with a private forwarding
   // method so that subclasses cannot accidentally use it.
   void setInstructionSubclassData(unsigned short D) {
     Instruction::setInstructionSubclassData(D);
   }
+
+  /// The synchronization scope ID of this rmw instruction. Not quite enough
+  /// room in SubclassData for everything, so synchronization scope ID gets its
+  /// own field.
+  SyncScope::ID SSID;
 };
 
 template <>
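
Because the scope is now an integer ID stored on the instruction, passes can
check for the pre-defined scopes with a plain integer compare instead of a
string compare. A minimal sketch (the function name is illustrative, not part
of this change):

    #include "llvm/IR/Instructions.h"

    using namespace llvm;

    // Conservatively decide whether an atomic load can synchronize with
    // other threads, based on its synchronization scope ID.
    static bool mayBeVisibleToOtherThreads(const LoadInst *LI) {
      if (LI->getSyncScopeID() == SyncScope::SingleThread)
        return false; // Only interacts with signal handlers in this thread.
      // SyncScope::System and any target-specific scope must be treated
      // conservatively here.
      return true;
    }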


@@ -42,6 +42,24 @@ class Output;
 
 } // end namespace yaml
 
+namespace SyncScope {
+
+typedef uint8_t ID;
+
+/// Known synchronization scope IDs, which always have the same value.  All
+/// synchronization scope IDs that LLVM has special knowledge of are listed
+/// here.  Additionally, this scheme allows LLVM to efficiently check for
+/// specific synchronization scope ID without comparing strings.
+enum {
+  /// Synchronized with respect to signal handlers executing in the same thread.
+  SingleThread = 0,
+
+  /// Synchronized with respect to all concurrently executing threads.
+  System = 1
+};
+
+} // end namespace SyncScope
+
 /// This is an important class for using LLVM in a threaded context. It
 /// (opaquely) owns and manages the core "global" data of LLVM's core
 /// infrastructure, including the type and constant uniquing tables.
@@ -111,6 +129,16 @@ public:
   /// tag registered with an LLVMContext has an unique ID.
   uint32_t getOperandBundleTagID(StringRef Tag) const;
 
+  /// getOrInsertSyncScopeID - Maps synchronization scope name to
+  /// synchronization scope ID.  Every synchronization scope registered with
+  /// LLVMContext has unique ID except pre-defined ones.
+  SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+
+  /// getSyncScopeNames - Populates client supplied SmallVector with
+  /// synchronization scope names registered with LLVMContext.  Synchronization
+  /// scope names are ordered by increasing synchronization scope IDs.
+  void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
+
   /// Define the GC for a function
   void setGC(const Function &Fn, std::string GCName);
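
A minimal sketch of the new LLVMContext API (the scope name "agent" and the
function below are illustrative; pre-defined IDs are registered before any
target-specific names):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/Support/raw_ostream.h"
    #include <cassert>

    using namespace llvm;

    void dumpSyncScopes(LLVMContext &Ctx) {
      // A name maps to a single ID within a context.
      SyncScope::ID A1 = Ctx.getOrInsertSyncScopeID("agent");
      SyncScope::ID A2 = Ctx.getOrInsertSyncScopeID("agent");
      assert(A1 == A2 && "same name, same ID");

      // Names come back ordered by increasing ID.
      SmallVector<StringRef, 8> SSNs;
      Ctx.getSyncScopeNames(SSNs);
      for (unsigned I = 0, E = SSNs.size(); I != E; ++I)
        errs() << I << " -> \"" << SSNs[I] << "\"\n";
    }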


@@ -542,7 +542,7 @@ lltok::Kind LLLexer::LexIdentifier() {
   KEYWORD(release);
   KEYWORD(acq_rel);
   KEYWORD(seq_cst);
-  KEYWORD(singlethread);
+  KEYWORD(syncscope);
 
   KEYWORD(nnan);
   KEYWORD(ninf);


@@ -1919,20 +1919,42 @@ bool LLParser::parseAllocSizeArguments(unsigned &BaseSizeArg,
 }
 
 /// ParseScopeAndOrdering
-///   if isAtomic: ::= 'singlethread'? AtomicOrdering
+///   if isAtomic: ::= SyncScope? AtomicOrdering
 ///   else: ::=
 ///
 /// This sets Scope and Ordering to the parsed values.
-bool LLParser::ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+bool LLParser::ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                      AtomicOrdering &Ordering) {
   if (!isAtomic)
     return false;
 
-  Scope = CrossThread;
-  if (EatIfPresent(lltok::kw_singlethread))
-    Scope = SingleThread;
-
-  return ParseOrdering(Ordering);
+  return ParseScope(SSID) || ParseOrdering(Ordering);
+}
+
+/// ParseScope
+///   ::= syncscope("singlethread" | "<target scope>")?
+///
+/// This sets synchronization scope ID to the ID of the parsed value.
+bool LLParser::ParseScope(SyncScope::ID &SSID) {
+  SSID = SyncScope::System;
+  if (EatIfPresent(lltok::kw_syncscope)) {
+    auto StartParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::lparen))
+      return Error(StartParenAt, "Expected '(' in syncscope");
+
+    std::string SSN;
+    auto SSNAt = Lex.getLoc();
+    if (ParseStringConstant(SSN))
+      return Error(SSNAt, "Expected synchronization scope name");
+
+    auto EndParenAt = Lex.getLoc();
+    if (!EatIfPresent(lltok::rparen))
+      return Error(EndParenAt, "Expected ')' in syncscope");
+
+    SSID = Context.getOrInsertSyncScopeID(SSN);
+  }
+
+  return false;
 }
 
 /// ParseOrdering
@@ -6100,7 +6122,7 @@ int LLParser::ParseLoad(Instruction *&Inst, PerFunctionState &PFS) {
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
@@ -6118,7 +6140,7 @@ int LLParser::ParseLoad(Instruction *&Inst, PerFunctionState &PFS) {
   if (ParseType(Ty) ||
       ParseToken(lltok::comma, "expected comma after load's type") ||
       ParseTypeAndValue(Val, Loc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
       ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
@@ -6134,7 +6156,7 @@ int LLParser::ParseLoad(Instruction *&Inst, PerFunctionState &PFS) {
     return Error(ExplicitTypeLoc,
                  "explicit pointee type doesn't match operand's pointee type");
 
-  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, Scope);
+  Inst = new LoadInst(Ty, Val, "", isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
@@ -6149,7 +6171,7 @@ int LLParser::ParseStore(Instruction *&Inst, PerFunctionState &PFS) {
   bool AteExtraComma = false;
   bool isAtomic = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
 
   if (Lex.getKind() == lltok::kw_atomic) {
     isAtomic = true;
@@ -6165,7 +6187,7 @@ int LLParser::ParseStore(Instruction *&Inst, PerFunctionState &PFS) {
   if (ParseTypeAndValue(Val, Loc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after store operand") ||
       ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
-      ParseScopeAndOrdering(isAtomic, Scope, Ordering) ||
+      ParseScopeAndOrdering(isAtomic, SSID, Ordering) ||
       ParseOptionalCommaAlign(Alignment, AteExtraComma))
     return true;
@@ -6181,7 +6203,7 @@ int LLParser::ParseStore(Instruction *&Inst, PerFunctionState &PFS) {
       Ordering == AtomicOrdering::AcquireRelease)
     return Error(Loc, "atomic store cannot use Acquire ordering");
 
-  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, Scope);
+  Inst = new StoreInst(Val, Ptr, isVolatile, Alignment, Ordering, SSID);
   return AteExtraComma ? InstExtraComma : InstNormal;
 }
@@ -6193,7 +6215,7 @@ int LLParser::ParseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
   bool AteExtraComma = false;
   AtomicOrdering SuccessOrdering = AtomicOrdering::NotAtomic;
   AtomicOrdering FailureOrdering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   bool isWeak = false;
@@ -6208,7 +6230,7 @@ int LLParser::ParseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
       ParseTypeAndValue(Cmp, CmpLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after cmpxchg cmp operand") ||
       ParseTypeAndValue(New, NewLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, SuccessOrdering) ||
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, SuccessOrdering) ||
       ParseOrdering(FailureOrdering))
     return true;
@@ -6231,7 +6253,7 @@ int LLParser::ParseCmpXchg(Instruction *&Inst, PerFunctionState &PFS) {
   if (!New->getType()->isFirstClassType())
     return Error(NewLoc, "cmpxchg operand must be a first class value");
 
   AtomicCmpXchgInst *CXI = new AtomicCmpXchgInst(
-      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, Scope);
+      Ptr, Cmp, New, SuccessOrdering, FailureOrdering, SSID);
 
   CXI->setVolatile(isVolatile);
   CXI->setWeak(isWeak);
   Inst = CXI;
@@ -6245,7 +6267,7 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
   Value *Ptr, *Val; LocTy PtrLoc, ValLoc;
   bool AteExtraComma = false;
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
+  SyncScope::ID SSID = SyncScope::System;
   bool isVolatile = false;
   AtomicRMWInst::BinOp Operation;
@@ -6271,7 +6293,7 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
   if (ParseTypeAndValue(Ptr, PtrLoc, PFS) ||
       ParseToken(lltok::comma, "expected ',' after atomicrmw address") ||
       ParseTypeAndValue(Val, ValLoc, PFS) ||
-      ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+      ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
@@ -6288,7 +6310,7 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
                    " integer");
 
   AtomicRMWInst *RMWI =
-      new AtomicRMWInst(Operation, Ptr, Val, Ordering, Scope);
+      new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
   RMWI->setVolatile(isVolatile);
   Inst = RMWI;
   return AteExtraComma ? InstExtraComma : InstNormal;
@@ -6298,8 +6320,8 @@ int LLParser::ParseAtomicRMW(Instruction *&Inst, PerFunctionState &PFS) {
 ///   ::= 'fence' 'singlethread'? AtomicOrdering
 int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
   AtomicOrdering Ordering = AtomicOrdering::NotAtomic;
-  SynchronizationScope Scope = CrossThread;
-  if (ParseScopeAndOrdering(true /*Always atomic*/, Scope, Ordering))
+  SyncScope::ID SSID = SyncScope::System;
+  if (ParseScopeAndOrdering(true /*Always atomic*/, SSID, Ordering))
     return true;
 
   if (Ordering == AtomicOrdering::Unordered)
@@ -6307,7 +6329,7 @@ int LLParser::ParseFence(Instruction *&Inst, PerFunctionState &PFS) {
   if (Ordering == AtomicOrdering::Monotonic)
     return TokError("fence cannot be monotonic");
 
-  Inst = new FenceInst(Context, Ordering, Scope);
+  Inst = new FenceInst(Context, Ordering, SSID);
   return InstNormal;
 }
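
A round-trip check of the syntax accepted by ParseScope, as a hedged sketch
using the public parseAssemblyString API (the IR string and function name are
illustrative):

    #include "llvm/AsmParser/Parser.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/SourceMgr.h"
    #include <memory>

    using namespace llvm;

    bool checkSyncScopeRoundTrip(LLVMContext &Ctx) {
      SMDiagnostic Err;
      std::unique_ptr<Module> M = parseAssemblyString(
          "define void @f() {\n"
          "  fence syncscope(\"agent\") seq_cst\n"
          "  ret void\n"
          "}\n",
          Err, Ctx);
      if (!M)
        return false;

      // The parser maps the scope name to an ID via the LLVMContext, so the
      // fence's ID matches what getOrInsertSyncScopeID returns for "agent".
      auto *FI = cast<FenceInst>(&M->getFunction("f")->front().front());
      return FI->getSyncScopeID() == Ctx.getOrInsertSyncScopeID("agent");
    }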


@@ -241,8 +241,9 @@ namespace llvm {
     bool ParseOptionalCallingConv(unsigned &CC);
     bool ParseOptionalAlignment(unsigned &Alignment);
     bool ParseOptionalDerefAttrBytes(lltok::Kind AttrKind, uint64_t &Bytes);
-    bool ParseScopeAndOrdering(bool isAtomic, SynchronizationScope &Scope,
+    bool ParseScopeAndOrdering(bool isAtomic, SyncScope::ID &SSID,
                                AtomicOrdering &Ordering);
+    bool ParseScope(SyncScope::ID &SSID);
     bool ParseOrdering(AtomicOrdering &Ordering);
     bool ParseOptionalStackAlignment(unsigned &Alignment);
     bool ParseOptionalCommaAlign(unsigned &Alignment, bool &AteExtraComma);

View File

@@ -93,7 +93,7 @@ enum Kind {
   kw_release,
   kw_acq_rel,
   kw_seq_cst,
-  kw_singlethread,
+  kw_syncscope,
   kw_nnan,
   kw_ninf,
   kw_nsz,


@@ -513,6 +513,7 @@ class BitcodeReader : public BitcodeReaderBase, public GVMaterializer {
   TBAAVerifier TBAAVerifyHelper;
 
   std::vector<std::string> BundleTags;
+  SmallVector<SyncScope::ID, 8> SSIDs;
 
 public:
   BitcodeReader(BitstreamCursor Stream, StringRef Strtab,
@@ -648,6 +649,7 @@ private:
   Error parseTypeTable();
   Error parseTypeTableBody();
   Error parseOperandBundleTags();
+  Error parseSyncScopeNames();
 
   Expected<Value *> recordValue(SmallVectorImpl<uint64_t> &Record,
                                 unsigned NameIndex, Triple &TT);
@@ -668,6 +670,8 @@ private:
   Error findFunctionInStream(
       Function *F,
       DenseMap<Function *, uint64_t>::iterator DeferredFunctionInfoIterator);
+
+  SyncScope::ID getDecodedSyncScopeID(unsigned Val);
 };
 
 /// Class to manage reading and parsing function summary index bitcode
@@ -998,14 +1002,6 @@ static AtomicOrdering getDecodedOrdering(unsigned Val) {
   }
 }
 
-static SynchronizationScope getDecodedSynchScope(unsigned Val) {
-  switch (Val) {
-  case bitc::SYNCHSCOPE_SINGLETHREAD: return SingleThread;
-  default: // Map unknown scopes to cross-thread.
-  case bitc::SYNCHSCOPE_CROSSTHREAD: return CrossThread;
-  }
-}
-
 static Comdat::SelectionKind getDecodedComdatSelectionKind(unsigned Val) {
   switch (Val) {
   default: // Map unknown selection kinds to any.
@@ -1745,6 +1741,44 @@ Error BitcodeReader::parseOperandBundleTags() {
   }
 }
 
+Error BitcodeReader::parseSyncScopeNames() {
+  if (Stream.EnterSubBlock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID))
+    return error("Invalid record");
+
+  if (!SSIDs.empty())
+    return error("Invalid multiple synchronization scope names blocks");
+
+  SmallVector<uint64_t, 64> Record;
+  while (true) {
+    BitstreamEntry Entry = Stream.advanceSkippingSubblocks();
+    switch (Entry.Kind) {
+    case BitstreamEntry::SubBlock: // Handled for us already.
+    case BitstreamEntry::Error:
+      return error("Malformed block");
+    case BitstreamEntry::EndBlock:
+      if (SSIDs.empty())
+        return error("Invalid empty synchronization scope names block");
+      return Error::success();
+    case BitstreamEntry::Record:
+      // The interesting case.
+      break;
+    }
+
+    // Synchronization scope names are implicitly mapped to synchronization
+    // scope IDs by their order.
+    if (Stream.readRecord(Entry.ID, Record) != bitc::SYNC_SCOPE_NAME)
+      return error("Invalid record");
+
+    SmallString<16> SSN;
+    if (convertToString(Record, 0, SSN))
+      return error("Invalid record");
+
+    SSIDs.push_back(Context.getOrInsertSyncScopeID(SSN));
+    Record.clear();
+  }
+}
+
 /// Associate a value with its name from the given index in the provided record.
 Expected<Value *> BitcodeReader::recordValue(SmallVectorImpl<uint64_t> &Record,
                                              unsigned NameIndex, Triple &TT) {
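
The reader above relies purely on record order: the Nth SYNC_SCOPE_NAME record
defines the name for the Nth ID handed out by getOrInsertSyncScopeID. A hedged
sketch of the matching writer-side loop (the actual BitcodeWriter change is
part of this commit but not shown in this excerpt):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/Bitcode/BitstreamWriter.h"
    #include "llvm/Bitcode/LLVMBitCodes.h"
    #include "llvm/IR/Module.h"

    using namespace llvm;

    void writeSyncScopeNames(BitstreamWriter &Stream, const Module &M) {
      SmallVector<StringRef, 8> SSNs;
      M.getContext().getSyncScopeNames(SSNs);
      if (SSNs.empty())
        return;

      // One record per name, emitted in increasing ID order, so the reader
      // can rebuild the mapping by index.
      Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
      SmallVector<uint64_t, 64> Record;
      for (StringRef SSN : SSNs) {
        Record.append(SSN.begin(), SSN.end());
        Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record);
        Record.clear();
      }
      Stream.ExitBlock();
    }
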
@@ -3132,6 +3166,10 @@ Error BitcodeReader::parseModule(uint64_t ResumeBit,
         if (Error Err = parseOperandBundleTags())
           return Err;
         break;
+      case bitc::SYNC_SCOPE_NAMES_BLOCK_ID:
+        if (Error Err = parseSyncScopeNames())
+          return Err;
+        break;
       }
       continue;
@@ -4204,7 +4242,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       break;
     }
     case bitc::FUNC_CODE_INST_LOADATOMIC: {
-      // LOADATOMIC: [opty, op, align, vol, ordering, synchscope]
+      // LOADATOMIC: [opty, op, align, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Op;
       if (getValueTypePair(Record, OpNum, NextValueNo, Op) ||
@@ -4226,12 +4264,12 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
         return error("Invalid record");
       if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
 
       unsigned Align;
       if (Error Err = parseAlignmentValue(Record[OpNum], Align))
         return Err;
-      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new LoadInst(Op, "", Record[OpNum+1], Align, Ordering, SSID);
 
       InstructionList.push_back(I);
       break;
@@ -4260,7 +4298,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
     }
     case bitc::FUNC_CODE_INST_STOREATOMIC:
     case bitc::FUNC_CODE_INST_STOREATOMIC_OLD: {
-      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, synchscope]
+      // STOREATOMIC: [ptrty, ptr, val, align, vol, ordering, ssid]
       unsigned OpNum = 0;
       Value *Val, *Ptr;
       if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@@ -4280,20 +4318,20 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
           Ordering == AtomicOrdering::Acquire ||
           Ordering == AtomicOrdering::AcquireRelease)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
       if (Ordering != AtomicOrdering::NotAtomic && Record[OpNum] == 0)
         return error("Invalid record");
 
       unsigned Align;
       if (Error Err = parseAlignmentValue(Record[OpNum], Align))
         return Err;
-      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SynchScope);
+      I = new StoreInst(Val, Ptr, Record[OpNum+1], Align, Ordering, SSID);
 
       InstructionList.push_back(I);
       break;
     }
     case bitc::FUNC_CODE_INST_CMPXCHG_OLD:
     case bitc::FUNC_CODE_INST_CMPXCHG: {
-      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, synchscope,
+      // CMPXCHG:[ptrty, ptr, cmp, new, vol, successordering, ssid,
       //          failureordering?, isweak?]
       unsigned OpNum = 0;
       Value *Ptr, *Cmp, *New;
@@ -4310,7 +4348,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
       if (SuccessOrdering == AtomicOrdering::NotAtomic ||
           SuccessOrdering == AtomicOrdering::Unordered)
         return error("Invalid record");
-      SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 2]);
+      SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 2]);
 
       if (Error Err = typeCheckLoadStoreInst(Cmp->getType(), Ptr->getType()))
         return Err;
@@ -4322,7 +4360,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
         FailureOrdering = getDecodedOrdering(Record[OpNum + 3]);
 
       I = new AtomicCmpXchgInst(Ptr, Cmp, New, SuccessOrdering, FailureOrdering,
-                                SynchScope);
+                                SSID);
       cast<AtomicCmpXchgInst>(I)->setVolatile(Record[OpNum]);
 
       if (Record.size() < 8) {
@ -4339,7 +4377,7 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
break; break;
} }
case bitc::FUNC_CODE_INST_ATOMICRMW: { case bitc::FUNC_CODE_INST_ATOMICRMW: {
// ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, synchscope] // ATOMICRMW:[ptrty, ptr, val, op, vol, ordering, ssid]
unsigned OpNum = 0; unsigned OpNum = 0;
Value *Ptr, *Val; Value *Ptr, *Val;
if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) || if (getValueTypePair(Record, OpNum, NextValueNo, Ptr) ||
@ -4356,13 +4394,13 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
if (Ordering == AtomicOrdering::NotAtomic || if (Ordering == AtomicOrdering::NotAtomic ||
Ordering == AtomicOrdering::Unordered) Ordering == AtomicOrdering::Unordered)
return error("Invalid record"); return error("Invalid record");
SynchronizationScope SynchScope = getDecodedSynchScope(Record[OpNum + 3]); SyncScope::ID SSID = getDecodedSyncScopeID(Record[OpNum + 3]);
I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SynchScope); I = new AtomicRMWInst(Operation, Ptr, Val, Ordering, SSID);
cast<AtomicRMWInst>(I)->setVolatile(Record[OpNum+1]); cast<AtomicRMWInst>(I)->setVolatile(Record[OpNum+1]);
InstructionList.push_back(I); InstructionList.push_back(I);
break; break;
} }
case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, synchscope] case bitc::FUNC_CODE_INST_FENCE: { // FENCE:[ordering, ssid]
if (2 != Record.size()) if (2 != Record.size())
return error("Invalid record"); return error("Invalid record");
AtomicOrdering Ordering = getDecodedOrdering(Record[0]); AtomicOrdering Ordering = getDecodedOrdering(Record[0]);
@ -4370,8 +4408,8 @@ Error BitcodeReader::parseFunctionBody(Function *F) {
Ordering == AtomicOrdering::Unordered || Ordering == AtomicOrdering::Unordered ||
Ordering == AtomicOrdering::Monotonic) Ordering == AtomicOrdering::Monotonic)
return error("Invalid record"); return error("Invalid record");
SynchronizationScope SynchScope = getDecodedSynchScope(Record[1]); SyncScope::ID SSID = getDecodedSyncScopeID(Record[1]);
I = new FenceInst(Context, Ordering, SynchScope); I = new FenceInst(Context, Ordering, SSID);
InstructionList.push_back(I); InstructionList.push_back(I);
break; break;
} }
@ -4567,6 +4605,14 @@ Error BitcodeReader::findFunctionInStream(
return Error::success(); return Error::success();
} }
SyncScope::ID BitcodeReader::getDecodedSyncScopeID(unsigned Val) {
if (Val == SyncScope::SingleThread || Val == SyncScope::System)
return SyncScope::ID(Val);
if (Val >= SSIDs.size())
return SyncScope::System; // Map unknown synchronization scopes to system.
return SSIDs[Val];
}
//===----------------------------------------------------------------------===// //===----------------------------------------------------------------------===//
// GVMaterializer implementation // GVMaterializer implementation
//===----------------------------------------------------------------------===// //===----------------------------------------------------------------------===//
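For orientation between files: the reader above relies on a small decode contract. Values 0 and 1 are the pre-defined "singlethread" and "system" scopes, anything else indexes the names read from SYNC_SCOPE_NAMES_BLOCK in record order, and out-of-range values conservatively map to the system scope. A minimal standalone sketch of that rule (illustrative only, not the reader's code):

  #include <cstdint>
  #include <vector>

  // Illustrative stand-ins for SyncScope::ID values; 0 and 1 mirror the
  // pre-defined SingleThread and System scopes.
  enum : unsigned { SingleThreadID = 0, SystemID = 1 };

  unsigned decodeSyncScopeID(uint64_t Val, const std::vector<unsigned> &SSIDs) {
    if (Val == SingleThreadID || Val == SystemID)
      return unsigned(Val);  // pre-defined scopes need no table lookup
    if (Val >= SSIDs.size())
      return SystemID;       // unknown scope: fall back to system, as above
    return SSIDs[Val];       // ID registered while parsing the names block
  }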
@@ -266,6 +266,7 @@ private:
                          const GlobalObject &GO);
   void writeModuleMetadataKinds();
   void writeOperandBundleTags();
+  void writeSyncScopeNames();
   void writeConstants(unsigned FirstVal, unsigned LastVal, bool isGlobal);
   void writeModuleConstants();
   bool pushValueAndType(const Value *V, unsigned InstID,

@@ -316,6 +317,10 @@ private:
     return VE.getValueID(VI.getValue());
   }
   std::map<GlobalValue::GUID, unsigned> &valueIds() { return GUIDToValueIdMap; }
+
+  unsigned getEncodedSyncScopeID(SyncScope::ID SSID) {
+    return unsigned(SSID);
+  }
 };

 /// Class to manage the bitcode writing for a combined index.

@@ -485,14 +490,6 @@ static unsigned getEncodedOrdering(AtomicOrdering Ordering) {
   llvm_unreachable("Invalid ordering");
 }

-static unsigned getEncodedSynchScope(SynchronizationScope SynchScope) {
-  switch (SynchScope) {
-  case SingleThread: return bitc::SYNCHSCOPE_SINGLETHREAD;
-  case CrossThread: return bitc::SYNCHSCOPE_CROSSTHREAD;
-  }
-  llvm_unreachable("Invalid synch scope");
-}
-
 static void writeStringRecord(BitstreamWriter &Stream, unsigned Code,
                               StringRef Str, unsigned AbbrevToUse) {
   SmallVector<unsigned, 64> Vals;

@@ -2042,6 +2039,24 @@ void ModuleBitcodeWriter::writeOperandBundleTags() {
   Stream.ExitBlock();
 }

+void ModuleBitcodeWriter::writeSyncScopeNames() {
+  SmallVector<StringRef, 8> SSNs;
+  M.getContext().getSyncScopeNames(SSNs);
+  if (SSNs.empty())
+    return;
+
+  Stream.EnterSubblock(bitc::SYNC_SCOPE_NAMES_BLOCK_ID, 2);
+
+  SmallVector<uint64_t, 64> Record;
+  for (auto SSN : SSNs) {
+    Record.append(SSN.begin(), SSN.end());
+    Stream.EmitRecord(bitc::SYNC_SCOPE_NAME, Record, 0);
+    Record.clear();
+  }
+
+  Stream.ExitBlock();
+}
+
 static void emitSignedInt64(SmallVectorImpl<uint64_t> &Vals, uint64_t V) {
   if ((int64_t)V >= 0)
     Vals.push_back(V << 1);

@@ -2658,7 +2673,7 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
     Vals.push_back(cast<LoadInst>(I).isVolatile());
     if (cast<LoadInst>(I).isAtomic()) {
       Vals.push_back(getEncodedOrdering(cast<LoadInst>(I).getOrdering()));
-      Vals.push_back(getEncodedSynchScope(cast<LoadInst>(I).getSynchScope()));
+      Vals.push_back(getEncodedSyncScopeID(cast<LoadInst>(I).getSyncScopeID()));
     }
     break;
   case Instruction::Store:

@@ -2672,7 +2687,8 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
     Vals.push_back(cast<StoreInst>(I).isVolatile());
     if (cast<StoreInst>(I).isAtomic()) {
       Vals.push_back(getEncodedOrdering(cast<StoreInst>(I).getOrdering()));
-      Vals.push_back(getEncodedSynchScope(cast<StoreInst>(I).getSynchScope()));
+      Vals.push_back(
+          getEncodedSyncScopeID(cast<StoreInst>(I).getSyncScopeID()));
     }
     break;
   case Instruction::AtomicCmpXchg:

@@ -2684,7 +2700,7 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
     Vals.push_back(
         getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getSuccessOrdering()));
     Vals.push_back(
-        getEncodedSynchScope(cast<AtomicCmpXchgInst>(I).getSynchScope()));
+        getEncodedSyncScopeID(cast<AtomicCmpXchgInst>(I).getSyncScopeID()));
     Vals.push_back(
         getEncodedOrdering(cast<AtomicCmpXchgInst>(I).getFailureOrdering()));
     Vals.push_back(cast<AtomicCmpXchgInst>(I).isWeak());

@@ -2698,12 +2714,12 @@ void ModuleBitcodeWriter::writeInstruction(const Instruction &I,
     Vals.push_back(cast<AtomicRMWInst>(I).isVolatile());
     Vals.push_back(getEncodedOrdering(cast<AtomicRMWInst>(I).getOrdering()));
     Vals.push_back(
-        getEncodedSynchScope(cast<AtomicRMWInst>(I).getSynchScope()));
+        getEncodedSyncScopeID(cast<AtomicRMWInst>(I).getSyncScopeID()));
     break;
   case Instruction::Fence:
     Code = bitc::FUNC_CODE_INST_FENCE;
     Vals.push_back(getEncodedOrdering(cast<FenceInst>(I).getOrdering()));
-    Vals.push_back(getEncodedSynchScope(cast<FenceInst>(I).getSynchScope()));
+    Vals.push_back(getEncodedSyncScopeID(cast<FenceInst>(I).getSyncScopeID()));
     break;
   case Instruction::Call: {
     const CallInst &CI = cast<CallInst>(I);

@@ -3716,6 +3732,7 @@ void ModuleBitcodeWriter::write() {
     writeUseListBlock(nullptr);

   writeOperandBundleTags();
+  writeSyncScopeNames();

   // Emit function bodies.
   DenseMap<const Function *, uint64_t> FunctionToBitcodeIndex;
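The writer side only works because scope IDs are assumed dense and positional: getSyncScopeNames() returns names indexed by their SyncScope::ID, so getEncodedSyncScopeID can be the identity and the reader rebuilds the same mapping by position in SYNC_SCOPE_NAMES_BLOCK. A small illustrative check of that assumption (not part of the patch):

  #include "llvm/ADT/SmallVector.h"
  #include "llvm/ADT/StringRef.h"
  #include "llvm/IR/LLVMContext.h"
  #include <cassert>
  using namespace llvm;

  void checkDenseScopeIDs(LLVMContext &Ctx) {
    SmallVector<StringRef, 8> SSNs;
    Ctx.getSyncScopeNames(SSNs); // SSNs[ID] is the name registered for ID
    for (unsigned ID = 0; ID != SSNs.size(); ++ID)
      assert(Ctx.getOrInsertSyncScopeID(SSNs[ID]) == ID &&
             "IDs are positional, so encoding can be the identity");
  }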
@@ -361,7 +361,7 @@ LoadInst *AtomicExpand::convertAtomicLoadToIntegerType(LoadInst *LI) {
   auto *NewLI = Builder.CreateLoad(NewAddr);
   NewLI->setAlignment(LI->getAlignment());
   NewLI->setVolatile(LI->isVolatile());
-  NewLI->setAtomic(LI->getOrdering(), LI->getSynchScope());
+  NewLI->setAtomic(LI->getOrdering(), LI->getSyncScopeID());
   DEBUG(dbgs() << "Replaced " << *LI << " with " << *NewLI << "\n");

   Value *NewVal = Builder.CreateBitCast(NewLI, LI->getType());

@@ -444,7 +444,7 @@ StoreInst *AtomicExpand::convertAtomicStoreToIntegerType(StoreInst *SI) {
   StoreInst *NewSI = Builder.CreateStore(NewVal, NewAddr);
   NewSI->setAlignment(SI->getAlignment());
   NewSI->setVolatile(SI->isVolatile());
-  NewSI->setAtomic(SI->getOrdering(), SI->getSynchScope());
+  NewSI->setAtomic(SI->getOrdering(), SI->getSyncScopeID());
   DEBUG(dbgs() << "Replaced " << *SI << " with " << *NewSI << "\n");
   SI->eraseFromParent();
   return NewSI;

@@ -801,7 +801,7 @@ void AtomicExpand::expandPartwordCmpXchg(AtomicCmpXchgInst *CI) {
   Value *FullWord_Cmp = Builder.CreateOr(Loaded_MaskOut, Cmp_Shifted);
   AtomicCmpXchgInst *NewCI = Builder.CreateAtomicCmpXchg(
       PMV.AlignedAddr, FullWord_Cmp, FullWord_NewVal, CI->getSuccessOrdering(),
-      CI->getFailureOrdering(), CI->getSynchScope());
+      CI->getFailureOrdering(), CI->getSyncScopeID());
   NewCI->setVolatile(CI->isVolatile());
   // When we're building a strong cmpxchg, we need a loop, so you
   // might think we could use a weak cmpxchg inside. But, using strong

@@ -924,7 +924,7 @@ AtomicCmpXchgInst *AtomicExpand::convertCmpXchgToIntegerType(AtomicCmpXchgInst *
   auto *NewCI = Builder.CreateAtomicCmpXchg(NewAddr, NewCmp, NewNewVal,
                                             CI->getSuccessOrdering(),
                                             CI->getFailureOrdering(),
-                                            CI->getSynchScope());
+                                            CI->getSyncScopeID());
   NewCI->setVolatile(CI->isVolatile());
   NewCI->setWeak(CI->isWeak());
   DEBUG(dbgs() << "Replaced " << *CI << " with " << *NewCI << "\n");
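The AtomicExpand hunks all follow one pattern: when a pass rewrites an atomic instruction, the ordering and the scope must travel together, or the replacement silently widens to system scope. A hedged sketch of that pattern for loads (helper name and shape are illustrative, not the pass's own code):

  #include "llvm/IR/Instructions.h"
  using namespace llvm;

  static void copyAtomicProperties(LoadInst *To, const LoadInst *From) {
    To->setAlignment(From->getAlignment());  // keep layout assumptions
    To->setVolatile(From->isVolatile());     // keep side-effect semantics
    To->setAtomic(From->getOrdering(),
                  From->getSyncScopeID());   // ordering + scope as a pair
  }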
@@ -345,7 +345,7 @@ bool IRTranslator::translateLoad(const User &U, MachineIRBuilder &MIRBuilder) {
       *MF->getMachineMemOperand(MachinePointerInfo(LI.getPointerOperand()),
                                 Flags, DL->getTypeStoreSize(LI.getType()),
                                 getMemOpAlignment(LI), AAMDNodes(), nullptr,
-                                LI.getSynchScope(), LI.getOrdering()));
+                                LI.getSyncScopeID(), LI.getOrdering()));
   return true;
 }

@@ -363,7 +363,7 @@ bool IRTranslator::translateStore(const User &U, MachineIRBuilder &MIRBuilder) {
       *MF->getMachineMemOperand(
           MachinePointerInfo(SI.getPointerOperand()), Flags,
           DL->getTypeStoreSize(SI.getValueOperand()->getType()),
-          getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSynchScope(),
+          getMemOpAlignment(SI), AAMDNodes(), nullptr, SI.getSyncScopeID(),
           SI.getOrdering()));
   return true;
 }
@@ -365,6 +365,14 @@ static Cursor maybeLexIRValue(Cursor C, MIToken &Token,
   return lexName(C, Token, MIToken::NamedIRValue, Rule.size(), ErrorCallback);
 }

+static Cursor maybeLexStringConstant(Cursor C, MIToken &Token,
+                                     ErrorCallbackType ErrorCallback) {
+  if (C.peek() != '"')
+    return None;
+  return lexName(C, Token, MIToken::StringConstant, /*PrefixLength=*/0,
+                 ErrorCallback);
+}
+
 static Cursor lexVirtualRegister(Cursor C, MIToken &Token) {
   auto Range = C;
   C.advance(); // Skip '%'

@@ -630,6 +638,8 @@ StringRef llvm::lexMIToken(StringRef Source, MIToken &Token,
     return R.remaining();
   if (Cursor R = maybeLexEscapedIRValue(C, Token, ErrorCallback))
     return R.remaining();
+  if (Cursor R = maybeLexStringConstant(C, Token, ErrorCallback))
+    return R.remaining();

   Token.reset(MIToken::Error, C.remaining());
   ErrorCallback(C.location(),
@@ -127,7 +127,8 @@ struct MIToken {
     NamedIRValue,
     IRValue,
     QuotedIRValue, // `<constant value>`
-    SubRegisterIndex
+    SubRegisterIndex,
+    StringConstant
   };

 private:
@@ -229,6 +229,7 @@ public:
   bool parseMemoryOperandFlag(MachineMemOperand::Flags &Flags);
   bool parseMemoryPseudoSourceValue(const PseudoSourceValue *&PSV);
   bool parseMachinePointerInfo(MachinePointerInfo &Dest);
+  bool parseOptionalScope(LLVMContext &Context, SyncScope::ID &SSID);
   bool parseOptionalAtomicOrdering(AtomicOrdering &Order);
   bool parseMachineMemoryOperand(MachineMemOperand *&Dest);

@@ -318,6 +319,10 @@ private:
   ///
   /// Return true if the name isn't a name of a bitmask target flag.
   bool getBitmaskTargetFlag(StringRef Name, unsigned &Flag);
+
+  /// parseStringConstant
+  ///   ::= StringConstant
+  bool parseStringConstant(std::string &Result);
 };

 } // end anonymous namespace

@@ -2135,6 +2140,26 @@ bool MIParser::parseMachinePointerInfo(MachinePointerInfo &Dest) {
   return false;
 }

+bool MIParser::parseOptionalScope(LLVMContext &Context,
+                                  SyncScope::ID &SSID) {
+  SSID = SyncScope::System;
+  if (Token.is(MIToken::Identifier) && Token.stringValue() == "syncscope") {
+    lex();
+    if (expectAndConsume(MIToken::lparen))
+      return error("expected '(' in syncscope");
+
+    std::string SSN;
+    if (parseStringConstant(SSN))
+      return true;
+
+    SSID = Context.getOrInsertSyncScopeID(SSN);
+    if (expectAndConsume(MIToken::rparen))
+      return error("expected ')' in syncscope");
+  }
+
+  return false;
+}
+
 bool MIParser::parseOptionalAtomicOrdering(AtomicOrdering &Order) {
   Order = AtomicOrdering::NotAtomic;
   if (Token.isNot(MIToken::Identifier))

@@ -2174,12 +2199,10 @@ bool MIParser::parseMachineMemoryOperand(MachineMemOperand *&Dest) {
     Flags |= MachineMemOperand::MOStore;
   lex();

-  // Optional "singlethread" scope.
-  SynchronizationScope Scope = SynchronizationScope::CrossThread;
-  if (Token.is(MIToken::Identifier) && Token.stringValue() == "singlethread") {
-    Scope = SynchronizationScope::SingleThread;
-    lex();
-  }
+  // Optional synchronization scope.
+  SyncScope::ID SSID;
+  if (parseOptionalScope(MF.getFunction()->getContext(), SSID))
+    return true;

   // Up to two atomic orderings (cmpxchg provides guarantees on failure).
   AtomicOrdering Order, FailureOrder;

@@ -2244,7 +2267,7 @@ bool MIParser::parseMachineMemoryOperand(MachineMemOperand *&Dest) {
   if (expectAndConsume(MIToken::rparen))
     return true;
   Dest = MF.getMachineMemOperand(Ptr, Flags, Size, BaseAlignment, AAInfo, Range,
-                                 Scope, Order, FailureOrder);
+                                 SSID, Order, FailureOrder);
   return false;
 }

@@ -2457,6 +2480,14 @@ bool MIParser::getBitmaskTargetFlag(StringRef Name, unsigned &Flag) {
   return false;
 }

+bool MIParser::parseStringConstant(std::string &Result) {
+  if (Token.isNot(MIToken::StringConstant))
+    return error("expected string constant");
+  Result = Token.stringValue();
+  lex();
+  return false;
+}
+
 bool llvm::parseMachineBasicBlockDefinitions(PerFunctionMIParsingState &PFS,
                                              StringRef Src,
                                              SMDiagnostic &Error) {
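Taken together, the lexer and parser changes replace the bare singlethread keyword in machine memory operands with a general syncscope("<name>") clause. Roughly the operand text now accepted, shown as C++ string literals for context (the exact surrounding MIR is illustrative, and "agent" is a made-up target scope):

  // Old form used a bare keyword; the new form takes a quoted scope name.
  const char *SingleThreadMMO =
      "(volatile load syncscope(\"singlethread\") seq_cst 4 from %ir.p)";
  const char *TargetScopeMMO =
      "(volatile load syncscope(\"agent\") seq_cst 4 from %ir.p)";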
@@ -18,6 +18,7 @@
 #include "llvm/ADT/SmallPtrSet.h"
 #include "llvm/ADT/SmallVector.h"
 #include "llvm/ADT/STLExtras.h"
+#include "llvm/ADT/StringExtras.h"
 #include "llvm/ADT/StringRef.h"
 #include "llvm/ADT/Twine.h"
 #include "llvm/CodeGen/GlobalISel/RegisterBank.h"

@@ -139,6 +140,8 @@ class MIPrinter {
   ModuleSlotTracker &MST;
   const DenseMap<const uint32_t *, unsigned> &RegisterMaskIds;
   const DenseMap<int, FrameIndexOperand> &StackObjectOperandMapping;
+  /// Synchronization scope names registered with LLVMContext.
+  SmallVector<StringRef, 8> SSNs;

   bool canPredictBranchProbabilities(const MachineBasicBlock &MBB) const;
   bool canPredictSuccessors(const MachineBasicBlock &MBB) const;

@@ -162,7 +165,8 @@ public:
   void print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
              unsigned I, bool ShouldPrintRegisterTies,
              LLT TypeToPrint, bool IsDef = false);
-  void print(const MachineMemOperand &Op);
+  void print(const LLVMContext &Context, const MachineMemOperand &Op);
+  void printSyncScope(const LLVMContext &Context, SyncScope::ID SSID);

   void print(const MCCFIInstruction &CFI, const TargetRegisterInfo *TRI);
 };

@@ -731,11 +735,12 @@ void MIPrinter::print(const MachineInstr &MI) {

   if (!MI.memoperands_empty()) {
     OS << " :: ";
+    const LLVMContext &Context = MF->getFunction()->getContext();
     bool NeedComma = false;
     for (const auto *Op : MI.memoperands()) {
       if (NeedComma)
         OS << ", ";
-      print(*Op);
+      print(Context, *Op);
       NeedComma = true;
     }
   }

@@ -1031,7 +1036,7 @@ void MIPrinter::print(const MachineOperand &Op, const TargetRegisterInfo *TRI,
   }
 }

-void MIPrinter::print(const MachineMemOperand &Op) {
+void MIPrinter::print(const LLVMContext &Context, const MachineMemOperand &Op) {
   OS << '(';
   // TODO: Print operand's target specific flags.
   if (Op.isVolatile())

@@ -1049,8 +1054,7 @@ void MIPrinter::print(const MachineMemOperand &Op) {
     OS << "store ";
   }

-  if (Op.getSynchScope() == SynchronizationScope::SingleThread)
-    OS << "singlethread ";
+  printSyncScope(Context, Op.getSyncScopeID());

   if (Op.getOrdering() != AtomicOrdering::NotAtomic)
     OS << toIRString(Op.getOrdering()) << ' ';

@@ -1119,6 +1123,23 @@ void MIPrinter::print(const MachineMemOperand &Op) {
   OS << ')';
 }

+void MIPrinter::printSyncScope(const LLVMContext &Context, SyncScope::ID SSID) {
+  switch (SSID) {
+  case SyncScope::System: {
+    break;
+  }
+  default: {
+    if (SSNs.empty())
+      Context.getSyncScopeNames(SSNs);
+
+    OS << "syncscope(\"";
+    PrintEscapedString(SSNs[SSID], OS);
+    OS << "\") ";
+    break;
+  }
+  }
+}
+
 static void printCFIRegister(unsigned DwarfReg, raw_ostream &OS,
                              const TargetRegisterInfo *TRI) {
   int Reg = TRI->getLLVMRegNum(DwarfReg, true);
@@ -305,11 +305,11 @@ MachineFunction::DeleteMachineBasicBlock(MachineBasicBlock *MBB) {
 MachineMemOperand *MachineFunction::getMachineMemOperand(
     MachinePointerInfo PtrInfo, MachineMemOperand::Flags f, uint64_t s,
     unsigned base_alignment, const AAMDNodes &AAInfo, const MDNode *Ranges,
-    SynchronizationScope SynchScope, AtomicOrdering Ordering,
+    SyncScope::ID SSID, AtomicOrdering Ordering,
     AtomicOrdering FailureOrdering) {
   return new (Allocator)
       MachineMemOperand(PtrInfo, f, s, base_alignment, AAInfo, Ranges,
-                        SynchScope, Ordering, FailureOrdering);
+                        SSID, Ordering, FailureOrdering);
 }

 MachineMemOperand *

@@ -320,13 +320,13 @@ MachineFunction::getMachineMemOperand(const MachineMemOperand *MMO,
       MachineMemOperand(MachinePointerInfo(MMO->getValue(),
                                            MMO->getOffset()+Offset),
                         MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                        AAMDNodes(), nullptr, MMO->getSynchScope(),
+                        AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                         MMO->getOrdering(), MMO->getFailureOrdering());
   return new (Allocator)
       MachineMemOperand(MachinePointerInfo(MMO->getPseudoValue(),
                                            MMO->getOffset()+Offset),
                         MMO->getFlags(), Size, MMO->getBaseAlignment(),
-                        AAMDNodes(), nullptr, MMO->getSynchScope(),
+                        AAMDNodes(), nullptr, MMO->getSyncScopeID(),
                         MMO->getOrdering(), MMO->getFailureOrdering());
 }

@@ -359,7 +359,7 @@ MachineFunction::extractLoadMemRefs(MachineInstr::mmo_iterator Begin,
                                 (*I)->getFlags() & ~MachineMemOperand::MOStore,
                                 (*I)->getSize(), (*I)->getBaseAlignment(),
                                 (*I)->getAAInfo(), nullptr,
-                                (*I)->getSynchScope(), (*I)->getOrdering(),
+                                (*I)->getSyncScopeID(), (*I)->getOrdering(),
                                 (*I)->getFailureOrdering());
         Result[Index] = JustLoad;
       }

@@ -393,7 +393,7 @@ MachineFunction::extractStoreMemRefs(MachineInstr::mmo_iterator Begin,
                                 (*I)->getFlags() & ~MachineMemOperand::MOLoad,
                                 (*I)->getSize(), (*I)->getBaseAlignment(),
                                 (*I)->getAAInfo(), nullptr,
-                                (*I)->getSynchScope(), (*I)->getOrdering(),
+                                (*I)->getSyncScopeID(), (*I)->getOrdering(),
                                 (*I)->getFailureOrdering());
         Result[Index] = JustStore;
       }
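A hedged usage sketch of the updated getMachineMemOperand signature above, building a 4-byte cmpxchg-style operand; the helper name and the fixed size/alignment are illustrative, and the trailing arguments carry the scope plus the success and failure orderings:

  #include "llvm/CodeGen/MachineFunction.h"
  #include "llvm/CodeGen/MachineMemOperand.h"
  #include "llvm/IR/LLVMContext.h"
  using namespace llvm;

  MachineMemOperand *makeCmpXchgMMO(MachineFunction &MF, const Value *Ptr,
                                    SyncScope::ID SSID) {
    return MF.getMachineMemOperand(
        MachinePointerInfo(Ptr),
        MachineMemOperand::MOLoad | MachineMemOperand::MOStore,
        /*s=*/4, /*base_alignment=*/4, AAMDNodes(), /*Ranges=*/nullptr, SSID,
        AtomicOrdering::SequentiallyConsistent,   // success ordering
        AtomicOrdering::SequentiallyConsistent);  // cmpxchg failure ordering
  }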
@@ -614,7 +614,7 @@ MachineMemOperand::MachineMemOperand(MachinePointerInfo ptrinfo, Flags f,
                                      uint64_t s, unsigned int a,
                                      const AAMDNodes &AAInfo,
                                      const MDNode *Ranges,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      AtomicOrdering Ordering,
                                      AtomicOrdering FailureOrdering)
   : PtrInfo(ptrinfo), Size(s), FlagVals(f), BaseAlignLog2(Log2_32(a) + 1),

@@ -625,8 +625,8 @@ MachineMemOperand::MachineMemOperand(MachinePointerInfo ptrinfo, Flags f,
   assert(getBaseAlignment() == a && "Alignment is not a power of 2!");
   assert((isLoad() || isStore()) && "Not a load/store!");

-  AtomicInfo.SynchScope = static_cast<unsigned>(SynchScope);
-  assert(getSynchScope() == SynchScope && "Value truncated");
+  AtomicInfo.SSID = static_cast<unsigned>(SSID);
+  assert(getSyncScopeID() == SSID && "Value truncated");

   AtomicInfo.Ordering = static_cast<unsigned>(Ordering);
   assert(getOrdering() == Ordering && "Value truncated");
   AtomicInfo.FailureOrdering = static_cast<unsigned>(FailureOrdering);
@@ -5443,7 +5443,7 @@ SDValue SelectionDAG::getAtomicCmpSwap(
     unsigned Opcode, const SDLoc &dl, EVT MemVT, SDVTList VTs, SDValue Chain,
     SDValue Ptr, SDValue Cmp, SDValue Swp, MachinePointerInfo PtrInfo,
     unsigned Alignment, AtomicOrdering SuccessOrdering,
-    AtomicOrdering FailureOrdering, SynchronizationScope SynchScope) {
+    AtomicOrdering FailureOrdering, SyncScope::ID SSID) {
   assert(Opcode == ISD::ATOMIC_CMP_SWAP ||
          Opcode == ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS);
   assert(Cmp.getValueType() == Swp.getValueType() && "Invalid Atomic Op Types");

@@ -5459,7 +5459,7 @@ SDValue SelectionDAG::getAtomicCmpSwap(
                                             MachineMemOperand::MOStore;
   MachineMemOperand *MMO =
     MF.getMachineMemOperand(PtrInfo, Flags, MemVT.getStoreSize(), Alignment,
-                            AAMDNodes(), nullptr, SynchScope, SuccessOrdering,
+                            AAMDNodes(), nullptr, SSID, SuccessOrdering,
                             FailureOrdering);

   return getAtomicCmpSwap(Opcode, dl, MemVT, VTs, Chain, Ptr, Cmp, Swp, MMO);

@@ -5481,7 +5481,7 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT,
                                 SDValue Chain, SDValue Ptr, SDValue Val,
                                 const Value *PtrVal, unsigned Alignment,
                                 AtomicOrdering Ordering,
-                                SynchronizationScope SynchScope) {
+                                SyncScope::ID SSID) {
   if (Alignment == 0)  // Ensure that codegen never sees alignment 0
     Alignment = getEVTAlignment(MemVT);

@@ -5501,7 +5501,7 @@ SDValue SelectionDAG::getAtomic(unsigned Opcode, const SDLoc &dl, EVT MemVT,
   MachineMemOperand *MMO =
     MF.getMachineMemOperand(MachinePointerInfo(PtrVal), Flags,
                             MemVT.getStoreSize(), Alignment, AAMDNodes(),
-                            nullptr, SynchScope, Ordering);
+                            nullptr, SSID, Ordering);

   return getAtomic(Opcode, dl, MemVT, Chain, Ptr, Val, MMO);
 }
@@ -3990,7 +3990,7 @@ void SelectionDAGBuilder::visitAtomicCmpXchg(const AtomicCmpXchgInst &I) {
   SDLoc dl = getCurSDLoc();
   AtomicOrdering SuccessOrder = I.getSuccessOrdering();
   AtomicOrdering FailureOrder = I.getFailureOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();

   SDValue InChain = getRoot();

@@ -4000,7 +4000,7 @@ void SelectionDAGBuilder::visitAtomicCmpXchg(const AtomicCmpXchgInst &I) {
       ISD::ATOMIC_CMP_SWAP_WITH_SUCCESS, dl, MemVT, VTs, InChain,
       getValue(I.getPointerOperand()), getValue(I.getCompareOperand()),
      getValue(I.getNewValOperand()), MachinePointerInfo(I.getPointerOperand()),
-      /*Alignment=*/ 0, SuccessOrder, FailureOrder, Scope);
+      /*Alignment=*/ 0, SuccessOrder, FailureOrder, SSID);

   SDValue OutChain = L.getValue(2);

@@ -4026,7 +4026,7 @@ void SelectionDAGBuilder::visitAtomicRMW(const AtomicRMWInst &I) {
   case AtomicRMWInst::UMin: NT = ISD::ATOMIC_LOAD_UMIN; break;
   }
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();

   SDValue InChain = getRoot();

@@ -4037,7 +4037,7 @@ void SelectionDAGBuilder::visitAtomicRMW(const AtomicRMWInst &I) {
                   getValue(I.getPointerOperand()),
                   getValue(I.getValOperand()),
                   I.getPointerOperand(),
-                  /* Alignment=*/ 0, Order, Scope);
+                  /* Alignment=*/ 0, Order, SSID);

   SDValue OutChain = L.getValue(1);

@@ -4052,7 +4052,7 @@ void SelectionDAGBuilder::visitFence(const FenceInst &I) {
   Ops[0] = getRoot();
   Ops[1] = DAG.getConstant((unsigned)I.getOrdering(), dl,
                            TLI.getFenceOperandTy(DAG.getDataLayout()));
-  Ops[2] = DAG.getConstant(I.getSynchScope(), dl,
+  Ops[2] = DAG.getConstant(I.getSyncScopeID(), dl,
                            TLI.getFenceOperandTy(DAG.getDataLayout()));
   DAG.setRoot(DAG.getNode(ISD::ATOMIC_FENCE, dl, MVT::Other, Ops));
 }

@@ -4060,7 +4060,7 @@ void SelectionDAGBuilder::visitFence(const FenceInst &I) {
 void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
   SDLoc dl = getCurSDLoc();
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();

   SDValue InChain = getRoot();

@@ -4078,7 +4078,7 @@ void SelectionDAGBuilder::visitAtomicLoad(const LoadInst &I) {
                             VT.getStoreSize(),
                             I.getAlignment() ? I.getAlignment() :
                                                DAG.getEVTAlignment(VT),
-                            AAMDNodes(), nullptr, Scope, Order);
+                            AAMDNodes(), nullptr, SSID, Order);

   InChain = TLI.prepareVolatileOrAtomicLoad(InChain, dl, DAG);
   SDValue L =

@@ -4095,7 +4095,7 @@ void SelectionDAGBuilder::visitAtomicStore(const StoreInst &I) {
   SDLoc dl = getCurSDLoc();
   AtomicOrdering Order = I.getOrdering();
-  SynchronizationScope Scope = I.getSynchScope();
+  SyncScope::ID SSID = I.getSyncScopeID();

   SDValue InChain = getRoot();

@@ -4112,7 +4112,7 @@ void SelectionDAGBuilder::visitAtomicStore(const StoreInst &I) {
                                   getValue(I.getPointerOperand()),
                                   getValue(I.getValueOperand()),
                                   I.getPointerOperand(), I.getAlignment(),
-                                  Order, Scope);
+                                  Order, SSID);

   DAG.setRoot(OutChain);
 }
@@ -2119,6 +2119,8 @@ class AssemblyWriter {
   bool ShouldPreserveUseListOrder;
   UseListOrderStack UseListOrders;
   SmallVector<StringRef, 8> MDNames;
+  /// Synchronization scope names registered with LLVMContext.
+  SmallVector<StringRef, 8> SSNs;

 public:
   /// Construct an AssemblyWriter with an external SlotTracker

@@ -2134,10 +2136,15 @@ public:
   void writeOperand(const Value *Op, bool PrintType);
   void writeParamOperand(const Value *Operand, AttributeSet Attrs);
   void writeOperandBundles(ImmutableCallSite CS);
-  void writeAtomic(AtomicOrdering Ordering, SynchronizationScope SynchScope);
-  void writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+  void writeSyncScope(const LLVMContext &Context,
+                      SyncScope::ID SSID);
+  void writeAtomic(const LLVMContext &Context,
+                   AtomicOrdering Ordering,
+                   SyncScope::ID SSID);
+  void writeAtomicCmpXchg(const LLVMContext &Context,
+                          AtomicOrdering SuccessOrdering,
                           AtomicOrdering FailureOrdering,
-                          SynchronizationScope SynchScope);
+                          SyncScope::ID SSID);

   void writeAllMDNodes();
   void writeMDNode(unsigned Slot, const MDNode *Node);

@@ -2199,30 +2206,42 @@ void AssemblyWriter::writeOperand(const Value *Operand, bool PrintType) {
   WriteAsOperandInternal(Out, Operand, &TypePrinter, &Machine, TheModule);
 }

-void AssemblyWriter::writeAtomic(AtomicOrdering Ordering,
-                                 SynchronizationScope SynchScope) {
+void AssemblyWriter::writeSyncScope(const LLVMContext &Context,
+                                    SyncScope::ID SSID) {
+  switch (SSID) {
+  case SyncScope::System: {
+    break;
+  }
+  default: {
+    if (SSNs.empty())
+      Context.getSyncScopeNames(SSNs);
+
+    Out << " syncscope(\"";
+    PrintEscapedString(SSNs[SSID], Out);
+    Out << "\")";
+    break;
+  }
+  }
+}
+
+void AssemblyWriter::writeAtomic(const LLVMContext &Context,
+                                 AtomicOrdering Ordering,
+                                 SyncScope::ID SSID) {
   if (Ordering == AtomicOrdering::NotAtomic)
     return;

-  switch (SynchScope) {
-  case SingleThread: Out << " singlethread"; break;
-  case CrossThread: break;
-  }
-
+  writeSyncScope(Context, SSID);
   Out << " " << toIRString(Ordering);
 }

-void AssemblyWriter::writeAtomicCmpXchg(AtomicOrdering SuccessOrdering,
+void AssemblyWriter::writeAtomicCmpXchg(const LLVMContext &Context,
+                                        AtomicOrdering SuccessOrdering,
                                         AtomicOrdering FailureOrdering,
-                                        SynchronizationScope SynchScope) {
+                                        SyncScope::ID SSID) {
   assert(SuccessOrdering != AtomicOrdering::NotAtomic &&
          FailureOrdering != AtomicOrdering::NotAtomic);

-  switch (SynchScope) {
-  case SingleThread: Out << " singlethread"; break;
-  case CrossThread: break;
-  }
-
+  writeSyncScope(Context, SSID);
   Out << " " << toIRString(SuccessOrdering);
   Out << " " << toIRString(FailureOrdering);
 }

@@ -3215,21 +3234,22 @@ void AssemblyWriter::printInstruction(const Instruction &I) {
   // Print atomic ordering/alignment for memory operations
   if (const LoadInst *LI = dyn_cast<LoadInst>(&I)) {
     if (LI->isAtomic())
-      writeAtomic(LI->getOrdering(), LI->getSynchScope());
+      writeAtomic(LI->getContext(), LI->getOrdering(), LI->getSyncScopeID());
     if (LI->getAlignment())
       Out << ", align " << LI->getAlignment();
   } else if (const StoreInst *SI = dyn_cast<StoreInst>(&I)) {
     if (SI->isAtomic())
-      writeAtomic(SI->getOrdering(), SI->getSynchScope());
+      writeAtomic(SI->getContext(), SI->getOrdering(), SI->getSyncScopeID());
     if (SI->getAlignment())
       Out << ", align " << SI->getAlignment();
   } else if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(&I)) {
-    writeAtomicCmpXchg(CXI->getSuccessOrdering(), CXI->getFailureOrdering(),
-                       CXI->getSynchScope());
+    writeAtomicCmpXchg(CXI->getContext(), CXI->getSuccessOrdering(),
+                       CXI->getFailureOrdering(), CXI->getSyncScopeID());
   } else if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(&I)) {
-    writeAtomic(RMWI->getOrdering(), RMWI->getSynchScope());
+    writeAtomic(RMWI->getContext(), RMWI->getOrdering(),
+                RMWI->getSyncScopeID());
   } else if (const FenceInst *FI = dyn_cast<FenceInst>(&I)) {
-    writeAtomic(FI->getOrdering(), FI->getSynchScope());
+    writeAtomic(FI->getContext(), FI->getOrdering(), FI->getSyncScopeID());
   }

   // Print Metadata info.
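To see what the new AssemblyWriter paths produce, a hedged sketch that builds an atomic load with a named scope; the printed form in the comment is the expected shape, with "agent" a hypothetical target scope and the pointee type assumed to be i32:

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  Value *emitScopedLoad(IRBuilder<> &B, Value *Ptr) {
    LoadInst *LI = B.CreateLoad(Ptr, "v");
    LI->setAlignment(4);
    LI->setAtomic(AtomicOrdering::SequentiallyConsistent,
                  B.getContext().getOrInsertSyncScopeID("agent"));
    // Expected to print as something like:
    //   %v = load atomic i32, i32* %p syncscope("agent") seq_cst, align 4
    // while the default system scope prints no syncscope(...) clause at all.
    return LI;
  }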
@@ -2756,11 +2756,14 @@ static LLVMAtomicOrdering mapToLLVMOrdering(AtomicOrdering Ordering) {
   llvm_unreachable("Invalid AtomicOrdering value!");
 }

+// TODO: Should this and other atomic instructions support building with
+// "syncscope"?
 LLVMValueRef LLVMBuildFence(LLVMBuilderRef B, LLVMAtomicOrdering Ordering,
                             LLVMBool isSingleThread, const char *Name) {
   return wrap(
     unwrap(B)->CreateFence(mapFromLLVMOrdering(Ordering),
-                           isSingleThread ? SingleThread : CrossThread,
+                           isSingleThread ? SyncScope::SingleThread
+                                          : SyncScope::System,
                            Name));
 }

@@ -3042,7 +3045,8 @@ LLVMValueRef LLVMBuildAtomicRMW(LLVMBuilderRef B,LLVMAtomicRMWBinOp op,
     case LLVMAtomicRMWBinOpUMin: intop = AtomicRMWInst::UMin; break;
   }
   return wrap(unwrap(B)->CreateAtomicRMW(intop, unwrap(PTR), unwrap(Val),
-    mapFromLLVMOrdering(ordering), singleThread ? SingleThread : CrossThread));
+    mapFromLLVMOrdering(ordering), singleThread ? SyncScope::SingleThread
+                                                : SyncScope::System));
 }

 LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,

@@ -3054,7 +3058,7 @@ LLVMValueRef LLVMBuildAtomicCmpXchg(LLVMBuilderRef B, LLVMValueRef Ptr,
   return wrap(unwrap(B)->CreateAtomicCmpXchg(unwrap(Ptr), unwrap(Cmp),
                 unwrap(New), mapFromLLVMOrdering(SuccessOrdering),
                 mapFromLLVMOrdering(FailureOrdering),
-                singleThread ? SingleThread : CrossThread));
+                singleThread ? SyncScope::SingleThread : SyncScope::System));
 }

@@ -3062,17 +3066,18 @@ LLVMBool LLVMIsAtomicSingleThread(LLVMValueRef AtomicInst) {
   Value *P = unwrap<Value>(AtomicInst);

   if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-    return I->getSynchScope() == SingleThread;
-  return cast<AtomicCmpXchgInst>(P)->getSynchScope() == SingleThread;
+    return I->getSyncScopeID() == SyncScope::SingleThread;
+  return cast<AtomicCmpXchgInst>(P)->getSyncScopeID() ==
+             SyncScope::SingleThread;
 }

 void LLVMSetAtomicSingleThread(LLVMValueRef AtomicInst, LLVMBool NewValue) {
   Value *P = unwrap<Value>(AtomicInst);

-  SynchronizationScope Sync = NewValue ? SingleThread : CrossThread;
+  SyncScope::ID SSID = NewValue ? SyncScope::SingleThread : SyncScope::System;
   if (AtomicRMWInst *I = dyn_cast<AtomicRMWInst>(P))
-    return I->setSynchScope(Sync);
-  return cast<AtomicCmpXchgInst>(P)->setSynchScope(Sync);
+    return I->setSyncScopeID(SSID);
+  return cast<AtomicCmpXchgInst>(P)->setSyncScopeID(SSID);
 }

 LLVMAtomicOrdering LLVMGetCmpXchgSuccessOrdering(LLVMValueRef CmpXchgInst) {
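The C API keeps its boolean view of scopes: true maps to the "singlethread" scope, false to the default system scope, and named target scopes are not yet expressible here (see the TODO above). A usage sketch:

  #include "llvm-c/Core.h"

  void emitFences(LLVMBuilderRef B) {
    LLVMBuildFence(B, LLVMAtomicOrderingSequentiallyConsistent,
                   /*isSingleThread=*/1, "");  // syncscope("singlethread") fence
    LLVMBuildFence(B, LLVMAtomicOrderingSequentiallyConsistent,
                   /*isSingleThread=*/0, "");  // system-scope fence
  }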
@@ -362,13 +362,13 @@ static bool haveSameSpecialState(const Instruction *I1, const Instruction *I2,
            (LI->getAlignment() == cast<LoadInst>(I2)->getAlignment() ||
             IgnoreAlignment) &&
            LI->getOrdering() == cast<LoadInst>(I2)->getOrdering() &&
-           LI->getSynchScope() == cast<LoadInst>(I2)->getSynchScope();
+           LI->getSyncScopeID() == cast<LoadInst>(I2)->getSyncScopeID();
   if (const StoreInst *SI = dyn_cast<StoreInst>(I1))
     return SI->isVolatile() == cast<StoreInst>(I2)->isVolatile() &&
            (SI->getAlignment() == cast<StoreInst>(I2)->getAlignment() ||
             IgnoreAlignment) &&
            SI->getOrdering() == cast<StoreInst>(I2)->getOrdering() &&
-           SI->getSynchScope() == cast<StoreInst>(I2)->getSynchScope();
+           SI->getSyncScopeID() == cast<StoreInst>(I2)->getSyncScopeID();
   if (const CmpInst *CI = dyn_cast<CmpInst>(I1))
     return CI->getPredicate() == cast<CmpInst>(I2)->getPredicate();
   if (const CallInst *CI = dyn_cast<CallInst>(I1))

@@ -386,7 +386,7 @@ static bool haveSameSpecialState(const Instruction *I1, const Instruction *I2,
     return EVI->getIndices() == cast<ExtractValueInst>(I2)->getIndices();
   if (const FenceInst *FI = dyn_cast<FenceInst>(I1))
     return FI->getOrdering() == cast<FenceInst>(I2)->getOrdering() &&
-           FI->getSynchScope() == cast<FenceInst>(I2)->getSynchScope();
+           FI->getSyncScopeID() == cast<FenceInst>(I2)->getSyncScopeID();
   if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(I1))
     return CXI->isVolatile() == cast<AtomicCmpXchgInst>(I2)->isVolatile() &&
            CXI->isWeak() == cast<AtomicCmpXchgInst>(I2)->isWeak() &&

@@ -394,12 +394,13 @@ static bool haveSameSpecialState(const Instruction *I1, const Instruction *I2,
                cast<AtomicCmpXchgInst>(I2)->getSuccessOrdering() &&
            CXI->getFailureOrdering() ==
                cast<AtomicCmpXchgInst>(I2)->getFailureOrdering() &&
-           CXI->getSynchScope() == cast<AtomicCmpXchgInst>(I2)->getSynchScope();
+           CXI->getSyncScopeID() ==
+               cast<AtomicCmpXchgInst>(I2)->getSyncScopeID();
   if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(I1))
     return RMWI->getOperation() == cast<AtomicRMWInst>(I2)->getOperation() &&
            RMWI->isVolatile() == cast<AtomicRMWInst>(I2)->isVolatile() &&
            RMWI->getOrdering() == cast<AtomicRMWInst>(I2)->getOrdering() &&
-           RMWI->getSynchScope() == cast<AtomicRMWInst>(I2)->getSynchScope();
+           RMWI->getSyncScopeID() == cast<AtomicRMWInst>(I2)->getSyncScopeID();
   return true;
 }
@@ -1304,34 +1304,34 @@ LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, Instruction *InsertBef)
     : LoadInst(Ty, Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertBef) {}
+               SyncScope::System, InsertBef) {}

 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, BasicBlock *InsertAE)
     : LoadInst(Ptr, Name, isVolatile, Align, AtomicOrdering::NotAtomic,
-               CrossThread, InsertAE) {}
+               SyncScope::System, InsertAE) {}

 LoadInst::LoadInst(Type *Ty, Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, AtomicOrdering Order,
-                   SynchronizationScope SynchScope, Instruction *InsertBef)
+                   SyncScope::ID SSID, Instruction *InsertBef)
     : UnaryInstruction(Ty, Load, Ptr, InsertBef) {
   assert(Ty == cast<PointerType>(Ptr->getType())->getElementType());
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
   setName(Name);
 }

 LoadInst::LoadInst(Value *Ptr, const Twine &Name, bool isVolatile,
                    unsigned Align, AtomicOrdering Order,
-                   SynchronizationScope SynchScope,
+                   SyncScope::ID SSID,
                    BasicBlock *InsertAE)
     : UnaryInstruction(cast<PointerType>(Ptr->getType())->getElementType(),
                        Load, Ptr, InsertAE) {
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
   setName(Name);
 }

@@ -1419,16 +1419,16 @@ StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                      Instruction *InsertBefore)
     : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-                CrossThread, InsertBefore) {}
+                SyncScope::System, InsertBefore) {}

 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile, unsigned Align,
                      BasicBlock *InsertAtEnd)
     : StoreInst(val, addr, isVolatile, Align, AtomicOrdering::NotAtomic,
-                CrossThread, InsertAtEnd) {}
+                SyncScope::System, InsertAtEnd) {}

 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                      unsigned Align, AtomicOrdering Order,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      Instruction *InsertBefore)
     : Instruction(Type::getVoidTy(val->getContext()), Store,
                   OperandTraits<StoreInst>::op_begin(this),

@@ -1438,13 +1438,13 @@ StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
   Op<1>() = addr;
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
 }

 StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
                      unsigned Align, AtomicOrdering Order,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      BasicBlock *InsertAtEnd)
     : Instruction(Type::getVoidTy(val->getContext()), Store,
                   OperandTraits<StoreInst>::op_begin(this),

@@ -1454,7 +1454,7 @@ StoreInst::StoreInst(Value *val, Value *addr, bool isVolatile,
   Op<1>() = addr;
   setVolatile(isVolatile);
   setAlignment(Align);
-  setAtomic(Order, SynchScope);
+  setAtomic(Order, SSID);
   AssertOK();
 }

@@ -1474,13 +1474,13 @@ void StoreInst::setAlignment(unsigned Align) {
 void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
                              AtomicOrdering SuccessOrdering,
                              AtomicOrdering FailureOrdering,
-                             SynchronizationScope SynchScope) {
+                             SyncScope::ID SSID) {
   Op<0>() = Ptr;
   Op<1>() = Cmp;
   Op<2>() = NewVal;
   setSuccessOrdering(SuccessOrdering);
   setFailureOrdering(FailureOrdering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);

   assert(getOperand(0) && getOperand(1) && getOperand(2) &&
          "All operands must be non-null!");

@@ -1507,25 +1507,25 @@ void AtomicCmpXchgInst::Init(Value *Ptr, Value *Cmp, Value *NewVal,
 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                      AtomicOrdering SuccessOrdering,
                                      AtomicOrdering FailureOrdering,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      Instruction *InsertBefore)
     : Instruction(
          StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
          AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
          OperandTraits<AtomicCmpXchgInst>::operands(this), InsertBefore) {
-  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
 }

 AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
                                      AtomicOrdering SuccessOrdering,
                                      AtomicOrdering FailureOrdering,
-                                     SynchronizationScope SynchScope,
+                                     SyncScope::ID SSID,
                                      BasicBlock *InsertAtEnd)
     : Instruction(
          StructType::get(Cmp->getType(), Type::getInt1Ty(Cmp->getContext())),
          AtomicCmpXchg, OperandTraits<AtomicCmpXchgInst>::op_begin(this),
          OperandTraits<AtomicCmpXchgInst>::operands(this), InsertAtEnd) {
-  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SynchScope);
+  Init(Ptr, Cmp, NewVal, SuccessOrdering, FailureOrdering, SSID);
 }

 //===----------------------------------------------------------------------===//

@@ -1534,12 +1534,12 @@ AtomicCmpXchgInst::AtomicCmpXchgInst(Value *Ptr, Value *Cmp, Value *NewVal,
 void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
                          AtomicOrdering Ordering,
-                         SynchronizationScope SynchScope) {
+                         SyncScope::ID SSID) {
   Op<0>() = Ptr;
   Op<1>() = Val;
   setOperation(Operation);
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);

   assert(getOperand(0) && getOperand(1) &&
          "All operands must be non-null!");

@@ -1554,24 +1554,24 @@ void AtomicRMWInst::Init(BinOp Operation, Value *Ptr, Value *Val,
 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                              AtomicOrdering Ordering,
-                             SynchronizationScope SynchScope,
+                             SyncScope::ID SSID,
                              Instruction *InsertBefore)
   : Instruction(Val->getType(), AtomicRMW,
                 OperandTraits<AtomicRMWInst>::op_begin(this),
                 OperandTraits<AtomicRMWInst>::operands(this),
                 InsertBefore) {
-  Init(Operation, Ptr, Val, Ordering, SynchScope);
+  Init(Operation, Ptr, Val, Ordering, SSID);
 }

 AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
                              AtomicOrdering Ordering,
-                             SynchronizationScope SynchScope,
+                             SyncScope::ID SSID,
                              BasicBlock *InsertAtEnd)
   : Instruction(Val->getType(), AtomicRMW,
                 OperandTraits<AtomicRMWInst>::op_begin(this),
                 OperandTraits<AtomicRMWInst>::operands(this),
                 InsertAtEnd) {
-  Init(Operation, Ptr, Val, Ordering, SynchScope);
+  Init(Operation, Ptr, Val, Ordering, SSID);
 }

 //===----------------------------------------------------------------------===//

@@ -1579,19 +1579,19 @@ AtomicRMWInst::AtomicRMWInst(BinOp Operation, Value *Ptr, Value *Val,
 //===----------------------------------------------------------------------===//

 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      Instruction *InsertBefore)
   : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertBefore) {
   setOrdering(Ordering);
-  setSynchScope(SynchScope);
+  setSyncScopeID(SSID);
 }

 FenceInst::FenceInst(LLVMContext &C, AtomicOrdering Ordering,
-                     SynchronizationScope SynchScope,
+                     SyncScope::ID SSID,
                      BasicBlock *InsertAtEnd)
: Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) { : Instruction(Type::getVoidTy(C), Fence, nullptr, 0, InsertAtEnd) {
setOrdering(Ordering); setOrdering(Ordering);
setSynchScope(SynchScope); setSyncScopeID(SSID);
} }
//===----------------------------------------------------------------------===// //===----------------------------------------------------------------------===//
@ -3795,12 +3795,12 @@ AllocaInst *AllocaInst::cloneImpl() const {
LoadInst *LoadInst::cloneImpl() const { LoadInst *LoadInst::cloneImpl() const {
return new LoadInst(getOperand(0), Twine(), isVolatile(), return new LoadInst(getOperand(0), Twine(), isVolatile(),
getAlignment(), getOrdering(), getSynchScope()); getAlignment(), getOrdering(), getSyncScopeID());
} }
StoreInst *StoreInst::cloneImpl() const { StoreInst *StoreInst::cloneImpl() const {
return new StoreInst(getOperand(0), getOperand(1), isVolatile(), return new StoreInst(getOperand(0), getOperand(1), isVolatile(),
getAlignment(), getOrdering(), getSynchScope()); getAlignment(), getOrdering(), getSyncScopeID());
} }
@ -3808,7 +3808,7 @@ AtomicCmpXchgInst *AtomicCmpXchgInst::cloneImpl() const {
AtomicCmpXchgInst *Result = AtomicCmpXchgInst *Result =
new AtomicCmpXchgInst(getOperand(0), getOperand(1), getOperand(2), new AtomicCmpXchgInst(getOperand(0), getOperand(1), getOperand(2),
getSuccessOrdering(), getFailureOrdering(), getSuccessOrdering(), getFailureOrdering(),
getSynchScope()); getSyncScopeID());
Result->setVolatile(isVolatile()); Result->setVolatile(isVolatile());
Result->setWeak(isWeak()); Result->setWeak(isWeak());
return Result; return Result;
@ -3816,14 +3816,14 @@ AtomicCmpXchgInst *AtomicCmpXchgInst::cloneImpl() const {
AtomicRMWInst *AtomicRMWInst::cloneImpl() const { AtomicRMWInst *AtomicRMWInst::cloneImpl() const {
AtomicRMWInst *Result = AtomicRMWInst *Result =
new AtomicRMWInst(getOperation(),getOperand(0), getOperand(1), new AtomicRMWInst(getOperation(), getOperand(0), getOperand(1),
getOrdering(), getSynchScope()); getOrdering(), getSyncScopeID());
Result->setVolatile(isVolatile()); Result->setVolatile(isVolatile());
return Result; return Result;
} }
FenceInst *FenceInst::cloneImpl() const { FenceInst *FenceInst::cloneImpl() const {
return new FenceInst(getContext(), getOrdering(), getSynchScope()); return new FenceInst(getContext(), getOrdering(), getSyncScopeID());
} }
TruncInst *TruncInst::cloneImpl() const { TruncInst *TruncInst::cloneImpl() const {


@@ -81,6 +81,16 @@ LLVMContext::LLVMContext() : pImpl(new LLVMContextImpl(*this)) {
  assert(GCTransitionEntry->second == LLVMContext::OB_gc_transition &&
  "gc-transition operand bundle id drifted!");
  (void)GCTransitionEntry;
+ SyncScope::ID SingleThreadSSID =
+ pImpl->getOrInsertSyncScopeID("singlethread");
+ assert(SingleThreadSSID == SyncScope::SingleThread &&
+ "singlethread synchronization scope ID drifted!");
+ SyncScope::ID SystemSSID =
+ pImpl->getOrInsertSyncScopeID("");
+ assert(SystemSSID == SyncScope::System &&
+ "system synchronization scope ID drifted!");
  }
  LLVMContext::~LLVMContext() { delete pImpl; }

@@ -255,6 +265,14 @@ uint32_t LLVMContext::getOperandBundleTagID(StringRef Tag) const {
  return pImpl->getOperandBundleTagID(Tag);
  }
+ SyncScope::ID LLVMContext::getOrInsertSyncScopeID(StringRef SSN) {
+ return pImpl->getOrInsertSyncScopeID(SSN);
+ }
+ void LLVMContext::getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const {
+ pImpl->getSyncScopeNames(SSNs);
+ }
  void LLVMContext::setGC(const Function &Fn, std::string GCName) {
  auto It = pImpl->GCNames.find(&Fn);
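Taken together, these two hunks pre-register the built-in scopes and expose the name-to-ID mapping on LLVMContext. A minimal sketch of how a frontend might drive the new API (not part of this commit; the function name emitAgentFence and the scope name "agent" are purely illustrative):

    #include "llvm/IR/IRBuilder.h"
    using namespace llvm;

    // Sketch: emit a fence restricted to a hypothetical "agent" scope.
    static FenceInst *emitAgentFence(IRBuilder<> &Builder) {
      LLVMContext &Ctx = Builder.getContext();
      // "singlethread" and the empty system scope are pre-registered by the
      // constructor above; any other name is interned on first use.
      SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");
      return Builder.CreateFence(AtomicOrdering::SequentiallyConsistent, AgentSSID);
    }

The same ID can then be passed to setAtomic on loads and stores, or to the atomicrmw/cmpxchg constructors changed elsewhere in this commit.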


@@ -205,6 +205,20 @@ uint32_t LLVMContextImpl::getOperandBundleTagID(StringRef Tag) const {
  return I->second;
  }
+ SyncScope::ID LLVMContextImpl::getOrInsertSyncScopeID(StringRef SSN) {
+ auto NewSSID = SSC.size();
+ assert(NewSSID < std::numeric_limits<SyncScope::ID>::max() &&
+ "Hit the maximum number of synchronization scopes allowed!");
+ return SSC.insert(std::make_pair(SSN, SyncScope::ID(NewSSID))).first->second;
+ }
+ void LLVMContextImpl::getSyncScopeNames(
+ SmallVectorImpl<StringRef> &SSNs) const {
+ SSNs.resize(SSC.size());
+ for (const auto &SSE : SSC)
+ SSNs[SSE.second] = SSE.first();
+ }
  /// Singleton instance of the OptBisect class.
  ///
  /// This singleton is accessed via the LLVMContext::getOptBisect() function. It
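Because StringMap::insert is a no-op when the key already exists, the implementation above is idempotent: the same name always yields the same ID, and IDs are handed out densely starting at zero in registration order. A short illustration of those invariants (a sketch, assuming a freshly created context in which only the two built-in scopes are registered; "workgroup" is an arbitrary example name):

    #include "llvm/IR/LLVMContext.h"
    #include <cassert>
    using namespace llvm;

    int main() {
      LLVMContext Ctx;
      SyncScope::ID A = Ctx.getOrInsertSyncScopeID("workgroup"); // new entry -> 2
      SyncScope::ID B = Ctx.getOrInsertSyncScopeID("workgroup"); // existing -> 2
      assert(A == B && "same name must map to the same ID");
      assert(Ctx.getOrInsertSyncScopeID("singlethread") == SyncScope::SingleThread);
      assert(Ctx.getOrInsertSyncScopeID("") == SyncScope::System);
    }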


@@ -1297,6 +1297,20 @@ public:
  void getOperandBundleTags(SmallVectorImpl<StringRef> &Tags) const;
  uint32_t getOperandBundleTagID(StringRef Tag) const;
+ /// A set of interned synchronization scopes. The StringMap maps
+ /// synchronization scope names to their respective synchronization scope IDs.
+ StringMap<SyncScope::ID> SSC;
+ /// getOrInsertSyncScopeID - Maps synchronization scope name to
+ /// synchronization scope ID. Every synchronization scope registered with
+ /// LLVMContext has unique ID except pre-defined ones.
+ SyncScope::ID getOrInsertSyncScopeID(StringRef SSN);
+ /// getSyncScopeNames - Populates client supplied SmallVector with
+ /// synchronization scope names registered with LLVMContext. Synchronization
+ /// scope names are ordered by increasing synchronization scope IDs.
+ void getSyncScopeNames(SmallVectorImpl<StringRef> &SSNs) const;
  /// Maintain the GC name for each function.
  ///
  /// This saves allocating an additional word in Function for programs which
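These declarations pair the forward map (name to ID) with a reverse lookup: getSyncScopeNames returns the names indexed by their IDs, which is what a printer needs to emit syncscope("<name>"). A small sketch of dumping the table through the public LLVMContext wrapper (assumes an existing context Ctx; not part of the commit):

    #include "llvm/ADT/SmallVector.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/Support/raw_ostream.h"
    using namespace llvm;

    // Sketch: print every registered scope name in increasing ID order.
    static void dumpSyncScopes(const LLVMContext &Ctx) {
      SmallVector<StringRef, 8> SSNs;
      Ctx.getSyncScopeNames(SSNs);
      for (size_t ID = 0; ID != SSNs.size(); ++ID)
        errs() << ID << " -> \"" << SSNs[ID] << "\"\n"; // index equals SyncScope::ID
    }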


@@ -3108,7 +3108,7 @@ void Verifier::visitLoadInst(LoadInst &LI) {
  ElTy, &LI);
  checkAtomicMemAccessSize(ElTy, &LI);
  } else {
- Assert(LI.getSynchScope() == CrossThread,
+ Assert(LI.getSyncScopeID() == SyncScope::System,
  "Non-atomic load cannot have SynchronizationScope specified", &LI);
  }

@@ -3137,7 +3137,7 @@ void Verifier::visitStoreInst(StoreInst &SI) {
  ElTy, &SI);
  checkAtomicMemAccessSize(ElTy, &SI);
  } else {
- Assert(SI.getSynchScope() == CrossThread,
+ Assert(SI.getSyncScopeID() == SyncScope::System,
  "Non-atomic store cannot have SynchronizationScope specified", &SI);
  }
  visitInstruction(SI);
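The rule is unchanged in spirit: an access that is not atomic must carry the default scope, now spelled SyncScope::System instead of CrossThread. A hedged sketch of the invariant the verifier enforces (assumes an IRBuilder<> Builder and a pointer value Ptr; "agent" is an illustrative name, not something this patch defines):

    #include "llvm/IR/IRBuilder.h"
    #include <cassert>
    using namespace llvm;

    static void illustrate(IRBuilder<> &Builder, Value *Ptr) {
      LoadInst *Plain = Builder.CreateLoad(Ptr); // non-atomic -> system scope
      assert(Plain->getSyncScopeID() == SyncScope::System);

      LoadInst *Scoped = Builder.CreateLoad(Ptr); // only atomic loads may be scoped
      Scoped->setAlignment(4);
      Scoped->setAtomic(AtomicOrdering::Acquire,
                        Builder.getContext().getOrInsertSyncScopeID("agent"));
    }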


@@ -3398,9 +3398,9 @@ ARMTargetLowering::LowerINTRINSIC_WO_CHAIN(SDValue Op, SelectionDAG &DAG,
  static SDValue LowerATOMIC_FENCE(SDValue Op, SelectionDAG &DAG,
  const ARMSubtarget *Subtarget) {
  SDLoc dl(Op);
- ConstantSDNode *ScopeN = cast<ConstantSDNode>(Op.getOperand(2));
- auto Scope = static_cast<SynchronizationScope>(ScopeN->getZExtValue());
- if (Scope == SynchronizationScope::SingleThread)
+ ConstantSDNode *SSIDNode = cast<ConstantSDNode>(Op.getOperand(2));
+ auto SSID = static_cast<SyncScope::ID>(SSIDNode->getZExtValue());
+ if (SSID == SyncScope::SingleThread)
  return Op;
  if (!Subtarget->hasDataBarrier()) {


@@ -3182,13 +3182,13 @@ SDValue SystemZTargetLowering::lowerATOMIC_FENCE(SDValue Op,
  SDLoc DL(Op);
  AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
  cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
- SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+ SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
  cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
  // The only fence that needs an instruction is a sequentially-consistent
  // cross-thread fence.
  if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
- FenceScope == CrossThread) {
+ FenceSSID == SyncScope::System) {
  return SDValue(DAG.getMachineNode(SystemZ::Serialize, DL, MVT::Other,
  Op.getOperand(0)),
  0);


@@ -22850,7 +22850,7 @@ X86TargetLowering::lowerIdempotentRMWIntoFencedLoad(AtomicRMWInst *AI) const {
  auto Builder = IRBuilder<>(AI);
  Module *M = Builder.GetInsertBlock()->getParent()->getParent();
- auto SynchScope = AI->getSynchScope();
+ auto SSID = AI->getSyncScopeID();
  // We must restrict the ordering to avoid generating loads with Release or
  // ReleaseAcquire orderings.
  auto Order = AtomicCmpXchgInst::getStrongestFailureOrdering(AI->getOrdering());

@@ -22872,7 +22872,7 @@ X86TargetLowering::lowerIdempotentRMWIntoFencedLoad(AtomicRMWInst *AI) const {
  // otherwise, we might be able to be more aggressive on relaxed idempotent
  // rmw. In practice, they do not look useful, so we don't try to be
  // especially clever.
- if (SynchScope == SingleThread)
+ if (SSID == SyncScope::SingleThread)
  // FIXME: we could just insert an X86ISD::MEMBARRIER here, except we are at
  // the IR level, so we must wrap it in an intrinsic.
  return nullptr;

@@ -22891,7 +22891,7 @@ X86TargetLowering::lowerIdempotentRMWIntoFencedLoad(AtomicRMWInst *AI) const {
  // Finally we can emit the atomic load.
  LoadInst *Loaded = Builder.CreateAlignedLoad(Ptr,
  AI->getType()->getPrimitiveSizeInBits());
- Loaded->setAtomic(Order, SynchScope);
+ Loaded->setAtomic(Order, SSID);
  AI->replaceAllUsesWith(Loaded);
  AI->eraseFromParent();
  return Loaded;

@@ -22902,13 +22902,13 @@ static SDValue LowerATOMIC_FENCE(SDValue Op, const X86Subtarget &Subtarget,
  SDLoc dl(Op);
  AtomicOrdering FenceOrdering = static_cast<AtomicOrdering>(
  cast<ConstantSDNode>(Op.getOperand(1))->getZExtValue());
- SynchronizationScope FenceScope = static_cast<SynchronizationScope>(
+ SyncScope::ID FenceSSID = static_cast<SyncScope::ID>(
  cast<ConstantSDNode>(Op.getOperand(2))->getZExtValue());
  // The only fence that needs an instruction is a sequentially-consistent
  // cross-thread fence.
  if (FenceOrdering == AtomicOrdering::SequentiallyConsistent &&
- FenceScope == CrossThread) {
+ FenceSSID == SyncScope::System) {
  if (Subtarget.hasMFence())
  return DAG.getNode(X86ISD::MFENCE, dl, MVT::Other, Op.getOperand(0));


@@ -837,7 +837,7 @@ OptimizeGlobalAddressOfMalloc(GlobalVariable *GV, CallInst *CI, Type *AllocTy,
  if (StoreInst *SI = dyn_cast<StoreInst>(GV->user_back())) {
  // The global is initialized when the store to it occurs.
  new StoreInst(ConstantInt::getTrue(GV->getContext()), InitBool, false, 0,
- SI->getOrdering(), SI->getSynchScope(), SI);
+ SI->getOrdering(), SI->getSyncScopeID(), SI);
  SI->eraseFromParent();
  continue;
  }

@@ -854,7 +854,7 @@ OptimizeGlobalAddressOfMalloc(GlobalVariable *GV, CallInst *CI, Type *AllocTy,
  // Replace the cmp X, 0 with a use of the bool value.
  // Sink the load to where the compare was, if atomic rules allow us to.
  Value *LV = new LoadInst(InitBool, InitBool->getName()+".val", false, 0,
- LI->getOrdering(), LI->getSynchScope(),
+ LI->getOrdering(), LI->getSyncScopeID(),
  LI->isUnordered() ? (Instruction*)ICI : LI);
  InitBoolUsed = true;
  switch (ICI->getPredicate()) {

@@ -1605,7 +1605,7 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal) {
  assert(LI->getOperand(0) == GV && "Not a copy!");
  // Insert a new load, to preserve the saved value.
  StoreVal = new LoadInst(NewGV, LI->getName()+".b", false, 0,
- LI->getOrdering(), LI->getSynchScope(), LI);
+ LI->getOrdering(), LI->getSyncScopeID(), LI);
  } else {
  assert((isa<CastInst>(StoredVal) || isa<SelectInst>(StoredVal)) &&
  "This is not a form that we understand!");

@@ -1614,12 +1614,12 @@ static bool TryToShrinkGlobalToBoolean(GlobalVariable *GV, Constant *OtherVal) {
  }
  }
  new StoreInst(StoreVal, NewGV, false, 0,
- SI->getOrdering(), SI->getSynchScope(), SI);
+ SI->getOrdering(), SI->getSyncScopeID(), SI);
  } else {
  // Change the load into a load of bool then a select.
  LoadInst *LI = cast<LoadInst>(UI);
  LoadInst *NLI = new LoadInst(NewGV, LI->getName()+".b", false, 0,
- LI->getOrdering(), LI->getSynchScope(), LI);
+ LI->getOrdering(), LI->getSyncScopeID(), LI);
  Value *NSI;
  if (IsOneZero)
  NSI = new ZExtInst(NLI, LI->getType(), "", LI);


@@ -461,7 +461,7 @@ static LoadInst *combineLoadToNewType(InstCombiner &IC, LoadInst &LI, Type *NewT
  LoadInst *NewLoad = IC.Builder.CreateAlignedLoad(
  IC.Builder.CreateBitCast(Ptr, NewTy->getPointerTo(AS)),
  LI.getAlignment(), LI.isVolatile(), LI.getName() + Suffix);
- NewLoad->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ NewLoad->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
  MDBuilder MDB(NewLoad->getContext());
  for (const auto &MDPair : MD) {
  unsigned ID = MDPair.first;

@@ -521,7 +521,7 @@ static StoreInst *combineStoreToNewValue(InstCombiner &IC, StoreInst &SI, Value
  StoreInst *NewStore = IC.Builder.CreateAlignedStore(
  V, IC.Builder.CreateBitCast(Ptr, V->getType()->getPointerTo(AS)),
  SI.getAlignment(), SI.isVolatile());
- NewStore->setAtomic(SI.getOrdering(), SI.getSynchScope());
+ NewStore->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
  for (const auto &MDPair : MD) {
  unsigned ID = MDPair.first;
  MDNode *N = MDPair.second;

@@ -1025,9 +1025,9 @@ Instruction *InstCombiner::visitLoadInst(LoadInst &LI) {
  SI->getOperand(2)->getName()+".val");
  assert(LI.isUnordered() && "implied by above");
  V1->setAlignment(Align);
- V1->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ V1->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
  V2->setAlignment(Align);
- V2->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ V2->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
  return SelectInst::Create(SI->getCondition(), V1, V2);
  }

@@ -1540,7 +1540,7 @@ bool InstCombiner::SimplifyStoreAtEndOfBlock(StoreInst &SI) {
  SI.isVolatile(),
  SI.getAlignment(),
  SI.getOrdering(),
- SI.getSynchScope());
+ SI.getSyncScopeID());
  InsertNewInstBefore(NewSI, *BBI);
  // The debug locations of the original instructions might differ; merge them.
  NewSI->setDebugLoc(DILocation::getMergedLocation(SI.getDebugLoc(),


@@ -379,10 +379,11 @@ void ThreadSanitizer::chooseInstructionsToInstrument(
  }
  static bool isAtomic(Instruction *I) {
+ // TODO: Ask TTI whether synchronization scope is between threads.
  if (LoadInst *LI = dyn_cast<LoadInst>(I))
- return LI->isAtomic() && LI->getSynchScope() == CrossThread;
+ return LI->isAtomic() && LI->getSyncScopeID() != SyncScope::SingleThread;
  if (StoreInst *SI = dyn_cast<StoreInst>(I))
- return SI->isAtomic() && SI->getSynchScope() == CrossThread;
+ return SI->isAtomic() && SI->getSyncScopeID() != SyncScope::SingleThread;
  if (isa<AtomicRMWInst>(I))
  return true;
  if (isa<AtomicCmpXchgInst>(I))

@@ -676,7 +677,7 @@ bool ThreadSanitizer::instrumentAtomic(Instruction *I, const DataLayout &DL) {
  I->eraseFromParent();
  } else if (FenceInst *FI = dyn_cast<FenceInst>(I)) {
  Value *Args[] = {createOrdering(&IRB, FI->getOrdering())};
- Function *F = FI->getSynchScope() == SingleThread ?
+ Function *F = FI->getSyncScopeID() == SyncScope::SingleThread ?
  TsanAtomicSignalFence : TsanAtomicThreadFence;
  CallInst *C = CallInst::Create(F, Args);
  ReplaceInstWithInst(I, C);
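With arbitrary scope strings, ThreadSanitizer can no longer test for equality with CrossThread; anything not explicitly single-thread is conservatively treated as potentially cross-thread and instrumented (hence the TODO above about consulting TTI). A hedged illustration of the new predicate (assumes a LoadInst *LI and its LLVMContext &Ctx; "agent" is an illustrative target scope, not defined by this patch):

    #include "llvm/IR/Instructions.h"
    #include "llvm/IR/LLVMContext.h"
    #include <cassert>
    using namespace llvm;

    static void illustrate(LoadInst *LI, LLVMContext &Ctx) {
      LI->setAtomic(AtomicOrdering::Acquire, Ctx.getOrInsertSyncScopeID("agent"));
      assert(LI->getSyncScopeID() != SyncScope::SingleThread); // instrumented

      LI->setAtomic(AtomicOrdering::Acquire, SyncScope::SingleThread);
      assert(LI->getSyncScopeID() == SyncScope::SingleThread); // skipped
    }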


@@ -1166,7 +1166,7 @@ bool GVN::PerformLoadPRE(LoadInst *LI, AvailValInBlkVect &ValuesPerBlock,
  auto *NewLoad = new LoadInst(LoadPtr, LI->getName()+".pre",
  LI->isVolatile(), LI->getAlignment(),
- LI->getOrdering(), LI->getSynchScope(),
+ LI->getOrdering(), LI->getSyncScopeID(),
  UnavailablePred->getTerminator());
  // Transfer the old load's AA tags to the new load.


@@ -1212,7 +1212,7 @@ bool JumpThreadingPass::SimplifyPartiallyRedundantLoad(LoadInst *LI) {
  LoadInst *NewVal = new LoadInst(
  LoadedPtr->DoPHITranslation(LoadBB, UnavailablePred),
  LI->getName() + ".pr", false, LI->getAlignment(), LI->getOrdering(),
- LI->getSynchScope(), UnavailablePred->getTerminator());
+ LI->getSyncScopeID(), UnavailablePred->getTerminator());
  NewVal->setDebugLoc(LI->getDebugLoc());
  if (AATags)
  NewVal->setAAMetadata(AATags);


@@ -2398,7 +2398,7 @@ private:
  LoadInst *NewLI = IRB.CreateAlignedLoad(&NewAI, NewAI.getAlignment(),
  LI.isVolatile(), LI.getName());
  if (LI.isVolatile())
- NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
  // Any !nonnull metadata or !range metadata on the old load is also valid
  // on the new load. This is even true in some cases even when the loads

@@ -2433,7 +2433,7 @@ private:
  getSliceAlign(TargetTy),
  LI.isVolatile(), LI.getName());
  if (LI.isVolatile())
- NewLI->setAtomic(LI.getOrdering(), LI.getSynchScope());
+ NewLI->setAtomic(LI.getOrdering(), LI.getSyncScopeID());
  V = NewLI;
  IsPtrAdjusted = true;

@@ -2576,7 +2576,7 @@ private:
  }
  NewSI->copyMetadata(SI, LLVMContext::MD_mem_parallel_loop_access);
  if (SI.isVolatile())
- NewSI->setAtomic(SI.getOrdering(), SI.getSynchScope());
+ NewSI->setAtomic(SI.getOrdering(), SI.getSyncScopeID());
  Pass.DeadInsts.insert(&SI);
  deleteIfTriviallyDead(OldOp);


@@ -513,8 +513,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
  if (int Res =
  cmpOrderings(LI->getOrdering(), cast<LoadInst>(R)->getOrdering()))
  return Res;
- if (int Res =
- cmpNumbers(LI->getSynchScope(), cast<LoadInst>(R)->getSynchScope()))
+ if (int Res = cmpNumbers(LI->getSyncScopeID(),
+ cast<LoadInst>(R)->getSyncScopeID()))
  return Res;
  return cmpRangeMetadata(LI->getMetadata(LLVMContext::MD_range),
  cast<LoadInst>(R)->getMetadata(LLVMContext::MD_range));

@@ -529,7 +529,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
  if (int Res =
  cmpOrderings(SI->getOrdering(), cast<StoreInst>(R)->getOrdering()))
  return Res;
- return cmpNumbers(SI->getSynchScope(), cast<StoreInst>(R)->getSynchScope());
+ return cmpNumbers(SI->getSyncScopeID(),
+ cast<StoreInst>(R)->getSyncScopeID());
  }
  if (const CmpInst *CI = dyn_cast<CmpInst>(L))
  return cmpNumbers(CI->getPredicate(), cast<CmpInst>(R)->getPredicate());

@@ -584,7 +585,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
  if (int Res =
  cmpOrderings(FI->getOrdering(), cast<FenceInst>(R)->getOrdering()))
  return Res;
- return cmpNumbers(FI->getSynchScope(), cast<FenceInst>(R)->getSynchScope());
+ return cmpNumbers(FI->getSyncScopeID(),
+ cast<FenceInst>(R)->getSyncScopeID());
  }
  if (const AtomicCmpXchgInst *CXI = dyn_cast<AtomicCmpXchgInst>(L)) {
  if (int Res = cmpNumbers(CXI->isVolatile(),

@@ -601,8 +603,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
  cmpOrderings(CXI->getFailureOrdering(),
  cast<AtomicCmpXchgInst>(R)->getFailureOrdering()))
  return Res;
- return cmpNumbers(CXI->getSynchScope(),
- cast<AtomicCmpXchgInst>(R)->getSynchScope());
+ return cmpNumbers(CXI->getSyncScopeID(),
+ cast<AtomicCmpXchgInst>(R)->getSyncScopeID());
  }
  if (const AtomicRMWInst *RMWI = dyn_cast<AtomicRMWInst>(L)) {
  if (int Res = cmpNumbers(RMWI->getOperation(),

@@ -614,8 +616,8 @@ int FunctionComparator::cmpOperations(const Instruction *L,
  if (int Res = cmpOrderings(RMWI->getOrdering(),
  cast<AtomicRMWInst>(R)->getOrdering()))
  return Res;
- return cmpNumbers(RMWI->getSynchScope(),
- cast<AtomicRMWInst>(R)->getSynchScope());
+ return cmpNumbers(RMWI->getSyncScopeID(),
+ cast<AtomicRMWInst>(R)->getSyncScopeID());
  }
  if (const PHINode *PNL = dyn_cast<PHINode>(L)) {
  const PHINode *PNR = cast<PHINode>(R);


@@ -5,14 +5,20 @@
  define void @f(i32* %x) {
  ; CHECK: load atomic i32, i32* %x unordered, align 4
  load atomic i32, i32* %x unordered, align 4
- ; CHECK: load atomic volatile i32, i32* %x singlethread acquire, align 4
- load atomic volatile i32, i32* %x singlethread acquire, align 4
+ ; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+ load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
+ ; CHECK: load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
+ load atomic volatile i32, i32* %x syncscope("agent") acquire, align 4
  ; CHECK: store atomic i32 3, i32* %x release, align 4
  store atomic i32 3, i32* %x release, align 4
- ; CHECK: store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
- store atomic volatile i32 3, i32* %x singlethread monotonic, align 4
- ; CHECK: cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
- cmpxchg i32* %x, i32 1, i32 0 singlethread monotonic monotonic
+ ; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+ store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
+ ; CHECK: store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
+ store atomic volatile i32 3, i32* %x syncscope("workgroup") monotonic, align 4
+ ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+ cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
+ ; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
+ cmpxchg i32* %x, i32 1, i32 0 syncscope("workitem") monotonic monotonic
  ; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
  cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
  ; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic

@@ -23,9 +29,13 @@ define void @f(i32* %x) {
  atomicrmw add i32* %x, i32 10 seq_cst
  ; CHECK: atomicrmw volatile xchg i32* %x, i32 10 monotonic
  atomicrmw volatile xchg i32* %x, i32 10 monotonic
- ; CHECK: fence singlethread release
- fence singlethread release
+ ; CHECK: atomicrmw volatile xchg i32* %x, i32 10 syncscope("agent") monotonic
+ atomicrmw volatile xchg i32* %x, i32 10 syncscope("agent") monotonic
+ ; CHECK: fence syncscope("singlethread") release
+ fence syncscope("singlethread") release
  ; CHECK: fence seq_cst
  fence seq_cst
+ ; CHECK: fence syncscope("device") seq_cst
+ fence syncscope("device") seq_cst
  ret void
  }


@@ -0,0 +1,17 @@
; RUN: llvm-dis -o - %s.bc | FileCheck %s
; Backwards compatibility test: make sure we can process bitcode without
; synchronization scope names encoded in it.
; CHECK: load atomic i32, i32* %x unordered, align 4
; CHECK: load atomic volatile i32, i32* %x syncscope("singlethread") acquire, align 4
; CHECK: store atomic i32 3, i32* %x release, align 4
; CHECK: store atomic volatile i32 3, i32* %x syncscope("singlethread") monotonic, align 4
; CHECK: cmpxchg i32* %x, i32 1, i32 0 syncscope("singlethread") monotonic monotonic
; CHECK: cmpxchg volatile i32* %x, i32 0, i32 1 acq_rel acquire
; CHECK: cmpxchg i32* %x, i32 42, i32 0 acq_rel monotonic
; CHECK: cmpxchg weak i32* %x, i32 13, i32 0 seq_cst monotonic
; CHECK: atomicrmw add i32* %x, i32 10 seq_cst
; CHECK: atomicrmw volatile xchg i32* %x, i32 10 monotonic
; CHECK: fence syncscope("singlethread") release
; CHECK: fence seq_cst

Binary file not shown.


@@ -11,8 +11,8 @@ define void @test_cmpxchg(i32* %addr, i32 %desired, i32 %new) {
  cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
  ; CHECK: cmpxchg weak i32* %addr, i32 %desired, i32 %new acq_rel acquire
- cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
- ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new singlethread release monotonic
+ cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
+ ; CHECK: cmpxchg weak volatile i32* %addr, i32 %desired, i32 %new syncscope("singlethread") release monotonic
  ret void
  }


@@ -551,8 +551,8 @@ define void @atomics(i32* %word) {
  ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
  %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
  ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
- %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
- ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+ %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+ ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
  %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic

@@ -571,33 +571,33 @@ define void @atomics(i32* %word) {
  ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
  %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
  ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
- %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
- ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+ %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
  fence acquire
  ; CHECK: fence acquire
  fence release
  ; CHECK: fence release
  fence acq_rel
  ; CHECK: fence acq_rel
- fence singlethread seq_cst
- ; CHECK: fence singlethread seq_cst
+ fence syncscope("singlethread") seq_cst
+ ; CHECK: fence syncscope("singlethread") seq_cst
  ; XXX: The parser spits out the load type here.
  %ld.1 = load atomic i32* %word monotonic, align 4
  ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
  %ld.2 = load atomic volatile i32* %word acquire, align 8
  ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
- %ld.3 = load atomic volatile i32* %word singlethread seq_cst, align 16
- ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+ %ld.3 = load atomic volatile i32* %word syncscope("singlethread") seq_cst, align 16
+ ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
  store atomic i32 23, i32* %word monotonic, align 4
  ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
  store atomic volatile i32 24, i32* %word monotonic, align 4
  ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
- store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
- ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+ store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+ ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
  ret void
  }


@@ -596,8 +596,8 @@ define void @atomics(i32* %word) {
  ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
  %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
  ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
- %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
- ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+ %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+ ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
  %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic

@@ -616,32 +616,32 @@ define void @atomics(i32* %word) {
  ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
  %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
  ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
- %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
- ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+ %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
  fence acquire
  ; CHECK: fence acquire
  fence release
  ; CHECK: fence release
  fence acq_rel
  ; CHECK: fence acq_rel
- fence singlethread seq_cst
- ; CHECK: fence singlethread seq_cst
+ fence syncscope("singlethread") seq_cst
+ ; CHECK: fence syncscope("singlethread") seq_cst
  %ld.1 = load atomic i32, i32* %word monotonic, align 4
  ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
  %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
  ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
- %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
- ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+ %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+ ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
  store atomic i32 23, i32* %word monotonic, align 4
  ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
  store atomic volatile i32 24, i32* %word monotonic, align 4
  ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
- store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
- ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+ store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+ ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
  ret void
  }


@@ -627,8 +627,8 @@ define void @atomics(i32* %word) {
  ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
  %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
  ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
- %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
- ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+ %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+ ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
  %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic

@@ -647,32 +647,32 @@ define void @atomics(i32* %word) {
  ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
  %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
  ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
- %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
- ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+ %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
  fence acquire
  ; CHECK: fence acquire
  fence release
  ; CHECK: fence release
  fence acq_rel
  ; CHECK: fence acq_rel
- fence singlethread seq_cst
- ; CHECK: fence singlethread seq_cst
+ fence syncscope("singlethread") seq_cst
+ ; CHECK: fence syncscope("singlethread") seq_cst
  %ld.1 = load atomic i32, i32* %word monotonic, align 4
  ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
  %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
  ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
- %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
- ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+ %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+ ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
  store atomic i32 23, i32* %word monotonic, align 4
  ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
  store atomic volatile i32 24, i32* %word monotonic, align 4
  ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
- store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
- ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+ store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+ ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
  ret void
  }


@@ -698,8 +698,8 @@ define void @atomics(i32* %word) {
  ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
  %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
  ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
- %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
- ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+ %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+ ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
  %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic

@@ -718,32 +718,32 @@ define void @atomics(i32* %word) {
  ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
  %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
  ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
- %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
- ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+ %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
  fence acquire
  ; CHECK: fence acquire
  fence release
  ; CHECK: fence release
  fence acq_rel
  ; CHECK: fence acq_rel
- fence singlethread seq_cst
- ; CHECK: fence singlethread seq_cst
+ fence syncscope("singlethread") seq_cst
+ ; CHECK: fence syncscope("singlethread") seq_cst
  %ld.1 = load atomic i32, i32* %word monotonic, align 4
  ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
  %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
  ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
- %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
- ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+ %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+ ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
  store atomic i32 23, i32* %word monotonic, align 4
  ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
  store atomic volatile i32 24, i32* %word monotonic, align 4
  ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
- store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
- ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+ store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+ ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
  ret void
  }


@@ -698,8 +698,8 @@ define void @atomics(i32* %word) {
  ; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
  %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
  ; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
- %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
- ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 singlethread seq_cst monotonic
+ %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
+ ; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
  %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  ; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
  %atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic

@@ -718,32 +718,32 @@ define void @atomics(i32* %word) {
  ; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
  %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
  ; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
- %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 singlethread monotonic
- %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
- ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 singlethread monotonic
+ %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
+ %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
+ ; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
  fence acquire
  ; CHECK: fence acquire
  fence release
  ; CHECK: fence release
  fence acq_rel
  ; CHECK: fence acq_rel
- fence singlethread seq_cst
- ; CHECK: fence singlethread seq_cst
+ fence syncscope("singlethread") seq_cst
+ ; CHECK: fence syncscope("singlethread") seq_cst
  %ld.1 = load atomic i32, i32* %word monotonic, align 4
  ; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
  %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
  ; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
- %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
- ; CHECK: %ld.3 = load atomic volatile i32, i32* %word singlethread seq_cst, align 16
+ %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
+ ; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
  store atomic i32 23, i32* %word monotonic, align 4
  ; CHECK: store atomic i32 23, i32* %word monotonic, align 4
  store atomic volatile i32 24, i32* %word monotonic, align 4
  ; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
- store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
- ; CHECK: store atomic volatile i32 25, i32* %word singlethread monotonic, align 4
+ store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
+ ; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
  ret void
  }


@ -705,8 +705,8 @@ define void @atomics(i32* %word) {
; CHECK: %cmpxchg.5 = cmpxchg weak i32* %word, i32 0, i32 9 seq_cst monotonic
%cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
; CHECK: %cmpxchg.6 = cmpxchg volatile i32* %word, i32 0, i32 10 seq_cst monotonic
%cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
; CHECK: %cmpxchg.7 = cmpxchg weak volatile i32* %word, i32 0, i32 11 syncscope("singlethread") seq_cst monotonic
%atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
; CHECK: %atomicrmw.xchg = atomicrmw xchg i32* %word, i32 12 monotonic
%atomicrmw.add = atomicrmw add i32* %word, i32 13 monotonic
@ -725,32 +725,32 @@ define void @atomics(i32* %word) {
; CHECK: %atomicrmw.max = atomicrmw max i32* %word, i32 19 monotonic
%atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
; CHECK: %atomicrmw.min = atomicrmw volatile min i32* %word, i32 20 monotonic
%atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umax = atomicrmw umax i32* %word, i32 21 syncscope("singlethread") monotonic
%atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
; CHECK: %atomicrmw.umin = atomicrmw volatile umin i32* %word, i32 22 syncscope("singlethread") monotonic
fence acquire
; CHECK: fence acquire
fence release
; CHECK: fence release
fence acq_rel
; CHECK: fence acq_rel
fence syncscope("singlethread") seq_cst
; CHECK: fence syncscope("singlethread") seq_cst
%ld.1 = load atomic i32, i32* %word monotonic, align 4
; CHECK: %ld.1 = load atomic i32, i32* %word monotonic, align 4
%ld.2 = load atomic volatile i32, i32* %word acquire, align 8
; CHECK: %ld.2 = load atomic volatile i32, i32* %word acquire, align 8
%ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
; CHECK: %ld.3 = load atomic volatile i32, i32* %word syncscope("singlethread") seq_cst, align 16
store atomic i32 23, i32* %word monotonic, align 4
; CHECK: store atomic i32 23, i32* %word monotonic, align 4
store atomic volatile i32 24, i32* %word monotonic, align 4
; CHECK: store atomic volatile i32 24, i32* %word monotonic, align 4
store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
; CHECK: store atomic volatile i32 25, i32* %word syncscope("singlethread") monotonic, align 4
ret void
}


@ -107,29 +107,29 @@ entry:
; CHECK-NEXT: %res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
%res8 = load atomic volatile i8, i8* %ptr1 seq_cst, align 1
; CHECK-NEXT: %res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
%res9 = load atomic i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
; CHECK-NEXT: %res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
%res10 = load atomic i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
; CHECK-NEXT: %res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
%res11 = load atomic i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
; CHECK-NEXT: %res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
%res12 = load atomic i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
; CHECK-NEXT: %res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
%res13 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") unordered, align 1
; CHECK-NEXT: %res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
%res14 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") monotonic, align 1
; CHECK-NEXT: %res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
%res15 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") acquire, align 1
; CHECK-NEXT: %res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
%res16 = load atomic volatile i8, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
ret void
}
@ -193,29 +193,29 @@ entry:
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
store atomic volatile i8 2, i8* %ptr1 seq_cst, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
; CHECK-NEXT: store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
store atomic i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") unordered, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") monotonic, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") release, align 1
; CHECK-NEXT: store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
store atomic volatile i8 2, i8* %ptr1 syncscope("singlethread") seq_cst, align 1
ret void
}
@ -232,13 +232,13 @@ entry:
; CHECK-NEXT: %res2 = extractvalue { i32, i1 } [[TMP]], 0
%res2 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new monotonic monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
; CHECK-NEXT: %res3 = extractvalue { i32, i1 } [[TMP]], 0
%res3 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
; CHECK-NEXT: %res4 = extractvalue { i32, i1 } [[TMP]], 0
%res4 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") monotonic monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acquire acquire
@ -249,13 +249,13 @@ entry:
; CHECK-NEXT: %res6 = extractvalue { i32, i1 } [[TMP]], 0
%res6 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acquire acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
; CHECK-NEXT: %res7 = extractvalue { i32, i1 } [[TMP]], 0
%res7 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
; CHECK-NEXT: %res8 = extractvalue { i32, i1 } [[TMP]], 0
%res8 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acquire acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new release monotonic
@ -266,13 +266,13 @@ entry:
; CHECK-NEXT: %res10 = extractvalue { i32, i1 } [[TMP]], 0
%res10 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new release monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
; CHECK-NEXT: %res11 = extractvalue { i32, i1 } [[TMP]], 0
%res11 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
; CHECK-NEXT: %res12 = extractvalue { i32, i1 } [[TMP]], 0
%res12 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") release monotonic
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
@ -283,13 +283,13 @@ entry:
; CHECK-NEXT: %res14 = extractvalue { i32, i1 } [[TMP]], 0
%res14 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new acq_rel acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
; CHECK-NEXT: %res15 = extractvalue { i32, i1 } [[TMP]], 0
%res15 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
; CHECK-NEXT: %res16 = extractvalue { i32, i1 } [[TMP]], 0
%res16 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") acq_rel acquire
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
@ -300,13 +300,13 @@ entry:
; CHECK-NEXT: %res18 = extractvalue { i32, i1 } [[TMP]], 0
%res18 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new seq_cst seq_cst
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
; CHECK-NEXT: %res19 = extractvalue { i32, i1 } [[TMP]], 0
%res19 = cmpxchg i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
; CHECK-NEXT: [[TMP:%[a-z0-9]+]] = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
; CHECK-NEXT: %res20 = extractvalue { i32, i1 } [[TMP]], 0
%res20 = cmpxchg volatile i32* %ptr, i32 %cmp, i32 %new syncscope("singlethread") seq_cst seq_cst
ret void
}


@ -1328,16 +1328,16 @@ define void @test_load_store_atomics(i8* %addr) {
; CHECK: G_STORE [[V0]](s8), [[ADDR]](p0) :: (store monotonic 1 into %ir.addr)
; CHECK: [[V1:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load acquire 1 from %ir.addr)
; CHECK: G_STORE [[V1]](s8), [[ADDR]](p0) :: (store release 1 into %ir.addr)
; CHECK: [[V2:%[0-9]+]](s8) = G_LOAD [[ADDR]](p0) :: (load syncscope("singlethread") seq_cst 1 from %ir.addr)
; CHECK: G_STORE [[V2]](s8), [[ADDR]](p0) :: (store syncscope("singlethread") monotonic 1 into %ir.addr)
%v0 = load atomic i8, i8* %addr unordered, align 1
store atomic i8 %v0, i8* %addr monotonic, align 1
%v1 = load atomic i8, i8* %addr acquire, align 1
store atomic i8 %v1, i8* %addr release, align 1
%v2 = load atomic i8, i8* %addr syncscope("singlethread") seq_cst, align 1
store atomic i8 %v2, i8* %addr syncscope("singlethread") monotonic, align 1
ret void
}


@ -16,6 +16,6 @@ define void @fence_singlethread() {
; IOS: ; COMPILER BARRIER
; IOS-NOT: dmb
fence syncscope("singlethread") seq_cst
ret void
}


@ -0,0 +1,19 @@
; RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx803 -stop-before=si-debugger-insert-nops < %s | FileCheck --check-prefix=GCN %s
; GCN-LABEL: name: syncscopes
; GCN: FLAT_STORE_DWORD killed %vgpr1_vgpr2, killed %vgpr0, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
; GCN: FLAT_STORE_DWORD killed %vgpr4_vgpr5, killed %vgpr3, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
; GCN: FLAT_STORE_DWORD killed %vgpr7_vgpr8, killed %vgpr6, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
define void @syncscopes(
i32 %agent,
i32 addrspace(4)* %agent_out,
i32 %workgroup,
i32 addrspace(4)* %workgroup_out,
i32 %wavefront,
i32 addrspace(4)* %wavefront_out) {
entry:
store atomic i32 %agent, i32 addrspace(4)* %agent_out syncscope("agent") seq_cst, align 4
store atomic i32 %workgroup, i32 addrspace(4)* %workgroup_out syncscope("workgroup") seq_cst, align 4
store atomic i32 %wavefront, i32 addrspace(4)* %wavefront_out syncscope("wavefront") seq_cst, align 4
ret void
}


@ -11,6 +11,6 @@ define void @fence_singlethread() {
; CHECK: @ COMPILER BARRIER
; CHECK-NOT: dmb
fence syncscope("singlethread") seq_cst
ret void
}


@ -14,7 +14,7 @@
# CHECK: %3(s16) = G_LOAD %0(p0) :: (load acquire 2)
# CHECK: G_STORE %3(s16), %0(p0) :: (store release 2)
# CHECK: G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
# CHECK: G_STORE %1(s64), %0(p0) :: (store syncscope("singlethread") seq_cst 8)
name: atomic_memoperands
body: |
bb.0:
@ -25,6 +25,6 @@ body: |
%3:_(s16) = G_LOAD %0(p0) :: (load acquire 2)
G_STORE %3(s16), %0(p0) :: (store release 2)
G_STORE %2(s32), %0(p0) :: (store acq_rel 4)
G_STORE %1(s64), %0(p0) :: (store syncscope("singlethread") seq_cst 8)
RET_ReallyLR
...


@ -0,0 +1,98 @@
# RUN: llc -mtriple=amdgcn-amd-amdhsa -mcpu=gfx803 -run-pass=none %s -o - | FileCheck --check-prefix=GCN %s
--- |
; ModuleID = '<stdin>'
source_filename = "<stdin>"
target datalayout = "e-p:32:32-p1:64:64-p2:64:64-p3:32:32-p4:64:64-p5:32:32-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64"
target triple = "amdgcn-amd-amdhsa"
define void @syncscopes(i32 %agent, i32 addrspace(4)* %agent_out, i32 %workgroup, i32 addrspace(4)* %workgroup_out, i32 %wavefront, i32 addrspace(4)* %wavefront_out) #0 {
entry:
store atomic i32 %agent, i32 addrspace(4)* %agent_out syncscope("agent") seq_cst, align 4
store atomic i32 %workgroup, i32 addrspace(4)* %workgroup_out syncscope("workgroup") seq_cst, align 4
store atomic i32 %wavefront, i32 addrspace(4)* %wavefront_out syncscope("wavefront") seq_cst, align 4
ret void
}
; Function Attrs: convergent nounwind
declare { i1, i64 } @llvm.amdgcn.if(i1) #1
; Function Attrs: convergent nounwind
declare { i1, i64 } @llvm.amdgcn.else(i64) #1
; Function Attrs: convergent nounwind readnone
declare i64 @llvm.amdgcn.break(i64) #2
; Function Attrs: convergent nounwind readnone
declare i64 @llvm.amdgcn.if.break(i1, i64) #2
; Function Attrs: convergent nounwind readnone
declare i64 @llvm.amdgcn.else.break(i64, i64) #2
; Function Attrs: convergent nounwind
declare i1 @llvm.amdgcn.loop(i64) #1
; Function Attrs: convergent nounwind
declare void @llvm.amdgcn.end.cf(i64) #1
attributes #0 = { "target-cpu"="gfx803" }
attributes #1 = { convergent nounwind }
attributes #2 = { convergent nounwind readnone }
# GCN-LABEL: name: syncscopes
# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
# GCN: FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
...
---
name: syncscopes
alignment: 0
exposesReturnsTwice: false
legalized: false
regBankSelected: false
selected: false
tracksRegLiveness: true
liveins:
- { reg: '%sgpr4_sgpr5' }
frameInfo:
isFrameAddressTaken: false
isReturnAddressTaken: false
hasStackMap: false
hasPatchPoint: false
stackSize: 0
offsetAdjustment: 0
maxAlignment: 0
adjustsStack: false
hasCalls: false
hasOpaqueSPAdjustment: false
hasVAStart: false
hasMustTailInVarArgFunc: false
body: |
bb.0.entry:
liveins: %sgpr4_sgpr5
S_WAITCNT 0
%sgpr0_sgpr1 = S_LOAD_DWORDX2_IMM %sgpr4_sgpr5, 8, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
%sgpr6 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 0, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
%sgpr2_sgpr3 = S_LOAD_DWORDX2_IMM %sgpr4_sgpr5, 24, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
%sgpr7 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 16, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
%sgpr8 = S_LOAD_DWORD_IMM %sgpr4_sgpr5, 32, 0 :: (non-temporal dereferenceable invariant load 4 from `i32 addrspace(2)* undef`)
S_WAITCNT 127
%vgpr0 = V_MOV_B32_e32 %sgpr0, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr0_sgpr1
%sgpr4_sgpr5 = S_LOAD_DWORDX2_IMM killed %sgpr4_sgpr5, 40, 0 :: (non-temporal dereferenceable invariant load 8 from `i64 addrspace(2)* undef`)
%vgpr1 = V_MOV_B32_e32 killed %sgpr1, implicit %exec, implicit killed %sgpr0_sgpr1, implicit %sgpr0_sgpr1, implicit %exec
%vgpr2 = V_MOV_B32_e32 killed %sgpr6, implicit %exec, implicit %exec
FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("agent") seq_cst 4 into %ir.agent_out)
S_WAITCNT 112
%vgpr0 = V_MOV_B32_e32 %sgpr2, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr2_sgpr3
%vgpr1 = V_MOV_B32_e32 killed %sgpr3, implicit %exec, implicit killed %sgpr2_sgpr3, implicit %sgpr2_sgpr3, implicit %exec
%vgpr2 = V_MOV_B32_e32 killed %sgpr7, implicit %exec, implicit %exec
FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("workgroup") seq_cst 4 into %ir.workgroup_out)
S_WAITCNT 112
%vgpr0 = V_MOV_B32_e32 %sgpr4, implicit %exec, implicit-def %vgpr0_vgpr1, implicit %sgpr4_sgpr5
%vgpr1 = V_MOV_B32_e32 killed %sgpr5, implicit %exec, implicit killed %sgpr4_sgpr5, implicit %sgpr4_sgpr5, implicit %exec
%vgpr2 = V_MOV_B32_e32 killed %sgpr8, implicit %exec, implicit %exec
FLAT_STORE_DWORD killed %vgpr0_vgpr1, killed %vgpr2, 0, -1, 0, implicit %exec, implicit %flat_scr :: (volatile store syncscope("wavefront") seq_cst 4 into %ir.wavefront_out)
S_ENDPGM
...

File diff suppressed because it is too large


@ -1959,7 +1959,7 @@ entry:
define void @atomic_signal_fence_acquire() nounwind uwtable {
entry:
fence syncscope("singlethread") acquire, !dbg !7
ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_acquire
@ -1975,7 +1975,7 @@ entry:
define void @atomic_signal_fence_release() nounwind uwtable {
entry:
fence syncscope("singlethread") release, !dbg !7
ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_release
@ -1991,7 +1991,7 @@ entry:
define void @atomic_signal_fence_acq_rel() nounwind uwtable {
entry:
fence syncscope("singlethread") acq_rel, !dbg !7
ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_acq_rel
@ -2007,7 +2007,7 @@ entry:
define void @atomic_signal_fence_seq_cst() nounwind uwtable {
entry:
fence syncscope("singlethread") seq_cst, !dbg !7
ret void, !dbg !7
}
; CHECK-LABEL: atomic_signal_fence_seq_cst


@ -0,0 +1,6 @@
define void @syncscope_1() {
fence syncscope("agent") seq_cst
fence syncscope("workgroup") seq_cst
fence syncscope("wavefront") seq_cst
ret void
}


@ -0,0 +1,6 @@
define void @syncscope_2() {
fence syncscope("image") seq_cst
fence syncscope("agent") seq_cst
fence syncscope("workgroup") seq_cst
ret void
}

test/Linker/syncscopes.ll (new file)

@ -0,0 +1,11 @@
; RUN: llvm-link %S/Inputs/syncscope-1.ll %S/Inputs/syncscope-2.ll -S | FileCheck %s
; CHECK-LABEL: define void @syncscope_1
; CHECK: fence syncscope("agent") seq_cst
; CHECK: fence syncscope("workgroup") seq_cst
; CHECK: fence syncscope("wavefront") seq_cst
; CHECK-LABEL: define void @syncscope_2
; CHECK: fence syncscope("image") seq_cst
; CHECK: fence syncscope("agent") seq_cst
; CHECK: fence syncscope("workgroup") seq_cst


@ -208,14 +208,14 @@ define void @fence_seq_cst(i32* %P1, i32* %P2) {
ret void
}
; Can't DSE across a full syncscope("singlethread") fence
define void @fence_seq_cst_st(i32* %P1, i32* %P2) {
; CHECK-LABEL: @fence_seq_cst_st(
; CHECK: store
; CHECK: fence syncscope("singlethread") seq_cst
; CHECK: store
store i32 0, i32* %P1, align 4
fence syncscope("singlethread") seq_cst
store i32 0, i32* %P1, align 4
ret void
}


@ -4,7 +4,7 @@
; CHECK-LABEL: define void @tinkywinky
; CHECK-NEXT: fence seq_cst
; CHECK-NEXT: fence syncscope("singlethread") acquire
; CHECK-NEXT: ret void
; CHECK-NEXT: }
@ -12,21 +12,21 @@ define void @tinkywinky() {
fence seq_cst
fence seq_cst
fence seq_cst
fence syncscope("singlethread") acquire
fence syncscope("singlethread") acquire
fence syncscope("singlethread") acquire
ret void
}
; CHECK-LABEL: define void @dipsy
; CHECK-NEXT: fence seq_cst
; CHECK-NEXT: fence syncscope("singlethread") seq_cst
; CHECK-NEXT: ret void
; CHECK-NEXT: }
define void @dipsy() {
fence seq_cst
fence syncscope("singlethread") seq_cst
ret void
}


@ -5,9 +5,9 @@ target triple = "x86_64-unknown-linux-gnu"
define void @test1(i32* ()*) {
entry:
%1 = call i32* %0() #0
fence syncscope("singlethread") seq_cst
%2 = load i32, i32* %1, align 4
fence syncscope("singlethread") seq_cst
%3 = icmp eq i32 %2, 0
br i1 %3, label %fail, label %pass
@ -20,9 +20,9 @@ pass: ; preds = %fail, %top
; CHECK-LABEL: @test1(
; CHECK: %[[call:.*]] = call i32* %0()
; CHECK: fence syncscope("singlethread") seq_cst
; CHECK: load i32, i32* %[[call]], align 4
; CHECK: fence syncscope("singlethread") seq_cst
attributes #0 = { nounwind readnone }


@ -180,10 +180,11 @@ TEST_F(AliasAnalysisTest, getModRefInfo) {
auto *VAArg1 = new VAArgInst(Addr, PtrType, "vaarg", BB);
auto *CmpXChg1 = new AtomicCmpXchgInst(
    Addr, ConstantInt::get(IntType, 0), ConstantInt::get(IntType, 1),
    AtomicOrdering::Monotonic, AtomicOrdering::Monotonic,
    SyncScope::System, BB);
auto *AtomicRMW =
    new AtomicRMWInst(AtomicRMWInst::Xchg, Addr, ConstantInt::get(IntType, 1),
                      AtomicOrdering::Monotonic, SyncScope::System, BB);
ReturnInst::Create(C, nullptr, BB);
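
As a usage note, here is a minimal sketch of the updated C++ API, assuming the post-change headers introduced by this commit: scope names are interned in the LLVMContext via getOrInsertSyncScopeID, and the returned SyncScope::ID is passed where the old CrossThread/SingleThread enumerators used to go. The scope name "agent" and the function name below are purely illustrative.

    // Sketch: emit fences with a pre-defined and a target-specific scope.
    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    int main() {
      LLVMContext Ctx;
      Module M("syncscope-example", Ctx);
      // The name -> SyncScope::ID mapping lives in the context; the same
      // name always resolves to the same ID within one context.
      SyncScope::ID AgentSSID = Ctx.getOrInsertSyncScopeID("agent");
      Function *F = Function::Create(
          FunctionType::get(Type::getVoidTy(Ctx), false),
          Function::ExternalLinkage, "f", &M);
      IRBuilder<> B(BasicBlock::Create(Ctx, "entry", F));
      // Pre-defined scopes need no string lookup.
      B.CreateFence(AtomicOrdering::SequentiallyConsistent,
                    SyncScope::SingleThread); // fence syncscope("singlethread") seq_cst
      B.CreateFence(AtomicOrdering::SequentiallyConsistent,
                    AgentSSID);               // fence syncscope("agent") seq_cst
      B.CreateRetVoid();
      M.print(outs(), nullptr);
      return 0;
    }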