mirror of https://github.com/RPCS3/llvm-mirror.git synced 2025-02-01 13:11:39 +01:00

30167 Commits

Author SHA1 Message Date
Simon Pilgrim
04fd71aa2f [X86][SSE] Call SimplifyMultipleUseDemandedBits on PACKSS/PACKUS arguments.
This mainly helps to replace unused arguments with UNDEF in the case where they have multiple users.

llvm-svn: 368026
2019-08-06 13:10:42 +00:00
Simon Pilgrim
7cb6d98546 [X86][SSE] Enable min/max partial reduction
As mentioned on D65047 / rL366933 the plan is to enable partial reduction handling wherever possible.

llvm-svn: 368016
2019-08-06 11:00:34 +00:00
Simon Pilgrim
30ea0a39c5 [X86][SSE] Add tests for min/max partial reduction
As mentioned on D65047 / rL366933 the plan is to enable partial reduction handling wherever possible.

llvm-svn: 368015
2019-08-06 10:52:44 +00:00
Ulrich Weigand
d9090aceb5 [Strict FP] Allow custom operation actions
This patch changes the DAG legalizer to respect the operation actions
set by the target for strict floating-point operations. (Currently, the
legalizer will usually fall back to mutate to the non-strict action
(which is assumed to be legal), and only skip mutation if the strict
operation is marked legal.)

With this patch, whenever a strict operation is marked as Legal or
Custom, it is passed to the target as usual. Only if it is marked as
Expand will the legalizer attempt to mutate to the non-strict operation.
Note that this will now fail if the non-strict operation is itself
marked as Custom -- the target will have to provide a Custom definition
for the strict operation then as well.
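
A minimal IR sketch (function name invented): if a target marks the strict
FP add as Custom, an operation like the one below now reaches the target's
custom lowering instead of being mutated to a plain fadd:
```
declare double @llvm.experimental.constrained.fadd.f64(double, double,
                                                       metadata, metadata)

define double @strict_add(double %a, double %b) {
  ; strict semantics: dynamic rounding mode, strict exception behavior
  %r = call double @llvm.experimental.constrained.fadd.f64(
           double %a, double %b,
           metadata !"round.dynamic", metadata !"fpexcept.strict")
  ret double %r
}
```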

Reviewed By: hfinkel

Differential Revision: https://reviews.llvm.org/D65226

llvm-svn: 368012
2019-08-06 10:43:13 +00:00
Tim Northover
ee608c40a5 AArch64: use xzr/wzr for constant 0 in GlobalISel.
COPYs from xzr and wzr can often be folded away entirely during register
allocation, unlike a movz.

llvm-svn: 368003
2019-08-06 09:18:41 +00:00
Austin Kerbow
98f47b7ad5 Re-commit: [AMDGPU] Use S_DENORM_MODE for gfx10
Summary: During fdiv32 lowering use S_DENORM_MODE to select denorm mode in gfx10.

Reviewers: arsenm, rampitec

Reviewed By: arsenm, rampitec

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65620

llvm-svn: 367969
2019-08-06 02:16:11 +00:00
Shiva Chen
9378c79f70 [RISCV] Custom legalize i32 operations for RV64 to reduce signed extensions
Differential Revision: https://reviews.llvm.org/D65434

llvm-svn: 367960
2019-08-06 00:24:00 +00:00
Keno Fischer
089fc0dcee [WebAssembly] Fix conflict between ret legalization and sjlj
Summary:
When the WebAssembly backend encounters a return type that doesn't
fit within i32, SelectionDAG performs sret demotion, adding an
additional argument to the start of the function that contains
a pointer to an sret buffer to use instead. However, this conflicts
with the emscripten sjlj lowering pass. There we translate calls like:

```
	call {i32, i32} @foo()
```

into (in pseudo-llvm)
```
	%addr = @foo
	call {i32, i32} @__invoke_{i32,i32}(%addr)
```

i.e. we perform an indirect call through an extra function.
However, the sret transform now transforms this into
the equivalent of
```
        %addr = @foo
        %sret = alloca {i32, i32}
        call {i32, i32} @__invoke_{i32,i32}(%sret, %addr)
```
(while simultaneously translating the implementation of @foo as well).
Unfortunately, this doesn't work out: the __invoke_ ABI expects
the function address to be the first argument, causing crashes.

There are several possible ways to fix this:
1. Implementing the sret rewrite at the IR level as well and performing
   it as part of lowering to __invoke
2. Fixing the wasm backend to recognize that __invoke has a special ABI
3. A change to the binaryen/emscripten ABI to recognize this situation

This revision implements the middle option, teaching the backend to
treat __invoke_ functions specially in sret lowering. This is achieved
by
1) Introducing a new CallingConv ID for invoke functions
2) When this CallingConv ID is seen in the backend and the first argument
   is marked as sret (a function pointer would never be marked as sret),
   swapping the first two arguments, as sketched below.
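
In the same pseudo-llvm notation as above, the lowered call now
effectively becomes:
```
        %addr = @foo
        %sret = alloca {i32, i32}
        call {i32, i32} @__invoke_{i32,i32}(%addr, %sret)
```
restoring the function address to the first position that the __invoke_
ABI expects.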

Reviewed By: tlively, aheejin
Differential Revision: https://reviews.llvm.org/D65463

llvm-svn: 367935
2019-08-05 21:36:09 +00:00
Amara Emerson
71886ba279 [AArch64][GlobalISel] Inline tiny memcpy et al at -O0.
FastISel has already done this since the initial arm64 port was upstreamed,
so it seems there are no issues with doing this at -O0 for very small memcpys.

Gives a 0.2% geomean code size improvement on CTMark.

Differential Revision: https://reviews.llvm.org/D65758

llvm-svn: 367919
2019-08-05 20:02:52 +00:00
Dmitri Gribenko
f5776c8a6c Revert "[AMDGPU] Use S_DENORM_MODE for gfx10"
This reverts commit r367882. It broke the test
MC/Disassembler/AMDGPU/gfx10_dasm_all.txt.

llvm-svn: 367904
2019-08-05 18:36:43 +00:00
Craig Topper
ad1b2fd4c0 [X86] Enable -x86-experimental-vector-widening-legalization by default.
This patch changes our default legalization behavior for 16-, 32-, and
64-bit vectors with i8/i16/i32/i64 scalar types from promotion to
widening. For example, v8i8 will now be widened to v16i8 instead of
promoted to v8i16. This keeps the element widths the same and pads
with undef elements. We believe this is a better legalization strategy.
But it carries some issues due to the fragmented vector ISA. For
example, i8 shifts and multiplies get widened and then later have
to be promoted/split into vXi16 vectors.
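
A hypothetical illustration (function name invented) of a type affected by
the new default:
```
define <8 x i8> @add_v8i8(<8 x i8> %a, <8 x i8> %b) {
  ; v8i8 is now widened to v16i8, with the extra lanes as undef padding,
  ; rather than promoted to v8i16
  %r = add <8 x i8> %a, %b
  ret <8 x i8> %r
}
```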

This has the potential to cause regressions so we wanted to get
it in early in the 10.0 cycle so we have plenty of time to
address them.

Next steps will be to merge tests that explicitly test the command
line option. And then we can remove the option and its associated
code.

llvm-svn: 367901
2019-08-05 18:25:36 +00:00
Evandro Menezes
66a85dd4d4 [AArch64] Expand bcmp() for small block lengths
Patch D56593 by @courbet results in calls to `bcmp()` in some cases, should
the target support it, unless `TTI::MemCmpExpansionOptions()`
is overridden by the target.

In a proprietary benchmark, before this patch we see a performance drop of
about 12% on PNG compression, even though all tests pass.

This patch mirrors X86 for AArch64 and initializes
`TTI::MemCmpExpansionOptions()` to then expand calls to `bcmp()` when
appropriate.  No tuning of the parameters was performed, but, at this point,
it's enough to recover the performance drop above.
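
A hedged sketch of the effect (names invented; the exact expansion is
target-tuned): a small fixed-size comparison such as
```
declare i32 @bcmp(i8*, i8*, i64)

define i1 @eq8(i8* %a, i8* %b) {
  %c = call i32 @bcmp(i8* %a, i8* %b, i64 8)
  %r = icmp eq i32 %c, 0
  ret i1 %r
}
```
can now be expanded inline into a 64-bit load from each buffer and a
compare, instead of a libcall.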

This problem also exists on ARM.  Once a consensus is reached for AArch64, we
can work to fix ARM as well.

Authors:
- Evandro Menezes (@evandro) <e.menezes@samsung.com>
- Brian Rzycki (@brzycki) <b.rzycki@samsung.com>

Differential revision: https://reviews.llvm.org/D64805

llvm-svn: 367898
2019-08-05 18:09:14 +00:00
Pablo Barrio
7fb3a65624 [AArch64] Set preferred function alignment to 16 bytes on Neoverse N1
Summary:
The Arm Neoverse N1 Software Optimization Guide [1], Section "4.8 Branch
instruction alignment" states:

"Consider aligning subroutine entry points and branch targets to 32B
boundaries, within the bounds of the code-density requirements of the
program."

This patch sets the preferred function alignment on Neoverse N1 to 2^4=16B.
This was already the case in some of the latest Cortex-A CPUs. Benchmarking
in previous Cortex-A CPUs suggested that 16B alignment is already better
than the default. See commit d04ee305.

The reason we don't set it to 32B right now (as the optimisation guide
suggests) is that this will impact code size and perhaps the instruction
cache performance. Therefore we need benchmark numbers first.

I have also added testing for A75 and A76 that we were missing.

[1] https://developer.arm.com/docs/swog309707/latest

Reviewers: fhahn, greened, samparker, dmgreen

Reviewed By: dmgreen

Subscribers: dmgreen, javed.absar, kristof.beyls, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65654

llvm-svn: 367894
2019-08-05 17:38:58 +00:00
Austin Kerbow
0c6b3a498b [AMDGPU] Use S_DENORM_MODE for gfx10
Summary: During fdiv32 lowering use S_DENORM_MODE to select denorm mode in gfx10.

Reviewers: arsenm, rampitec

Reviewed By: arsenm, rampitec

Subscribers: arsenm, kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65620

llvm-svn: 367882
2019-08-05 16:09:49 +00:00
Tom Stellard
df85dc1127 AMDGPU/LoadStoreOptimizer: Set the correct offset when merging MMOs
Summary:
This is a follow up to r367237.  MachineFunction::getMachineMemOperand()
adds the offset parameter to the existing offset instead of resetting it.
So we need to reset the offset to the correct value after calling this
function.

Reviewers: arsenm

Subscribers: kzhuravl, jvesely, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65557

llvm-svn: 367881
2019-08-05 16:08:44 +00:00
Matt Arsenault
2028ac72f0 AMDGPU: Correct behavior of f16 buffer loads
Don't assume format loads for f16. Also fixes support for targets
without i16.

llvm-svn: 367879
2019-08-05 15:59:07 +00:00
Matt Arsenault
edc817e382 AMDGPU: Correct behavior of f16/i16 non-format store intrinsics
This was incorrectly using a format store for non-format f16 stores.
Also fixes i16/f16 stores on targets without legal f16.

The corresponding loads also need to be fixed.

llvm-svn: 367872
2019-08-05 14:57:59 +00:00
Matt Arsenault
47a677c669 AMDGPU/GlobalISel: Alternative mappings for constants
Without context we assume SGPR. Allowing VGPR constants theoretically
helps avoid a copy. This seems to not actually work now, and the
choice isn't based on the use bank.

llvm-svn: 367871
2019-08-05 14:40:26 +00:00
Cullen Rhodes
87883e84e2 [AArch64] Implement initial SVE calling convention support
Summary:

This patch adds initial support for the SVE calling convention such that
SVE types can be passed as arguments and return values to/from a
subroutine.

The SVE AAPCS states [1]:

    z0-z7 are used to pass scalable vector arguments to a subroutine,
    and to return scalable vector results from a function. If a
    subroutine takes arguments in scalable vector or predicate
    registers, or if it is a function that returns results in such
    registers, it must ensure that the entire contents of z8-z23 are
    preserved across the call. In other cases it need only preserve the
    low 64 bits of z8-z15, as described in §5.1.2.

    p0-p3 are used to pass scalable predicate arguments to a subroutine
    and to return scalable predicate results from a function. If a
    subroutine takes arguments in scalable vector or predicate
    registers, or if it is a function that returns results in these
    registers, it must ensure that p4-p15 are preserved across the call.
    In other cases it need not preserve any scalable predicate register
    contents.

SVE predicate and data registers are passed indirectly (i.e. spilled to the
stack with the address passed instead) if they exceed the registers used for
argument passing defined by the PCS referenced above.  Until SVE stack support
is merged we can't spill SVE registers to the stack, so currently an
llvm_unreachable is used where we will eventually handle this.

[1] https://static.docs.arm.com/100986/0000/100986_0000.pdf
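
A minimal sketch of a subroutine covered by the new convention (function
name invented):
```
define <vscale x 4 x i32> @add_sv(<vscale x 4 x i32> %a, <vscale x 4 x i32> %b) {
  ; per the PCS above, %a and %b arrive in z0 and z1,
  ; and the result is returned in z0
  %r = add <vscale x 4 x i32> %a, %b
  ret <vscale x 4 x i32> %r
}
```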

Reviewed By: ostannard

Differential Revision: https://reviews.llvm.org/D65448

llvm-svn: 367859
2019-08-05 13:44:10 +00:00
Sanjay Patel
2996b9a3c7 [DAGCombiner][x86] prevent infinite loop from truncate/extend transforms
The test case is based on the example from the post-commit thread for:
https://reviews.llvm.org/rGc9171bd0a955

This replaces the x86-specific simple-type check from:
rL367766
with a check in the DAGCombiner. Adding the check isn't
strictly necessary after the fix from:
rL367768
...but it seems likely that we're heading for trouble if
we are creating weird types in this transform.

I combined the earlier legality check into the initial
clause to simplify the code.

So we should only try the trunc/sext transform at the
earliest combine stage, but we limit the transform to
simple types anyway because the TLI hook is probably
too lax about what it considers a free truncate.

llvm-svn: 367834
2019-08-05 11:27:07 +00:00
Florian Hahn
625dcd33f6 [AArch64] Skip isZIPMask check for masks with an odd number of elements.
We process 2 elements at a time and expect the number of elements to be
even. Similar to D60690.

Reviewers: dmgreen, samparker, t.p.northover

Reviewed By: dmgreen

Differential Revision: https://reviews.llvm.org/D65400

llvm-svn: 367831
2019-08-05 11:12:23 +00:00
Nicolai Haehnle
496b39a5b9 AMDGPU: add missing llvm.amdgcn.{raw,struct}.buffer.atomic.{inc,dec}
Summary:
Wrapping increment/decrement. These aren't exposed by many APIs...
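
A hedged usage sketch, assuming the new intrinsics share the operand layout
of the existing raw buffer atomics (vdata, rsrc, voffset, soffset,
cachepolicy):
```
declare i32 @llvm.amdgcn.raw.buffer.atomic.inc.i32(i32, <4 x i32>, i32, i32, i32)

define i32 @wrap_inc(<4 x i32> %rsrc, i32 %voffset, i32 %bound) {
  ; wrapping increment: stores (old >= %bound) ? 0 : old + 1
  ; at the buffer location and returns the old value
  %old = call i32 @llvm.amdgcn.raw.buffer.atomic.inc.i32(
             i32 %bound, <4 x i32> %rsrc, i32 %voffset, i32 0, i32 0)
  ret i32 %old
}
```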

Change-Id: I1df25c7889de5a5ba76468ad8e8a2597efa9af6c

Reviewers: arsenm, tpr, dstuttard

Subscribers: kzhuravl, jvesely, wdng, yaxunl, t-tye, jfb, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65283

llvm-svn: 367821
2019-08-05 09:36:06 +00:00
Oliver Stannard
5a4b1ab0cd Reland: Fix and test inter-procedural register allocation for ARM
Add an explicit construction of the ArrayRef; gcc 5 and earlier don't
seem to select the ArrayRef constructor that takes a C array when the
construction is implicit.

Original commit message:

- Avoid a crash when IPRA calls ARMFrameLowering::determineCalleeSaves
  with a null RegScavenger. Simply not updating the register scavenger
  is fine because IPRA only cares about the SavedRegs vector; the actual
  code of the function has already been generated at this point.
- Add a new hook to TargetRegisterInfo to get the set of registers which
  can be clobbered inside a call, even if the compiler can see both
  sides, by linker-generated code.

Differential revision: https://reviews.llvm.org/D64908

llvm-svn: 367819
2019-08-05 09:04:10 +00:00
Craig Topper
4b41f5003e [X86] Fix a bad early out in combineExtInVec that prevented recursive shuffle combining from running with -x86-experimental-vector-widening-legalization.
llvm-svn: 367798
2019-08-05 03:48:31 +00:00
Craig Topper
7a9a9ca91e [TargetLowering][X86] Teach SimplifyDemandedVectorElts to replace the base vector of INSERT_SUBVECTOR with undef if none of the elements are demanded even if the node has other users.
Summary:
The SimplifyDemandedVectorElts function can replace a node with undef
when no elements are demanded, but due to how it interacts with
TargetLoweringOpts, it could previously only do this when the node had
no other users.

Remove a now unneeded DAG combine from the X86 backend.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D65713

llvm-svn: 367788
2019-08-04 17:30:41 +00:00
Simon Pilgrim
64340be201 Regenerate test for an upcoming patch.
I managed to use the update_llc_test_checks script for this, but had to set -asm-verbose=true and then manually tweak the result (PR42882)

llvm-svn: 367787
2019-08-04 16:37:29 +00:00
Simon Pilgrim
870adc88b3 [X86] lowerShuffleAsSpecificZeroOrAnyExtend - use undef PSHUFB mask indices for ANY_EXTEND shuffles
llvm-svn: 367784
2019-08-04 13:15:23 +00:00
Simon Pilgrim
607c9c137e [X86] SimplifyMultipleUseDemandedBits - Add target shuffle support
llvm-svn: 367782
2019-08-04 12:24:40 +00:00
David Green
27ec5e90c9 [ARM] MVE big endian bitcasts
This adds big-endian MVE patterns for bitcasts. In llvm a bitcast is defined
as being the same as a store of the existing type followed by a load of the
new type. This means that on big-endian targets it has to become a VREV
between the two types, working in the same way that NEON works in big-endian.
This also adds some example tests for big-endian, showing where code is and
isn't different.

The main difference, especially from a testing perspective, is that vectors
are passed as v2f64, and so are VREVed into and out of call arguments, and
the parameters are passed in a v2f64 format. The same happens for inline
assembly where the register class is used, so the value is VREVed to a v16i8.

So some of this is probably not correct yet, but it is (mostly) self-consistent
and seems to be consistent with how llvm treats vectors. The rest we can
hopefully fix later. More details about big-endian NEON can be found in
https://llvm.org/docs/BigEndianNEON.html.
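
A minimal sketch of an affected bitcast (function name invented):
```
define arm_aapcs_vfpcc <8 x i16> @cast(<4 x i32> %src) {
  ; on big-endian MVE this is not a no-op: it becomes a VREV
  ; between the two element layouts, as described above
  %r = bitcast <4 x i32> %src to <8 x i16>
  ret <8 x i16> %r
}
```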

Differential Revision: https://reviews.llvm.org/D65581

llvm-svn: 367780
2019-08-04 10:18:15 +00:00
Yonghong Song
7cc58d8b91 [Transforms] Do not drop !preserve.access.index metadata
Currently, when a GVN or CSE optimization happens,
the llvm.preserve.access.index metadata is dropped.
This caused a problem for the BPF AbstractMemberAccess phase,
as it relies on the metadata (debuginfo types).

This patch adds proper hooks in lib/Transforms to
preserve !preserve.access.index metadata. A test
case is added to ensure the metadata is preserved under CSE.

Differential Revision: https://reviews.llvm.org/D65700

llvm-svn: 367769
2019-08-03 23:41:26 +00:00
Sanjay Patel
2ae7202704 [x86] change free truncate hook to handle only simple types (PR42880)
This avoids the crash from:
https://bugs.llvm.org/show_bug.cgi?id=42880
...and I think it's a proper constraint for the TLI hook.

But that example raises questions about what happens to get us
into this situation (created i29 types) and what happens later
(why does legalization die on those types), so I'm not sure if
we will resolve the bug based on this change.

llvm-svn: 367766
2019-08-03 21:46:27 +00:00
Keno Fischer
1544840c00 [WebAssembly] Fix allocsize attribute in sjlj lowering
Summary:
The allocsize attribute refers to call parameters by index.
Thus, when we add the extra parameter in sjlj lowering, we
need to increment the index of the referenced parameter in the
allocsize attribute to avoid angering the Verifier.
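
A minimal sketch (allocator name invented):
```
; before lowering: allocsize refers to the size parameter at index 0
declare i8* @my_alloc(i64) allocsize(0)
```
After sjlj lowering prepends the callee address as the first argument of
the __invoke_* thunk, the same size parameter sits at index 1, so the
attribute on the rewritten call must become allocsize(1).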

Reviewed By: aheejin
Differential Revision: https://reviews.llvm.org/D65470

llvm-svn: 367765
2019-08-03 21:38:19 +00:00
Tim Northover
e5745c32fd IR: print value numbers for unnamed function arguments
For consistency with normal instructions and clarity when reading IR,
it's best to print the %0, %1, ... names of function arguments in
definitions.

Also modifies the parser to accept IR in that form for obvious reasons.
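
For example, a definition with unnamed arguments now prints (and parses) as:
```
define i32 @f(i32 %0, i32 %1) {
  %2 = add i32 %0, %1
  ret i32 %2
}
```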

llvm-svn: 367755
2019-08-03 14:28:34 +00:00
Nikita Popov
51f61c0d34 [Thumb] Fix invalid symbol redefinition due to duplicated jumptable (PR42760)
Fix for https://bugs.llvm.org/show_bug.cgi?id=42760. A tBR_JTr
instruction is duplicated by tail duplication, which results in
the same jumptable with the same label being emitted twice.

Fix this by marking tBR_JTr as not duplicable. The corresponding
ARM/Thumb instructions are already marked as not duplicable.
Additionally also mark tTBB_JT and tTBH_JT to be consistent with
Thumb2, even though this shouldn't be strictly necessary.

Differential Revision: https://reviews.llvm.org/D65606

llvm-svn: 367753
2019-08-03 06:47:23 +00:00
Bill Wendling
fa94d17420 Emit diagnostic if an inline asm constraint requires an immediate
Summary:
An inline asm operand may only become an immediate after inlining. Therefore,
emit a diagnostic here if a constraint requires an immediate but one isn't
supplied.
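
A minimal sketch of the failing shape (assembly string invented): if %n is
still a plain argument after inlining, no immediate is available for the
"i" constraint, and the backend now emits a diagnostic instead of asserting:
```
define void @f(i32 %n) {
  ; the "i" constraint demands a compile-time immediate operand
  call void asm sideeffect "# imm $0", "i"(i32 %n)
  ret void
}
```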

Reviewers: joerg, mgorny, efriedma, rsmith

Reviewed By: joerg

Subscribers: asb, rbar, johnrusso, simoncook, apazos, sabuasal, niosHD, zzheng, edward-jones, rogfer01, MartinMosbeck, brucehoult, the_o, PkmX, jocewei, s.egerton, MaskRay, jyknight, dylanmckay, javed.absar, fedor.sergeev, jrtc27, Jim, krytarowski, eraman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D60942

llvm-svn: 367750
2019-08-03 05:52:47 +00:00
Eric Christopher
177c897088 Temporarily Revert "[PowerPC][NFC][MachinePipeliner] Add some regression testcases"
It's breaking a number of bots, e.g.:

http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux-bootstrap-msan/builds/13893/steps/check-llvm%20msan/logs/stdio

This reverts commit r367732.

llvm-svn: 367741
2019-08-03 01:12:55 +00:00
Amara Emerson
d894b5f8e3 Re-commit "[GlobalISel] Add legalization support for non-power-2 loads and stores""
This is an old commit that exposed a bug in the GISel importer, which caused
non-truncating stores to be selected for truncating store patterns. Now that
that's been fixed in r367737, this can go back in.

llvm-svn: 367739
2019-08-02 23:44:24 +00:00
Craig Topper
3d80f1333d [ScalarizeMaskedMemIntrin] Bitcast the mask to the scalar domain and use scalar bit tests for the branches for expandload/compressstore.
Same as what was done for gather/scatter/load/store in r367489.
Expandload/compressstore were delayed due to the lack of constant
mask handling, which has since been fixed.

llvm-svn: 367738
2019-08-02 23:43:53 +00:00
Yonghong Song
fefb19fcc0 [BPF] Handling type conversions correctly for CO-RE
With the newly added debuginfo type
metadata for the preserve_array_access_index() intrinsic,
this patch does the following two things:
 (1). checking validity before adding a new access index
      to the access chain.
 (2). calculating the access byte offset in the IR phase
      BPFAbstractMemberAccess instead of when the BTF is emitted.

For (1), the metadata provided by all preserve_*_access_index()
intrinsics are used to check whether the to-be-added type
is a proper struct/union member or array element.

For (2), with all available metadata, calculating access byte
offset becomes easier in BPFAbstractMemberAccess IR phase.
This enables us to remove the unnecessary complexity in
BTFDebug.cpp.

New tests are added for
  . user explicit casting to array/structure/union
  . global variable (or its dereference) as the source of base
  . multi-dimensional arrays
  . array access given a base pointer
  . cases where we won't generate a relocation if we cannot find a
    type name.

Differential Revision: https://reviews.llvm.org/D65618

llvm-svn: 367735
2019-08-02 23:16:44 +00:00
Jinsong Ji
4b2d844f4f [PowerPC][NFC][MachinePipeliner] Add some regression testcases
Exposed by refactoring in https://reviews.llvm.org/D64665.

llvm-svn: 367732
2019-08-02 22:27:44 +00:00
Douglas Yung
103ef490da Revert Fix and test inter-procedural register allocation for ARM
This reverts r367669 (git commit f6b00c279a5587a25876752a6ecd8da0bed959dc)

This was breaking a build bot http://lab.llvm.org:8011/builders/netbsd-amd64/builds/21233

llvm-svn: 367731
2019-08-02 22:11:49 +00:00
Yonghong Song
1f886e2f2a [BPF] annotate DIType metadata for builtin preserve_array_access_index()
Previously, debuginfo types were annotated on the
IR builtins preserve_struct_access_index() and
preserve_union_access_index(), but not
preserve_array_access_index(). The debug info
is useful to identify the root type name, which
will later be used for type comparison.

For user accesses without explicit type conversions,
the previous scheme works, as we can ignore intermediate
compiler-generated type conversions (e.g., from union types to
union members) and still generate a correct access index string.

The issue comes with user explicit type conversions, e.g.,
converting an array to a structure like below:
  struct t { int a; char b[40]; };
  struct p { int c; int d; };
  struct t *var = ...;
  ... __builtin_preserve_access_index(&(((struct p *)&(var->b[0]))->d)) ...
Although the BPF backend can derive the type of &(var->b[0]),
explicit type annotation makes checking more consistent
and less error prone.

Another benefit is for multi-dimensional array handling.
For example,
  struct p { int c; int d; } g[8][9][10];
  ... __builtin_preserve_access_index(&g[2][3][4].d) ...
It would be possible to calculate the number of "struct p"'s
before accessing its member "d" if array debug info is
available, as it contains each dimension's range.

This patch enables annotating the IR builtin preserve_array_access_index()
with the proper debuginfo type. The unit test case and language reference
are updated as well.

Signed-off-by: Yonghong Song <yhs@fb.com>

Differential Revision: https://reviews.llvm.org/D65664

llvm-svn: 367724
2019-08-02 21:28:28 +00:00
Amara Emerson
785f4c1ed2 [AArch64][GlobalISel] Eliminate redundant G_ZEXT when the source is implicitly zext-loaded.
These cases can come up when the extending loads combiner doesn't combine a
zext(load) to a zextload op, due to some other operation being in between, which
then gets simplified at a later stage.

Differential Revision: https://reviews.llvm.org/D65360

llvm-svn: 367723
2019-08-02 21:15:36 +00:00
Philip Reames
050c82570a [Statepoints] Fix overalignment of loads in no-realign-stack functions
This really should have been part of r366765.  For some reason, I forgot to handle the corresponding load side, and the readable test cases (using deopt vs statepoints) turned out to be overly reduced.  Oops.

As seen in the test change, the problem was that we were using a load with alignment expectations rather than the unaligned variant when the stack alignment was less than the preferred type alignment.

llvm-svn: 367718
2019-08-02 20:17:37 +00:00
Craig Topper
6d76a4bcea [ScalarizeMaskedMemIntrin] Add constant mask support to expandload and compressstore scalarization
This adds support for generating all the loads or stores for a constant mask into a single basic block with no conditionals.
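
A minimal sketch of the newly handled case (types illustrative):
```
declare <4 x i32> @llvm.masked.expandload.v4i32(i32*, <4 x i1>, <4 x i32>)

define <4 x i32> @expand_const(i32* %p, <4 x i32> %pt) {
  ; with the constant mask <1,0,1,1>, three consecutive loads are
  ; emitted straight-line, with no per-lane branches
  %r = call <4 x i32> @llvm.masked.expandload.v4i32(
           i32* %p, <4 x i1> <i1 1, i1 0, i1 1, i1 1>, <4 x i32> %pt)
  ret <4 x i32> %r
}
```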

Differential Revision: https://reviews.llvm.org/D65613

llvm-svn: 367715
2019-08-02 20:04:34 +00:00
Philip Reames
f4079cb7ec [Test] Demonstrate a realignment bug missed in r366765
llvm-svn: 367714
2019-08-02 20:01:43 +00:00
Sanjay Patel
6449e66c97 [DAGCombiner] try to convert opposing shifts to casts
This reverses a questionable IR canonicalization when a truncate
is free:

sra (add (shl X, N1C), AddC), N1C -->
sext (add (trunc X to (width - N1C)), AddC')

https://rise4fun.com/Alive/slRC

More details in PR42644:
https://bugs.llvm.org/show_bug.cgi?id=42644
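
A sketch of the transform written as IR for readability (the combine itself
runs on the DAG), with N1C = 16 on i32:
```
define i32 @src(i32 %x) {
  %s = shl i32 %x, 16
  %a = add i32 %s, 65536      ; AddC
  %r = ashr i32 %a, 16
  ret i32 %r
}

; ...becomes, when the i16 truncate is free:
define i32 @tgt(i32 %x) {
  %t = trunc i32 %x to i16
  %a = add i16 %t, 1          ; AddC' = AddC >> 16
  %r = sext i16 %a to i32
  ret i32 %r
}
```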

I limited this to pre-legalization for code simplicity because that
should be enough to reverse the IR patterns. I don't have any
evidence (no regression test diffs) that we need to try this later.

Differential Revision: https://reviews.llvm.org/D65607

llvm-svn: 367710
2019-08-02 19:33:46 +00:00
Jessica Paquette
7f4d840652 [AArch64][GlobalISel] Support the neg_addsub_shifted_imm32 pattern
Add an equivalent ComplexRendererFns function for SelectNegArithImmed. This
allows us to select immediate adds of -1 by turning them into subtracts.

Update select-binop.mir to show that the pattern works.
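
A minimal sketch of the kind of input this covers (function name invented):
```
define i32 @dec(i32 %x) {
  ; -1 is not encodable as an add immediate, but negating it allows
  ; selection as "sub w0, w0, #1"
  %r = add i32 %x, -1
  ret i32 %r
}
```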

Differential Revision: https://reviews.llvm.org/D65460

llvm-svn: 367700
2019-08-02 18:12:53 +00:00
Simon Pilgrim
538c639d53 [AMDGPU] Regenerated saddo.ll test file for D47927
llvm-svn: 367698
2019-08-02 17:52:55 +00:00
Peter Collingbourne
35a0672873 CodeGen: Don't follow aliases when extracting type info.
This fixes a crash in the case where the type info object is an alias
pointing to a non-zero offset within a global or is otherwise unanalyzable
by the stripPointerCasts() function. Looking through the alias is not the
right thing to do anyway for similar reasons as D65118.

Differential Revision: https://reviews.llvm.org/D65314

llvm-svn: 367696
2019-08-02 17:43:45 +00:00