mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

27713 Commits

Author SHA1 Message Date
Tim Northover
fa95942ab3 ARM: expand atomic ldrex/strex loops in IR
The previous approach, where ATOMIC_LOAD_WHATEVER nodes were expanded
at MachineInstr emission time, had grown extremely large and
involved, to account for the subtly different code needed for the
various flavours (8/16/32/64-bit, cmpxchg/add/minmax).

Moving this transformation into the IR clears up the code
substantially, and makes future optimisations much easier:

1. An atomicrmw followed by a use of the *new* value can be more
   efficient; now that the expansion is IR, simple CSE can handle
   this case.
2. Making use of cmpxchg success/failure orderings only has to be done
   in one (simpler) place.
3. The common "cmpxchg; did we store?" idiom can be exposed to
   optimisation.

I intend to gradually improve this situation within the ARM backend
and make sure there are no hidden issues before moving the code out
into CodeGen to be shared with other targets (at least ARM64/AArch64,
though I think PPC & Mips could benefit too).

llvm-svn: 205525
2014-04-03 11:44:58 +00:00
Stepan Dyatkovskiy
d535cd7ed5 PR19320:
The trouble is in ARMAsmParser, in the ParseInstruction method. It assumes that ARM::R12 + 1 == ARM::SP.
This is wrong, since ARM::&lt;Register&gt; codes are generated by tablegen and could actually be arbitrary numbers.
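
A minimal sketch of the underlying pitfall (hypothetical enum values,
not the actual ARMAsmParser code): successor registers should be looked
up explicitly rather than computed by enum arithmetic.

    // Hypothetical values: tablegen assigns register codes arbitrarily.
    enum Reg { R11 = 57, R12 = 4, SP = 23 };

    Reg successor(Reg R) {
      // Wrong: 'return Reg(R + 1);' assumes an adjacency that tablegen
      // does not guarantee. Map successors explicitly instead.
      switch (R) {
      case R11: return R12;
      case R12: return SP;
      default:  return R;
      }
    }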

llvm-svn: 205524
2014-04-03 11:29:15 +00:00
Silviu Baranga
e6ece696a8 [ARM] When generating a vpaddl node, the input lane type is not always the type of the
add operation, since extract_vector_elt can perform an extend operation. Get the input lane
type from the vector on which we're performing the vpaddl operation, and extend or
truncate it to the output type of the original add node.
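
A toy C++ illustration of the type chain involved (plain scalar code,
not the DAG lowering): vpaddl widens its input lanes, and the result
must then be extended or truncated to whatever type the original add
produced.

    #include <cstdint>

    // vpaddl on i8 lanes yields i16 pairwise sums; the original add is
    // assumed (for illustration) to have type i32, so we extend.
    int32_t vpaddl_pair(int8_t a, int8_t b) {
      int16_t pairwise = int16_t(a) + int16_t(b); // lane type comes from the input vector
      return int32_t(pairwise);                   // extend to the add's output type
    }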

llvm-svn: 205523
2014-04-03 10:44:27 +00:00
Sasa Stankovic
fd4cd957f3 [mips] Extend MipsMCExpr class to handle %higher(sym1 - sym2 + const) and
%highest(sym1 - sym2 + const) relocations. Remove "ABS_" from VK_Mips_HI
and VK_Mips_LO enums in MipsMCExpr, to be consistent with VK_Mips_HIGHER
and VK_Mips_HIGHEST.

This change also deletes test file test/MC/Mips/higher_highest.ll and moves
its CHECK's to the new test file test/MC/Mips/higher-highest-addressing.s.
The deleted file tests that R_MIPS_HIGHER and R_MIPS_HIGHEST relocations are
emitted in the .o file. Since it uses the -force-mips-long-branch option, it was
created when MipsLongBranch's implementation was emitting R_MIPS_HIGHER and
R_MIPS_HIGHEST relocations in the .o file. It was disabled when MipsLongBranch
started to directly calculate offsets.

Differential Revision: http://llvm-reviews.chandlerc.com/D3230

llvm-svn: 205522
2014-04-03 10:37:45 +00:00
Tim Northover
49892dbaff ARM64: always use i64 for the RHS of shift operations
Switching between i32 and i64 based on the LHS type is a good idea in
theory, but pre-legalisation uses i64 regardless of our choice,
leading to potential ISel errors.

Should fix PR19294.

llvm-svn: 205519
2014-04-03 09:26:16 +00:00
Oliver Stannard
4de3ae25ad ARM: Use __STACK_LIMIT symbol for segmented stacks
We cannot use STACK_LIMIT, as it is not reserved for the compiler
by the C spec.
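
For context (a fact about the C standard, not code from this commit):
identifiers beginning with a double underscore are reserved for the
implementation, so the compiler may claim __STACK_LIMIT but not
STACK_LIMIT.

    // Perfectly legal user code, which would clash with a
    // compiler-claimed STACK_LIMIT symbol:
    #define STACK_LIMIT 4096
    // __STACK_LIMIT, by contrast, is in the implementation's reserved
    // namespace, so the compiler is free to use it.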

llvm-svn: 205516
2014-04-03 08:45:16 +00:00
Tim Northover
14ff8399f5 ARM64: don't generate __sincos_stret calls unless on MachO
This should fix PR19314.

llvm-svn: 205514
2014-04-03 07:06:13 +00:00
Lang Hames
c739f18eda [X86] As per suggestion from Craig Topper and Hal Finkel, override
TargetInstrInfo::findCommutedOpIndices to enable VFMA*231 commutation, rather
than abusing commuteInstruction.

Thanks very much for the suggestion guys!

llvm-svn: 205489
2014-04-02 23:57:49 +00:00
Hal Finkel
7f4525afba [PowerPC] Make PPCTTI::getMemoryOpCost call BasicTTI::getMemoryOpCost
PPCTTI::getMemoryOpCost will now make use of BasicTTI::getMemoryOpCost to
calculate the base cost of the memory access, and then adjust on top of that.
There is no functionality change from this modification, but it will become
important in the near future, when PPCTTI will take advantage of the
scalarization information that BasicTTI::getMemoryOpCost will account for.
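
A toy C++ sketch of the layering described (simplified names and costs,
not the real TTI interface): the target computes the generic base cost
first, then adjusts it with target-specific knowledge on top.

    struct BasicCostModel {
      virtual ~BasicCostModel() = default;
      virtual unsigned getMemoryOpCost(unsigned Bits) const {
        return Bits / 64 + 1; // generic base cost (toy formula)
      }
    };

    struct PPCCostModel : BasicCostModel {
      unsigned getMemoryOpCost(unsigned Bits) const override {
        unsigned Cost = BasicCostModel::getMemoryOpCost(Bits); // base cost
        if (Bits > 64)
          Cost += 1; // hypothetical PPC-specific adjustment on top
        return Cost;
      }
    };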

llvm-svn: 205476
2014-04-02 22:43:49 +00:00
Lang Hames
e2f8671084 [X86] Make the VFMA*231 variants commutable and relax the alignment restrictions
on FMA3 memory operands. FMA3 instructions are VEX encoded, so they can load
from unaligned memory.

Testcase to follow, along with related patch.

<rdar://problem/16478629>

llvm-svn: 205472
2014-04-02 22:06:16 +00:00
Juergen Ributzka
f7fd4ffde4 Add comments and test case for [X86TTI] Make constant base pointers for GetElementPtr opaque (r204739).
llvm-svn: 205468
2014-04-02 21:45:36 +00:00
Saleem Abdulrasool
ab884ce73f ARM: update subtarget information for Windows on ARM
Update the subtarget information for Windows on ARM.  This enables using the MC
layer to target Windows on ARM.

llvm-svn: 205459
2014-04-02 20:32:05 +00:00
Jim Grosbach
77e0767757 Make a few more range-based loops use explicit types.
No functional change.

llvm-svn: 205458
2014-04-02 20:21:22 +00:00
Tom Stellard
7fed4dd0dd TargetLibraryInfo: Disable memcpy and memset on R600
There are no implementations of these for R600.

llvm-svn: 205455
2014-04-02 19:53:29 +00:00
Jim Grosbach
f349849199 Simplify resolveFrameIndex() signature.
Just pass a MachineInstr reference rather than an MBB iterator.
Creating a MachineInstr& is the first thing every implementation did
anyway.

llvm-svn: 205453
2014-04-02 19:28:18 +00:00
Jim Grosbach
b0339a1fda ARM: cortex-m0 doesn't support unaligned memory access.
Unlike other v6+ processors, cortex-m0 never supports unaligned accesses.
From the v6m ARM ARM:

"A3.2 Alignment support: ARMv6-M always generates a fault when an unaligned
access occurs."

rdar://16491560
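
An illustrative C++ fragment (not from this commit) of why the
distinction matters: on targets where unaligned access faults, the
backend must lower an unaligned 32-bit load to byte loads, which is
what the memcpy form compiles to.

    #include <cstdint>
    #include <cstring>

    uint32_t load_u32(const uint8_t *p) {
      uint32_t v;
      std::memcpy(&v, p, sizeof(v)); // byte-wise on v6-M; a single word
      return v;                      // load would fault if p were unaligned
    }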

llvm-svn: 205452
2014-04-02 19:28:13 +00:00
Jim Grosbach
0d43fd22db Make some range based loop types more explicit.
No functional change, but more readable code.

llvm-svn: 205451
2014-04-02 19:28:08 +00:00
Kai Nacke
e4a52800d5 [mips] Add more Octeon cnMips instructions
Adds the ext/ext32/cins/cins32 instructions.
It also changes pop/dpop to accept the two-operand version and
adds a simple pattern to generate baddu.
Tests for the two-operand versions (including baddu/dmul/dpop/pop)
and the code generation pattern for baddu are included.

Reviewed by: Daniel.Sanders@imgtec.com

llvm-svn: 205449
2014-04-02 18:40:43 +00:00
Jim Grosbach
c6775630e5 [C++11,ARM64] Range based for and explicit 'override' in STP cleanup.
No functional change intended.

llvm-svn: 205446
2014-04-02 18:00:59 +00:00
Jim Grosbach
c7a2ceb7f1 [C++11,ARM64] Range based for loops in constant promotion.
No functional change intended.

llvm-svn: 205445
2014-04-02 18:00:56 +00:00
Jim Grosbach
2429467686 [C++11,ARM64] Range based for loops in load/store pair optimizer.
No functional change intended.

llvm-svn: 205444
2014-04-02 18:00:53 +00:00
Jim Grosbach
53b22df210 [C++11,ARM64] Range based for loops in target lowering.
No functional change intended.

llvm-svn: 205443
2014-04-02 18:00:51 +00:00
Jim Grosbach
b076c81855 [C++11,ARM64] Range based for loops in frame lowering.
No functional change intended.

llvm-svn: 205442
2014-04-02 18:00:49 +00:00
Jim Grosbach
585d96e423 [C++11,ARM64] Range based for loops in pseudo expansion.
No functional change intended.

llvm-svn: 205441
2014-04-02 18:00:46 +00:00
Jim Grosbach
7cce9ef6c2 [C++11,ARM64] Range based for loops for LOH
No functional change intended.

llvm-svn: 205440
2014-04-02 18:00:44 +00:00
Jim Grosbach
e9f6cff80e [C++11,ARM64] Range based for loops TLS cleanup.
No functional change intended.

llvm-svn: 205439
2014-04-02 18:00:41 +00:00
Jim Grosbach
6445ae410b [C++11,ARM64] Range based for loops in branch relaxation.
No functional change intended.

llvm-svn: 205438
2014-04-02 18:00:39 +00:00
Jim Grosbach
14e1472e9c [C++11,ARM64] Range based for loops in address type promotion.
No functional change intended.

llvm-svn: 205437
2014-04-02 18:00:36 +00:00
Quentin Colombet
7b975744c7 [ARM64][CollectLOH] Remove the link to the radar from the comments.
llvm-svn: 205435
2014-04-02 16:40:49 +00:00
Oliver Stannard
e941b27161 ARM: Add support for segmented stacks
Patch by Alex Crichton, ILyoan, Luqman Aden and Svetoslav.

llvm-svn: 205430
2014-04-02 16:10:33 +00:00
Tim Northover
0688295f2f ARM64: use GOT for weak symbols & PIC.
Weak symbols cannot use the small code model's usual ADRP sequences since the
instruction simply may not be able to encode a value of 0.

This redirects them to use the GOT, which hopefully linkers are able to cope
with even in the static relocation model.

llvm-svn: 205426
2014-04-02 14:39:11 +00:00
Tim Northover
01192b285d ARM64: fix lowering of fp128 fptosi/fptoui
We were creating libcall nodes that returned an MVT::f128, when these
particular operations actually return an int of some stripe.

llvm-svn: 205425
2014-04-02 14:39:07 +00:00
Tim Northover
128b34bf73 ARM64: make sure first argument to INSERT_SUBVECTOR has right type.
As before, coalescing and other optimisations swiftly made the MachineInstrs
consistent, but when compiled at -O0 a bad INSERT_SUBREGISTER was
produced.

llvm-svn: 205423
2014-04-02 14:38:58 +00:00
Tim Northover
47864f21b8 ARM64: convert fp16 narrowing ISel to pseudo-instruction
The previous attempt was fine with optimisations, but was actually rather
cavalier with its types. When compiled at -O0, it produced invalid COPY
MachineInstrs.

llvm-svn: 205422
2014-04-02 14:38:54 +00:00
Job Noorman
69bfc47f3d Mark FPB as a reserved register when needed.
llvm-svn: 205421
2014-04-02 13:13:56 +00:00
Renato Golin
479ae7528b Remove duplicated DMB instructions
An ARM-specific optimisation that finds places in ARM machine code where two
DMBs follow one another, and eliminates one of them.
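
A toy model of the peephole (strings standing in for MachineInstrs;
illustrative only, not the actual pass):

    #include <string>
    #include <vector>

    // Drop the second of two immediately adjacent, identical barriers:
    // back-to-back DMBs with the same option are redundant.
    std::vector<std::string>
    removeDuplicateDMBs(const std::vector<std::string> &MIs) {
      std::vector<std::string> Out;
      for (const std::string &MI : MIs) {
        if (MI == "dmb ish" && !Out.empty() && Out.back() == "dmb ish")
          continue; // redundant second barrier
        Out.push_back(MI);
      }
      return Out;
    }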

Patch by Reinoud Elhorst.

llvm-svn: 205409
2014-04-02 09:03:43 +00:00
Yaron Keren
18f37e5fc5 Added isTargetWindowsMSVC(), renamed isTargetMingw() to isTargetWindowsGNU()
and isTargetCygwin() to isTargetWindowsCygwin() to be consistent with the
four Windows environments in Triple.h.

Suggestion by Saleem Abdulrasool!

llvm-svn: 205393
2014-04-02 04:27:51 +00:00
Quentin Colombet
2f9673e7ea [ARM64][CollectLOH] Add some comments to explain how the LOHs
framework works (for the compiler part), since the design
document is not available.

llvm-svn: 205379
2014-04-02 01:02:28 +00:00
Hal Finkel
3d67e62e4c [PowerPC] Add some missing VSX bitcast patterns
llvm-svn: 205352
2014-04-01 19:24:27 +00:00
Yaron Keren
5b97697cc3 If isKnownWindowsMSVCEnvironment is true, then getOS == Triple::Win32 and
Environment == Triple::MSVC, so it will never be MinGW or Cygwin.

llvm-svn: 205349
2014-04-01 18:52:55 +00:00
Hal Finkel
0ebd182c7a Implement X86TTI::getUnrollingPreferences
This provides an initial implementation of getUnrollingPreferences for x86.
getUnrollingPreferences is used by the generic (concatenation) unroller, which
is distinct from the unrolling done by the loop vectorizer. Many modern x86
cores have some kind of uop cache and loop-stream detector (LSD) used to
efficiently dispatch small loops, and taking full advantage of this requires
unrolling small loops (small here means 10s of uops).

These caches also have limits on the number of taken branches in the loop, and
so we also cap the loop unrolling factor based on the maximum "depth" of the
loop. This is currently calculated with a partial DFS traversal (partial
because it will stop early if the path length grows too much). This is still an
approximation, and one that is both conservative (because it does not account
for branches eliminated via block placement) and optimistic (because it is only
recording the maximum depth over minimum paths). Nevertheless, because the
loops that fit in these uop caches are so small, it is not clear how much the
details matter.
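
A toy C++ sketch of such a budget-capped DFS (illustrative, not the
LLVM implementation); the early stop both approximates the depth and
bounds the traversal:

    #include <algorithm>
    #include <vector>

    struct Block { std::vector<int> Succs; }; // toy CFG node

    // Maximum path depth from BB, giving up once Depth hits Budget. The
    // budget also bounds the recursion, so cycles cannot loop forever.
    int maxDepth(const std::vector<Block> &CFG, int BB, int Depth,
                 int Budget) {
      if (Depth >= Budget)
        return Depth; // stop early: the path is already too long
      int Max = Depth;
      for (int S : CFG[BB].Succs)
        Max = std::max(Max, maxDepth(CFG, S, Depth + 1, Budget));
      return Max;
    }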

The original set of patches posted for review produced the following test-suite
performance results (from the TSVC benchmark) at that time:
  ControlLoops-dbl - 13% speedup
  ControlLoops-flt - 15% speedup
  Reductions-dbl - 7.5% speedup

llvm-svn: 205348
2014-04-01 18:50:34 +00:00
Kai Nacke
0d60ee32af [mips] Add Octeon cnMips instructions mtmX and mtpX
Adds the Octeon cnMips instructions "load multiplier register MPLx" and "load product register Px".
Includes tests.

Reviewed by: Daniel.Sanders@imgtec.com

llvm-svn: 205343
2014-04-01 18:35:26 +00:00
Reid Kleckner
1ad832f876 Support segmented stacks on Win64
Identical to the Win32 method, except that the GS segment register is used for
TLS instead of FS, and pvArbitrary is at TEB offset 0x28 instead of 0x14.
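
A minimal MSVC-only sketch of the two addressing modes described
(intrinsics from <intrin.h>; illustrative, not the emitted code):

    #include <intrin.h>

    void *readTEBSlot() {
    #ifdef _M_X64
      return reinterpret_cast<void *>(__readgsqword(0x28)); // GS on Win64
    #else
      return reinterpret_cast<void *>(__readfsdword(0x14)); // FS on Win32
    #endif
    }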

llvm-svn: 205342
2014-04-01 18:34:21 +00:00
Yaron Keren
0524cf3d81 isTargetWindows() renamed to isTargetKnownWindowsMSVC()
to reflect its current functionality.

Based on a suggestion by Takumi NAKAMURA.

llvm-svn: 205338
2014-04-01 18:15:34 +00:00
Christian Pirker
fb4268517c ARM: rename ARMle/ARMbe to ARMLE/ARMBE, and Thumble/Thumbbe to ThumbLE/ThumbBE
llvm-svn: 205317
2014-04-01 15:19:30 +00:00
Tim Northover
01da3ca92b ARM: teach LLVM that Cortex-A7 is very similar to A8.
llvm-svn: 205314
2014-04-01 14:10:07 +00:00
Aaron Ballman
617b4f6afb Attempting to fix r205124, which triggered assertion failures when built with MSVC.
Suggestion from Yaron Keren.

llvm-svn: 205313
2014-04-01 13:56:35 +00:00
Tim Northover
91bcd8d2eb ARM: add cyclone CPU with ZeroCycleZeroing feature.
The Cyclone CPU is similar to swift for most LLVM purposes, but does have two
preferred instructions for zeroing a VFP register. This teaches LLVM about
them.

llvm-svn: 205309
2014-04-01 13:22:02 +00:00
Daniel Sanders
fe763dddc1 [mips] Renamed ParseAnyRegisterWithoutDollar to MatchAnyRegisterWithoutDollar
This is for consistency with other functions. The Parse* functions consume
tokens and the Match* functions don't.

No functional change.

llvm-svn: 205305
2014-04-01 12:35:23 +00:00
Aaron Ballman
2d51f6d8cb Fixing an MSVC warning about widening the result of a 32-bit shift implicitly. No functional change intended.
llvm-svn: 205304
2014-04-01 12:24:25 +00:00