mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-30 15:32:52 +01:00
Commit Graph

7712 Commits

Author SHA1 Message Date
Andrew Trick
5d13fe97ed Machine Model: Add MicroOpBufferSize and resource BufferSize.
Replace the ill-defined MinLatency and ILPWindow properties with
straightforward buffer sizes:
MCSchedModel::MicroOpBufferSize
MCProcResourceDesc::BufferSize

These can be used to more precisely model instruction execution if desired.

Disabled some misched tests temporarily. They'll be reenabled in a few commits.

llvm-svn: 184032
2013-06-15 04:49:57 +00:00
David Blaikie
d1a8512cd8 Debug Info: Don't print the display name and colon prefix for DEBUG_VALUE comments if the display name is empty
llvm-svn: 184026
2013-06-15 00:33:47 +00:00
Tom Stellard
0f71ebc5a8 R600: Add SI load support for v[24]i32 and store for v2i32
Also add a separate vector lit test file, since r600 doesn't seem to handle
v2i32 load/store yet, but we can test both for SI.

Patch by: Aaron Watry

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>
llvm-svn: 184021
2013-06-15 00:09:31 +00:00
Tom Stellard
428a19228a R600: Use correct encoding for Vertex Fetch instructions on Cayman
Reviewed-by: Vincent Lejeune<vljn at ovi.com>
llvm-svn: 184016
2013-06-14 22:12:30 +00:00
Tom Stellard
c25852891f R600: Use EXPORT_RAT_INST_STORE_DWORD for stores on Cayman
We were using RAT_INST_STORE_RAW, which seemed to work, but the docs
say this instruction doesn't exist for Cayman, so it's probably safer
to use a documented instruction instead.

Reviewed-by: Vincent Lejeune<vljn at ovi.com>
llvm-svn: 184015
2013-06-14 22:12:24 +00:00
Tim Northover
11fabb62e9 Mark rematerialized super/sub registers as dead.
When we're rematerializing into a not-quite-right register we already add the
real definition as an imp-def, but we should also be marking the "official"
register as dead, since nothing else is going to use it as a result of this
remat.

Not doing this can affect pressure tracking.

rdar://problem/14158833

llvm-svn: 184002
2013-06-14 20:22:21 +00:00
Stephen Lin
ccd266c666 SelectionDAG: Fix incorrect condition checks in some cases of folding FADD/FMUL combinations; also improve accuracy of comments
llvm-svn: 183993
2013-06-14 18:17:35 +00:00
Derek Schuff
4c429cad03 Make PrologEpilogInserter save/restore all callee saved registers
in functions which call __builtin_unwind_init()

__builtin_unwind_init() is an undocumented gcc intrinsic which has this effect,
and is used in libgcc_eh.

Goes part of the way toward fixing PR8541.
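
For illustration only (not part of the patch), a minimal C++ sketch of the kind of function affected; walk_saved_context is a hypothetical helper, while __builtin_unwind_init() and __builtin_frame_address() are the real GCC/Clang builtins:

    #include <cstdio>

    // Hypothetical consumer of the saved context; in libgcc_eh this role is
    // played by the unwinder itself.
    static void walk_saved_context(void *frame) {
      std::printf("frame %p now has every callee-saved register spilled\n", frame);
    }

    void capture_context() {
      // Forces the prologue to save (and the epilogue to restore) all
      // callee-saved registers, which is what this commit implements.
      __builtin_unwind_init();
      walk_saved_context(__builtin_frame_address(0));
    }

    int main() { capture_context(); }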

llvm-svn: 183984
2013-06-14 16:15:29 +00:00
Benjamin Kramer
a0c15494c5 X86: cvtpi2ps is just an SSE instruction with MMX operands. It has no AVX equivalent.
Give it the right register format so we can also emit it when AVX is enabled.

llvm-svn: 183971
2013-06-14 09:31:41 +00:00
JF Bastien
cb60eaba94 Enable FastISel on ARM for Linux and NaCl, not MCJIT
This is a resubmit of r182877, which was reverted because it broke
MCJIT tests on ARM. The patch leaves MCJIT on ARM as it was before: only
enabled for iOS. I've CC'ed people from the original review and revert.

FastISel was only enabled for iOS ARM and Thumb2, this patch enables it
for ARM (not Thumb2) on Linux and NaCl, but not MCJIT.

Thumb2 support needs a bit more work, mainly around register class
restrictions.

The patch punts to SelectionDAG when doing TLS relocation on non-Darwin
targets. I will fix this and other FastISel-to-SelectionDAG failures in
a separate patch.

The patch also forces FastISel to retain frame pointers: iOS always
keeps them for backtracking (so emitted code won't change because of
this), but Linux was getting much worse code that was incorrect when
using big frames (such as test-suite's lencod). I'll also fix this in a
later patch, it will probably require a peephole so that FastISel
doesn't rematerialize frame pointers back-to-back.

The test changes are straightforward, similar to:
  http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130513/174279.html
They also add a vararg test that got dropped in that change.

I ran all of lnt test-suite on A15 hardware with --optimize-option=-O0
and all the tests pass. All the tests also pass on x86 make check-all. I
also re-ran the check-all tests that failed on ARM, and they all seem to
pass.

llvm-svn: 183966
2013-06-14 02:49:43 +00:00
Bill Schmidt
fac3a79629 [PowerPC] Disable fast-isel for existing -O0 tests for PowerPC.
This is a preliminary patch for fast instruction selection on
PowerPC.  Code generation can differ between DAG isel and fast isel.
Existing tests that specify -O0 were written to expect DAG isel.  Make
this explicit by adding -fast-isel=false to the tests.

In some cases specifying -fast-isel=false produces different code even
when there isn't a fast instruction selector specified.  This is
because TM.Options.EnableFastISel = 1 at -O0 whether or not a FastISel
object exists.  Thus disabling fast isel can actually produce less
conservative code.  Because of this, some of the expected code
generation in the -O0 tests needs to be adjusted.

In particular, handling of function arguments is less conservative
with -fast-isel=false (see isOnlyUsedInEntryBlock() in
SelectionDAGBuilder.cpp).  This results in fewer stack accesses and,
in some cases, reduced stack size as uselessly loaded values are no
longer stored back to spill locations in the stack.

No functional change with this patch; test case adjustments only.

llvm-svn: 183939
2013-06-13 20:23:34 +00:00
Akira Hatanaka
8332c68b45 [mips] Add an IR transformation pass that optimizes calls to sqrt.
The pass emits a call to sqrt that has attribute "read-none". This call will be
converted to an ISD::FSQRT node during DAG construction, which will turn into
a mips native sqrt instruction.
 

llvm-svn: 183802
2013-06-11 22:21:44 +00:00
Tim Northover
4ba890d132 X86: Stop LEA64_32r doing unspeakable things to its arguments.
Previously LEA64_32r went through virtually the entire backend thinking it was
using 32-bit registers until its blissful illusions were cruelly snatched away
by MCInstLower and 64-bit equivalents were substituted at the last minute.

This patch makes it behave normally, and take 64-bit registers as sources all
the way through. Previous uses (for 32-bit arithmetic) are accommodated via
SUBREG_TO_REG instructions which make the types and classes agree properly.

llvm-svn: 183693
2013-06-10 20:43:49 +00:00
Justin Holewinski
fa2e2511f8 [NVPTX] Remove old CONST_NOT_GEN address space that is not being used anymore and causes constants to be emitted in the global address space
llvm-svn: 183652
2013-06-10 13:29:47 +00:00
JF Bastien
be1c9b906e Add test for ARM FastISel load/store register classes
r183624 fixed an issue that was tested indirectly. Test it directly with this new test.

llvm-svn: 183634
2013-06-10 00:35:57 +00:00
Reed Kotler
8176eeb183 Fix a regression I introduced when I expanded the complex pseudos in
the Mips16 port. A few of the pseudos can take either signed
or unsigned arguments; I did not distinguish the two cases and improperly
rejected some valid cases that the assembler had previously accepted
when they were pure pseudos that expanded as assembly instructions.

llvm-svn: 183633
2013-06-09 23:23:46 +00:00
Logan Chien
61f1a9f5d2 Refine the ARM EHABI test cases.
Since we have ARM unwind directive parser and assembler, we
can check the correctness in two stages:

1. From LLVM assembly (.ll) to ARM assembly (.s)
2. From ARM assembly (.s) to ELF object file (.o)

We already have several "*.s to *.o" test cases.  This CL adds
some "*.ll to *.s" test cases and removes the redundant "*.ll to *.o"
test cases.

New test cases to check "*.ll to *.s" code generator:

- ehabi.ll: Check the correctness of the generated unwind directives.
- section-name.ll: Check the section name of functions.

Removed test cases:

- ehabi-mc-cantunwind.ll
  (Covered by ehabi-cantunwind.ll, and eh-directive-cantunwind.s)
- ehabi-mc-compact-pr0.ll
  (Covered by ehabi.ll, eh-compact-pr0.s, eh-directive-save.s, and
   eh-directive-setfp.s)
- ehabi-mc-compact-pr1.ll
  (Covered by ehabi.ll, eh-compact-pr1.s, eh-directive-save.s, and
   eh-directive-setfp.s)
- ehabi-mc.ll
  (Covered by ehabi.ll, and eh-directive-integrated-test.s)
- ehabi-mc-section-group.ll
  (Covered by section-name.ll, and eh-directive-section-comdat.s)
- ehabi-mc-section.ll
  (Covered by section-name.ll, and eh-directive-section.s)
- ehabi-mc-sh_link.ll
  (Covered by eh-directive-text-section.s, and eh-directive-section.s)

llvm-svn: 183628
2013-06-09 12:36:57 +00:00
Logan Chien
fa2266ded0 Fix ARM unwind opcode assembler in several cases.
Changes to ARM unwind opcode assembler:

* Fix handling of multiple .save or .vsave directives.  In addition,
  the order is now preserved.

* For the directives which will generate multiple opcodes,
  such as ".save {r0-r11}", the order of the unwind opcode
  is fixed now, i.e. the registers with less encoding value
  are popped first.

* Fix the $sp offset calculation.  Now, we can use the
  .setfp, .pad, .save, and .vsave directives in any order.

Changes to test cases:

* Add test cases to check the order of multiple opcodes
  for the .save directive.

* Fix the incorrect $sp offset in the test case.  The
  stack pointer offset specified in the test case was
  incorrect.  (Changed test cases: ehabi-mc-section.ll and
  ehabi-mc.ll)

* The opcodes to restore $sp are slightly reordered.  The
  behavior is not changed, and the new output is the same
  as the output of GNU as.  (Changed test cases:
  eh-directive-pad.s and eh-directive-setfp.s)

llvm-svn: 183627
2013-06-09 12:22:30 +00:00
Elena Demikhovsky
1c6fe6138b Removed PackedDouble domain from scalar instructions. Added more formats for the scalar stuff.
llvm-svn: 183626
2013-06-09 07:37:10 +00:00
Venkatraman Govindaraju
9767317993 [Sparc] Delete FPMover Pass and remove Fp* Pseudo-instructions from Sparc backend.
llvm-svn: 183613
2013-06-08 15:32:59 +00:00
Quentin Colombet
288faf08d2 Reapply r183552. This time, use a standard type for the option to avoid template
instantiation issue with non-standard type.

Add a backend option to warn on a given stack size limit.
Option: -mllvm -warn-stack-size=<limit>
Output (if limit is exceeded):
warning: Stack size limit exceeded (<actual size>) in <functionName>.

The longer term plan is to hook that to a clang warning.
PR:4072
<rdar://problem/13987214>.

llvm-svn: 183595
2013-06-08 00:07:54 +00:00
Vincent Lejeune
2f252fdf26 R600: Anti dep better handled in tex clause
llvm-svn: 183592
2013-06-07 23:30:26 +00:00
Jakob Stoklund Olesen
3a549110cd Add missing zextloadi1 to i64 patterns. PR16721.
llvm-svn: 183587
2013-06-07 22:55:05 +00:00
Hal Finkel
9c0ef0659d Disallow i64 div/rem in PPC32 counter loops
On PPC32, [su]div,rem on i64 types are transformed into runtime library
function calls. As a result, they are not allowed in counter-based loops (the
counter-loops verification pass caught this error; this change fixes PR16169).

llvm-svn: 183581
2013-06-07 22:16:19 +00:00
Quentin Colombet
8f78b800ab Revert commits related to stack warning.
llvm-svn: 183579
2013-06-07 22:14:50 +00:00
Quentin Colombet
4a1b82f036 Explicit triple in warn stack size test cases to not depend on OS.
llvm-svn: 183574
2013-06-07 21:09:42 +00:00
Tom Stellard
7c091ffbf7 R600: Fix calculation of stack offset in AMDGPUFrameLowering
We weren't computing structure size correctly and we were relying on
the original alloca instruction to compute the offset, which isn't
always reliable.

Reviewed-by: Vincent Lejeune <vljn@ovi.com>
llvm-svn: 183568
2013-06-07 20:52:05 +00:00
Tom Stellard
f4646ab025 R600: Fix the fetch limits for R600 generation GPUs
Reviewed-by: Vincent Lejeune <vljn@ovi.com>

https://bugs.freedesktop.org/show_bug.cgi?id=64257

llvm-svn: 183560
2013-06-07 20:28:55 +00:00
Quentin Colombet
474d9dcceb Add a backend option to warn on a given stack size limit.
Option: -mllvm -warn-stack-size=<limit>
Output (if limit is exceeded):
warning: Stack size limit exceeded (<actual size>) in <functionName>.

The longer term plan is to hook that to a clang warning.
PR:4072
<rdar://problem/13987214>

llvm-svn: 183552
2013-06-07 20:18:12 +00:00
JF Bastien
c950ffbe08 ARM FastISel integer sext/zext improvements
My recent ARM FastISel patch exposed this bug:
  http://llvm.org/bugs/show_bug.cgi?id=16178
The root cause is that it can't select integer sext/zext pre-ARMv6 and
asserts out.

The current integer sext/zext code doesn't handle other cases gracefully
either, so this patch makes it handle all sext and zext from i1/i8/i16
to i8/i16/i32, with and without ARMv6, both in Thumb and ARM mode. This
should fix the bug as well as make FastISel faster because it bails to
SelectionDAG less often. See fastisel-ext.patch for this.

fastisel-ext-tests.patch changes current tests to always use reg-imm AND
for 8-bit zext instead of UXTB. This simplifies code since it is
supported on ARMv4t and later, and at least on A15 both should perform
exactly the same (both have exec 1 uop 1, type I).

2013-05-31-char-shift-crash.ll is a bitcode version of the above bug
16178 repro.

fast-isel-ext.ll tests all sext/zext combinations that ARM FastISel
should now handle.

Note that my ARM FastISel enabling patch was reverted due to a separate
failure when dealing with MCJIT, I'll fix this second failure and then
turn FastISel on again for non-iOS ARM targets.

I've tested "make check-all" on my x86 box, and "lnt test-suite" on A15
hardware.
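
For illustration (not from the patch), the source-level shapes of the extensions FastISel now selects directly; the comments describe plausible lowerings, not exact output:

    // i8 -> i32 zero-extension: can be selected as a reg-imm AND with 255,
    // which is available from ARMv4t onwards.
    unsigned zext_u8(unsigned char c) { return c; }

    // i16 -> i32 sign-extension: shift-left/shift-right pre-ARMv6, SXTH on v6+.
    int sext_s16(short s) { return s; }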

llvm-svn: 183551
2013-06-07 20:10:37 +00:00
Quentin Colombet
a8970e620f Teach AsmPrinter how to print odd constants.
Fix an assertion when the compiler encounters big constants whose bit width is
not a multiple of 64-bits.
Although clang would never generate something like this, the backend should be
able to handle any legal IR.

<rdar://problem/13363576>

llvm-svn: 183544
2013-06-07 18:36:03 +00:00
Roman Divacky
8439b144e6 Fix a typo in asm string of BP* family of instructions. With this fix
I am able to compile/assemble/link/run /bin/echo from FreeBSD.

llvm-svn: 183537
2013-06-07 17:46:57 +00:00
Rafael Espindola
7b8382bcbc Support OpenBSD's native frame protection conventions.
OpenBSD's stack smashing protection differs slightly from other
platforms:

  1. The smash handler function is "__stack_smash_handler(const char
     *funcname)" instead of "__stack_chk_fail(void)".

  2. There's a hidden "long __guard_local" object that gets linked
     into each executable and DSO.

Patch by Matthew Dempsky.

llvm-svn: 183533
2013-06-07 16:35:57 +00:00
Venkatraman Govindaraju
867986b1ff [Sparc]: Use cmp instruction instead of subcc to compare integers.
llvm-svn: 183463
2013-06-07 00:03:36 +00:00
Vincent Lejeune
dd2a468cbd R600: Add a pass that merge Vector Register
Previously committed @183279 but tests were failing; reverted @183286.
It was broken because @183336 was missing; now it's there.

llvm-svn: 183343
2013-06-05 21:38:04 +00:00
Vincent Lejeune
cc7d08e974 R600: Schedule copy from phys register at beginning of block
It allows the regalloc pass to remove them by trivially assigning the associated reg.

llvm-svn: 183336
2013-06-05 20:27:35 +00:00
Akira Hatanaka
b5546583ad [mips] brcond + setgt/setugt instruction selection patterns.
llvm-svn: 183334
2013-06-05 19:49:55 +00:00
Michael Liao
d464e3d65b [PATCH] Fix VGATHER* operand constraints
Add earlyclobber constraints to prevent an input register from being allocated as
the output register because, according to Intel spec [1], "If any pair
of the index, mask, or destination registers are the same, this
instruction results a UD fault."

---
[1] http://software.intel.com/sites/default/files/319433-014.pdf

llvm-svn: 183327
2013-06-05 18:12:26 +00:00
Tom Stellard
ecf6bd2eaa R600: Make sure to schedule AR register uses and defs in the same clause
Reviewed-by: vljn at ovi.com
llvm-svn: 183294
2013-06-05 03:43:06 +00:00
Rafael Espindola
f5e919b2e6 Revert "R600: Add a pass that merge Vector Register"
This reverts commit r183279. CodeGen/R600/texture-input-merge.ll was failing.

llvm-svn: 183286
2013-06-05 01:48:30 +00:00
Vincent Lejeune
57d56af481 R600: Add a pass that merge Vector Register
llvm-svn: 183279
2013-06-04 23:17:26 +00:00
Vincent Lejeune
7c89765008 R600: Const/Neg/Abs can be folded to dot4
llvm-svn: 183278
2013-06-04 23:17:15 +00:00
Evan Cheng
1c010771f4 Cortex-R5 can issue Thumb2 integer division instructions.
llvm-svn: 183275
2013-06-04 22:52:09 +00:00
David Majnemer
d0dc0d58f6 ARM: Fix crash in ARM backend inside of ARMConstantIslandPass
The ARM backend did not expect LDRBi12 to hold a constant pool operand.
Allow for LLVM to deal with the instruction similar to how it deals with
LDRi12.

This fixes PR16215.

llvm-svn: 183238
2013-06-04 17:46:15 +00:00
Vincent Lejeune
8d2ef79cb9 R600: Swizzle texture/export instructions
llvm-svn: 183229
2013-06-04 15:04:53 +00:00
Vincent Lejeune
e7cc832b43 R600: Add a test for r183108
llvm-svn: 183228
2013-06-04 15:03:35 +00:00
Tom Stellard
0c2bbb2a1f R600/SI: Add support for work item and work group intrinsics
llvm-svn: 183138
2013-06-03 17:40:18 +00:00
Tom Stellard
0faf53682e R600/SI: Add a calling convention for compute shaders
llvm-svn: 183137
2013-06-03 17:40:11 +00:00
Tom Stellard
47a52f3e69 R600/SI: Custom lower i64 sign_extend
llvm-svn: 183136
2013-06-03 17:40:03 +00:00
Tom Stellard
7e44e13b15 R600/SI: Add support for global loads
llvm-svn: 183131
2013-06-03 17:39:43 +00:00
Vincent Lejeune
55871f8f8a R600: use capital letter for PV channel
llvm-svn: 183107
2013-06-03 15:44:35 +00:00
Venkatraman Govindaraju
2d5d39937e Sparc: Add support for indirect branch and blockaddress in Sparc backend.
llvm-svn: 183094
2013-06-03 05:58:33 +00:00
Venkatraman Govindaraju
b47bd839e0 Sparc: When storing 0, use %g0 directly in the store instruction instead of
using two instructions (sethi and store).

llvm-svn: 183090
2013-06-03 00:21:54 +00:00
Venkatraman Govindaraju
0514a4c845 Sparc: Combine add/or/sethi instruction with restore if possible.
llvm-svn: 183088
2013-06-02 21:48:17 +00:00
Venkatraman Govindaraju
acb910b7ae Sparc: Perform leaf procedure optimization by default
llvm-svn: 183083
2013-06-02 02:24:27 +00:00
Venkatraman Govindaraju
2425aef2ad Sparc: Mark functions calling llvm.vastart and llvm.returnaddress intrinsics as non-leaf functions.
llvm-svn: 183079
2013-06-01 20:42:48 +00:00
Tim Northover
e84e621d63 Revert r183069: "TMP: LEA64_32r fixing"
Very sorry, it was committed from the wrong branch by mistake.

llvm-svn: 183070
2013-06-01 10:23:46 +00:00
Tim Northover
93287c3991 TMP: LEA64_32r fixing
llvm-svn: 183069
2013-06-01 10:21:54 +00:00
Tim Northover
8efc0e4868 X86: change MOV64ri64i32 into MOV32ri64
The MOV64ri64i32 instruction required hacky MCInst lowering because it
was allocated as setting a GR64, but the eventual instruction ("movl")
only set a GR32. This converts it into a so-called "MOV32ri64" which
still accepts a (appropriate) 64-bit immediate but defines a GR32.
This is then converted to the full GR64 by a SUBREG_TO_REG operation,
thus keeping everyone happy.

This fixes a typo in the opcode field of the original patch, which
should make the legacy JIT work again (& adds test for that problem).

llvm-svn: 183068
2013-06-01 09:55:14 +00:00
Venkatraman Govindaraju
1eaf496598 [Sparc] Generate correct code for leaf functions with stack objects
llvm-svn: 183067
2013-06-01 04:51:18 +00:00
Eric Christopher
e4ab862999 Temporarily Revert "X86: change MOV64ri64i32 into MOV32ri64" as it
seems to have caused PR16192 and other JIT related failures.

llvm-svn: 183059
2013-05-31 23:30:45 +00:00
Quentin Colombet
c3a4f33cc1 Modify how the formulae are rated in Loop Strength Reduce.
Namely, check whether the target allows folding more than one register into the
addressing mode and, if so, adjust the cost accordingly.

Prior to this commit, reg1 + scale * reg2 accesses were artificially preferred
to reg1 + reg2 accesses. Indeed, the cost model wrongly assumed that reg1 + reg2
needs a temporary register for the computation, whereas it was correctly
estimated for reg1 + scale * reg2.

<rdar://problem/13973908>

llvm-svn: 183021
2013-05-31 17:20:29 +00:00
Richard Sandiford
77f91408dd [SystemZ] Don't use LOAD and STORE REVERSED for volatile accesses
Unlike most -- hopefully "all other", but I'm still checking -- memory
instructions we support, LOAD REVERSED and STORE REVERSED may access
the memory location several times.  This means that they are not suitable
for volatile loads and stores.

This patch is a prerequisite for better atomic load and store support.
The same principle applies there: almost all memory instructions we
support are inherently atomic ("block concurrent"), but LOAD REVERSED
and STORE REVERSED are exceptions.

Other instructions continue to allow volatile operands.  I will add
positive "allows volatile" tests at the same time as the "allows atomic
load or store" tests.

llvm-svn: 183002
2013-05-31 13:25:22 +00:00
Justin Holewinski
d925e36ab2 [NVPTX] Re-enable support for virtual registers in the final output
Now that 3.3 is branched, we are re-enabling virtual registers to help
iron out bugs before the next release. Some of the post-RA passes do
not play well with virtual registers, so we disable them for now. The
needed functionality of the PrologEpilogInserter pass is copied to a
new backend-specific NVPTXPrologEpilog pass.

The test for this commit is that the existing tests do not break.

llvm-svn: 182998
2013-05-31 12:14:49 +00:00
Tim Northover
8940245595 X86: change MOV64ri64i32 into MOV32ri64
The MOV64ri64i32 instruction required hacky MCInst lowering because it was
allocated as setting a GR64, but the eventual instruction ("movl") only set a
GR32. This converts it into a so-called "MOV32ri64" which still accepts a
(appropriate) 64-bit immediate but defines a GR32. This is then converted to
the full GR64 by a SUBREG_TO_REG operation, thus keeping everyone happy.

llvm-svn: 182991
2013-05-31 09:57:13 +00:00
Akira Hatanaka
13f3fde46f [mips] Big-endian code generation for atomic instructions.
Patch by Jyun-Yan You.

llvm-svn: 182984
2013-05-31 03:25:44 +00:00
Rafael Espindola
d23482285b Revert r182937 and r182877.
r182877 broke MCJIT tests on ARM, and r182937 was working around another failure
caused by r182877.

This should make the ARM bots green.

llvm-svn: 182960
2013-05-30 20:37:52 +00:00
Benjamin Kramer
f41318934e Force a triple so we don't get bitten by windows' different regalloc.
llvm-svn: 182935
2013-05-30 15:39:35 +00:00
Benjamin Kramer
8b559051e4 Force fragile test to the atom scheduler model.
The pattern the test originally checked for doesn't occur on other -mcpu
settings. On atom it's still there though slightly differently scheduled.

llvm-svn: 182933
2013-05-30 15:22:28 +00:00
Tim Northover
2b261b4ac3 X86: allow registers 8-15 in test
This test was failing on some hosts when an unexpected register was used for a
variable. This just extends the regexp to allow the new x86-64 registers.

llvm-svn: 182929
2013-05-30 13:56:32 +00:00
Tim Northover
aa5932cde5 X86: use sub-register sequences for MOV*r0 operations
Instead of having a bunch of separate MOV8r0, MOV16r0, ... pseudo-instructions,
it's better to use a single MOV32r0 (which will expand to "xorl %reg, %reg")
and obtain other sizes with EXTRACT_SUBREG and SUBREG_TO_REG. The encoding is
smaller and partial register updates can sometimes be avoided.

Until recently, this sequence was a barrier to rematerialization though. That
should now be fixed so it's an appropriate time to make the change.

llvm-svn: 182928
2013-05-30 13:19:42 +00:00
Justin Holewinski
099d52887f [NVPTX] Fix case where a sext load of an i1 type may produce an
ld.u1 instead of an ld.u8.

llvm-svn: 182924
2013-05-30 12:22:39 +00:00
Richard Sandiford
b7ab6fd782 [SystemZ] Enable unaligned accesses
The code to distinguish between unaligned and aligned addresses was
already there, so this is mostly just a switch-on-and-test process.

llvm-svn: 182920
2013-05-30 09:45:42 +00:00
Rafael Espindola
5b34d5a3c7 Change how we iterate over relocations on ELF.
For COFF and MachO, sections semantically have relocations that apply to them.
That is not the case on ELF.

In relocatable objects (.o), a section with relocations in ELF has offsets to
another section where the relocations should be applied.

In dynamic objects and executables, relocations don't have an offset, they have
a virtual address. The section sh_info may or may not point to another section,
but that is not actually used for resolving the relocations.

This patch exposes that in the ObjectFile API. It has the following advantages:

* Most (all?) clients can handle this more efficiently. They will normally walk
all relocations, so making an effort to iterate in a particular order doesn't
save time.

* llvm-readobj now prints relocations in the same way the native readelf does.

* probably most important, relocations that don't point to any section are now
visible. This is the case of relocations in the rela.dyn section. See the
updated relocation-executable.test for example.

llvm-svn: 182908
2013-05-30 03:05:14 +00:00
Bill Wendling
cab3fa1f1c This testcase tests command line attributes which we don't yet support.
In fact, we're probably going to support these flags in completely different
ways. So this test is no longer valid.

llvm-svn: 182899
2013-05-30 00:32:04 +00:00
Andrew Trick
aec414c298 Order CALLSEQ_START and CALLSEQ_END nodes.
Fixes PR16146: gdb.base__call-ar-st.exp fails after
pre-RA-sched=source fixes.

Patch by Xiaoyi Guo!

This also fixes an unsupported dbg.value test case. Codegen was
previously incorrect but the test was passing by luck.

llvm-svn: 182885
2013-05-29 22:03:55 +00:00
JF Bastien
235d8c9117 Enable FastISel on ARM for Linux and NaCl
FastISel was only enabled for iOS ARM and Thumb2, this patch enables it
for ARM (not Thumb2) on Linux and NaCl.

Thumb2 support needs a bit more work, mainly around register class
restrictions.

The patch punts to SelectionDAG when doing TLS relocation on non-Darwin
targets. I will fix this and other FastISel-to-SelectionDAG failures in
a separate patch.

The patch also forces FastISel to retain frame pointers: iOS always
keeps them for backtracking (so emitted code won't change because of
this), but Linux was getting much worse code that was incorrect when
using big frames (such as test-suite's lencod). I'll also fix this in a
later patch, it will probably require a peephole so that FastISel
doesn't rematerialize frame pointers back-to-back.

The test changes are straightforward, similar to:
  http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130513/174279.html
They also add a vararg test that got dropped in that change.

I ran all of test-suite on A15 hardware with --optimize-option=-O0 and
all the tests pass.

llvm-svn: 182877
2013-05-29 20:38:10 +00:00
Tim Northover
db2d7a34b2 Teach ReMaterialization to be more cunning about subregisters
This allows rematerialization during register coalescing to handle
more cases involving operations like SUBREG_TO_REG which might need to
be rematerialized using sub-register indices.

For example, code like:
    v1(GPR64):sub_32 = MOVZ something
    v2(GPR64) = COPY v1(GPR64)
should be convertible to:
    v2(GPR64):sub_32 = MOVZ something

but previously we just gave up in places like this.

llvm-svn: 182872
2013-05-29 19:32:06 +00:00
Richard Sandiford
b70371b744 [SystemZ] Two tests missing from previous commit
llvm-svn: 182847
2013-05-29 11:59:26 +00:00
Richard Sandiford
b62e20c071 [SystemZ] Immediate compare-and-branch support
This patch adds support for the CIJ and CGIJ instructions.

llvm-svn: 182846
2013-05-29 11:58:52 +00:00
Venkatraman Govindaraju
cb40ce1f29 [Sparc] Add support for leaf functions in sparc backend.
llvm-svn: 182822
2013-05-29 04:46:31 +00:00
Richard Sandiford
4b6cfd7cec [SystemZ] Register compare-and-branch support
This patch adds support for the CRJ and CGRJ instructions.  Support for
the immediate forms will be a separate patch.

The architecture has a large number of comparison instructions.  I think
it's generally better to concentrate on using the "best" comparison
instruction first and foremost, then only use something like CRJ if
CR really was the natural choice of comparison instruction.  The patch
therefore opportunistically converts separate CR and BRC instructions
into a single CRJ while emitting instructions in ISelLowering.

llvm-svn: 182764
2013-05-28 10:41:11 +00:00
Preston Gurd
8be7e42cc2 Convert sqrt functions into sqrt instructions when -ffast-math is in effect.
When -ffast-math is in effect (on Linux, at least), clang defines
__FINITE_MATH_ONLY__ > 0 when including <math.h>. This causes the
preprocessor to include <bits/math-finite.h>, which renames the sqrt functions.
For instance, "sqrt" is renamed as "__sqrt_finite". 

This patch adds the 3 new names in such a way that they will be treated
as equivalent to their respective original names.
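
Purely illustrative (not part of the patch): built with something like "clang -O2 -ffast-math" against glibc, the sqrt call below can reach the backend under the __sqrt_finite name and should still be turned into a sqrt instruction:

    #include <cmath>
    #include <cstdio>

    int main(int argc, char **) {
      double x = argc;  // non-constant input so the call is not folded away
      // With -ffast-math, glibc's headers may rename this to __sqrt_finite.
      std::printf("%f\n", std::sqrt(x));
      return 0;
    }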

llvm-svn: 182739
2013-05-27 15:44:35 +00:00
Rafael Espindola
bf27725489 Add a cpu to try to bring back the atom bots.
llvm-svn: 182734
2013-05-27 13:22:52 +00:00
Hal Finkel
1f5ee2fefe Prefer to duplicate PPC Altivec loads when expanding unaligned loads
When expanding unaligned Altivec loads, we use the decremented offset trick to
prevent page faults. Unfortunately, if we have a sequence of consecutive
unaligned loads, this leads to suboptimal code generation because the 'extra'
load from the first unaligned load can be combined with the base load from the
second (but only if the decremented offset trick is not used for the first).
Search up and down the chain, through loads and token factors, looking for
consecutive loads, and if one is found, don't use the offset reduction trick.
These duplicate loads are later combined to yield the desired sequence (in the
future, we might want a more-powerful chain search, but that will require some
changes to allow the combiner routines to access the AA object).

This should complete the initial implementation of the optimized unaligned
Altivec load expansion. There is some refactoring that should be done, but
that will happen when the unaligned store expansion is added.

llvm-svn: 182719
2013-05-26 18:08:30 +00:00
Andrew Trick
c92ce8a4f2 Fix PR16143: Insert DEBUG_VALUE before terminator.
llvm-svn: 182717
2013-05-26 08:58:50 +00:00
Hal Finkel
f5d061cce9 PPC: Combine duplicate (offset) lvsl Altivec intrinsics
The lvsl permutation control instruction is a function only of the alignment of
the pointer operand (relative to the 16-byte natural alignment of Altivec
vectors). As a result, multiple lvsl intrinsics where the operands differ by a
multiple of 16 can be combined.

llvm-svn: 182708
2013-05-25 04:05:05 +00:00
Andrew Trick
53ca6c9254 Track IR ordering of SelectionDAG nodes 4/4.
Unit test cases for -pre-RA-sched=source.

llvm-svn: 182706
2013-05-25 03:26:51 +00:00
Andrew Trick
34c31df32a Track IR ordering of SelectionDAG nodes 3/4.
Remove the old IR ordering mechanism and switch to new one.  Fix unit
test failures.

llvm-svn: 182704
2013-05-25 03:08:10 +00:00
Hal Finkel
b8fe2ab5cb PPC: Initial support for permutation-based unaligned Altivec loads
Altivec only directly supports aligned loads, but the loads have a strange
property: If given an unaligned address, they truncate the address to the next
lower aligned address, and load from there.  This property, along with an extra
load and some special-purpose permutation-control instructions that generate
the appropriate permutations from the original unaligned address, allows
efficient lowering of unaligned loads. This code uses the trick explained in the
Apple Velocity Engine optimization overview document to prevent the needed
extra load from possibly causing a page fault if the original address happens
to be aligned.

As noted in the FIXMEs, there are several additional optimizations that can be
performed to reduce the cost of these loads even more. These will be
implemented in future commits.

llvm-svn: 182691
2013-05-24 23:00:14 +00:00
Diego Novillo
d1f091f169 Add a new function attribute 'cold' to functions.
Other than recognizing the attribute, the patch does little else.
It changes the branch probability analyzer so that edges into
blocks postdominated by a cold function are given low weight.

Added analysis and code generation tests.  Added documentation for the
new attribute.
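
A small sketch (assuming GCC/Clang's __attribute__((cold)), which maps to the new IR attribute): edges into the block that calls the cold function are given low weight by the branch probability analysis:

    #include <cstdio>
    #include <cstdlib>

    __attribute__((cold)) static void die(const char *msg) {
      std::fprintf(stderr, "fatal: %s\n", msg);
      std::abort();
    }

    int checked_div(int a, int b) {
      if (b == 0)
        die("division by zero");  // this branch is treated as unlikely
      return a / b;
    }

    int main() { return checked_div(10, 2) == 5 ? 0 : 1; }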

llvm-svn: 182638
2013-05-24 12:26:52 +00:00
Tim Northover
9b6ef4d68e ARM: implement @llvm.readcyclecounter intrinsic
This implements the @llvm.readcyclecounter intrinsic as the specific
MRC instruction specified in the ARM manuals for CPUs with the Power
Management extensions.

Older CPUs had slightly different methods which may also have to be
implemented eventually, but this should cover all v7 cases.

rdar://problem/13939186
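
An illustrative use (assuming Clang's __builtin_readcyclecounter, which lowers to @llvm.readcyclecounter); on ARMv7 with the Power Management extensions this now reads the cycle counter via the MRC instruction mentioned above:

    #include <cstdint>
    #include <cstdio>

    volatile double sink;

    static void work() {
      double s = 0.0;
      for (int i = 0; i < 100000; ++i)
        s += i * 0.5;
      sink = s;  // keep the loop from being optimized away
    }

    int main() {
      uint64_t start = __builtin_readcyclecounter();
      work();
      uint64_t end = __builtin_readcyclecounter();
      std::printf("elapsed cycles: %llu\n",
                  static_cast<unsigned long long>(end - start));
      return 0;
    }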

llvm-svn: 182603
2013-05-23 19:11:20 +00:00
Tom Stellard
dee18e3abb R600: Fix R600ControlFlowFinalizer not considering VTX_READ 128 bit dst reg
Patch by: Vincent Lejeune

https://bugs.freedesktop.org/show_bug.cgi?id=64877

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 182600
2013-05-23 18:26:42 +00:00
Jakob Stoklund Olesen
b4fe10613b Fix PR16110: Handle DBG_VALUE in ConnectedVNInfoEqClasses::Distribute().
Now that the LiveDebugVariables pass is running *after* register
coalescing, the ConnectedVNInfoEqClasses class needs to deal with
DBG_VALUE instructions.

This only comes up when rematerialization during coalescing causes the
remaining live range of a virtual register to separate into two
connected components.

llvm-svn: 182592
2013-05-23 17:02:23 +00:00
Nick Lewycky
f93dded11b Add missing test from r175092.
llvm-svn: 182564
2013-05-23 07:46:13 +00:00
Nadav Rotem
bfc39207bf X86: Fix a bug in EltsFromConsecutiveLoads. We can't generate new loads without chains.
llvm-svn: 182507
2013-05-22 19:28:41 +00:00
Benjamin Kramer
dced131e7e X86: When expanding PCMPGTQ to PCMPGTD we always want to compare the lower halves as unsigned.
Take #2 on fixing PR15977.

llvm-svn: 182486
2013-05-22 17:01:12 +00:00
David Majnemer
0c608d85e0 X86: Remove test instructions proceeding shift by immediate instructions
Allow LLVM to take advantage of shift instructions that set the ZF flag,
making instructions that test the destination superfluous.

llvm-svn: 182454
2013-05-22 08:13:02 +00:00
Akira Hatanaka
4da68c1676 [mips] Rename option to make it compatible with gcc.
llvm-svn: 182397
2013-05-21 17:17:59 +00:00
Akira Hatanaka
6123e22ce0 [mips] Add instruction selection patterns for blez and bgez.
llvm-svn: 182396
2013-05-21 17:13:47 +00:00
Justin Holewinski
2a53cbfbe1 [NVPTX] Add @llvm.nvvm.sqrt.f() intrinsic
llvm-svn: 182394
2013-05-21 16:51:30 +00:00
Justin Holewinski
e8d1165196 Drop @llvm.annotation and @llvm.ptr.annotation intrinsics during codegen.
The intrinsic calls are dropped, but the annotated value is propagated.

Fixes PR 15253

Original patch by Zeng Bin!

llvm-svn: 182387
2013-05-21 14:37:16 +00:00
Benjamin Kramer
6b339bee4e X86: When emulating unsigned PCMPGTQ with PCMPGTD, fix the sign bit for the smaller type.
Otherwise we'll get a mix of signed and unsigned compares.
Fixes PR15977.
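
For reference, a scalar C++ sketch of the identity being relied on: an unsigned greater-than can be done as a signed compare once the sign bits of both operands are flipped, and the fix makes sure the flip happens at the 32-bit width actually being compared:

    #include <cstdint>

    // Unsigned a > b expressed through a signed compare, the same trick the
    // PCMPGTD-based emulation of PCMPGTQ uses on each 32-bit half.
    bool ugt_via_signed(uint32_t a, uint32_t b) {
      return static_cast<int32_t>(a ^ 0x80000000u) >
             static_cast<int32_t>(b ^ 0x80000000u);
    }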

llvm-svn: 182364
2013-05-21 09:58:54 +00:00
Richard Sandiford
10059d8172 [SystemZ] Tighten branch tests
After r182274, the branches in these tests must always be short.

llvm-svn: 182358
2013-05-21 08:53:17 +00:00
Benjamin Kramer
5872fe13c9 DAGCombine: Avoid an edge case where it tried to create an i0 type for (x & 0) == 0.
Fixes PR16083.

llvm-svn: 182357
2013-05-21 08:51:09 +00:00
Reed Kotler
81c5979b12 Add checks to the test case that the proper predefined stubs are being called.
These were accidentally omitted.

llvm-svn: 182347
2013-05-21 01:27:36 +00:00
Reed Kotler
2d5f41cb35 Add some additional functions to the list of helper functions for
pic calls. These need to be there so we don't try to use helper
functions when we call them.

As part of this, make sure that we properly exclude helper functions in pic
mode when indirect calls are involved.

llvm-svn: 182343
2013-05-21 00:50:30 +00:00
Akira Hatanaka
05711091c4 [mips] Add (setne $lhs, 0) instruction selection pattern.
llvm-svn: 182307
2013-05-20 18:18:07 +00:00
Akira Hatanaka
96de87d87c [mips] Trap on integer division by zero.
By default, a teq instruction is inserted after integer divide. No divide-by-zero
checks are performed if option "-mnocheck-zero-division" is used.

llvm-svn: 182306
2013-05-20 18:07:43 +00:00
Justin Holewinski
2d2a08ee5e [NVPTX] Fix mis-use of CurrentFnSym in NVPTXAsmPrinter. This was causing a symbol name error in the output PTX.
llvm-svn: 182298
2013-05-20 16:42:18 +00:00
Tom Stellard
6cdacd1792 R600: Fix rotr.ll on non-asserts builds
The -debug-only option is only available on asserts builds.

llvm-svn: 182291
2013-05-20 15:28:48 +00:00
Tom Stellard
d72698331e R600/SI: Add pattern for rotr
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 182286
2013-05-20 15:02:24 +00:00
Tom Stellard
5ca265d214 R600: Swap the legality of rotl and rotr
The hardware supports rotr and not rotl.

llvm-svn: 182285
2013-05-20 15:02:19 +00:00
Tom Stellard
1609774b15 R600/SI: Add patterns for 64-bit shift operations
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 182284
2013-05-20 15:02:12 +00:00
Richard Sandiford
cc815ef1d8 [SystemZ] Add long branch pass
Before this change, the SystemZ backend would use BRCL for all branches
and only consider shortening them to BRC when generating an object file.
E.g. a branch on equal would use the JGE alias of BRCL in assembly output,
but might be shortened to the JE alias of BRC in ELF output.  This was
a useful first step, but it had two problems:

(1) The z assembler isn't traditionally supposed to perform branch shortening
    or branch relaxation.  We followed this rule by not relaxing branches
    in assembler input, but that meant that generating assembly code and
    then assembling it would not produce the same result as going directly
    to object code; the former would give long branches everywhere, whereas
    the latter would use short branches where possible.

(2) Other useful branches, like COMPARE AND BRANCH, do not have long forms.
    We would need to do something else before supporting them.

    (Although COMPARE AND BRANCH does not change the condition codes,
    the plan is to model COMPARE AND BRANCH as a CC-clobbering instruction
    during codegen, so that we can safely lower it to a separate compare
    and long branch where necessary.  This is not a valid transformation
    for the assembler proper to make.)

This patch therefore moves branch relaxation to a pre-emit pass.
For now, calls are still shortened from BRASL to BRAS by the assembler,
although this too is not really the traditional behaviour.

The first test takes about 1.5s to run, and there are likely to be
more tests in this vein once further branch types are added.  The feeling
on IRC was that 1.5s is a bit much for a single test, so I've restricted
it to SystemZ hosts for now.

The patch exposes (and fixes) some typos in the main CodeGen/SystemZ tests.
A later patch will remove the {{g}}s from that directory.

llvm-svn: 182274
2013-05-20 14:23:08 +00:00
Justin Holewinski
d5636664a4 [NVPTX] Add GenericToNVVM IR converter to better handle idiomatic LLVM IR inputs
This converter currently only handles global variables in address space 0. For
these variables, they are promoted to address space 1 (global memory), and all
uses are updated to point to the result of a cvta.global instruction on the new
variable.

The motivation for this is that address space 0 global variables are illegal, since we
cannot declare variables in the generic address space.  Instead, we place the
variables in address space 1 and explicitly convert the pointer to address
space 0. This is primarily intended to help new users who expect to be able to
place global variables in the default address space.

llvm-svn: 182254
2013-05-20 12:13:32 +00:00
Justin Holewinski
fda22b94b1 [NVPTX] Fix i1 kernel parameters and global variables. ABI rules say we need to use .u8 for i1 parameters for kernels.
llvm-svn: 182253
2013-05-20 12:13:28 +00:00
Stepan Dyatkovskiy
adb91e7ed9 PR15868 fix.
Introduction:
In the case when stack alignment is 8 and the GPRs part of a parameter is not N*8
bytes in size, we add padding to the GPRs part, so the part's last byte must be
recovered at address K*8-1.
We need to do this because the remaining (stack) part of the parameter starts at
address K*8, and we need to "attach" the "GPRs head" to it without gaps:

Stack:
|---- 8 bytes block ----| |---- 8 bytes block ----| |---- 8 bytes...
[ [padding] [GPRs head] ] [ ------ Tail passed via stack  ------ ...

FIX:
Note that once we have added padding, we need to correct *all* Arg offsets that
come after the padded one. That's why we need this fix: Arg offsets were never
corrected before this patch. See the new test cases included in the patch.

We also don't need to insert padding for byval parameters that are stored in GPRs
only. We need to pad only the last byval parameter, and only when it spills outside
the GPRs and stack alignment is 8.
The stack area allocated for recovered byval params must still satisfy the
"Size mod 8 = 0" restriction.

This patch reduces stack usage for some cases:
We can reduce ArgRegsSaveArea since inner N*4-byte-sized byval params may be
"packed" with alignment 4 in some cases.

llvm-svn: 182237
2013-05-20 08:01:34 +00:00
Jakob Stoklund Olesen
dad0022424 Also expand 64-bit bitcasts.
llvm-svn: 182229
2013-05-20 01:01:43 +00:00
Jakob Stoklund Olesen
e033165fd4 Implement spill and fill of I64Regs.
llvm-svn: 182228
2013-05-20 00:53:25 +00:00
Jakob Stoklund Olesen
f777e6d131 Mark i64 SETCC as expand so it is turned into a SELECT_CC.
llvm-svn: 182227
2013-05-20 00:28:36 +00:00
Jakob Stoklund Olesen
50d419ac93 Don't use %g0 to materialize 0 directly.
The wired physreg doesn't work on tied operands like on MOVXCC.

Add a README note to fix this later.

llvm-svn: 182225
2013-05-19 21:47:13 +00:00
Jakob Stoklund Olesen
f4ec84c2d4 Select i64 values with %icc conditions.
llvm-svn: 182224
2013-05-19 20:38:21 +00:00
Jakob Stoklund Olesen
1948cf9ca7 Add floating point selects on %xcc predicates.
llvm-svn: 182222
2013-05-19 20:33:11 +00:00
Jakob Stoklund Olesen
c53b3587a3 Implement SPselectfcc for i64 operands.
Also clean up the arguments to all the MOVCC instructions so the
operands always are (true-val, false-val, cond-code).

llvm-svn: 182221
2013-05-19 20:20:54 +00:00
Venkatraman Govindaraju
d432ed013f [Sparc] Rearrange integer registers' allocation order so that register allocator will use I and G registers before using L and O registers.
Also, enable registers %g2-%g4 to be used in application and %g5 in 64 bit mode.

llvm-svn: 182219
2013-05-19 20:07:20 +00:00
Jakob Stoklund Olesen
702a7c68c9 Handle i64 FrameIndex nodes in SPARC v9 mode.
llvm-svn: 182216
2013-05-19 19:14:24 +00:00
Hal Finkel
d4eb9291fa Check InlineAsm clobbers in PPCCTRLoops
We don't need to reject all inline asm as using the counter register (most does
not). Only those that explicitly clobber the counter register need to prevent
the transformation.
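
An illustrative PowerPC-only sketch (assuming GCC/Clang inline asm accepts "ctr" in a clobber list): only asm like this, which explicitly clobbers the counter register, needs to block the transformation:

    // Tells the compiler that CTR is clobbered; inline asm without such a
    // clobber no longer prevents counter-loop formation.
    static inline void clobber_ctr() {
      asm volatile("" : : : "ctr");
    }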

llvm-svn: 182191
2013-05-18 09:20:39 +00:00
David Majnemer
a21386b571 X86: Bad peephole interaction between adc, MOV32r0
The peephole tries to reorder MOV32r0 instructions such that they are
before the instruction that modifies EFLAGS.

The problem is that the peephole does not consider the case where the
instruction that modifies EFLAGS also depends on the previous state of
EFLAGS.

Instead, walk backwards until we find an instruction that has a def for
EFLAGS but does not have a use.
If we find such an instruction, insert the MOV32r0 before it.
If it cannot find such an instruction, skip the optimization.

llvm-svn: 182184
2013-05-18 01:02:03 +00:00
JF Bastien
cbcaf8db77 Support unaligned load/store on more ARM targets
This patch matches GCC behavior: the code used to allow unaligned
load/store on ARM only for v6+ Darwin; it will now allow unaligned load/store
for v6+ Darwin as well as for v7+ on Linux and NaCl.

The distinction is made because v6 doesn't guarantee support (but LLVM
assumes that Apple controls hardware+kernel and therefore have
conformant v6 CPUs), whereas v7 does provide this guarantee (and
Linux/NaCl behave sanely).

The patch keeps the -arm-strict-align command line option, and adds
-arm-no-strict-align. They behave similarly to GCC's -mstrict-align and
-mnostrict-align.

I originally encountered this discrepancy in FastIsel tests which expect
unaligned load/store generation. Overall this should slightly improve
performance in most cases because of reduced I$ pressure.
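
A small sketch (not from the patch) of the kind of access affected; std::memcpy is the portable way to express an unaligned load, and on v7+ Linux/NaCl it can now become a single (possibly unaligned) 32-bit load instead of four byte loads:

    #include <cstdint>
    #include <cstring>

    uint32_t load_u32(const unsigned char *p) {
      uint32_t v;
      std::memcpy(&v, p, sizeof v);  // may now be emitted as one ldr
      return v;
    }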

llvm-svn: 182175
2013-05-17 23:49:01 +00:00
Vincent Lejeune
c8aad4509a R600: Lower int_load_input to copyFromReg instead of Register node
It solves a bug uncovered by dot4 patch where the register class of
int_load_input use was ignored.

llvm-svn: 182130
2013-05-17 16:51:06 +00:00
Vincent Lejeune
5a2e018ab6 R600: Use bottom up scheduling algorithm
llvm-svn: 182129
2013-05-17 16:50:56 +00:00
Vincent Lejeune
2bf65b1826 R600: Use depth first scheduling algorithm
It should increase PV substitution opportunities and lower gpr
usage (pending computation paths are "flushed" sooner).

llvm-svn: 182128
2013-05-17 16:50:44 +00:00
Vincent Lejeune
152473c61c R600: Relax some vector constraints on Dot4.
Dot4 now uses 8 scalar operands instead of 2 vector ones, which allows the register
coalescer to remove some unneeded COPYs.
This patch also defines some structures/functions that can be used to handle
every vector instructions (CUBE, Cayman special instructions...) in a similar
fashion.

llvm-svn: 182126
2013-05-17 16:50:32 +00:00
Vincent Lejeune
0c663b698a R600: Improve texture handling
llvm-svn: 182125
2013-05-17 16:50:20 +00:00
Vincent Lejeune
b57cb76b6d R600: Rename 128 bit registers.
Almost all instructions that take a 128-bit reg as input (fetch, export...)
have the ability to swizzle their argument and output. Instead of printing the
default swizzle for each 128-bit reg, rename T*.XYZW to T* and let instructions
print potentially optimized swizzles themselves.

llvm-svn: 182124
2013-05-17 16:50:09 +00:00
Tom Stellard
a4cc081e08 R600: Fix encoding for R600 family GPUs
Reviewed-by: Vincent Lejeune <vljn@ovi.com>

https://bugs.freedesktop.org/show_bug.cgi?id=64193
https://bugs.freedesktop.org/show_bug.cgi?id=64257
https://bugs.freedesktop.org/show_bug.cgi?id=64320

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 182113
2013-05-17 15:23:21 +00:00
Venkatraman Govindaraju
989fb74a1c [Sparc] Implements hasReservedCallFrame and hasFP.
This is to generate correct framesetup code when the function
 has variable sized allocas.

llvm-svn: 182108
2013-05-17 15:14:34 +00:00
Benjamin Kramer
e9efb3252f X86: Make shuffle -> shift conversion more aggressive about undefs.
Shuffles that only move an element into position 0 of the vector are common in
the output of the loop vectorizer and often generate suboptimal code when SSSE3
is not available. Lower them to vector shifts if possible.

We still prefer palignr over psrldq because it has higher throughput on
sandybridge.

llvm-svn: 182102
2013-05-17 14:48:34 +00:00
Benjamin Kramer
d1f1dddf9f FileCheckize test.
llvm-svn: 182101
2013-05-17 14:48:25 +00:00
Venkatraman Govindaraju
62af2fad30 [Sparc] Prevent instructions that defines or uses %o7 to be in call's delay slot.
llvm-svn: 182063
2013-05-16 23:53:29 +00:00
Akira Hatanaka
3848727973 [mips] Improve instruction selection for pattern (store (fp_to_sint $src), $ptr).
Previously, three instructions were needed:

trunc.w.s $f0, $f2
mfc1 $4, $f0
sw $4, 0($2)

Now we need only two:

trunc.w.s $f0, $f2
swc1 $f0, 0($2)

llvm-svn: 182053
2013-05-16 21:17:15 +00:00
Rafael Espindola
1a64a52101 More test coverage for addFrameMove.
llvm-svn: 182051
2013-05-16 20:50:56 +00:00
Hal Finkel
4f0a332c93 Fix cpu on test CodeGen/PowerPC/ctrloop-fp64.ll
We need ppc instead of generic to override native features on ppc machines.

llvm-svn: 182049
2013-05-16 20:28:05 +00:00
Rafael Espindola
afa5fae8eb More addFrameMove test coverage.
llvm-svn: 182046
2013-05-16 20:00:45 +00:00
Hal Finkel
80fddc9af7 Create an new preheader in PPCCTRLoops to avoid counter register clobbers
Some IR-level instructions (such as FP <-> i64 conversions) are not chained
w.r.t. the mtctr intrinsic and yet may become function calls that clobber the
counter register. At the selection-DAG level, these might be reordered with the
mtctr intrinsic causing miscompiles. To avoid this situation, if an existing
preheader has instructions that might use the counter register, create a new
preheader for the mtctr intrinsic. This extra block will be remerged with the
old preheader at the MI level, but will prevent unwanted reordering at the
selection-DAG level.

llvm-svn: 182045
2013-05-16 19:58:38 +00:00
Akira Hatanaka
ba455f200e [mips] Test case for r182042. Add comment.
llvm-svn: 182044
2013-05-16 19:57:23 +00:00
Rafael Espindola
a2af7d8def More test coverage for addFrameMove.
llvm-svn: 182041
2013-05-16 19:44:40 +00:00
Benjamin Kramer
d971be891d DAGCombine: Also shrink eq compares where the constant is exactly as large as the smaller type.
if ((x & 255) == 255)

before: movzbl  %al, %eax
        cmpl  $255, %eax

after:  cmpb  $-1, %al
llvm-svn: 182038
2013-05-16 18:47:58 +00:00
Ulrich Weigand
7b22c7a38a [PowerPC] Use true offset value in "memrix" machine operands
This is the second part of the change to always return "true"
offset values from getPreIndexedAddressParts, tackling the
case of "memrix" type operands.

This is about instructions like LD/STD that only have a 14-bit
field to encode immediate offsets, which are implicitly extended
by two zero bits by the machine, so that in effect we can access
16-bit offsets as long as they are a multiple of 4.

The PowerPC back end currently handles such instructions by
carrying the 14-bit value (as it will get encoded into the
actual machine instructions) in the machine operand fields
for such instructions.  This means that those values are
in fact not the true offset, but rather the offset divided
by 4 (and then truncated to an unsigned 14-bit value).

Like in the case fixed in r182012, this makes common code
operations on such offset values not work as expected.
Furthermore, there doesn't really appear to be any strong
reason why we should encode machine operands this way.

This patch therefore changes the encoding of "memrix" type
machine operands to simply contain the "true" offset value
as a signed immediate value, while enforcing the rules that
it must fit in a 16-bit signed value and must also be a
multiple of 4.

This change must be made simultaneously in all places that
access machine operands of this type.  However, just about
all those changes make the code simpler; in many cases we
can now just share the same code for memri and memrix
operands.

llvm-svn: 182032
2013-05-16 17:58:02 +00:00
Hal Finkel
7daa616e35 PPC32 cannot form counter loops around i64 FP conversions
On PPC32, i64 FP conversions are implemented using runtime calls (which clobber
the counter register). These must be excluded.

llvm-svn: 182023
2013-05-16 16:52:41 +00:00
Rafael Espindola
eb3034b91f Add a triple to the test to try to fix the windows bots.
llvm-svn: 182022
2013-05-16 16:48:46 +00:00
Rafael Espindola
d2b872cd1b More addFrameMove test coverage.
llvm-svn: 182021
2013-05-16 16:34:38 +00:00
Bill Schmidt
edb2d94d45 Use new CHECK-DAG support to stabilize CodeGen/PowerPC/recipest.ll
While testing some experimental code to add vector-scalar registers to
PowerPC, I noticed that a couple of independent instructions were
flipped by the scheduler.  The new CHECK-DAG support is perfect for
avoiding this problem.

llvm-svn: 182020
2013-05-16 16:15:18 +00:00
Rafael Espindola
379ba8a86e Add more addFrameMove test coverage.
llvm-svn: 182019
2013-05-16 16:09:54 +00:00
Rafael Espindola
7dd7b264b7 Add more test coverage for addFrameMove.
llvm-svn: 182017
2013-05-16 15:18:50 +00:00
Ulrich Weigand
08228b8354 [PowerPC] Report true displacement value from getPreIndexedAddressParts
DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to
decompose a memory address into a base/offset pair.  It expects the
offset (if constant) to be the true displacement value in order to
perform optional additional optimizations; in particular, to convert
other uses of the original pointer into uses of the new base pointer
after pre-increment.

The PowerPC implementation of getPreIndexedAddressParts, however,
simply calls SelectAddressRegImm, which returns a TargetConstant.
This value is appropriate for encoding into the instruction, but
it is not always usable as true displacement value:

- Its type is always MVT::i32, even on 64-bit, where addresses
  ought to be i64 ... this causes the optimization to simply
  always fail on 64-bit due to this line in DAGCombiner:

      // FIXME: In some cases, we can be smarter about this.
      if (Op1.getValueType() != Offset.getValueType()) {

- Its value is truncated to an unsigned 16-bit value if negative.
  This causes the above optimization to generate wrong code.

This patch fixes both problems by simply returning the true
displacement value (in its original type).  This doesn't
affect any other user of the displacement.

llvm-svn: 182012
2013-05-16 14:53:05 +00:00
Rafael Espindola
16c10c628f Add more addFrameMove test coverage.
llvm-svn: 182011
2013-05-16 14:51:26 +00:00
Rafael Espindola
0daf590030 Extend test to check the .cfi instructions.
I am about to refactor the calls to addFrameMove and some of the ppc
ones were not being tested.

llvm-svn: 182009
2013-05-16 14:30:09 +00:00
Benjamin Kramer
c2c24d044d Relax CHECK-NEXTs a bit to cope with atom's return nop padding.
llvm-svn: 181999
2013-05-16 11:46:50 +00:00
Rafael Espindola
554870d776 Extend test for better coverage.
Without this change nothing was covering this addFrameMove:

// For 64-bit SVR4 when we have spilled CRs, the spill location
// is SP+8, not a frame-relative slot.
if (Subtarget.isSVR4ABI()
    && Subtarget.isPPC64()
    && (PPC::CR2 <= Reg && Reg <= PPC::CR4)) {
  MachineLocation CSDst(PPC::X1, 8);
  MachineLocation CSSrc(PPC::CR2);
  MMI.addFrameMove(Label, CSDst, CSSrc);
  continue;
}

llvm-svn: 181976
2013-05-16 03:48:50 +00:00
Reed Kotler
fb71c30979 Patch number 2 for mips16/32 floating point interoperability stubs.
This creates stubs that help Mips32 functions call Mips16 
functions which have floating point parameters that are normally passed
in floating point registers.
 

llvm-svn: 181972
2013-05-16 02:17:42 +00:00
David Majnemer
71b69a2eb9 Set an explicit triple for this test.
This allows the test to correctly check symbol names.

llvm-svn: 181939
2013-05-15 22:23:21 +00:00
David Majnemer
8ce4c34d1d X86: Remove redundant test instructions
Increase the number of instructions LLVM recognizes as setting the ZF
flag. This allows us to remove test instructions that redundantly
recalculate the flag.

llvm-svn: 181937
2013-05-15 22:03:08 +00:00
Hal Finkel
91bd48d046 Implement PPC counter loops as a late IR-level pass
The old PPCCTRLoops pass, like the Hexagon pass version from which it was
derived, could only handle some simple loops in canonical form. We cannot
directly adapt the new Hexagon hardware loops pass, however, because the
Hexagon pass contains a fundamental assumption that non-constant-trip-count
loops will contain a guard, and this is not always true (the result being that
incorrect negative counts can be generated). With this commit, we replace the
pass with a late IR-level pass which makes use of SE to calculate the
backedge-taken counts and safely generate the loop-count expressions (including
any necessary max() parts). This IR level pass inserts custom intrinsics that
are lowered into the desired decrement-and-branch instructions.

The most fragile part of this new implementation is that interfering uses of
the counter register must be detected on the IR level (and, on PPC, this also
includes any indirect branches in addition to function calls). Also, to make
all of this work, we need a variant of the mtctr instruction that is marked
as having side effects. Without this, machine-code level CSE, DCE, etc.
illegally transform the resulting code. Hopefully, this can be improved
in the future.

This new pass is smaller than the original (and much smaller than the new
Hexagon hardware loops pass), and can handle many additional cases correctly.
In addition, the preheader-creation code has been copied from LoopSimplify, and
after we decide on where it belongs, this code will be refactored so that it
can be explicitly shared (making this implementation even smaller).

The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for
the new Hexagon pass. There are a few classes of loops that this pass does not
transform (noted by FIXMEs in the files), but these deficiencies can be
addressed within the SE infrastructure (thus helping many other passes as well).
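
As a hedged sketch only (not from the patch), this is the kind of counted
loop the pass can convert into a decrement-and-branch (CTR) loop, assuming SE
can compute the backedge-taken count and nothing in the body interferes with
the counter register:

  void scale(float *a, int n, float s) {
    for (int i = 0; i < n; ++i)   // backedge-taken count computable by SE
      a[i] *= s;                  // no calls or indirect branches in the body
  }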

llvm-svn: 181927
2013-05-15 21:37:41 +00:00
Derek Schuff
962654973c Fix miscompile due to StackColoring incorrectly merging stack slots (PR15707)
IR optimisation passes can result in a basic block that contains:

  llvm.lifetime.start(%buf)
  ...
  llvm.lifetime.end(%buf)
  ...
  llvm.lifetime.start(%buf)

Before this change, calculateLiveIntervals() was ignoring the second
lifetime.start() and was regarding %buf as being dead from the
lifetime.end() through to the end of the basic block.  This can cause
StackColoring to incorrectly merge %buf with another stack slot.
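
One hypothetical way such a block can arise (an assumed example, not from the
bug report): fully unrolling this loop duplicates buf's lifetime markers into
a single straight-line block:

  void run_twice(void (*use)(char *)) {
    for (int i = 0; i < 2; ++i) {   // full unrolling clones the body, so one
      char buf[32];                 // block gets start/.../end/.../start for buf
      use(buf);
    }
  }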

Fix by removing the incorrect Starts[pos].isValid() and
Finishes[pos].isValid() checks.

Just doing:
      Starts[pos] = Indexes->getMBBStartIdx(MBB);
      Finishes[pos] = Indexes->getMBBEndIdx(MBB);
unconditionally would be enough to fix the bug, but it causes some
test failures due to stack slots not being merged when they were
before.  So, in order to keep the existing tests passing, treat LiveIn
and LiveOut separately rather than approximating the live ranges by
merging LiveIn and LiveOut.

This fixes PR15707.
Patch by Mark Seaborn.

llvm-svn: 181922
2013-05-15 21:15:09 +00:00
Richard Sandiford
1ccc224047 [SystemZ] Make use of SUBTRACT HALFWORD
Thanks to Ulrich Weigand for noticing that this instruction was missing.

llvm-svn: 181893
2013-05-15 15:05:29 +00:00
Arnold Schwaighofer
6df028f783 ARM ISel: Don't create illegal types during LowerMUL
The transformation happening here is that we want to turn a
"mul(ext(X), ext(X))" into a "vmull(X, X)", stripping off the extension. We have
to make sure that X still has a valid vector type - possibly recreate an
extension to a smaller type. In the case of an extload of a memory type smaller than
64 bits, we used to create an ext(load()). The problem with doing this - instead of
recreating an extload - is that an illegal type is exposed.

This patch fixes this by creating extloads instead of ext(load()) sequences.
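
For context, a hedged sketch (not from the patch) of the widening multiply
this lowering targets; once vectorized, the per-element mul(sext, sext) can
become a vmull without the explicit extensions:

  void widening_mul(const short *a, const short *b, int *out, int n) {
    for (int i = 0; i < n; ++i)
      out[i] = (int)a[i] * (int)b[i];   // mul(sext(x), sext(y)) per element
  }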

Fixes PR15970.

radar://13871383

llvm-svn: 181842
2013-05-14 22:33:24 +00:00
Jyotsna Verma
a1b968ec45 Hexagon: Pass to replace transfer/copy instructions with combine instructions
where possible.

llvm-svn: 181817
2013-05-14 18:54:06 +00:00
Eric Christopher
ad3c788829 Reapply "Subtract isn't commutative, fix this for MMX psub." with
a somewhat randomly chosen CPU that will minimize CPU-specific
differences on bots.

llvm-svn: 181814
2013-05-14 18:33:40 +00:00
Eric Christopher
9db4fb70b3 Temporarily revert "Subtract isn't commutative, fix this for MMX psub."
It's causing failures on the atom bot.

llvm-svn: 181812
2013-05-14 18:20:42 +00:00
Eric Christopher
e3d05e6f9d Subtract isn't commutative, fix this for MMX psub.
Patch by Andrea DiBiagio.

llvm-svn: 181809
2013-05-14 17:52:05 +00:00
Jakob Stoklund Olesen
a90977f7ae Recognize sparc64 as an alias for sparcv9 triples.
Patch by Brad Smith!

llvm-svn: 181808
2013-05-14 17:47:27 +00:00
Jyotsna Verma
980fae33f3 Hexagon: Add patterns to generate 'combine' instructions.
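
As an assumed illustration only (not part of the commit): code that packs two
32-bit values into a 64-bit result is the sort that can map onto a single
register-pair combine:

  long long pack_pair(unsigned hi, unsigned lo) {
    return ((long long)hi << 32) | lo;   // two 32-bit halves into one 64-bit pair
  }
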
llvm-svn: 181805
2013-05-14 17:16:38 +00:00
Jyotsna Verma
86beda7e47 Hexagon: ArePredicatesComplement should not restrict itself to TFRs.
llvm-svn: 181803
2013-05-14 16:36:34 +00:00
Derek Schuff
812ee5bc7c Fix ARM FastISel tests, as a first step to enabling ARM FastISel
ARM FastISel is currently only enabled for iOS non-Thumb1, and I'm working on
enabling it for other targets. As a first step I've fixed some of the tests.
Changes to ARM FastISel tests:
- Different triples don't generate the same relocations (especially
  movw/movt versus constant pool loads). Use a regex to allow either.
- Mangling is different. Use a regex to allow either.
- The reserved registers are sometimes different, so registers get
  allocated in a different order. Capture the names only where this
  occurs.
- Add -verify-machineinstrs to some tests where it works. It doesn't
  work everywhere it should yet.
- Add -fast-isel-abort to many tests that didn't have it before.
- Split out the VarArg test from fast-isel-call.ll into its own
  test. This simplifies test setup because of --check-prefix.

Patch by JF Bastien

llvm-svn: 181801
2013-05-14 16:26:38 +00:00
Bill Schmidt
8c1e12a2ff PPC32: Fix stack collision between FP and CR save areas.
The changes to CR spill handling missed a case for 32-bit PowerPC.
The code in PPCFrameLowering::processFunctionBeforeFrameFinalized()
checks whether CR spill has occurred using a flag in the function
info.  This flag is only set by storeRegToStackSlot and
loadRegFromStackSlot.  spillCalleeSavedRegisters does not call
storeRegToStackSlot, but instead produces MI directly.  Thus we don't
see the CR is spilled when assigning frame offsets, and the CR spill
ends up colliding with some other location (generally the FP slot).

This patch sets the flag in spillCalleeSavedRegisters for PPC32 so
that the CR spill is properly detected and gets its own slot in the
stack frame.

llvm-svn: 181800
2013-05-14 16:08:32 +00:00
Jyotsna Verma
a892e02220 Hexagon: Test case to check if branch probabilities are properly reflected in
the jump instructions in the form of taken/not-taken hints.

llvm-svn: 181799
2013-05-14 15:50:49 +00:00
Michel Danzer
451c4b7507 R600/SI: Add lit test coverage for the remaining patterns added recently
Reviewed-by: Christian König <christian.koenig@amd.com>
llvm-svn: 181775
2013-05-14 09:53:30 +00:00
Reed Kotler
cade566d36 This is the first of three patches which create stubs used for
Mips16/32 floating point interoperability.

When Mips16 code calls external functions that would normally have some
of their parameters or return values passed in floating point registers,
it needs (Mips32) helper functions to do this, because in Mips16 mode
there is no way to access the floating point registers.

In PIC mode, this is done with a set of predefined functions in libc.
This case is already handled in llvm for Mips16.

In static relocation mode, for efficiency reasons, the compiler generates
stubs that the linker will use if it turns out that the external function
is a Mips32 function. (If it's Mips16, then it does not need the helper
stubs).

These stubs are identically named and the linker knows about these tricks
and will not create multiple copies and will delete them if they are not
needed.

llvm-svn: 181753
2013-05-14 02:00:24 +00:00
Akira Hatanaka
cc5a3793e3 StackColoring: don't clear an instruction's mem operand if the underlying
object is a PseudoSourceValue and PseudoSourceValue::isConstant returns true (i.e.,
points to memory that has a constant value).

llvm-svn: 181751
2013-05-14 01:42:44 +00:00
Bill Schmidt
c7fd4630b3 PPC64: Constant initializers with dynamic relocations go in .data.rel.ro.
This fixes warning messages observed in the oggenc application test in
projects/test-suite.  Special handling is needed for the 64-bit
PowerPC SVR4 ABI when a constant is initialized with a pointer to a
function in a shared library.  Because a function address is
implemented as the address of a function descriptor, the use of copy
relocations can lead to problems with initialization.  GNU ld
therefore replaces copy relocations with dynamic relocations to be
resolved by the dynamic linker.  This means the constant cannot reside
in the read-only data section, but instead belongs in .data.rel.ro,
which is designed for constants containing dynamic relocations.

The implementation creates a class PPC64LinuxTargetObjectFile
inheriting from TargetLoweringObjectFileELF, which behaves like its
parent except to place constants of this sort into .data.rel.ro.
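
A hedged sketch of the triggering pattern (the names here are illustrative,
not from the oggenc test case):

  extern int external_fn(int);              // assumed to live in a shared library
  int (*const handler)(int) = external_fn;  // constant needing a dynamic reloc,
                                            // so it is placed in .data.rel.ro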

The test case is reduced from the oggenc application.

llvm-svn: 181723
2013-05-13 19:34:37 +00:00
Akira Hatanaka
aaa3035d45 [mips] Add option -mno-ldc1-sdc1.
This option is used when the user wants to avoid emitting double precision FP
loads and stores. Double precision FP loads and stores are expanded to single
precision instructions after register allocation.

llvm-svn: 181718
2013-05-13 18:23:35 +00:00
Lang Hames
9019f5804f Correctly preserve the input chain for potential tailcall nodes whose
return values are bitcasts.

The chain had previously been being clobbered with the entry node to
the dag, which sometimes caused other code in the function to be
erroneously deleted when tailcall optimization kicked in.

<rdar://problem/13827621>

llvm-svn: 181696
2013-05-13 10:21:19 +00:00
Hao Liu
4a62b32731 Fix PR15950: a bug in the DAG Combiner involving an undef mask.
llvm-svn: 181682
2013-05-13 02:07:05 +00:00
Reed Kotler
1f27950895 Add -mtriple=mipsel-linux-gnu to the test so that the compiler does
not think it can support small data sections.

llvm-svn: 181654
2013-05-11 01:02:20 +00:00
Reed Kotler
88fbecdc6f Check-in of the first of several patches to finish the implementation of
mips16/mips32 floating point interoperability.

This patch fixes returns from mips16 functions so that if the function
was in fact called by a mips32 hard float routine, then values
that would have been returned in floating point registers are so returned.

Mips16 mode has no floating point instructions so there is no way to
load values into floating point registers.

This is needed when returning float, double, single complex, double complex
in the Mips ABI.

Helper functions in libc for mips16 are available to do this.

For efficiency purposes, these helper functions have a different calling
convention from normal Mips calls.

Registers v0,v1,a0,a1 are used to pass parameters instead of
a0,a1,a2,a3.

This is because v0,v1,a0,a1 are the natural registers used to return
floating point values in soft float. These values can then be moved
to the appropriate floating point registers with no extra cost.

The only register that is modified is ra in this call.

The helper functions make sure that the return values are in the floating
point registers that they would be in if soft float was not in effect
(which it is for mips16, though the soft float is implemented using a mips32
library that uses hard float).
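
As an illustrative sketch only (the register and helper details below are
assumptions summarizing the description above, not taken from the patch):

  // Built as Mips16 with hard-float interoperability, the double produced here
  // comes back in v0/v1 under the soft-float scheme, and a libc helper then
  // moves it into the FP return registers expected by a Mips32 hard-float caller.
  double halve(double x) { return x * 0.5; }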
 

llvm-svn: 181641
2013-05-10 22:25:39 +00:00
Jyotsna Verma
2dfc0b2d13 Hexagon: Fix switch cases in HexagonVLIWPacketizer.cpp.
llvm-svn: 181624
2013-05-10 20:27:34 +00:00
Benjamin Kramer
9205d91a11 DAGCombiner: Generate a correct constant for vector types when folding (xor (and)) into (and (not)).
PR15948.
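
For reference, the scalar identity behind the fold, checked by a small
self-contained snippet (the PR was about materializing the NOT constant
correctly when these operands are vectors):

  #include <cassert>
  #include <cstdint>
  int main() {
    const uint32_t vals[] = {0u, 1u, 0x12345678u, 0xffffffffu};
    const uint32_t y = 0x00ff00ffu;
    for (uint32_t x : vals)
      assert(((x & y) ^ y) == (~x & y));   // (xor (and x, y), y) == (and (not x), y)
    return 0;
  }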

llvm-svn: 181597
2013-05-10 14:09:52 +00:00
Tom Stellard
7edf38bf1f R600: Remove AMDILPeepholeOptimizer and replace optimizations with tablegen patterns
The BFE optimization was the only one we were actually using, and it was
emitting an intrinsic that we don't support.

https://bugs.freedesktop.org/show_bug.cgi?id=64201

Reviewed-by: Christian König <christian.koenig@amd.com>

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181580
2013-05-10 02:09:45 +00:00
Tom Stellard
ed363c57b2 R600: Expand SUB for v2i32/v4i32
Patch by: Aaron Watry

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181579
2013-05-10 02:09:39 +00:00
Tom Stellard
3ca3d250c6 R600: Expand MUL for v4i32/v2i32
Fixes piglit test for OpenCL builtin mul24, and allows mad24 to run.

Patch by: Aaron Watry

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181578
2013-05-10 02:09:34 +00:00
Tom Stellard
56fef8261c R600: Expand SRA for v4i32/v2i32
v2: Add v4i32 test

Patch by: Aaron Watry

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181577
2013-05-10 02:09:29 +00:00
Tom Stellard
5d4a5a0d37 R600: Expand vselect for v4i32 and v2i32
v2: Add vselect v4i32 test

Patch by: Aaron Watry

Reviewed-by: Tom Stellard <thomas.stellard@amd.com>
Signed-off-by: Aaron Watry <awatry@gmail.com>

NOTE: This is a candidate for the 3.3 branch.
llvm-svn: 181576
2013-05-10 02:09:24 +00:00
Owen Anderson
e34b034b7d Teach SelectionDAG to constant fold all-constant FMA nodes the same way that it constant folds FADD, FMUL, etc.
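
A hedged worked example (not from the commit) of what folding an all-constant
FMA means:

  #include <cmath>
  // fma(2, 3, 4) = 2*3 + 4 = 10; an FMA node whose operands are all constants
  // can likewise be replaced by the single constant 10.0 at compile time.
  const double folded = std::fma(2.0, 3.0, 4.0);
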
llvm-svn: 181555
2013-05-09 22:27:13 +00:00
Bill Wendling
59bb428cc9 Generate a compact unwind encoding in the face of a stack alignment push.
We generate a `push' of a random register (%rax) if the stack needs to be
aligned by the size of that register. However, this could mess up compact unwind
generation. In particular, we want to still generate compact unwind in the
presence of this monstrosity.

Check if the push is of the %rax/%eax register. If it is and it's marked with
the `FrameSetup' flag, then we can generate a compact unwind encoding for the
function only if the push is the last FrameSetup instruction.

llvm-svn: 181540
2013-05-09 20:10:38 +00:00
Jyotsna Verma
15d448ba0c Hexagon: Use relation map for getMatchingCondBranchOpcode() and
getInvertedPredicatedOpcode() functions instead of switch cases.

llvm-svn: 181530
2013-05-09 18:25:44 +00:00
Richard Osborne
7504cb9f47 [XCore] Fix handling of functions where only the LR is spilled.
Previously we only checked whether the LR required saving if the frame size was
non-zero. However, because the caller reserves 1 word for the callee to use
that doesn't count towards our frame size, it is possible for the LR to need
saving while the frame size is 0.

We didn't hit this when the LR needed saving because of a function call, because
the 1 word of stack we must allocate for our callee means the frame size
is always non-zero in that case. However, we can hit this case if the LR is
clobbered in inline asm.
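
A minimal hypothetical reproducer, assuming the toolchain accepts "lr" as an
XCore inline-asm clobber name (this example is not taken from the commit):

  void clobber_lr_only() {
    // No call and no locals, so the frame size is 0, yet LR still needs saving.
    __asm__ volatile("" ::: "lr");
  }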

llvm-svn: 181520
2013-05-09 16:43:42 +00:00
Akira Hatanaka
a1d814e7b8 [mips] Add instruction selection pattern for (seteq $LHS, 0).
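
As a hedged illustration (not part of the commit), source that produces this
node:

  int is_zero(int x) { return x == 0; }   // (seteq x, 0); on MIPS this is
                                          // typically something like sltiu $2, $4, 1
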
llvm-svn: 181459
2013-05-08 19:38:04 +00:00
Bill Schmidt
7f1a2b5212 Fix handling of anonymous aggregate parameters for powerpc*-apple-darwin8.
This fixes bug 15821 similarly to the powerpc64-linux fix for bug 14779.

Patch by David Fang.

llvm-svn: 181449
2013-05-08 17:22:33 +00:00