mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 19:12:56 +02:00
Commit Graph

9252 Commits

Author SHA1 Message Date
Adam Nemet
05756683c9 Update comment from r203315 based on review
llvm-svn: 203361
2014-03-08 21:51:55 +00:00
David Blaikie
6f7025ca24 DebugInfo: further improvements to test following up on r203329
llvm-svn: 203337
2014-03-08 02:45:53 +00:00
David Blaikie
097ffbf12d DebugInfo: Fix test fallout from r203323
Will fix this harder in a moment.

llvm-svn: 203329
2014-03-08 01:32:51 +00:00
Adam Nemet
f591fbc1db [DAGCombiner] Recognize another rotation idiom
This is the new idiom:

  x<<(y&31) | x>>((0-y)&31)

which is recognized as:

  x ROTL (y&31)

The change refines matchRotateSub.  In
Neg & (OpSize - 1) == (OpSize - Pos) & (OpSize - 1), if Pos is
Pos' & (OpSize - 1), we can just use Pos' instead of Pos.
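
For reference, a minimal C++ sketch of the source-level idiom (the function
name is illustrative and a 32-bit unsigned type is assumed):

unsigned rotl32(unsigned x, unsigned y) {
  // Matched by the combiner as: x ROTL (y & 31)
  return (x << (y & 31)) | (x >> ((0 - y) & 31));
}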

llvm-svn: 203315
2014-03-07 23:56:28 +00:00
Arnold Schwaighofer
bd1d167eb0 ISel: Make VSELECT selection terminate in cases where the condition type has to
be split and the result type widened.

When the condition of a vselect has to be split, it makes no sense to widen the
vselect and thereby widen the condition: doing so puts us in an endless loop of
widening (the vselect result type) and splitting (the condition mask type).
Instead, split both the condition and the vselect and widen the result.

I ran this over the test suite with i686 and mattr=+sse and saw no regressions.

Fixes PR18036.

llvm-svn: 203311
2014-03-07 23:25:55 +00:00
Sasa Stankovic
5f4d984f32 Moved test file from test/MC/Mips to test/CodeGen/Mips.
llvm-svn: 203298
2014-03-07 22:08:46 +00:00
Tom Stellard
bee4678d48 R600/SI: Using SGPRs is illegal for instructions that read carry-out from VCC
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 203281
2014-03-07 20:12:39 +00:00
Tom Stellard
230af572ff R600/SI: Custom lower i1 stores
These are sometimes created by the shrink-to-boolean optimization in the
globalopt pass.

Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 203280
2014-03-07 20:12:33 +00:00
Tim Northover
781b15d502 CodeGenPrep: sink extends of illegal types into use block.
This helps the instruction selector to lower an i64 * i64 -> i128
multiplication into a single instruction on targets which support it.
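
As an illustration, the kind of source that can now lower to a single widening
multiply (hypothetical function; assumes GCC/Clang's __int128 on a 64-bit
target):

__int128 mul128(long long a, long long b) {
  // With the sext sunk into the use block, ISel can select a single
  // 64x64->128 multiply on targets that support it.
  return (__int128)a * (__int128)b;
}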

Patch by Manuel Jacob.

llvm-svn: 203230
2014-03-07 11:04:30 +00:00
Rafael Espindola
cb9ca86245 Replace PROLOG_LABEL with a new CFI_INSTRUCTION.
The old system was fairly convoluted:
* A temporary label was created.
* A single PROLOG_LABEL was created with it.
* A few MCCFIInstructions were created with the same label.

The semantics were that the cfi instructions were mapped to the PROLOG_LABEL
via the temporary label. The output position was that of the PROLOG_LABEL.
The temporary label itself was used only for doing the mapping.

The new CFI_INSTRUCTION has a 1:1 mapping to MCCFIInstructions and points to
one by holding an index into the CFI instructions of this function.

I did consider removing MMI.getFrameInstructions completely and having
CFI_INSTRUCTION own an MCCFIInstruction, but MCCFIInstructions have non-trivial
constructors and destructors and are somewhat big, so this setup is probably
better.

The net result is that we don't create temporary labels that are never used.
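
Conceptually, the new mapping reduces to an index lookup; a hedged C++ sketch
with stand-in names (not LLVM's real classes):

#include <vector>

struct MCCFIInstructionStub {};  // stand-in; the real class is non-trivial
struct MachineFunctionStub {
  std::vector<MCCFIInstructionStub> FrameInstructions;  // MMI.getFrameInstructions()
};
struct CFIInstructionStub {
  unsigned CFIIndex;  // the CFI_INSTRUCTION's only operand
};

const MCCFIInstructionStub &lookup(const MachineFunctionStub &MF,
                                   const CFIInstructionStub &MI) {
  return MF.FrameInstructions[MI.CFIIndex];  // 1:1 mapping, no labels
}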

llvm-svn: 203204
2014-03-07 06:08:31 +00:00
Rafael Espindola
fe5dfa44c9 Remove shouldEmitUsedDirectiveFor.
Clang now uses llvm.compiler.used for these cases.

llvm-svn: 203174
2014-03-06 22:47:08 +00:00
Rafael Espindola
f350832c74 Convert test to FileCheck.
llvm-svn: 203173
2014-03-06 22:21:43 +00:00
Andrea Di Biagio
4586b3b78c [X86] Teach the DAGCombiner how to fold a OR of two shufflevector nodes.
This patch teaches the DAGCombiner how to fold a binary OR between two
shufflevector nodes into a single shuffle vector when possible.

The rules are:
  1. fold (or (shuf A, V_0, MA), (shuf B, V_0, MB)) -> (shuf A, B, Mask1)
  2. fold (or (shuf A, V_0, MA), (shuf B, V_0, MB)) -> (shuf B, A, Mask2)

The DAGCombiner can take advantage of the fact that OR is commutative and
compute two possible shuffle masks (Mask1 and Mask2) for the resulting
shuffle node.

Before folding a dag according to either rule 1 or 2, DAGCombiner verifies
that the resulting shuffle mask is legal for the target.
DAGCombiner first tries to fold according to rule 1; if that is not
possible, it then tries rule 2. If both Mask1 and Mask2 are illegal, we
conservatively don't fold the OR.
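
A hedged C++ sketch of the rule-1 mask computation (a hand-written helper, not
the actual DAGCombiner code; it assumes that for every lane exactly one of the
two input shuffles selects a real element while the other selects from the
zero vector, and it ignores undef lanes for brevity):

#include <vector>

std::vector<int> combineOrShuffleMasks(const std::vector<int> &MA,
                                       const std::vector<int> &MB) {
  unsigned N = MA.size();
  std::vector<int> Mask1(N);
  for (unsigned i = 0; i != N; ++i)
    Mask1[i] = (unsigned)MA[i] < N
                   ? MA[i]       // lane comes from A (operand 0)
                   : MB[i] + N;  // lane comes from B (operand 1)
  return Mask1;
}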

llvm-svn: 203156
2014-03-06 20:19:52 +00:00
Matt Arsenault
8140d7d370 R600: Fix extloads from i8 / i16 to i64.
This appears to work only for global loads; private and local loads break
for other reasons.

llvm-svn: 203135
2014-03-06 17:34:12 +00:00
Matt Arsenault
f68a94e609 R600/SI: Expand selects on vectors.
llvm-svn: 203134
2014-03-06 17:34:03 +00:00
Richard Osborne
7d4ecf1273 [XCore] Add support for the "m" inline asm constraint.
Summary:
This provides support for CP and DP relative global accesses in inline
asm.

Reviewers: robertlytton

Reviewed By: robertlytton

Differential Revision: http://llvm-reviews.chandlerc.com/D2943

llvm-svn: 203129
2014-03-06 16:37:48 +00:00
Chad Rosier
6c9595d931 [AArch64] This is a work in progress to provide a machine description
for the Cortex-A53 subtarget in the AArch64 backend.

This patch lays the groundwork to annotate each AArch64 instruction
(no NEON yet) with a list of SchedReadWrite types. The patch also
provides the Cortex-A53 processor resources, maps those to the default
SchedReadWrites, and provides basic latency. NEON support will be added
in a subsequent patch with proper forwarding logic.

Verification was done by setting the pre-RA scheduler to linearize to
better gauge the effect of the MIScheduler. Even without modeling the
forwarding logic, the results show a modest improvement for Cortex-A53.

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

llvm-svn: 203125
2014-03-06 16:04:00 +00:00
Hal Finkel
c04da1f4c7 Fixup PPC Darwin i1 argument handling
Like on other targets, we need to zero_extend/truncate i1 args before copying
them to GPRs.

llvm-svn: 203045
2014-03-06 00:45:19 +00:00
Hal Finkel
0373847793 When using CR bit registers on PPC32, handle the i1 vaarg case
When copying an i1 value into a GPR for a vaarg call, we need to explicitly
zero-extend the i1 value (otherwise an invalid CRBIT -> GPR copy will be
generated).

llvm-svn: 203041
2014-03-06 00:23:33 +00:00
Jack Carter
79eae149e1 [Mips] Testcase typo fix. No functionality change.
llvm-svn: 203020
2014-03-05 22:54:56 +00:00
Hal Finkel
18344a3ff6 With PPC CR bit registers, handle int_to_fp on older cores
On cores without fpcvt support, we cannot promote int_to_fp i1 operations,
because there is nothing to promote them to. The most straightforward
implementation of this uses a select to choose between the two possible
resulting floating-point values (and that's what is done here).
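
In source terms the lowering is just the obvious two-way choice. A minimal
sketch, assuming the unsigned (uitofp) case and an illustrative function name:

float i1_to_fp(bool b) {
  // An i1 can only convert to 0.0 or 1.0, so a select between the two
  // constants replaces the unavailable promoted int_to_fp.
  return b ? 1.0f : 0.0f;
}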

llvm-svn: 203015
2014-03-05 22:14:00 +00:00
Rafael Espindola
d987164eed Always print the implicit .text at the start of an asm file.
Previously llvm-mc would print it, but llc assumed that it would produce
another section-changing directive before one was needed. That assumption is
false with inline asm.

Fixes PR19049.

Another option would be to always create the section, but in the asm printer
avoid printing sections changes during initialization. That would work, but
* We do use the fact that llvm-mc prints it in testing. The tests can be changed
  if needed.
* A quick poll on IRC suggest that most developers prefer the implicit .text to
  be printed.

llvm-svn: 203001
2014-03-05 20:09:15 +00:00
Cameron McInally
80fa2d42e5 Lower AVX v4i64->v4i32 truncate to one shuffle.
llvm-svn: 202996
2014-03-05 19:41:16 +00:00
Oliver Stannard
6aa486598a ARM: Correctly align arguments after a byval struct is passed on the stack
llvm-svn: 202985
2014-03-05 15:25:27 +00:00
Andrew Trick
bcd20c0c29 Make stackmap machineinstrs clobber the scratch regs too.
Patchpoints already did this. Doing it for stackmaps is a convenience
for the runtime in the event that it needs a scratch register to
patch or to perform a runtime call thunk.

Unlike patchpoints, we just assume the AnyRegCC calling convention.
This makes sense because it is the only language- and target-independent
calling convention specific to stackmaps, although the calling convention
is not currently used to select the scratch registers.

llvm-svn: 202943
2014-03-05 07:08:16 +00:00
Hans Wennborg
c1cb270dba Check for dynamic allocas and inline asm that clobbers sp before building
selection dag (PR19012)

In X86SelectionDagInfo::EmitTargetCodeForMemcpy we check with MachineFrameInfo
to make sure that ESI isn't used as a base pointer register before we choose to
emit rep movs (which clobbers ESI).

The problem is that MachineFrameInfo wouldn't know about dynamic allocas or
inline asm that clobbers the stack pointer until SelectionDAGBuilder has
encountered them.

This patch fixes the problem by checking for such things when building the
FunctionLoweringInfo.
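
For illustration, a hypothetical function of the kind PR19012 involves: the
dynamic alloca can force ESI into base-pointer duty, so the memcpy must not be
lowered to rep movs (alloca.h is assumed for the prototype):

#include <alloca.h>
#include <string.h>

void f(char *src, int n) {
  char *buf = (char *)alloca(n);  // dynamic alloca: variable-size frame
  memcpy(buf, src, 128);          // must avoid rep movs, which clobbers ESI
}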

Differential Revision: http://llvm-reviews.chandlerc.com/D2954

llvm-svn: 202930
2014-03-05 02:43:26 +00:00
Richard Osborne
b9f5c6e728 [XCore] Fix call of absolute address.
Previously for:

tail call void inttoptr (i64 65536 to void ()*)() nounwind

We would emit:

bl 65536

The immediate operand of the bl instruction is a relative offset so it is
wrong to use the absolute address here.

llvm-svn: 202860
2014-03-04 16:50:30 +00:00
Daniel Sanders
2e526d806c [mips][msa] Correct the behaviour of the COPY_FW pseudo on lanes 2 and 3.
Summary:
Previously, attempting to extract lanes 2 and 3 would actually extract lane 1.
The MSA CodeGen tests only covered lanes 0 and 1.
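
A hedged example using the GCC/Clang MSA builtins (function name illustrative;
requires -mmsa):

typedef int v4i32 __attribute__((vector_size(16)));

int extract_lane2(v4i32 v) {
  // Previously this would have returned lane 1's value instead.
  return __builtin_msa_copy_s_w(v, 2);
}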

Differential Revision: http://llvm-reviews.chandlerc.com/D2935

llvm-svn: 202848
2014-03-04 13:54:30 +00:00
Chad Rosier
e60c767814 Revert "[AArch64] This is a work in progress to provide a machine description"
This reverts commit ff717c8fc786a0cfa1602982b91895fa09e514fc.

llvm-svn: 202773
2014-03-04 00:32:07 +00:00
Chad Rosier
ad64e09862 [AArch64] This is a work in progress to provide a machine description
for the Cortex-A53 subtarget in the AArch64 backend.

This patch lays the groundwork to annotate each AArch64 instruction
(no NEON yet) with a list of SchedReadWrite types. The patch also
provides the Cortex-A53 processor resources, maps those to the default
SchedReadWrites, and provides basic latency. NEON support will be added
in a subsequent patch with proper forwarding logic.

Verification was done by setting the pre-RA scheduler to linearize to
better gauge the effect of the MIScheduler. Even without modeling the
forwarding logic, the results show a modest improvement for Cortex-A53.

Reviewers: apazos, mcrosier, atrick
Patch by Dave Estes <cestes@codeaurora.org>!

llvm-svn: 202767
2014-03-03 23:32:47 +00:00
Daniel Sanders
ea44f13708 [mips] Prevent %lo relocation being used on MSA loads and stores.
Summary:
Parts of the compiler still believed MSA loads/stores have a 16-bit offset
when it is actually 10-bit. This has been corrected, and fixing it uncovered
a closely related issue: loads/stores with 10-bit and 12-bit offsets (MSA and
microMIPS respectively) could not load/store using offsets from the
stack/frame pointer. They accepted frameindex+offset, but not frameindex by
itself.

Reviewers: jacksprat, matheusalmeida

Reviewed By: jacksprat

Differential Revision: http://llvm-reviews.chandlerc.com/D2888

llvm-svn: 202717
2014-03-03 14:31:21 +00:00
Hal Finkel
64680c3ba1 Add a PPC inline asm constraint type for single CR bits
Now that the PowerPC backend can track individual CR bits as first-class
registers, we should also have a way of allocating them for inline asm
statements. Because these registers are only one bit, if an output variable is
implicitly cast to a larger integer size, we'll get an any_extend to that
larger type (this is part of the existing target-independent logic). As a
result, regardless of the size of the output type, only the first bit is
meaningful.

The constraint identifier "wc" has been chosen for this purpose. Although gcc
does not currently support allocating individual CR bits, this identifier
choice has been coordinated with the gcc PowerPC team, and will be marked as
reserved for this purpose in the gcc constraints.md file.
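
A hypothetical use from GCC-style inline asm might look like the following;
the asm body and function name are illustrative only, while "wc" is the
constraint introduced here:

int cr_bit_one(void) {
  int b;
  __asm__("crset %0" : "=wc"(b));  // allocate a single CR bit
  return b & 1;                    // only the first bit is meaningful
}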

llvm-svn: 202657
2014-03-02 18:23:39 +00:00
Elena Demikhovsky
838b163a58 AVX-512: Fixed extract_vector_elt for v8i1 vector
llvm-svn: 202624
2014-03-02 09:19:44 +00:00
Matt Arsenault
394a9d104d R600: Add failing control flow tests.
Simple cases hit a variety of problems at -O0.

llvm-svn: 202601
2014-03-01 21:45:41 +00:00
Hal Finkel
4937443651 Remove extra truncs/exts around i32 bit operations on PPC64
This generalizes the code that eliminates extra truncs/exts around i1 bit
operations so that it also does the same on PPC64 for i32 bit operations. This
eliminates a fairly prevalent code wart:

int foo(int a) {
  return a == 5 ? 7 : 8;
}

On PPC64, because of the extension implied by the ABI, this would generate:

	cmplwi 0, 3, 5
	li 12, 8
	li 4, 7
	isel 3, 4, 12, 2
	rldicl 3, 3, 0, 32
	blr

where the 'rldicl 3, 3, 0, 32', the extension, is completely unnecessary. At
least for the single-BB case (which is all that the DAG combine mechanism can
handle), this unnecessary extension is no longer generated.

llvm-svn: 202600
2014-03-01 21:36:57 +00:00
Venkatraman Govindaraju
789e2fd1b7 [Sparc] Add support for parsing directives in SparcAsmParser.
llvm-svn: 202564
2014-03-01 02:18:04 +00:00
Venkatraman Govindaraju
439a7d90a6 [Sparc] Emit 'restore' instead of 'restore %g0, %g0, %g0'. This improves the readability of the generated code.
llvm-svn: 202563
2014-03-01 01:04:26 +00:00
Manman Ren
fe531c446c SpillPlacement: fix a bug in iterate.
Inside iterate, we scan backwards and then scan forwards in a loop. When the
iteration count is not zero, the last node was just updated, so we can skip
it. But when the iteration count is zero, we can't skip the last node.

For the testing case, fixing this will save a spill and move register copies
from hot path to cold path.
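
A hedged C++ sketch of the corrected backward scan (simplified stand-in types,
not the actual SpillPlacement code):

#include <vector>

struct Node {};        // per-node state elided
void update(Node &N);  // recompute one node's preference (declaration only)

void iterateBackwards(std::vector<Node> &Nodes, unsigned Iteration) {
  unsigned First = Nodes.size() - 1;
  if (Iteration != 0)
    --First;  // the last node was just updated, so it can be skipped
  // On iteration zero the last node has not been updated yet and must
  // be included in the scan.
  for (unsigned i = First + 1; i-- != 0;)
    update(Nodes[i]);
}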

llvm-svn: 202557
2014-02-28 23:05:31 +00:00
Tom Stellard
6280afdecd R600/SI: Expand all v16[if]32 operations
llvm-svn: 202543
2014-02-28 21:36:37 +00:00
Justin Bogner
56a8b49ffd CommandLine: Exit successfully for -version and -help
Tools that use the CommandLine library currently exit with an error
when invoked with -version or -help. This is unusual and non-standard,
so we'll fix them to exit successfully instead.

I don't expect that anyone relies on the current behaviour, so this
should be a fairly safe change.

llvm-svn: 202530
2014-02-28 19:08:01 +00:00
Adam Nemet
0fe89b88ce Test commit
llvm-svn: 202528
2014-02-28 18:44:39 +00:00
Zoran Jovanovic
9c1887bef4 Fixed operand of SC microMIPS instruction.
llvm-svn: 202526
2014-02-28 18:22:56 +00:00
Hal Finkel
1970087008 Swap PPC isel operands to allow for 0-folding
The PPC isel instruction can fold 0 into the first operand (thus eliminating
the need to materialize a zero-containing register when the 'true' result of
the isel is 0). When the isel is fed by a bit register operation that we can
invert, do so as part of the bit-register-operation peephole routine.

llvm-svn: 202469
2014-02-28 06:11:16 +00:00
Hal Finkel
883c64377d Add CR-bit tracking to the PowerPC backend for i1 values
This change enables tracking i1 values in the PowerPC backend using the
condition register bits. These bits can be treated on PowerPC as separate
registers; individual bit operations (and, or, xor, etc.) are supported.
Tracking booleans in CR bits has several advantages:

 - Reduction in register pressure (because we no longer need GPRs to store
   boolean values).

 - Logical operations on booleans can be handled more efficiently; we used to
   have to move all results from comparisons into GPRs, perform promoted
   logical operations in GPRs, and then move the result back into condition
   register bits to be used by conditional branches. This can be very
   inefficient, because these CR <-> GPR moves have high latency and low
   throughput (especially when other associated instructions are accounted
   for).

 - On the POWER7 and similar cores, we can increase total throughput by using
   the CR bits. CR bit operations have a dedicated functional unit.

Most of this is more-or-less mechanical: Adjustments were needed in the
calling-convention code, support was added for spilling/restoring individual
condition-register bits, and conditional branch instruction definitions taking
specific CR bits were added (plus patterns and code for generating bit-level
operations).

This is enabled by default when running at -O2 and higher. For -O0 and -O1,
where the ability to debug is more important, this feature is disabled by
default. Individual CR bits do not have assigned DWARF register numbers,
and storing values in CR bits makes them invisible to the debugger.

It is critical, however, that we don't move i1 values that have been promoted
to larger values (such as those passed as function arguments) into bit
registers only to quickly turn around and move the values back into GPRs (such
as happens when values are returned by functions). A pair of target-specific
DAG combines are added to remove the trunc/extends in:
  trunc(binary-ops(binary-ops(zext(x), zext(y)), ...))
and:
  zext(binary-ops(binary-ops(trunc(x), trunc(y)), ...))
In short, we only want to use CR bits where some of the i1 values come from
comparisons or are used by conditional branches or selects. To put it another
way, if we can do the entire i1 computation in GPRs, then we probably should
(on the POWER7, the GPR-operation throughput is higher, and for all cores, the
CR <-> GPR moves are expensive).
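
Two illustrative C++ functions (hypothetical names and bodies) showing the
intended split:

// i1 values that arrive and leave as plain integers should stay in
// GPRs: the whole computation below is GPR logic ops.
int stay_in_gprs(int a, int b) {
  return a & b & 1;
}

// Comparison results feeding a select can live entirely in CR bits,
// avoiding CR <-> GPR moves.
int use_cr_bits(int a, int b, int x, int y) {
  return ((a == 0) & (b == 1)) ? x : y;
}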

POWER7 test-suite performance results (from 10 runs in each configuration):

SingleSource/Benchmarks/Misc/mandel-2: 35% speedup
MultiSource/Benchmarks/Prolangs-C++/city/city: 21% speedup
MultiSource/Benchmarks/MiBench/automotive-susan: 23% speedup
SingleSource/Benchmarks/CoyoteBench/huffbench: 13% speedup
SingleSource/Benchmarks/Misc-C++/Large/sphereflake: 13% speedup
SingleSource/Benchmarks/Misc-C++/mandel-text: 10% speedup

SingleSource/Benchmarks/Misc-C++-EH/spirit: 10% slowdown
MultiSource/Applications/lemon/lemon: 8% slowdown

llvm-svn: 202451
2014-02-28 00:27:01 +00:00
Roman Divacky
f36febf578 Lower FNEG just like FABS to fneg[ds] and fmov[ds], thus avoiding
expensive libcall. Also, Qp_neg is not implemented on at least
FreeBSD. This is also what gcc is doing.

llvm-svn: 202422
2014-02-27 19:26:29 +00:00
Adrian Prantl
d7f77dd966 Debug info: Remove ARMAsmPrinter::EmitDwarfRegOp(). AsmPrinter can now
scan the register file for sub- and super-registers.
No functionality change intended.

(Tests are updated because the comments in the assembler output are
different.)

llvm-svn: 202416
2014-02-27 17:56:08 +00:00
Richard Osborne
947c19eaa0 [XCore] Support functions returning more than 4 words.
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language, which supports
functions returning an arbitrary number of return values. This is
r202397 reapplied with a fix to avoid an uninitialized read of a member.
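
A hypothetical C++ example of a by-value return larger than 4 words:

struct six_words { int w[6]; };

// The first 4 words are returned in registers; the remaining 2 go in
// the stack location reserved by the caller.
struct six_words make_six(void) {
  struct six_words s = {{1, 2, 3, 4, 5, 6}};
  return s;
}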

llvm-svn: 202414
2014-02-27 17:47:54 +00:00
Richard Osborne
f8fb4e8a7f Revert r202396, r202397.
These are causing test failures, revert for now.

llvm-svn: 202398
2014-02-27 14:24:13 +00:00
Richard Osborne
cb6866dfec [XCore] Support functions returning more than 4 words.
Summary:
If a function returns a large struct by value, return the first 4 words
in registers and the rest on the stack in a location reserved by the
caller. This is needed to support the xC language, which supports
functions returning an arbitrary number of return values.

Reviewers: robertlytton

Reviewed By: robertlytton

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2889

llvm-svn: 202397
2014-02-27 14:00:40 +00:00
Richard Osborne
5ac74685fd [XCore] Target optimized library function __memcpy_4()
Summary:
If the src, dst and size of a memcpy are known to be 4-byte aligned, we
can call __memcpy_4() instead of memcpy().
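
A call that qualifies might look like this hedged sketch (function name
illustrative; both pointers and the byte count are 4-byte aligned):

#include <string.h>

void copy_words(int *dst, const int *src, unsigned n) {
  // dst, src and n * sizeof(int) are all 4-byte aligned, so this may
  // be lowered to a call to __memcpy_4() instead of memcpy().
  memcpy(dst, src, n * sizeof(int));
}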

Reviewers: robertlytton

Reviewed By: robertlytton

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D2871

llvm-svn: 202395
2014-02-27 13:39:07 +00:00