mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 04:22:57 +02:00
Commit Graph

92004 Commits

Author SHA1 Message Date
Hal Finkel
7daa616e35 PPC32 cannot form counter loops around i64 FP conversions
On PPC32, i64 FP conversions are implemented using runtime calls (which clobber
the counter register). These must be excluded.

llvm-svn: 182023
2013-05-16 16:52:41 +00:00
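
A minimal sketch of the exclusion described above, assuming J walks the
loop body and TT is the target triple (not the commit's verbatim code):

    // On PPC32 an i64 <-> FP cast becomes a runtime call that clobbers
    // the counter register, so such a loop cannot become a CTR loop.
    if (isa<FPToSIInst>(J) || isa<FPToUIInst>(J) ||
        isa<SIToFPInst>(J) || isa<UIToFPInst>(J)) {
      CastInst *CI = cast<CastInst>(J);
      if (!TT.isArch64Bit() &&
          (CI->getSrcTy()->getScalarType()->isIntegerTy(64) ||
           CI->getDestTy()->getScalarType()->isIntegerTy(64)))
        return true; // the libcall interferes with CTR: reject the loop
    }
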
Rafael Espindola
eb3034b91f Add a triple to the test to try to fix the windows bots.
llvm-svn: 182022
2013-05-16 16:48:46 +00:00
Rafael Espindola
d2b872cd1b More addFrameMove test coverage.
llvm-svn: 182021
2013-05-16 16:34:38 +00:00
Bill Schmidt
edb2d94d45 Use new CHECK-DAG support to stabilize CodeGen/PowerPC/recipest.ll
While testing some experimental code to add vector-scalar registers to
PowerPC, I noticed that a couple of independent instructions were
flipped by the scheduler.  The new CHECK-DAG support is perfect for
avoiding this problem.

llvm-svn: 182020
2013-05-16 16:15:18 +00:00
Rafael Espindola
379ba8a86e Add more addFrameMove test coverage.
llvm-svn: 182019
2013-05-16 16:09:54 +00:00
Aaron Ballman
42af887d8c Fixing a 64-bit conversion warning in MSVC.
llvm-svn: 182018
2013-05-16 16:03:36 +00:00
Rafael Espindola
7dd7b264b7 Add more test coverage for addFrameMove.
llvm-svn: 182017
2013-05-16 15:18:50 +00:00
Rafael Espindola
4c7120e048 Remove dead calls to addFrameMove.
Without a PROLOG_LABEL present, the CFI instructions are never printed.

llvm-svn: 182016
2013-05-16 15:08:37 +00:00
Ulrich Weigand
08228b8354 [PowerPC] Report true displacement value from getPreIndexedAddressParts
DAGCombiner::CombineToPreIndexedLoadStore calls a target routine to
decompose a memory address into a base/offset pair.  It expects the
offset (if constant) to be the true displacement value in order to
perform optional additional optimizations; in particular, to convert
other uses of the original pointer into uses of the new base pointer
after pre-increment.

The PowerPC implementation of getPreIndexedAddressParts, however,
simply calls SelectAddressRegImm, which returns a TargetConstant.
This value is appropriate for encoding into the instruction, but
it is not always usable as true displacement value:

- Its type is always MVT::i32, even on 64-bit, where addresses
  ought to be i64 ... this causes the optimization to simply
  always fail on 64-bit due to this line in DAGCombiner:

      // FIXME: In some cases, we can be smarter about this.
      if (Op1.getValueType() != Offset.getValueType()) {

- Its value is truncated to an unsigned 16-bit value if negative.
  This causes the above optimization to generate wrong code.

This patch fixes both problems by simply returning the true
displacement value (in its original type).  This doesn't
affect any other user of the displacement.

llvm-svn: 182012
2013-05-16 14:53:05 +00:00
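
A hedged sketch of the shape of the fix (Base, Offset, and DAG are
assumed to be the usual locals of getPreIndexedAddressParts; getConstant
appears in its period-correct two-argument form): rebuild the offset as
a plain constant carrying the sign-extended displacement in the base
pointer's own type.

    if (ConstantSDNode *CN = dyn_cast<ConstantSDNode>(Offset)) {
      int64_t Imm = (int16_t)CN->getZExtValue(); // recover the sign
      Offset = DAG.getConstant(Imm, Base.getValueType());
    }
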
Rafael Espindola
16c10c628f Add more addFrameMove test coverage.
llvm-svn: 182011
2013-05-16 14:51:26 +00:00
Rafael Espindola
0daf590030 Extend test to check the .cfi instructions.
I am about to refactor the calls to addFrameMove and some of the ppc
ones were not being tested.

llvm-svn: 182009
2013-05-16 14:30:09 +00:00
Richard Sandiford
cb335bb295 [SystemZ] Tweak register array comment
llvm-svn: 182007
2013-05-16 13:39:02 +00:00
Benjamin Kramer
c2c24d044d Relax CHECK-NEXTs a bit to cope with Atom's return nop padding.
llvm-svn: 181999
2013-05-16 11:46:50 +00:00
Evgeniy Stepanov
bce1a20d13 [msan] Switch TLS globals to initial-exec model.
They are always defined in the main executable.

llvm-svn: 181994
2013-05-16 09:14:05 +00:00
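
For illustration, selecting the model when the instrumentation creates a
TLS global might look like this (a sketch; M and ArrayTy are assumed,
and __msan_param_tls is one of the runtime's TLS globals):

    // The shadow TLS state is defined in the main executable, so the
    // cheaper initial-exec model is always safe to request.
    GlobalVariable *ParamTLS = new GlobalVariable(
        M, ArrayTy, /*isConstant=*/false, GlobalValue::ExternalLinkage,
        nullptr, "__msan_param_tls", nullptr,
        GlobalValue::InitialExecTLSModel);
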
Patrik Hagglund
a0ea76e714 Removed unused variable, detected by gcc
-Wunused-but-set-variable. Leftover from r181979.

llvm-svn: 181993
2013-05-16 08:37:22 +00:00
Rafael Espindola
537821c785 Delete dead code.
llvm-svn: 181982
2013-05-16 04:59:17 +00:00
Rafael Espindola
24bf7876c2 Don't call addFrameMove on XCore.
getExceptionHandlingType is not ExceptionHandling::DwarfCFI on XCore, so
getFrameInstructions is never called. There is no point creating CFI
instructions if they are never used.

llvm-svn: 181979
2013-05-16 04:16:25 +00:00
Richard Smith
3995413b20 Respect the 'nobuiltin' attribute when determining if a call is to a memory builtin.
llvm-svn: 181978
2013-05-16 04:12:04 +00:00
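
A minimal sketch of the check, assuming V is the value being classified
(placement within the memory-builtin recognizer is approximate):

    // A call marked 'nobuiltin' must not be treated as malloc/free/etc.,
    // even if it targets a function with a builtin's name and prototype.
    if (const CallInst *CI = dyn_cast<CallInst>(V))
      if (CI->hasFnAttr(Attribute::NoBuiltin))
        return false; // not a recognizable memory builtin
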
Rafael Espindola
554870d776 Extend test for better coverage.
Without this change nothing was covering this addFrameMove:

// For 64-bit SVR4 when we have spilled CRs, the spill location
// is SP+8, not a frame-relative slot.
if (Subtarget.isSVR4ABI()
    && Subtarget.isPPC64()
    && (PPC::CR2 <= Reg && Reg <= PPC::CR4)) {
  MachineLocation CSDst(PPC::X1, 8);
  MachineLocation CSSrc(PPC::CR2);
  MMI.addFrameMove(Label, CSDst, CSSrc);
  continue;
}

llvm-svn: 181976
2013-05-16 03:48:50 +00:00
Rafael Espindola
92a3518a62 Removed dead code.
llvm-svn: 181975
2013-05-16 03:34:58 +00:00
Lang Hames
b0c9552559 Fix PBQP graph iterator typedefs.
llvm-svn: 181973
2013-05-16 02:20:41 +00:00
Reed Kotler
fb71c30979 Patch number 2 for mips16/32 floating point interoperability stubs.
This creates stubs that help Mips32 functions call Mips16
functions which have floating-point parameters that are normally passed
in floating-point registers.

llvm-svn: 181972
2013-05-16 02:17:42 +00:00
Derek Schuff
8af06f2ba6 Revert "Support unaligned load/store on more ARM targets"
This reverts r181898.

llvm-svn: 181944
2013-05-15 23:07:43 +00:00
Eli Bendersky
c0e010b564 Remove dead code.
This method is not being used/tested anywhere.

llvm-svn: 181943
2013-05-15 22:41:28 +00:00
Arnold Schwaighofer
63eb047996 LoopVectorize: Move call of canHoistAllLoads to canVectorizeWithIfConvert
We only want to check this once, not for every conditional block in the loop.

No functionality change (except that we no longer perform a check
redundantly).

llvm-svn: 181942
2013-05-15 22:38:14 +00:00
Rafael Espindola
e1f815ebd9 Delete dead code.
llvm-svn: 181941
2013-05-15 22:27:35 +00:00
David Majnemer
71b69a2eb9 Set an explicit triple for this test.
This allows the test to correctly check symbol names.

llvm-svn: 181939
2013-05-15 22:23:21 +00:00
Hal Finkel
ae8f6158eb undef setjmp in PPCCTRLoops
Trying to unbreak the VS build by copying some undef code from
Utils/LowerInvoke.cpp.

llvm-svn: 181938
2013-05-15 22:20:24 +00:00
David Majnemer
8ce4c34d1d X86: Remove redundant test instructions
Increase the number of instructions LLVM recognizes as setting the ZF
flag. This allows us to remove test instructions that redundantly
recalculate the flag.

llvm-svn: 181937
2013-05-15 22:03:08 +00:00
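
In spirit, the change extends an opcode list; a sketch with a
hypothetical helper (the real list is much longer and lives in the X86
peephole that folds TEST into a preceding ALU op):

    // If the tested value was produced by one of these, ZF already
    // reflects whether it is zero, so the following TEST is redundant.
    static bool alreadySetsZF(unsigned Opcode) {
      switch (Opcode) {
      case X86::AND32rr: case X86::OR32rr:  case X86::XOR32rr:
      case X86::ADD32rr: case X86::SUB32rr: case X86::NEG32r:
        return true; // 32-bit register forms shown only
      default:
        return false;
      }
    }
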
Bill Wendling
29e9e36da8 Use proper syntax.
llvm-svn: 181930
2013-05-15 21:38:12 +00:00
Hal Finkel
91bd48d046 Implement PPC counter loops as a late IR-level pass
The old PPCCTRLoops pass, like the Hexagon pass version from which it was
derived, could only handle some simple loops in canonical form. We cannot
directly adapt the new Hexagon hardware loops pass, however, because the
Hexagon pass contains a fundamental assumption that non-constant-trip-count
loops will contain a guard, and this is not always true (the result being that
incorrect negative counts can be generated). With this commit, we replace the
pass with a late IR-level pass which makes use of SE to calculate the
backedge-taken counts and safely generate the loop-count expressions (including
any necessary max() parts). This IR level pass inserts custom intrinsics that
are lowered into the desired decrement-and-branch instructions.

The most fragile part of this new implementation is that interfering uses of
the counter register must be detected on the IR level (and, on PPC, this also
includes any indirect branches in addition to function calls). Also, to make
all of this work, we need a variant of the mtctr instruction that is marked
as having side effects. Without this, machine-code level CSE, DCE, etc.
illegally transform the resulting code. Hopefully, this can be improved
in the future.

This new pass is smaller than the original (and much smaller than the new
Hexagon hardware loops pass), and can handle many additional cases correctly.
In addition, the preheader-creation code has been copied from LoopSimplify, and
after we decide on where it belongs, this code will be refactored so that it
can be explicitly shared (making this implementation even smaller).

The new test-case files ctrloop-{le,lt,ne}.ll have been adapted from tests for
the new Hexagon pass. There are a few classes of loops that this pass does not
transform (noted by FIXMEs in the files), but these deficiencies can be
addressed within the SE infrastructure (thus helping many other passes as well).

llvm-svn: 181927
2013-05-15 21:37:41 +00:00
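
The core rewrite, sketched under assumed locals (Preheader, CountType,
ECValue, CountedExitBranch) and the period-correct IRBuilder/Intrinsic
API:

    // Seed the counter register with the computed trip count ...
    IRBuilder<> CountBuilder(Preheader->getTerminator());
    Module *M = Preheader->getParent()->getParent();
    Value *MTCTRFunc =
        Intrinsic::getDeclaration(M, Intrinsic::ppc_mtctr, CountType);
    CountBuilder.CreateCall(MTCTRFunc, ECValue);

    // ... then replace the latch's exit condition with the
    // decrement-and-test intrinsic, later lowered to bdnz/bdz.
    IRBuilder<> CondBuilder(CountedExitBranch);
    Value *DecFunc = Intrinsic::getDeclaration(
        M, Intrinsic::ppc_is_decremented_ctr_nonzero);
    CountedExitBranch->setCondition(CondBuilder.CreateCall(DecFunc));
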
Hal Finkel
29ce39fc2e Fix legalization of SETCC with promoted integer intrinsics
If the input operands to SETCC are promoted, we need to make sure that we
either use the promoted form of both operands (or neither); a mixture is not
allowed. This can happen, for example, if a target has a custom promoted
i1-returning intrinsic (where i1 is not a legal type). In this case, we need to
use the promoted form of both operands.

This change only augments the behavior of the existing logic in the case where
the input types (which may or may not have already been legalized) disagree,
and should not affect existing target code because this case would otherwise
cause an assert in the SETCC operand promotion code.

This will be covered by (essentially all of the) tests for the new PPCCTRLoops
infrastructure.

llvm-svn: 181926
2013-05-15 21:37:27 +00:00
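
A hedged sketch of the augmented rule (not the verbatim legalizer code;
LHS and RHS are the SETCC operands inside DAGTypeLegalizer):

    // Both operands must be seen in the same type; a mixture of the
    // promoted and unpromoted forms is not allowed.
    if (LHS.getValueType() != RHS.getValueType()) {
      if (getTypeAction(LHS.getValueType()) ==
          TargetLowering::TypePromoteInteger)
        LHS = GetPromotedInteger(LHS);
      if (getTypeAction(RHS.getValueType()) ==
          TargetLowering::TypePromoteInteger)
        RHS = GetPromotedInteger(RHS);
    }
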
Bill Wendling
da386b9c50 Add lldb and polly to the projects to tag.
llvm-svn: 181925
2013-05-15 21:36:46 +00:00
Derek Schuff
962654973c Fix miscompile due to StackColoring incorrectly merging stack slots (PR15707)
IR optimisation passes can result in a basic block that contains:

  llvm.lifetime.start(%buf)
  ...
  llvm.lifetime.end(%buf)
  ...
  llvm.lifetime.start(%buf)

Before this change, calculateLiveIntervals() was ignoring the second
lifetime.start() and was regarding %buf as being dead from the
lifetime.end() through to the end of the basic block.  This can cause
StackColoring to incorrectly merge %buf with another stack slot.

Fix by removing the incorrect Starts[pos].isValid() and
Finishes[pos].isValid() checks.

Just doing:
      Starts[pos] = Indexes->getMBBStartIdx(MBB);
      Finishes[pos] = Indexes->getMBBEndIdx(MBB);
unconditionally would be enough to fix the bug, but it causes some
test failures due to stack slots not being merged when they were
before.  So, in order to keep the existing tests passing, treat LiveIn
and LiveOut separately rather than approximating the live ranges by
merging LiveIn and LiveOut.

This fixes PR15707.
Patch by Mark Seaborn.

llvm-svn: 181922
2013-05-15 21:15:09 +00:00
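
A sketch of the separate treatment (BlockLiveness and its LiveIn/LiveOut
bit vectors follow StackColoring's existing naming; details assumed):

      // Extend a slot's range to the block start only if it is live-in,
      // and to the block end only if it is live-out; a slot that dies
      // and restarts inside the block gets neither extension.
      if (BlockLiveness[MBB].LiveIn.test(pos))
        Starts[pos] = Indexes->getMBBStartIdx(MBB);
      if (BlockLiveness[MBB].LiveOut.test(pos))
        Finishes[pos] = Indexes->getMBBEndIdx(MBB);
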
Rafael Espindola
5ef2f39b92 Cleanup relocation sorting for ELF.
We want the order to be deterministic on all platforms. NAKAMURA Takumi
fixed that in r181864. This patch is just two small cleanups:

* Move the function to the cpp file. It is only passed to array_pod_sort.
* Remove the PPC implementation, which is now redundant.

llvm-svn: 181910
2013-05-15 18:22:01 +00:00
NAKAMURA Takumi
04ff46b750 PPCISelLowering.h: Escape \@ in comments. [-Wdocumentation]
llvm-svn: 181907
2013-05-15 18:01:35 +00:00
NAKAMURA Takumi
9f1d12c5aa Whitespace.
llvm-svn: 181906
2013-05-15 18:01:28 +00:00
Michael Gottesman
49557fbfb5 [objc-arc] Fixed a spelling error and made the statistic descriptions be consistent about their usage of periods.
llvm-svn: 181901
2013-05-15 17:43:03 +00:00
Douglas Gregor
8fdda97cb6 Add missing #include
llvm-svn: 181900
2013-05-15 17:41:02 +00:00
Derek Schuff
d07b32ae79 Support unaligned load/store on more ARM targets
This patch matches GCC behavior: the code used to only allow unaligned
load/store on ARM for v6+ Darwin; it will now allow unaligned load/store for
v6+ Darwin as well as for v7+ on other targets.

The distinction is made because v6 doesn't guarantee support (but LLVM assumes
that Apple controls hardware+kernel and therefore has conformant v6 CPUs),
whereas v7 does provide this guarantee (and Linux behaves sanely).

Overall this should slightly improve performance in most cases because of
reduced I$ pressure.

Patch by JF Bastien

llvm-svn: 181897
2013-05-15 16:08:30 +00:00
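
The policy reduces to a small predicate; a standalone sketch under
assumed flag names (the real logic lives in the ARM subtarget code and
also honors an explicit strict-alignment request):

    // Trust v6 only where Apple controls hardware and kernel; elsewhere
    // require v7, which architecturally guarantees unaligned support.
    static bool allowsUnalignedMem(bool IsDarwin, bool HasV6, bool HasV7,
                                   bool StrictAlign) {
      if (StrictAlign)
        return false; // the user asked for aligned accesses only
      return IsDarwin ? HasV6 : HasV7;
    }
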
Ulrich Weigand
3a366f06c6 Remove MCELFObjectTargetWriter::adjustFixupOffset hack
Now that PowerPC no longer uses adjustFixupOffset, and no other
back-end (ever?) did, we can remove the infrastructure itself
(incidentally addressing a FIXME to that effect).

llvm-svn: 181895
2013-05-15 15:07:42 +00:00
Ulrich Weigand
0325b4909d [PowerPC] Remove need for adjustFixupOffset hack
Now that applyFixup understands differently-sized fixups, we can define
fixup_ppc_lo16/fixup_ppc_lo16_ds/fixup_ppc_ha16 to properly be 2-byte
fixups, applied at an offset of 2 relative to the start of the 
instruction text.

This has the benefit that if we actually need to generate a real
relocation record, its address will come out correctly automatically,
without having to fiddle with the offset in adjustFixupOffset.

Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.

llvm-svn: 181894
2013-05-15 15:07:06 +00:00
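
Concretely, the encoder can now record such a fixup at its true
location; a sketch using the period-correct MCFixup::Create, with the
surrounding encoder context assumed:

    // On big-endian PPC the low half of the 4-byte instruction word
    // starts at offset 2, so emit a 2-byte fixup there.
    Fixups.push_back(MCFixup::Create(2, Expr,
                                     (MCFixupKind)PPC::fixup_ppc_lo16));
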
Richard Sandiford
1ccc224047 [SystemZ] Make use of SUBTRACT HALFWORD
Thanks to Ulrich Weigand for noticing that this instruction was missing.

llvm-svn: 181893
2013-05-15 15:05:29 +00:00
Ulrich Weigand
5062453e37 [PowerPC] Add test case for r181891
llvm-svn: 181892
2013-05-15 15:02:12 +00:00
Ulrich Weigand
08f27cb2c6 [PowerPC] Correctly handle fixups of other than 4 byte size
The PPCAsmBackend::applyFixup routine handles the case where a
fixup can be resolved within the same object file.  However,
this routine is currently hard-coded to assume the size of
any fixup is always exactly 4 bytes.

This is sort-of correct for fixups on instruction text, but it
only works because several fixups that really ought to be 2 bytes
are presented as 4-byte fixups instead (requiring another hack in
PPCELFObjectWriter::adjustFixupOffset to clean it up).

However, this assumption breaks down completely for fixups
on data, which legitimately can be of any size (1, 2, 4, or 8).

This patch makes applyFixup aware of fixups of varying sizes,
introducing a new helper routine getFixupKindNumBytes (along
the lines of what the ARM back end does).  Note that in order
to handle fixups of size 8, we also need to fix the return type
of adjustFixupValue to uint64_t to avoid truncation.

Tested on both 64-bit and 32-bit PowerPC, using external and
integrated assembler.

llvm-svn: 181891
2013-05-15 15:01:46 +00:00
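
The helper is essentially a size table; a sketch close to what the
commit describes (the exact set of fixup kinds is abbreviated):

    static unsigned getFixupKindNumBytes(unsigned Kind) {
      switch (Kind) {
      default:
        llvm_unreachable("Unknown fixup kind!");
      case FK_Data_1:
        return 1;
      case FK_Data_2:
      case PPC::fixup_ppc_ha16:
      case PPC::fixup_ppc_lo16:
      case PPC::fixup_ppc_lo16_ds:
        return 2;
      case FK_Data_4:
      case PPC::fixup_ppc_brcond14:
      case PPC::fixup_ppc_br24:
        return 4;
      case FK_Data_8:
        return 8;
      }
    }
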
Arnaud A. de Grandmaison
2c998eed2c Add Jade to the list of external projects using LLVM in the release notes.
Patch by: Antoine Lorence <Antoine.Lorence@insa-rennes.fr>

llvm-svn: 181886
2013-05-15 14:05:01 +00:00
Richard Sandiford
85e457ca32 [SystemZ] Add more future work items to the README
Based on an analysis by Ulrich Weigand.

llvm-svn: 181882
2013-05-15 12:53:31 +00:00
Richard Sandiford
2a8800d599 [SystemZ] Consolidate disassembler tests for valid input into 2 big tests
llvm-svn: 181879
2013-05-15 11:00:31 +00:00
Richard Sandiford
978bf2bb83 [SystemZ] Consolidate assembler tests into 4 big tests
llvm-svn: 181878
2013-05-15 09:58:19 +00:00
Timur Iskhodzhanov
a774900b73 Fix build on Windows
llvm-svn: 181873
2013-05-15 09:00:30 +00:00