mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 22:42:46 +02:00
Commit Graph

60448 Commits

Author SHA1 Message Date
Michael Gottesman
fcbd79805b Properly model precise lifetime when given an incomplete dataflow sequence.
The normal dataflow sequence in the ARC optimizer consists of the following
states:

    Retain -> CanRelease -> Use -> Release

Before this patch, the optimizer stored the uses that determine the lifetime of
the retainable object pointer when, bottom up, it hit a retain or, top down, it
hit a release. This is correct for an imprecise lifetime scenario, since what
we are trying to do is remove retains/releases while making sure that no
"CanRelease" (which is usually a call) deallocates the given pointer before we
get to the "Use" (since that would cause a segfault).

If we are considering the precise lifetime scenario though, this is not
correct. In such a situation, we *DO* care about the previous sequence, but
additionally, we wish to track the uses resulting from the following incomplete
sequences:

  Retain -> CanRelease -> Release   (TopDown)
  Retain <- Use <- Release          (BottomUp)
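
The dataflow the pass walks can be pictured as a small state enum, sketched
below (illustrative only; the names are hypothetical and need not match the
enum the pass actually uses):

    // Sketch of the per-pointer dataflow states, one direction at a time.
    enum Sequence {
      S_None,        // nothing tracked yet
      S_Retain,      // saw an objc_retain
      S_CanRelease,  // saw a call that could release the object
      S_Use,         // saw a use of the pointer
      S_Release      // saw an objc_release
    };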

*NOTE* This patch looks large, but most of it consists of updating
test cases. Additionally, this fix exposed a further bug. I removed
the test case that expressed said bug and will recommit it with the fix
in a little bit.

llvm-svn: 178921
2013-04-05 22:54:28 +00:00
Hal Finkel
dd515f6f89 Reapply r178845 with fix - Fix bug in PEI's virtual-register scavenging
This fixes PEI as previously described, but correctly handles the case where
the instruction defining the virtual register to be scavenged is the first in
the block. Arnold provided me with a bugpoint-reduced test case, but even that
seems too large to use as a regression test. If I'm successful in cleaning it
up then I'll commit that as well.

Original commit message:

    This change fixes a bug that I introduced in r178058. After a register is
    scavenged using one of the available spill slots, the instruction defining the
    virtual register needs to be moved to after the spill code. The scavenger has
    already processed the defining instruction so that registers killed by that
    instruction are available for definition in that same instruction. Unfortunately,
    after this, the scavenger needs to iterate through the spill code and then
    visit, again, the instruction that defines the now-scavenged register. In order
    to avoid confusion, the register scavenger needs the ability to 'back up'
    through the spill code so that it can again process the instructions in the
    appropriate order. Prior to this fix, once the scavenger reached the
    just-moved instruction, it would assert if it killed any registers because,
    having already processed the instruction, it believed they were undefined.

    Unfortunately, I don't yet have a small test case. Thanks to Pranav Bhandarkar
    for diagnosing the problem and testing this fix.

llvm-svn: 178919
2013-04-05 22:31:56 +00:00
Bill Wendling
f2bb7aa5f8 Use the target options specified on a function to reset the back-end.
During LTO, the target options on functions within the same Module may
change. This would necessitate resetting some of the back-end. Do this for X86,
because it's a Friday afternoon.

llvm-svn: 178917
2013-04-05 21:52:40 +00:00
Hal Finkel
b56ec767c3 Revert r178845 - Fix bug in PEI's virtual-register scavenging
Reverting because this breaks one of the LTO builders. Original commit message:

    This change fixes a bug that I introduced in r178058. After a register is
    scavenged using one of the available spill slots, the instruction defining the
    virtual register needs to be moved to after the spill code. The scavenger has
    already processed the defining instruction so that registers killed by that
    instruction are available for definition in that same instruction. Unfortunately,
    after this, the scavenger needs to iterate through the spill code and then
    visit, again, the instruction that defines the now-scavenged register. In order
    to avoid confusion, the register scavenger needs the ability to 'back up'
    through the spill code so that it can again process the instructions in the
    appropriate order. Prior to this fix, once the scavenger reached the
    just-moved instruction, it would assert if it killed any registers because,
    having already processed the instruction, it believed they were undefined.

    Unfortunately, I don't yet have a small test case. Thanks to Pranav Bhandarkar
    for diagnosing the problem and testing this fix.

llvm-svn: 178916
2013-04-05 21:30:40 +00:00
Jim Grosbach
e7766f7108 Tidy up a bit. No functional change.
llvm-svn: 178915
2013-04-05 21:20:12 +00:00
Shuxin Yang
5cf388a00f Disable the optimization about promoting vector-element-access with symbolic index.
This optimization is unstable at the moment; it
  1) blocks us on a very important application
  2) PR15200
  3) test6 and test7 in test/Transforms/ScalarRepl/dynamic-vector-gep.ll
     (their CHECK commands compare the output against a wrong result)

   I personally believe this optimization should not have any impact on the
autovectorized code, as the auto-vectorizer is supposed to emit gather/scatter
in the "right" way. Although in theory downstream optimizers might reveal
some gather/scatter optimization opportunities, the chance is quite slim.

   For hand-crafted vectorized code, in terms of redundancy elimination,
load-CSE, copy propagation and DSE can collectively achieve the same result,
but in a much simpler way. On the other hand, these optimizers are able to
improve the code incrementally; in contrast, SROA is an all-or-nothing
approach. However, SROA might win slightly in stack size, as it tries to figure
out a stretch of memory that tightly covers the area accessed by the dynamic index.

 rdar://13174884
 PR15200

llvm-svn: 178912
2013-04-05 21:07:08 +00:00
Douglas Gregor
b8547c600f <rdar://problem/13551789> Fix a race in the LockFileManager.
It's possible for the lock file to disappear and the owning process to
return before we're able to see the generated file. Spin for a little
while to see if it shows up before failing. 
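
The waiting logic amounts to something like the following sketch (illustrative
only; the function name, retry count, and sleep interval here are made up and
the real LockFileManager code may differ):

    // Sketch: poll for the generated file for a bounded amount of time.
    #include <chrono>
    #include <thread>
    #include <sys/stat.h>

    static bool waitForGeneratedFile(const char *Path) {
      for (int Tries = 0; Tries < 50; ++Tries) {
        struct stat SB;
        if (stat(Path, &SB) == 0)
          return true;   // the owning process produced the file
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
      }
      return false;      // give up and fail as before
    }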

llvm-svn: 178909
2013-04-05 20:53:57 +00:00
Douglas Gregor
abdef6977d <rdar://problem/13551789> Fix yet another race in unique_file.
If the directory that will contain the unique file doesn't exist when
we try to create the file, but another process creates it before we
get a chance to create it ourselves, we would bail out rather than
retry creating the unique file.
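
In outline, the creation path now tolerates the directory appearing underneath
us (a sketch using std::filesystem rather than the real unique_file code; the
helper name is made up):

    // Sketch: create the parent directory if needed, then (re)try the file.
    #include <filesystem>
    #include <fstream>
    namespace fs = std::filesystem;

    static bool createFileIn(const fs::path &Dir, const fs::path &Name) {
      std::error_code EC;
      fs::create_directories(Dir, EC);   // fine if another process won the race
      std::ofstream Out(Dir / Name);     // retry the file creation either way
      return Out.good();
    }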

llvm-svn: 178908
2013-04-05 20:48:36 +00:00
Michael J. Spencer
61add4113e [Support][FileSystem] Fix identify_magic for big endian ELF.
llvm-svn: 178905
2013-04-05 20:10:04 +00:00
Rafael Espindola
804fd781f1 Define versions of Section that are explicitly marked as little endian.
These should really be templated like ELF, but this is a start.

llvm-svn: 178896
2013-04-05 18:45:28 +00:00
Michael Gottesman
989573571b Added two debug logging messages to VisitInstructionsTopDown to match VisitInstructionsBottomUp.
llvm-svn: 178895
2013-04-05 18:26:08 +00:00
Rafael Espindola
77ac879d96 Don't use InMemoryStruct in getSection and getSection64.
llvm-svn: 178894
2013-04-05 18:18:19 +00:00
Michael Gottesman
c96404b280 Cleaned up whitespace and made debug logging less verbose.
llvm-svn: 178893
2013-04-05 18:10:41 +00:00
Renato Golin
9d05117f2b Reverting 178851 as it broke buildbots
llvm-svn: 178883
2013-04-05 16:39:53 +00:00
Chad Rosier
59dbb08c9b [ms-inline asm] Add support for numeric displacement expressions in bracketed
memory operands.

Essentially, this layers an infix calculator on top of the parsing state
machine.  The scale on the index register is still expected to be an immediate

 __asm mov eax, [eax + ebx*4]

and will not work with more complex expressions.  For example,

 __asm mov eax, [eax + ebx*(2*2)]

The plus and minus binary operators assume the numeric value of a register is
zero so as to not change the displacement.  Register operands should never
be an operand for a multiply or divide operation; the scale*indexreg
expression is always replaced with a zero on the operand stack to prevent
such a case.
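
A toy model of the displacement folding (illustrative only; the real parser
operates on MC-layer operands, and these helper names are made up):

    // Registers contribute zero to the displacement in + and -, and a
    // scale*indexreg product is pushed back as zero.
    struct Term { long Value; bool IsReg; };

    static long addOrSub(Term LHS, Term RHS, bool IsAdd) {
      long L = LHS.IsReg ? 0 : LHS.Value;
      long R = RHS.IsReg ? 0 : RHS.Value;
      return IsAdd ? L + R : L - R;
    }

    static Term foldScaledIndex(Term /*Reg*/, Term /*Scale*/) {
      return Term{0, false};   // keeps registers out of * and / operands
    }
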
rdar://13521380

llvm-svn: 178881
2013-04-05 16:28:55 +00:00
Reid Kleckner
e860c86e11 [Support] Disable assertion dialogs from the MSVC debug CRT
Summary:
Sets a report hook that emulates pressing "retry" in the "abort, retry,
ignore" dialog box that _CrtDbgReport normally raises.  There are many
other ways to disable assertion reports, but this was the only way I
could find that still calls our exception handler.
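
In essence (a sketch; the actual hook installed by this patch may be wired up
differently):

    // Sketch: answer "Retry" programmatically so _CrtDbgReport never shows a
    // dialog but still lets the breakpoint reach our exception handler.
    #include <crtdbg.h>

    static int AbortRetryIgnoreHook(int /*ReportType*/, char * /*Message*/,
                                    int *ReturnValue) {
      if (ReturnValue)
        *ReturnValue = 1;   // as if the user pressed "Retry"
      return 1;             // report handled; suppress the dialog
    }

    static void DisableCrtReportDialogs() {
      _CrtSetReportHook(AbortRetryIgnoreHook);
    }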

Reviewers: Bigcheese

CC: llvm-commits

Differential Revision: http://llvm-reviews.chandlerc.com/D625

llvm-svn: 178880
2013-04-05 16:18:03 +00:00
Rafael Espindola
79787bd764 Don't fetch pointers from a InMemoryStruct.
InMemoryStruct is extremely dangerous as it returns data from an internal
buffer when the endiannes doesn't match. This should fix the tests on big
endian hosts.

llvm-svn: 178875
2013-04-05 15:15:22 +00:00
Ulrich Weigand
045c5fae3d Respect Addend when processing MCJIT relocations to local/global symbols.
When the RuntimeDyldELF::processRelocationRef routine finds the target
symbol of a relocation in the local or global symbol table, it performs
a section-relative relocation:

    Value.SectionID = lsi->second.first;
    Value.Addend = lsi->second.second;

At this point, however, any Addend that might have been specified in
the original relocation record is lost.  This is somewhat difficult to
trigger for relocations within the code section since they usually
do not contain non-zero Addends (when built with the default JIT code
model, in any case).  However, the problem can be reliably triggered
by a relocation within the data section caused by code like:

 int test[2] = { -1, 0 };
 int *p = &test[1];

The initializer of "p" will need a relocation to "test + 4".  On
platforms using RelA relocations this means an Addend of 4 is required.
Current code ignores this addend when processing the relocation,
resulting in incorrect execution.

Fixed by taking the Addend into account when processing relocations
to symbols found in the local or global symbol table.
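
Conceptually, the section-relative case above becomes (a sketch of the intent,
not necessarily the exact code in the patch):

    Value.SectionID = lsi->second.first;
    Value.Addend    = lsi->second.second + Addend;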

Tested on x86_64-linux and powerpc64-linux.

llvm-svn: 178869
2013-04-05 13:29:04 +00:00
Stepan Dyatkovskiy
257a194cf1 Buildbot fix for r178851: the mistake was an incorrect use of TargetRegisterInfo::getRegClass.
llvm-svn: 178854
2013-04-05 07:34:08 +00:00
Stepan Dyatkovskiy
98f7dac944 Fix for PR14824: "Optimization arm_ldst_opt inserts newly generated instruction vldmia at incorrect position".
The patch introduces memory-operand tracking in ARMLoadStoreOpt::LoadStoreMultipleOpti. For each register it keeps the order of load operations as it was before the optimization pass.
It is essentially a deeper version of the fix proposed by Hao (http://llvm.org/bugs/show_bug.cgi?id=14824#c4),
but it also tracks conflicts between different register classes (e.g. D2 and S5).
For more details see:
Bug description: http://llvm.org/bugs/show_bug.cgi?id=14824
LLVM Commits discussion: 
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130311/167936.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130318/168688.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130325/169376.html
http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130401/170238.html

llvm-svn: 178851
2013-04-05 05:52:14 +00:00
Hal Finkel
9e772fa482 Add a SchedMachineModel for the PPC G5
llvm-svn: 178850
2013-04-05 05:49:18 +00:00
Hal Finkel
ac71a202bd Add a SchedMachineModel for the PPC A2
llvm-svn: 178848
2013-04-05 05:34:08 +00:00
Hal Finkel
7fd3c6a69e Fix bug in PEI's virtual-register scavenging
This change fixes a bug that I introduced in r178058. After a register is
scavenged using one of the available spill slots, the instruction defining the
virtual register needs to be moved to after the spill code. The scavenger has
already processed the defining instruction so that registers killed by that
instruction are available for definition in that same instruction. Unfortunately,
after this, the scavenger needs to iterate through the spill code and then
visit, again, the instruction that defines the now-scavenged register. In order
to avoid confusion, the register scavenger needs the ability to 'back up'
through the spill code so that it can again process the instructions in the
appropriate order. Prior to this fix, once the scavenger reached the
just-moved instruction, it would assert if it killed any registers because,
having already processed the instruction, it believed they were undefined.

Unfortunately, I don't yet have a small test case. Thanks to Pranav Bhandarkar
for diagnosing the problem and testing this fix.

llvm-svn: 178845
2013-04-05 05:01:13 +00:00
Arnold Schwaighofer
15f0999c37 ARM scheduler model: Add scheduler info to more instructions and resource
descriptions for compares

llvm-svn: 178844
2013-04-05 05:01:06 +00:00
Arnold Schwaighofer
e0459c3175 ARM scheduler model: Swift has varying latencies, uops for simple ALU ops
llvm-svn: 178842
2013-04-05 04:42:00 +00:00
Andrew Trick
6da34cd35e RegisterPressure heuristics currently require signed comparisons.
llvm-svn: 178823
2013-04-05 00:31:34 +00:00
Andrew Trick
28247fa37e Disable DFSResult for ConvergingScheduler.
For now, just save the compile time since the ConvergingScheduler
heuristics don't use this analysis. We'll probably enable it later
after compile-time investigation.

llvm-svn: 178822
2013-04-05 00:31:31 +00:00
Andrew Trick
d50d445665 MachineScheduler: format DEBUG output.
I'm getting more serious about tuning and enabling on x86/ARM. Start
by making the trace readable.

llvm-svn: 178821
2013-04-05 00:31:29 +00:00
Arnold Schwaighofer
abd363c1bc LoopVectorizer: Pass OperandValueKind information to the cost model
Pass down the fact that an operand is going to be a vector of constants.

This should bring the performance of MultiSource/Benchmarks/PAQ8p/paq8p on x86
back. It had degraded to scalar performance due to my previous shift cost change
that made all shifts expensive on x86.

radar://13576547

llvm-svn: 178809
2013-04-04 23:26:27 +00:00
Arnold Schwaighofer
52871434dd X86 cost model: Differentiate cost for vector shifts of constants
SSE2 has efficient support for shifts by a scalar. My previous change of making
shifts expensive did not take this into account, marking all shifts as expensive.
This would prevent vectorization from happening where it is actually beneficial.

With this change we differentiate between shifts of constants and other shifts.

radar://13576547

llvm-svn: 178808
2013-04-04 23:26:24 +00:00
Arnold Schwaighofer
861251004b CostModel: Add parameter to instruction cost to further classify operand values
On certain architectures we can support an efficient vectorized version of
an instruction if the operand value is uniform (splat) or a constant scalar.
An example of this is a vector shift on x86.

We can efficiently support

for (i = 0 ; i < ; i += 4)
  w[0:3] = v[0:3] << <2, 2, 2, 2>

but not

for (i = 0; i < ; i += 4)
  w[0:3] = v[0:3] << x[0:3]

This patch adds a parameter to getArithmeticInstrCost to further qualify operand
values as uniform or uniform constant.

Targets can then choose to return a different cost for instructions with such
operand values.
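
For example, a target hook can now look roughly like this (a sketch from
memory; the exact signature, enum spelling, and base-class call may not match
the tree):

    // Sketch: charge less for a vector shift whose amount is a uniform constant.
    unsigned MyTTI::getArithmeticInstrCost(unsigned Opcode, Type *Ty,
                                           OperandValueKind Op1Info,
                                           OperandValueKind Op2Info) const {
      if (Opcode == Instruction::Shl && Op2Info == OK_UniformConstantValue)
        return 1;
      return TargetTransformInfo::getArithmeticInstrCost(Opcode, Ty,
                                                         Op1Info, Op2Info);
    }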

A follow-up commit will test this feature on x86.

radar://13576547

llvm-svn: 178807
2013-04-04 23:26:21 +00:00
Manman Ren
d4f1e1e1df Debug Info: revert 178722 for now.
There is a difference for FORM_ref_addr between DWARF 2 and DWARF 3+.
Since Eric is against guarding DWARF 2 ref_addr with DarwinGDBCompat, we are
still discussing how to handle this.

The correct solution is to update our header to say version 4 instead of version
2 and update tool chains as well.

rdar://problem/13559431

llvm-svn: 178806
2013-04-04 23:13:11 +00:00
Adrian Prantl
679308d57e typo
llvm-svn: 178804
2013-04-04 22:56:49 +00:00
Hal Finkel
02fd9b0859 Rename the current PPC BCL definition to BCLalways
BCL is normally a conditional branch-and-link instruction, but has
an unconditional form (which is used in the SjLj code, for example).
To make clear that this BCL instruction definition is specifically
the special unconditional form (which does not meaningfully take
a condition-register input), rename it to BCLalways.

No functionality change intended.

llvm-svn: 178803
2013-04-04 22:55:54 +00:00
Hal Finkel
1dc78e666b PPC: Improve code generation for mixed-precision reciprocal sqrt
The DAGCombine logic that recognized a/sqrt(b) and transformed it into
a multiplication by the reciprocal sqrt did not handle cases where the
sqrt and the division were separated by an fpext or fptrunc.
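
For example, source like this (illustrative) produces such a pattern, with the
sqrt computed in double and an fptrunc sitting between it and the float divide:

    #include <math.h>

    float recip_sqrt(float a, float b) {
      return a / (float) sqrt((double) b);
    }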

llvm-svn: 178801
2013-04-04 22:44:12 +00:00
Jyotsna Verma
c3ebace56c Hexagon: Expand br_cc.
It fixes the following tests for Hexagon (a sketch of the expansion follows the list):

CodeGen/Generic/2003-07-29-BadConstSbyte.ll
CodeGen/Generic/2005-10-21-longlonggtu.ll
CodeGen/Generic/2009-04-28-i128-cmp-crash.ll
CodeGen/Generic/MachineBranchProb.ll
CodeGen/Generic/builtin-expect.ll
CodeGen/Generic/pr12507.ll
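
Marking br_cc for expansion in the Hexagon lowering amounts to something like
this (a sketch; the exact set of value types handled may differ):

    // Sketch: ask legalization to expand br_cc into setcc + brcond.
    setOperationAction(ISD::BR_CC, MVT::i32, Expand);
    setOperationAction(ISD::BR_CC, MVT::i64, Expand);
    setOperationAction(ISD::BR_CC, MVT::f32, Expand);
    setOperationAction(ISD::BR_CC, MVT::f64, Expand);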

llvm-svn: 178794
2013-04-04 21:18:26 +00:00
Benjamin Kramer
d4c69ec04b Reassociate: Avoid iterator invalidation.
OpndPtrs stored pointers into the Opnd vector that become invalid when the
vector grows. Store indices instead. Sadly I only have a large test case that
only triggers under valgrind, so I didn't include it.
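
The shape of the bug and the fix, in miniature (illustrative; not the actual
Reassociate code):

    #include <cstddef>
    #include <vector>

    struct XorOpnd { int V; };

    void buggy(std::vector<XorOpnd> &Opnds) {
      std::vector<XorOpnd *> OpndPtrs;
      for (XorOpnd &O : Opnds)
        OpndPtrs.push_back(&O);
      Opnds.push_back(XorOpnd{0});   // may reallocate: OpndPtrs now dangle
    }

    void fixed(std::vector<XorOpnd> &Opnds) {
      std::vector<size_t> OpndIndices;
      for (size_t I = 0, E = Opnds.size(); I != E; ++I)
        OpndIndices.push_back(I);
      Opnds.push_back(XorOpnd{0});   // indices remain valid across the growth
    }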

llvm-svn: 178793
2013-04-04 21:15:42 +00:00
Richard Osborne
25a2bd3084 [XCore] Add bru instruction.
llvm-svn: 178783
2013-04-04 20:05:35 +00:00
Richard Osborne
2eabe25672 [XCore] The RRegs register class is a superset of GRRegs.
When the XCore backend was added there were some issues with
overlapping register classes, but these all seem to be fixed now.
Describing the register classes correctly allows us to get rid of a
codegen-only instruction (LDAWSP_lru6_RRegs), and it means we can
disassemble ru6 instructions that use registers above r11.

llvm-svn: 178782
2013-04-04 19:57:46 +00:00
Jakob Stoklund Olesen
a53fa8d450 Avoid high-latency false CPSR dependencies even for tMOVSi.
The Thumb2SizeReduction pass avoids false CPSR dependencies, except it
still aggressively creates tMOVi8 instructions because they are so
common.

Avoid creating false CPSR dependencies even for tMOVi8 instructions when
the CPSR flags are known to have high latency. This allows integer
computation to overlap floating point computations.

Also process blocks in a reverse post-order and propagate high-latency
flags to successors.

<rdar://problem/13468102>

llvm-svn: 178773
2013-04-04 18:25:36 +00:00
Eli Bendersky
13fd057bb8 Formatting
llvm-svn: 178771
2013-04-04 18:03:41 +00:00
Vincent Lejeune
3a22d07044 R600: Use a mask for offsets when encoding instructions
llvm-svn: 178763
2013-04-04 14:00:09 +00:00
Vincent Lejeune
d5f0b3821e R600: Fix wrong address when substituting ENDIF
llvm-svn: 178762
2013-04-04 14:00:03 +00:00
Vincent Lejeune
a680946842 R600: Take export into account when computing cf address
llvm-svn: 178761
2013-04-04 13:59:59 +00:00
Jakob Stoklund Olesen
1969a96fcd Add SPARC v9 support for select on 64-bit compares.
This requires v9 cmov instructions using the %xcc flags instead of the
%icc flags.

Still missing:
- Select floats on %xcc flags.
- Select i64 on %fcc flags.

llvm-svn: 178737
2013-04-04 03:08:00 +00:00
Manman Ren
2dafd4d1df Debug Info: according to DWARF 2, FORM_ref_addr is the same size as an address on
the target system.

It was hard-coded to 4 bytes before. I can't get llvm to generate a
ref_addr on a reasonably sized test case.

rdar://problem/13559431

llvm-svn: 178722
2013-04-04 00:22:54 +00:00
Michael Gottesman
d8686ebbd6 Refactored out the helper method FindPredecessorAutoreleaseWithSafePath from ObjCARCOpt::OptimizeReturns.
Now ObjCARCOpt::OptimizeReturns is easy to read and reason about.

llvm-svn: 178715
2013-04-03 23:39:14 +00:00
Michael Gottesman
2560f9cf28 Refactored out the helper function FindPredecessorRetainWithSafePath from ObjCARCOpt::OptimizeReturns.
llvm-svn: 178714
2013-04-03 23:16:05 +00:00
Michael Gottesman
f7fe76689b Small cleanups.
Cleaned up trailing whitespace and added extra slashes in front of a
function-level comment so that it follows the convention of having three
slashes.

llvm-svn: 178712
2013-04-03 23:07:45 +00:00
Michael Gottesman
964b7a9c7b Refactored out a part of ObjCARCOpt::OptimizeReturns into its own method HasSafePathToPredecessorCall.
llvm-svn: 178710
2013-04-03 23:04:28 +00:00