mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-26 06:22:56 +02:00
Commit Graph

1539 Commits

Author SHA1 Message Date
Benjamin Kramer
4b76aa3d46 MathExtras: Bring Count(Trailing|Leading)Ones and CountPopulation in line with countTrailingZeros
Update all callers.

llvm-svn: 228930
2015-02-12 15:35:40 +00:00
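
A minimal sketch of the renamed helpers the message refers to, as they appear in llvm/Support/MathExtras.h; the exact signatures at this revision are assumed rather than quoted from the patch:

    #include "llvm/Support/MathExtras.h"
    #include <cstdint>

    unsigned demoBitCounts(uint32_t Mask) {
      // All three now follow the lowercase, templated style of countTrailingZeros.
      unsigned TrailOnes = llvm::countTrailingOnes(Mask);
      unsigned LeadOnes  = llvm::countLeadingOnes(Mask);
      unsigned PopCount  = llvm::countPopulation(Mask);
      return TrailOnes + LeadOnes + PopCount;
    }
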
Tom Stellard
20d19bec1e R600/SI: Disable subreg liveness
This is temporary while we try to fix a crash in the register coalescer.

llvm-svn: 228861
2015-02-11 18:24:53 +00:00
Tom Stellard
ba7a67583a R600: Split AMDGPUPassConfig into R600PassConfig and GCNPassConfig
llvm-svn: 228850
2015-02-11 17:11:51 +00:00
Tom Stellard
a8ce1e2a7e R600: Create an R600TargetMachine for pre-gcn GPUs
No functionality change. R600TargetMachine inherits from
AMDGPUTargetMachine.

llvm-svn: 228849
2015-02-11 17:11:50 +00:00
Tom Stellard
01068df0db R600/SI: Store immediate offsets > 12 bits in soffset
This will save us from having to extend these offsets to 64 bits
and store them in a pair of VGPRs.

llvm-svn: 228776
2015-02-11 00:34:35 +00:00
Tom Stellard
dbe7759986 R600/SI: Add soffset operand to mubuf addr64 instruction
We were previously hard-coding soffset to 0.

llvm-svn: 228775
2015-02-11 00:34:32 +00:00
Benjamin Kramer
db8b180786 Make helper functions/classes/globals static. NFC.
llvm-svn: 228410
2015-02-06 17:51:54 +00:00
Michel Danzer
dd88eb8ebc R600/SI: Don't enable WQM for V_INTERP_* instructions v2
Doesn't seem necessary anymore. I think this was mostly compensating for
not enabling WQM for texture sampling instructions.

v2: Add test coverage
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 228373
2015-02-06 02:51:25 +00:00
Michel Danzer
5d9cb0f9f9 R600/SI: Also enable WQM for image opcodes which calculate LOD v3
If whole quad mode isn't enabled for these, the level of detail is
calculated incorrectly for pixels along diagonal triangle edges, causing
artifacts.

v2: Use a TSFlag instead of lots of switch cases
v3: Add test coverage

Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88642
Reviewed-by: Tom Stellard <tom@stellard.net>
llvm-svn: 228372
2015-02-06 02:51:20 +00:00
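
A hypothetical fragment of what "use a TSFlag instead of lots of switch cases" could look like; the flag name and bit position below are placeholders, not the target's real definitions:

    #include "llvm/MC/MCInstrDesc.h"
    #include <cstdint>

    // Placeholder for the target's TSFlags bit; the real flag (set from the .td
    // files) lives in the target's instruction-flag definitions.
    namespace SIInstrFlagsSketch {
      const uint64_t WQM = UINT64_C(1) << 20;  // bit position is illustrative
    }

    static bool instrNeedsWQM(const llvm::MCInstrDesc &Desc) {
      // A single flag test replaces the old switch over image/interp opcodes.
      return (Desc.TSFlags & SIInstrFlagsSketch::WQM) != 0;
    }
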
Tom Stellard
783df8b1c7 R600/SI: Fix bug in TTI loop unrolling preferences
We should be setting UnrollingPreferences::MaxCount to MAX_UINT instead
of UnrollingPreferences::Count.

Count is a 'forced unrolling factor', while MaxCount sets an upper
limit to the unrolling factor.

Setting Count to MAX_UINT was causing the loop in the testcase to be
unrolled 15 times, when it only had a maximum of 4 iterations.

llvm-svn: 228303
2015-02-05 15:32:18 +00:00
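
A sketch, not the actual AMDGPU code, of the Count/MaxCount distinction in a getUnrollingPreferences-style hook:

    #include "llvm/Analysis/TargetTransformInfo.h"
    #include <climits>

    void setUnrollPrefsSketch(llvm::TargetTransformInfo::UnrollingPreferences &UP) {
      // MaxCount only caps the unrolling factor; the unroller still picks a
      // factor no larger than the loop's trip count.
      UP.MaxCount = UINT_MAX;
      // Leaving UP.Count alone avoids forcing a fixed factor (the bug was
      // setting Count, which forced far more unrolling than the loop needed).
    }
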
Tom Stellard
3f975d151e R600/SI: Fix bug from insertion of llvm.SI.end.cf into loop headers
The llvm.SI.end.cf intrinsic is used to mark the end of if-then blocks,
if-then-else blocks, and loops.  It is responsible for updating the
exec mask to re-enable threads that had been masked during the preceding
control flow block.  For example:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()
s_or_b64 exec, exec, s[0:1]         ; llvm.SI.end.cf

The bug fixed by this patch was one where the llvm.SI.end.cf intrinsic
was being inserted into the header of loops.  This would happen when
an if block terminated in a loop header and we would end up with
code like this:

s_mov_b64 exec, 0x3                 ; Initial exec mask
s_mov_b64 s[0:1], exec              ; Saved exec mask
v_cmpx_gt_u32 exec, s[2:3], v0, 0   ; llvm.SI.if
do_stuff()

LOOP:                       ; Start of loop header
s_or_b64 exec, exec, s[0:1] ; llvm.SI.end.cf <- BUG: The exec mask has the
                              same value at the beginning of each loop
                              iteration.
do_stuff();
s_cbranch_execnz LOOP

The fix is to create a new basic block before the loop and insert the
llvm.SI.end.cf there.  This way the exec mask is restored before the
start of the loop instead of at the beginning of each iteration.

llvm-svn: 228302
2015-02-05 15:32:15 +00:00
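
An illustrative IR-level sketch of the fix described above, using generic LLVM utilities rather than the actual patch; the helper and its parameters are hypothetical:

    #include "llvm/IR/BasicBlock.h"
    #include "llvm/IR/Function.h"
    #include "llvm/IR/Instructions.h"
    #include "llvm/Transforms/Utils/BasicBlockUtils.h"
    using namespace llvm;

    // Give the loop header a dedicated predecessor and place the exec-restoring
    // call there, so it runs once before the loop instead of on every iteration.
    void hoistEndCfBeforeLoop(BasicBlock *LoopHeader, BasicBlock *IfBlock,
                              Function *EndCf, Value *SavedExec) {
      BasicBlock *BeforeLoop =
          SplitBlockPredecessors(LoopHeader, IfBlock, ".endcf");
      CallInst::Create(EndCf, {SavedExec}, "", BeforeLoop->getTerminator());
    }
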
Matt Arsenault
d857ef4f47 R600/SI: Fix i64 truncate to i1
llvm-svn: 228273
2015-02-05 06:05:13 +00:00
Tom Stellard
e60a7f33b1 R600/SI: Enable subreg liveness by default
llvm-svn: 228228
2015-02-04 23:14:18 +00:00
Tom Stellard
cf5e2de361 R600/SI: Expand misaligned 16-bit memory accesses
llvm-svn: 228190
2015-02-04 20:49:52 +00:00
Tom Stellard
e523dc6621 R600/SI: Make more store operations legal
v2i32, i32, trunc i32 to i16, and trunc i32 to i8 stores are legal for
all address spaces.  We had marked them as custom in order to lower
them for the private address space, but this is no longer necessary.

This enables lowering of misaligned stores of these types in the
DAGLegalizer.

llvm-svn: 228189
2015-02-04 20:49:51 +00:00
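
A sketch of the legality declarations this message describes, in the shape of the standard TargetLowering hooks; MyTargetLowering is a hypothetical subclass and the real code, in the target's lowering constructor, differs in detail:

    // Typically called from the TargetLowering subclass constructor.
    void MyTargetLowering::markSimpleStoresLegal() {
      setOperationAction(ISD::STORE, MVT::i32,   Legal);  // plain i32 stores
      setOperationAction(ISD::STORE, MVT::v2i32, Legal);  // v2i32 stores
      setTruncStoreAction(MVT::i32, MVT::i16, Legal);     // trunc i32 -> i16
      setTruncStoreAction(MVT::i32, MVT::i8,  Legal);     // trunc i32 -> i8
    }
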
Tom Stellard
5ea00f06af R600: Don't promote i64 stores to v2i32 during DAG legalization
We take care of this during instruction selection now.  This
fixes a potential infinite loop when lowering misaligned stores.

llvm-svn: 228188
2015-02-04 20:49:49 +00:00
Marek Olsak
9d32f6321f R600/SI: Remove useless patterns in VALU which are already covered by SALU
Also remove hasPostISelHook=1 from V_LSHL_B32. It's defined by InstSI already.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 228039
2015-02-03 21:53:08 +00:00
Marek Olsak
4dc004640d R600/SI: Rewrite VOP1InstSI to contain a pseudo and _si opcode
The effect is that if you accidentally select these instructions on VI,
code generation will fail, because the pseudo -> _vi mapping will be
undefined.

The idea is to be able to catch possible future bugs easily.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 228038
2015-02-03 21:53:05 +00:00
Marek Olsak
7724be694b R600/SI: Fix B64 VALU shifts on VI
SI only has standard versions. VI only has REV versions.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 228037
2015-02-03 21:53:01 +00:00
Marek Olsak
31489249b9 R600/SI: Don't generate non-existent LSHL, LSHR, ASHR B32 variants on VI
This can happen when a REV instruction is commuted.

The trick is not to define the _vi versions of instructions, which has these
consequences:
- code generation will always fail if a pseudo cannot be lowered
  (very useful to catch bugs where an unsupported instruction somehow makes
   it to the printer)
- ability to query if a pseudo can be lowered, which is done in commuteOpcode
  to prevent REV from commuting to non-REV on VI

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227990
2015-02-03 17:38:12 +00:00
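
A hypothetical fragment of the commuteOpcode check described in the second bullet; getCommuteRev and pseudoToMCOpcode are assumed names for the tablegen'd REV table and the "can this pseudo be lowered?" query:

    // Refuse to commute to a REV opcode that has no encoding on this subtarget
    // (e.g. the _vi variant was deliberately left undefined).
    unsigned commuteOpcodeSketch(const SIInstrInfo &TII, unsigned Opcode) {
      int Rev = AMDGPU::getCommuteRev(Opcode);
      if (Rev == -1)
        return Opcode;                       // nothing to commute to
      if (TII.pseudoToMCOpcode(Rev) == -1)
        return Opcode;                       // REV variant not available here
      return Rev;
    }
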
Marek Olsak
a9aa183de3 R600/SI: Remove VOP2_REV definitions from target-specific instructions
The getCommute* functions are only used with pseudos, so this commit doesn't
change anything.

The issue with missing non-REV versions of shift instructions on VI will be
fixed separately.

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227989
2015-02-03 17:38:05 +00:00
Marek Olsak
09c7c9cedd R600/SI: Trivial instruction definition corrections for VI (v2)
- V_MAC_LEGACY_F32 exists on VI, but it's VOP3-only.

- Define CVT_PK opcodes which are different between SI and VI. These are
  unused. The idea is to define all chip differences.

v2: keep V_MUL_LO_U32

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227988
2015-02-03 17:38:01 +00:00
Marek Olsak
26f8311d9c R600/SI: Determine target-specific encoding of READLANE and WRITELANE early v2
These are VOP2 on SI and VOP3 on VI, and their pseudos are neither, which can
be a problem. In order to make isVOP2 and isVOP3 queries behave as expected,
the encoding must be determined first.

This doesn't fix any known issue, but better safe than sorry.

v2: add and use getMCOpcodeFromPseudo

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227987
2015-02-03 17:37:57 +00:00
Marek Olsak
08103dc7bd R600/SI: Fix dependency between instruction writing M0 and S_SENDMSG on VI (v2)
This fixes a hang when using an empty geometry shader.

v2: - don't add s_nop when followed by s_waitcnt
    - cosmetic changes

Tested-by: Michel Dänzer <michel.daenzer@amd.com>
llvm-svn: 227986
2015-02-03 17:37:52 +00:00
Tom Stellard
5e1403ebec R600/SI: 64-bit and larger memory access must be at least 4-byte aligned
This is true for SI only. CI+ supports unaligned memory accesses,
but this requires driver support, so for now we disallow unaligned
accesses for all GCN targets.

llvm-svn: 227822
2015-02-02 18:02:28 +00:00
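
A sketch of the policy stated above in the shape of the usual allowsMisalignedMemoryAccesses hook; the class name and the exact parameter list at this revision are assumptions:

    bool MyTargetLowering::allowsMisalignedMemoryAccesses(EVT VT,
                                                          unsigned AddrSpace,
                                                          unsigned Align,
                                                          bool *IsFast) const {
      // 64-bit and wider accesses must be at least 4-byte aligned on SI; the
      // same restriction is applied to CI+ until driver support exists.
      if (VT.getSizeInBits() >= 64 && Align < 4)
        return false;
      if (IsFast)
        *IsFast = true;
      return true;
    }
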
Chandler Carruth
a2cd22e25f [multiversion] Remove the function parameter from the unrolling
preferences interface on TTI now that all of TTI is per-function.

llvm-svn: 227741
2015-02-01 14:31:23 +00:00
Chandler Carruth
59453ca4a8 [multiversion] Switch the TTI queries from TargetMachine to Subtarget
now that we have a correct and cached subtarget specific to the
function.

Also, finish providing a cached per-function subtarget in the core
LLVMTargetMachine -- that layer hadn't switched over yet.

The only use of the TargetMachine was to re-lookup a subtarget for
a particular function to work around the fact that TTI was immutable.
Now that it is per-function and we have a cached subtarget, use it.

This still leaves a few interfaces with real warts on them where we were
passing Function objects through the TTI interface. I'll remove these
and clean their usage up in subsequent commits now that this isn't
necessary.

llvm-svn: 227738
2015-02-01 14:22:17 +00:00
Chandler Carruth
6ea38a46d2 [multiversion] Remove the cached TargetMachine pointer from the
intermediate TTI implementation template and instead query up to the
derived class for both the TargetMachine and the TargetLowering.

Most of the derived types had a TLI cached already and there is no need
to store a less precisely typed target machine pointer.

This will in turn make it much cleaner to look up the TLI via
a per-function subtarget instead of the generic subtarget, and it will
pave the way toward pulling the subtarget used for unroll preferences
into the same form once we are *always* using the function to look up
the correct subtarget.

llvm-svn: 227737
2015-02-01 14:01:15 +00:00
Chandler Carruth
3ed152b528 [multiversion] Switch all of the targets over to use the
TargetIRAnalysis access path directly rather than implementing getTTI.

This even removes getTTI from the interface. It's more efficient for
each target to just register a precise callback that creates their
specific TTI.

As part of this, all of the targets which are building their subtargets
individually per-function now build their TTI instance with the function
and thus look up the correct subtarget and cache it. NVPTX, R600, and
XCore currently don't leverage this functionality, but it's trivial for
them to add it now.

llvm-svn: 227735
2015-02-01 13:20:00 +00:00
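
A hedged sketch of the registration pattern this series introduces; MyTargetMachine and MyTTIImpl are illustrative names, and the callback signature is assumed:

    #include "llvm/Analysis/TargetTransformInfo.h"
    using namespace llvm;

    TargetIRAnalysis MyTargetMachine::getTargetIRAnalysis() {
      // The callback runs once per function, so the TTI implementation can look
      // up the correct subtarget for F and cache it.
      return TargetIRAnalysis(
          [this](Function &F) { return TargetTransformInfo(MyTTIImpl(this, F)); });
    }
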
Chandler Carruth
c67d7f29c0 [multiversion] Remove a false freedom to leave the TargetMachine pointer
null.

For some reason some of the original TTI code supported a null target
machine. This seems to have been legacy, and I made matters worse when
refactoring this code by spreading that pattern further through the
various targets.

The TargetMachine can't actually be null, and it doesn't make sense to
support that use case. I've now consistently removed it and removed all
of the code trying to cope with that situation. This is probably good,
as several targets *didn't* cope with it being null despite the null
default argument in their constructors. =]

llvm-svn: 227734
2015-02-01 12:38:24 +00:00
Chandler Carruth
0cdc876795 [PM] Remove a bunch of stale TTI creation method declarations. I nuked
their definitions, but forgot to clean up all the declarations which are
in different files.

llvm-svn: 227698
2015-02-01 00:22:15 +00:00
Matt Arsenault
7a8cacc8cb Fix typo
llvm-svn: 227697
2015-01-31 23:37:27 +00:00
Matt Arsenault
c7e64d4e6b R600/SI: Only select cvt_flr/cvt_rpi with no NaNs.
These have different behavior from cvt_i32_f32 on NaN.

llvm-svn: 227693
2015-01-31 21:28:13 +00:00
Chandler Carruth
ad2d6dd7d3 [PM] Switch the TargetMachine interface from accepting a pass manager
base, to which it adds a single analysis pass, to instead returning the
type-erased TargetTransformInfo object constructed for that TargetMachine.

This removes all of the pass variants for TTI. There is now a single TTI
*pass* in the Analysis layer. All of the Analysis <-> Target
communication is through the TTI's type erased interface itself. While
the diff is large here, it is nothing more than code motion to make
types available in a header file for use in a different source file
within each target.

I've tried to keep all the doxygen comments and file boilerplate in line
with this move, but let me know if I missed anything.

With this in place, the next step to making TTI work with the new pass
manager is to introduce a really simple new-style analysis that produces
a TTI object via a callback into this routine on the target machine.
Once we have that, we'll have the building blocks necessary to accept
a function argument as well.

llvm-svn: 227685
2015-01-31 11:17:59 +00:00
Chandler Carruth
b2d6052871 [PM] Change the core design of the TTI analysis to use a polymorphic
type erased interface and a single analysis pass rather than an
extremely complex analysis group.

The end result is that the TTI analysis can contain a type erased
implementation that supports the polymorphic TTI interface. We can build
one from a target-specific implementation or from a dummy one in the IR.

I've also factored all of the code into "mix-in"-able base classes,
including CRTP base classes to facilitate calling back up to the most
specialized form when delegating horizontally across the surface. These
aren't as clean as I would like and I'm planning to work on cleaning
some of this up, but I wanted to start by putting it into the right form.

There are a number of reasons for this change, and this particular
design. The first and foremost reason is that an analysis group is
complete overkill, and the chaining delegation strategy was so opaque,
confusing, and high overhead that TTI was suffering greatly for it.
Several of the TTI functions had failed to be implemented in all places
because the chaining-based delegation left nothing to check for it. A few
other functions were implemented with incorrect delegation.
The message to me was very clear working on this -- the delegation and
analysis group structure was too confusing to be useful here.

The other reason, of course, is that this is a *much* more natural fit for
the new pass manager. This will lay the groundwork for a type-erased
per-function info object that can look up the correct subtarget and even
cache it.

Yet another benefit is that this will significantly simplify the
interaction of the pass managers and the TargetMachine. See the future
work below.

The downside of this change is that it is very, very verbose. I'm going
to work to improve that, but it is somewhat of an implementation necessity
when doing type erasure in C++. =/ I discussed this design really extensively
with Eric and Hal prior to going down this path, and afterward showed
them the result. No one was really thrilled with it, but there doesn't
seem to be a substantially better alternative. Using a base class and
virtual method dispatch would make the code much shorter, but as
discussed in the update to the programmer's manual and elsewhere,
a polymorphic interface feels like the more principled approach even if
this is perhaps the least compelling example of it. ;]

Ultimately, there is still a lot more to be done here, but this was the
huge chunk that I couldn't really split things out of because this was
the interface change to TTI. I've tried to minimize all the other parts
of this. The follow up work should include at least:

1) Improving the TargetMachine interface by having it directly return
   a TTI object. Because we have a non-pass object with value semantics
   and an internal type erasure mechanism, we can narrow the interface
   of the TargetMachine to *just* do what we need: build and return
   a TTI object that we can then insert into the pass pipeline.
2) Make the TTI object be fully specialized for a particular function.
   This will include splitting off a minimal form of it which is
   sufficient for the inliner and the old pass manager.
3) Add a new pass manager analysis which produces TTI objects from the
   target machine for each function. This may actually be done as part
   of #2 in order to use the new analysis to implement #2.
4) Work on narrowing the API between TTI and the targets so that it is
   easier to understand and less verbose to type erase.
5) Work on narrowing the API between TTI and its clients so that it is
   easier to understand and less verbose to forward.
6) Try to improve the CRTP-based delegation. I feel like this code is
   just a bit messy and exacerbating the complexity of implementing
   the TTI in each target.

Many thanks to Eric and Hal for their help here. I ended up blocked on
this somewhat more abruptly than I expected, and so I appreciate getting
it sorted out very quickly.

Differential Revision: http://reviews.llvm.org/D7293

llvm-svn: 227669
2015-01-31 03:43:40 +00:00
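
A generic C++ illustration, not LLVM's actual classes, of the design sketched above: a value-semantics wrapper that type-erases any implementation, plus a CRTP base that supplies shared defaults:

    #include <memory>
    #include <utility>

    class TTIWrapper {                                    // type-erased, value semantics
      struct Concept {
        virtual ~Concept() = default;
        virtual unsigned getOpCost(int Opcode) const = 0;
      };
      template <typename T> struct Model : Concept {      // wraps any implementation
        T Impl;
        explicit Model(T I) : Impl(std::move(I)) {}
        unsigned getOpCost(int Opcode) const override { return Impl.getOpCost(Opcode); }
      };
      std::unique_ptr<Concept> Impl;

    public:
      template <typename T>
      explicit TTIWrapper(T I) : Impl(new Model<T>(std::move(I))) {}
      unsigned getOpCost(int Opcode) const { return Impl->getOpCost(Opcode); }
    };

    // CRTP base: shared defaults that a target implementation can override.
    template <typename Derived> struct TTIImplCRTPBase {
      unsigned getOpCost(int) const { return 1; }         // conservative default
    };

    struct MyTargetTTIImpl : TTIImplCRTPBase<MyTargetTTIImpl> {
      unsigned getOpCost(int Opcode) const { return Opcode == 0 ? 0 : 2; }
    };

    // Usage: TTIWrapper TTI(MyTargetTTIImpl{}); unsigned C = TTI.getOpCost(0);
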
Eric Christopher
ce1de59ee6 Reuse a bunch of cached subtargets and remove getSubtarget calls
without a Function argument.

llvm-svn: 227638
2015-01-30 23:24:40 +00:00
Tom Stellard
0eac7abd85 R600/SI: Handle SI_SPILL_V96_RESTORE in SIRegisterInfo::eliminateFrameIndex()
This fixes a crash in Unigine Heaven.

llvm-svn: 227618
2015-01-30 21:51:51 +00:00
Matt Arsenault
97c228de40 R600/SI: Implement enableAggressiveFMAFusion
Add tests for the various combines. This should
always be at least cycle neutral on all subtargets for f64,
and faster on some. For f32 we should prefer selecting
v_mad_f32 over v_fma_f32.

llvm-svn: 227484
2015-01-29 19:34:32 +00:00
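
A sketch of the TargetLowering hook named in the title; whether the real override also consults the fast-f32-FMA subtarget feature added in the following commit is an assumption here:

    bool MyTargetLowering::enableAggressiveFMAFusion(EVT VT) const {
      // Fusing is at worst cycle-neutral for f64; for f32 it is only worthwhile
      // when full-rate FMA is available, otherwise v_mad_f32 is preferred.
      return VT.isFloatingPoint();
    }
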
Matt Arsenault
2c9174cd90 R600/SI: Add subtarget feature for whether f32 fma is fast
llvm-svn: 227483
2015-01-29 19:34:25 +00:00
Matt Arsenault
20beceb945 R600/SI: Fix tonga's basic scheduling model
llvm-svn: 227482
2015-01-29 19:34:18 +00:00
Rafael Espindola
16f3006ec0 Compute the ELF SectionKind from the flags.
Any code creating an MCSectionELF knows ELF and already provides the flags.

SectionKind is an abstraction used by common code that uses a plain
MCSection.

Use the flags to compute the SectionKind. This removes a lot of
guessing and boilerplate from the MCSectionELF construction.

llvm-svn: 227476
2015-01-29 17:33:21 +00:00
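
An illustrative mapping only, showing the idea of deriving a SectionKind from ELF section flags rather than guessing; the real logic covers many more cases, and the exact SectionKind helpers and header locations at this revision are assumed:

    #include "llvm/MC/SectionKind.h"
    #include "llvm/Support/ELF.h"
    using namespace llvm;

    static SectionKind kindFromELFFlags(unsigned Flags) {
      if (Flags & ELF::SHF_EXECINSTR)
        return SectionKind::getText();
      if (!(Flags & ELF::SHF_WRITE))
        return SectionKind::getReadOnly();
      return SectionKind::getDataRel();                   // writable data
    }
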
Tom Stellard
6ae0912446 R600/SI: Remove stray debug statements
llvm-svn: 227462
2015-01-29 16:55:28 +00:00
Tom Stellard
33ad0a78c5 R600/SI: Define a schedule model and enable the generic machine scheduler
The schedule model is not complete yet, and could be improved.

llvm-svn: 227461
2015-01-29 16:55:25 +00:00
Tom Stellard
4758e65ded R600: Move DataLayout to AMDGPUTargetMachine
This is a follow up to r227113.

It is now required to use the amdgcn target for SI and newer GPUs.

llvm-svn: 227316
2015-01-28 16:04:26 +00:00
Tom Stellard
11f7c6e2bc R600: Use a Southern Islands GPU as the default for the amdgcn target
llvm-svn: 227314
2015-01-28 15:38:42 +00:00
Marek Olsak
2d9dfe9b69 R600/SI: Fix MIN3/MAX3 on VI, define MED3
llvm-svn: 227213
2015-01-27 17:25:15 +00:00
Marek Olsak
86baed7b95 R600/SI: Don't set patterns for chip-specific instructions while having pseudos
Only pseudos have patterns on them.

Also don't set the asm string for VINTRP_Pseudo. All pseudos should have empty
asm.

This matches what all other multiclasses do.

llvm-svn: 227212
2015-01-27 17:25:11 +00:00
Marek Olsak
1f4a0c87d5 R600/SI: Add VI versions of LDS atomics
Each class is split into two: one adds let statements around non-pseudos,
and the other one specifies the parameters.

llvm-svn: 227211
2015-01-27 17:25:07 +00:00
Marek Olsak
3ceda200e5 R600/SI: Add VI versions of MUBUF atomics
llvm-svn: 227210
2015-01-27 17:25:02 +00:00
Marek Olsak
1c17e8e231 R600/SI: Add VI versions of MUBUF loads and stores
This enables a lot of existing patterns for VI.

llvm-svn: 227209
2015-01-27 17:24:58 +00:00