selecting a node and use a mix of getTargetNode() and SelectNodeTo. Because
SelectNodeTo didn't check the CSE maps for a preexisting node and didn't insert
its result into the CSE maps, we would sometimes miss a CSE opportunity.
This is extremely rare, but worth fixing for completeness.
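Roughly, the shape of the fix looks like this (a toy sketch with invented
names, not the real SelectionDAG code): check the CSE map before morphing a
node, and register the result afterwards.

#include <map>
#include <utility>
#include <vector>

// Toy CSE map keyed on (opcode, operand list); each node is an id.
using Key = std::pair<unsigned, std::vector<unsigned>>;
static std::map<Key, unsigned> CSEMap;
static unsigned NextNodeId = 0;

// Mimics the fixed SelectNodeTo: reuse a preexisting equivalent node
// if one exists, and insert the result so later lookups can find it.
unsigned selectNodeTo(unsigned Opc, std::vector<unsigned> Ops) {
  Key K{Opc, std::move(Ops)};
  auto It = CSEMap.find(K);
  if (It != CSEMap.end())
    return It->second;              // the formerly missed CSE hit
  unsigned Id = NextNodeId++;
  CSEMap.emplace(std::move(K), Id); // make the new node CSE-able
  return Id;
}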
llvm-svn: 24565
changes allow us to generate the following code:
_foo:
li r2, 0
lvx v0, r2, r3
vaddfp v0, v0, v0
stvx v0, r2, r3
blr
for this LLVM code:
void %foo(<4 x float>* %a) {
entry:
%tmp1 = load <4 x float>* %a
%tmp2 = add <4 x float> %tmp1, %tmp1
store <4 x float> %tmp2, <4 x float>* %a
ret void
}
llvm-svn: 24534
file to become corrupted due to interactions between mmap'd memory segments
and file descriptors closing. The problem is completely avoided by using
a third temporary file.
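As a hedged sketch of the idea (invented code and file name, not the actual
patch): route the new contents through an independent third file, so the
descriptor being closed is never the one backing an mmap'd segment.

#include <cstddef>
#include <cstdio>

// Write Data to a separate temporary file rather than rewriting a
// file whose old contents may still be mmap'd; closing this fd
// cannot interact with any existing mapping.
bool writeViaThirdFile(const char *Data, std::size_t Len) {
  std::FILE *F = std::fopen("output.tmp3", "wb"); // hypothetical name
  if (!F) return false;
  std::fwrite(Data, 1, Len, F);
  return std::fclose(F) == 0;
}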
Patch provided by Evan Jones
llvm-svn: 24527
generates it. Make MVT::Vector expand-only, and remove the code in
Legalize that attempts to legalize it.
The plan for supporting N x Type is to continually expand it in ExpandOp
until it gets down to 2 x Type, where it will be scalarized into a pair of
scalars.
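A toy model of that plan (hypothetical code, not ExpandOp itself; N is
assumed to be a power of two >= 2): halve until two elements remain, then
scalarize.

#include <cstddef>

// N x float add, expanded recursively: each level splits the vector
// in half, and the 2 x float base case becomes a pair of scalar adds.
void expandVecAdd(float *A, const float *B, std::size_t N) {
  if (N == 2) {                    // 2 x Type: scalarize
    A[0] += B[0];
    A[1] += B[1];
    return;
  }
  expandVecAdd(A, B, N / 2);                   // low half
  expandVecAdd(A + N / 2, B + N / 2, N / 2);   // high half
}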
llvm-svn: 24482
packed types with an element count of 1, although more generic support is
coming. This allows LLVM to turn the following code:
void %foo(<1 x float> * %a) {
entry:
%tmp1 = load <1 x float>* %a
%tmp2 = add <1 x float> %tmp1, %tmp1
store <1 x float> %tmp2, <1 x float>* %a
ret void
}
Into:
_foo:
lfs f0, 0(r3)
fadds f0, f0, f0
stfs f0, 0(r3)
blr
llvm-svn: 24416
set and eliminating the need to iterate whenever something is removed (which
can be really slow in some cases). Thanks to Jim for pointing out something silly
I was getting stuck on. :)
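Since the message above is truncated, here is only a generic illustration of
the win (invented names): a set removes by key directly, with no rescan of
the container.

#include <algorithm>
#include <set>
#include <vector>

// Before: removal from a vector re-iterates to locate the element.
void removeSlow(std::vector<int> &Work, int V) {
  auto It = std::find(Work.begin(), Work.end(), V); // O(n) scan
  if (It != Work.end())
    Work.erase(It);
}

// After: a set erases by key, no iteration required.
void removeFast(std::set<int> &Work, int V) {
  Work.erase(V); // O(log n)
}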
llvm-svn: 24241
alignment information appropriately. Includes code for PowerPC to support
fixed-size allocas with alignment larger than the stack alignment. Support for
arbitrarily aligned dynamic allocas coming soon.
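The underlying arithmetic is the standard trick (sketched here, not the
backend code itself): over-allocate, then round the object's address up to
the requested power-of-two boundary.

#include <cstdint>

// Round Addr up to a power-of-two Align boundary; this is how a
// backend can place an alloca whose alignment exceeds what the stack
// pointer already guarantees.
std::uintptr_t alignUp(std::uintptr_t Addr, std::uintptr_t Align) {
  return (Addr + Align - 1) & ~(Align - 1); // Align must be 2^k
}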
llvm-svn: 24224
a special case hack for X86, make the hack more general: if an incoming argument
register is not used in any block other than the entry block, don't copy it to
a vreg. This helps us compile code like this:
%struct.foo = type { int, int, [0 x ubyte] }
int %test(%struct.foo* %X) {
%tmp1 = getelementptr %struct.foo* %X, int 0, uint 2, int 100
%tmp = load ubyte* %tmp1 ; <ubyte> [#uses=1]
%tmp2 = cast ubyte %tmp to int ; <int> [#uses=1]
ret int %tmp2
}
to:
_test:
lbz r3, 108(r3)
blr
instead of:
_test:
lbz r2, 108(r3)
or r3, r2, r2
blr
The (dead) copy emitted to copy r3 into a vreg for extra-block uses was
increasing the live range of r3 past the load, preventing the coalescing.
This implements CodeGen/PowerPC/reg-coallesce-simple.ll
llvm-svn: 24115
generating results in vregs that will need them. In the case of something
like this: CopyToReg((add X, Y), reg1024), we no longer emit code like
this:
reg1025 = add X, Y
reg1024 = reg1025
Instead, we emit:
reg1024 = add X, Y
Whoa! :)
llvm-svn: 24111
pointer marking the end of the list, the zero *must* be cast to the pointer
type. An un-cast zero is a 32-bit int, and at least on x86_64, gcc will
not extend the zero to 64 bits, thus allowing the upper 32 bits to be
random junk.
The new END_WITH_NULL macro may be used to annotate such a function
so that GCC (version 4 or newer) will detect the use of an un-cast zero
at compile time.
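For illustration (takePointers is a made-up function, and the macro body
below is how END_WITH_NULL is typically defined for GCC 4+):

#if defined(__GNUC__) && __GNUC__ >= 4
#define END_WITH_NULL __attribute__((sentinel))
#else
#define END_WITH_NULL
#endif

// Hypothetical variadic function that walks its arguments until it
// sees a null pointer.
void takePointers(const char *First, ...) END_WITH_NULL;

void example() {
  takePointers("a", "b", (const char *)0); // OK: pointer-width zero
  takePointers("a", "b", 0);               // bad on x86_64: a bare 0
                                           // is a 32-bit int, so the
                                           // upper 32 bits are junk;
                                           // GCC 4+ warns here.
}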
llvm-svn: 23888
the input is that type; this caused a failure on gs on X86 last night.
Move the hard checks into Build[US]Div since that is where decisions like
this should be made.
llvm-svn: 23881
For example, we can now join things like [0-30:0)[31-40:1)[52-59:2)
with [40-60:0) if the 52-59 range is defined by a copy from the 40-60 range.
The resultant range ends up being [0-30:0)[31-60:1).
This fires a lot throughout the test suite (e.g. shrinking bc from
19492 -> 18509 machineinstrs) though most gains are smaller (e.g. about
50 copies eliminated from crafty).
llvm-svn: 23866
(an unused method).
Fix the merger so that it can merge ranges like this [10:12)[16:40) with
[12:38) into [10:40) instead of bogus ranges. This sort of input will
become possible with the merger changes coming shortly.
llvm-svn: 23865
Add a new flag to TargetLowering indicating if the target has really cheap
signed division by powers of two, and make PPC use it. This will probably go
away in the future.
Implement some more ISD::SDIV folds in the dag combiner.
Remove now dead code in the x86 backend.
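The generic fold looks like this (hedged sketch, not the combiner code
verbatim; assumes 0 < K < 31): bias negative inputs so the arithmetic shift
rounds toward zero. PPC's srawi/addze pair computes the same rounding in two
cheap instructions, which is what the new flag advertises.

// Signed division by 2^K, rounding toward zero, using only shifts
// and an add.
int sdivPow2(int X, unsigned K) {
  int Bias = (X >> 31) & ((1 << K) - 1); // 2^K-1 if X < 0, else 0
  return (X + Bias) >> K;                // arithmetic shift
}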
llvm-svn: 23853
Fix a *bug* in the extendIntervalEndTo method. In particular, if adding
[2:10) to an interval containing [0:2),[10:30), we produced [0:10),[10:30),
which is not the smartest thing to do. Now we produce [0:30).
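A self-contained sketch of the corrected behavior (toy code, not the
LiveIntervals implementation): fold a new half-open range into every
neighbor it overlaps or abuts, so [2:10) bridges [0:2) and [10:30) into
[0:30).

#include <algorithm>
#include <utility>
#include <vector>

using Range = std::pair<unsigned, unsigned>; // half-open [start, end)

// Insert R, absorbing every existing range it overlaps or abuts.
void addRange(std::vector<Range> &Ranges, Range R) {
  std::vector<Range> Out;
  for (const Range &Old : Ranges) {
    if (Old.second < R.first || R.second < Old.first)
      Out.push_back(Old);              // strictly disjoint: keep
    else {                             // overlap/abut: grow R
      R.first = std::min(R.first, Old.first);
      R.second = std::max(R.second, Old.second);
    }
  }
  Out.push_back(R);
  std::sort(Out.begin(), Out.end());
  Ranges = std::move(Out);
}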
llvm-svn: 23841
allows us to lower legal return types to something else, to meet ABI
requirements (such as that i64 be returned in two i32 regs on Darwin/ppc).
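Conceptually (an illustrative helper, not the actual lowering hook): the i64
is split into two i32 halves, with the high word in r3 and the low word in
r4 per the 32-bit PPC convention.

#include <cstdint>

// Split an i64 return value the way the 32-bit Darwin/PPC ABI
// expects it: high half in r3, low half in r4.
void splitReturn(std::uint64_t V, std::uint32_t &R3, std::uint32_t &R4) {
  R3 = static_cast<std::uint32_t>(V >> 32); // high word -> r3
  R4 = static_cast<std::uint32_t>(V);       // low word  -> r4
}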
llvm-svn: 23802
a lot throughout many programs. In particular, specfp triggers it a bunch for
constant FP nodes when you have code like cond ? 1.0 : -1.0.
If the PPC ISel exposed the loads implicit in PIC references to external globals,
we would be able to eliminate a load in cases like this as well:
%X = external global int
%Y = external global int
int* %test4(bool %C) {
%G = select bool %C, int* %X, int* %Y
ret int* %G
}
Note that this breaks things that use SrcValues (see the FIXME), but since nothing
uses them yet, this is OK.
Also, simplify some code to use hasOneUse() on an SDOperand instead of hasNUsesOfValue directly.
llvm-svn: 23781