consecutive addresses together. This makes it easier for the post-allocation pass
to form ldm / stm.
This is step 1. We are still missing a lot of ldm / stm opportunities because
register allocation is not done in the desired order. More enhancements are
coming.
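For illustration, a minimal IR sketch (function and value names invented) of the
kind of input this targets: two loads from consecutive addresses. If the two
resulting ldr instructions end up adjacent and in suitable registers, the
post-allocation pass can merge them into a single ldm.
define i32 @sum2(i32* %p) {
  ; %p and %p1 are consecutive words in memory.
  %p1 = getelementptr i32* %p, i32 1
  %a = load i32* %p
  %b = load i32* %p1
  %s = add i32 %a, %b
  ret i32 %s
}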
llvm-svn: 73291
Emission for globals, using the correct data sections
Function alignment can be computed for each target using TargetELFWriterInfo
Some small fixes
llvm-svn: 73201
on x86 to handle more cases. Fix a bug in said code that would cause it
to read past the end of an object. Rewrite the code in
SelectionDAGLegalize::ExpandBUILD_VECTOR to be a bit more general.
Remove PerformBuildVectorCombine, which is no longer necessary with
these changes. In addition to simplifying the code, with this change,
we can now catch a few more cases of consecutive loads.
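As a hedged illustration (names invented), here is a hypothetical IR pattern of
the kind now caught: four loads from consecutive addresses feeding a
build_vector, which can be recognized as a single vector load.
define <4 x float> @consecutive(float* %p) {
  ; Four scalar loads from adjacent addresses...
  %p0 = getelementptr float* %p, i32 0
  %p1 = getelementptr float* %p, i32 1
  %p2 = getelementptr float* %p, i32 2
  %p3 = getelementptr float* %p, i32 3
  %v0 = load float* %p0
  %v1 = load float* %p1
  %v2 = load float* %p2
  %v3 = load float* %p3
  ; ...feeding a BUILD_VECTOR in the DAG.
  %a = insertelement <4 x float> undef, float %v0, i32 0
  %b = insertelement <4 x float> %a, float %v1, i32 1
  %c = insertelement <4 x float> %b, float %v2, i32 2
  %d = insertelement <4 x float> %c, float %v3, i32 3
  ret <4 x float> %d
}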
llvm-svn: 73012
integer type to be consistent with normal operation legalization. No visible
change because nothing is actually using this at the moment.
llvm-svn: 72980
Update code generator to use this attribute and remove NoImplicitFloat target option.
Update llc to set this attribute when -no-implicit-float command line option is used.
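A minimal sketch of the attribute written directly on a function in IR (the
exact attribute spelling here is an assumption, and the function name is made
up):
define void @no_implicit_fp() noimplicitfloat {
  ; Code generation for this function should avoid implicitly using
  ; floating-point or vector registers.
  ret void
}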
llvm-svn: 72959
build vectors with i64 elements will only appear on 32b x86 before legalize.
Since vector widening occurs during legalize, and produces i64 build_vector
elements, the dag combiner is never run on these before legalize splits them
into 32b elements.
Teach the build_vector dag combine in the x86 back end to recognize consecutive
loads producing the low part of the vector.
Convert the two uses of TLI's consecutive load recognizer to pass LoadSDNodes
since that was required implicitly.
Add a testcase for the transform.
Old:
subl $28, %esp
movl 32(%esp), %eax
movl 4(%eax), %ecx
movl %ecx, 4(%esp)
movl (%eax), %eax
movl %eax, (%esp)
movaps (%esp), %xmm0
pmovzxwd %xmm0, %xmm0
movl 36(%esp), %eax
movaps %xmm0, (%eax)
addl $28, %esp
ret
New:
movl 4(%esp), %eax
pmovzxwd (%eax), %xmm0
movl 8(%esp), %eax
movaps %xmm0, (%eax)
ret
llvm-svn: 72957
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
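As a small illustration of the split, integer and floating-point addition are
now distinct instructions at the IR level (function names here are invented):
define float @fp_add(float %x, float %y) {
  ; Floating-point addition now uses the dedicated FAdd opcode.
  %r = fadd float %x, %y
  ret float %r
}
define i32 @int_add(i32 %x, i32 %y) {
  ; Integer addition keeps the existing Add opcode.
  %r = add i32 %x, %y
  ret i32 %r
}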
llvm-svn: 72897
using Promote which won't work because i64 isn't
a legal type. It's easy enough to use Custom, but
then we have the problem that when the type
legalizer is promoting FP_TO_UINT->i16, it has no
way of telling it should prefer FP_TO_SINT->i32
to FP_TO_UINT->i32. I have uncomfortably hacked
this by making the type legalizer choose FP_TO_SINT
when both are Custom.
This fixes several regressions in the testsuite.
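A hypothetical input that exercises the path described above:
define i16 @conv(float %x) {
  ; The i16 result must be promoted by the type legalizer.  Preferring
  ; FP_TO_SINT at i32 is safe here because every value representable in a
  ; u16 is also representable in a signed i32.
  %r = fptoui float %x to i16
  ret i16 %r
}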
llvm-svn: 72891
instcombine doesn't know when it's safe. To partially compensate
for this, introduce new code to do this transformation in
dagcombine, which can use UnsafeFPMath.
llvm-svn: 72872
EAX = ..., AX<imp-def>
...
= AX
This creates a double-def. Apparently this used to be necessary but is no longer needed.
Thanks to Anton for pointing this out. Anton, I cannot create a test case without your uncommitted ARM patches. Please check in a test case for me.
llvm-svn: 72755
one new .cpp file, in preparation for merging in the Direct Object Emission
changes we're working on. No functional changes.
Fixed coding style issues on the original patch. Patch by Aaron Gray
llvm-svn: 72754
ADDC/ADDE use MVT::i1 (later, whatever it gets legalized to)
instead of MVT::Flag. Remove CARRY_FALSE in favor of 0; adjust
all target-independent code to use this format.
Most targets will still produce a Flag-setting target-dependent
version when selection is done. X86 is converted to use i32
instead, which means TableGen needs to produce different code
in xxxGenDAGISel.inc. This keys off the new supportsHasI1 bit
in xxxInstrInfo, currently set only for X86; in principle this
is temporary and should go away when all other targets have
been converted. All relevant X86 instruction patterns are
modified to represent setting and using EFLAGS explicitly. The
same can be done on other targets.
The immediate behavior change is that an ADC/ADD pair is no
longer tightly coupled in the X86 scheduler; the two can be
separated by instructions that don't clobber the flags (e.g. MOV).
I will soon add some peephole optimizations based on using
other instructions that set the flags to feed into ADC.
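As a hedged sketch of where such a pair shows up, consider a 64-bit add on
32-bit x86 (function name invented):
define i64 @add64(i64 %a, i64 %b) {
  ; Legalization expands this into an ADDC on the low halves feeding an ADDE
  ; on the high halves; the carry between them is now modelled as an i1 value
  ; rather than MVT::Flag.  On X86 these select to add/adc, which the
  ; scheduler may now separate with flag-preserving instructions.
  %s = add i64 %a, %b
  ret i64 %s
}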
llvm-svn: 72707
failure during llvm-gcc bootstrap:
Assertion failed: (!Tmp2.getNode() && "Can't legalize BR_CC with legal condition!"), function ExpandNode, file /Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmCore.roots/llvmCore~obj/src/lib/CodeGen/SelectionDAG/LegalizeDAG.cpp, line 2923.
/Volumes/Sandbox/Buildbot/llvm/full-llvm/build/llvmgcc42.roots/llvmgcc42~obj/src/gcc/libgcc2.c:1727: internal compiler error: Abort trap
Please submit a full bug report,
with preprocessed source if appropriate.
See <URL:http://developer.apple.com/bugreporter> for instructions.
llvm-svn: 72530
This is basically the end of this series of patches for LegalizeDAG; the
remaining special cases can't be removed without more infrastructure
work. There's a FIXME for each relevant opcode near the beginning of
SelectionDAGLegalize::LegalizeOp.
llvm-svn: 72514
e.g.
orl $65536, 8(%rax)
=>
orb $1, 10(%rax)
Since narrowing is not always a win (e.g. i32 -> i16 is a loss on x86), the dag combiner consults the target before performing the optimization.
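A hedged IR-level sketch of the kind of read-modify-write that produces the
pattern above (pointer and constant chosen to match the example, names
invented):
define void @setbit(i32* %p) {
  %v = load i32* %p
  ; Only bit 16 of the i32 is affected, so the or and the store can be
  ; narrowed to a single byte, as shown above.
  %o = or i32 %v, 65536
  store i32 %o, i32* %p
  ret void
}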
llvm-svn: 72507
entries as there are basic blocks in the function. LiveVariables::getVarInfo
creates a VarInfo struct for every register in the function, leading to
quadratic space use. This patch changes the BitVector to a SparseBitVector,
which doesn't help the worst-case memory use but does reduce the actual use in
very long functions with short-lived variables.
llvm-svn: 72426
doesn't split legal vector operands. This is necessary because the
type legalization (and therefore, vector splitting) code will be going
away soon.
llvm-svn: 72349
The DAGCombiner created a negative shift amount, stored in an
unsigned variable. Later the optimizer eliminated the shift entirely as being
undefined.
Example: (srl (shl X, 56), 48). ShiftAmt is 4294967288.
Fix it by checking that the shift amount is positive, and storing it in a signed
variable.
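A minimal IR reproduction of the example above (assuming a 64-bit X; the
function name is invented):
define i64 @f(i64 %x) {
  ; DAGCombiner folds (srl (shl X, 56), 48); computing the combined shift
  ; amount as 48 - 56 in an unsigned variable gave 4294967288, which is an
  ; undefined shift amount for a 64-bit value.
  %a = shl i64 %x, 56
  %b = lshr i64 %a, 48
  ret i64 %b
}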
llvm-svn: 72331
will allow simplifying LegalizeDAG to eliminate type legalization. (I
have a patch to do that, but it's not quite finished; I'll commit it
once it's finished and I've fixed any review comments for this patch.)
See the comment at the beginning of
lib/CodeGen/SelectionDAG/LegalizeVectorOps.cpp for more details on the
motivation for this patch.
llvm-svn: 72325
code in preparation for code generation. The main thing it does
is handle the case when eh.exception calls (and, in a future
patch, eh.selector calls) are far away from landing pads. Right
now in practice you only find eh.exception calls close to landing
pads: either in a landing pad (the common case) or in a landing
pad successor, due to loop passes shifting them about. However,
future exception handling improvements will result in calls far
from landing pads:
(1) Inlining of rewinds. Consider the following case:
In function @f:
...
invoke @g to label %normal unwind label %unwinds
...
unwinds:
%ex = call i8* @llvm.eh.exception()
...
In function @g:
...
invoke @something to label %continue unwind label %handler
...
handler:
%ex = call i8* @llvm.eh.exception()
... perform cleanups ...
"rethrow exception"
Now inline @g into @f. Currently this is turned into:
In function @f:
...
invoke @something to label %continue unwind label %handler
...
handler:
%ex = call i8* @llvm.eh.exception()
... perform cleanups ...
invoke "rethrow exception" to label %normal unwind label %unwinds
unwinds:
%ex = call i8* @llvm.eh.exception()
...
However, we would like to simplify invoke of "rethrow exception" into
a branch to the %unwinds label. Then %unwinds is no longer a landing
pad, and the eh.exception call there is then far away from any landing
pads.
(2) Using the unwind instruction for cleanups.
It would be nice to have codegen handle the following case:
invoke @something to label %continue unwind label %run_cleanups
...
handler:
... perform cleanups ...
unwind
This requires turning "unwind" into a library call, which
necessarily takes a pointer to the exception as an argument
(this patch also does this unwind lowering). But that means
you are using eh.exception again far from a landing pad.
(3) Bugpoint simplifications. When bugpoint is simplifying
exception handling code it often generates eh.exception calls
far from a landing pad, which then causes codegen to assert.
Bugpoint then latches on to this assertion and loses sight
of the original problem.
Note that it is currently rare for this pass to actually do
anything. And in fact it normally shouldn't do anything at
all given the code coming out of llvm-gcc! But it does fire
a few times in the testsuite. As far as I can see this is
almost always due to the LoopStrengthReduce codegen pass
introducing pointless loop preheader blocks which are landing
pads and only contain a branch to another block. This other
block contains an eh.exception call. So probably by tweaking
LoopStrengthReduce a bit this can be avoided.
llvm-svn: 72276
the 'construct function dbg thingy'. Rename some methods to make them consistent
with the rest of the methods. Move the 'Emit' methods to the end of the file.
llvm-svn: 72192
build an integer and cast that to a float. This fixes a crash
caused by trying to split an f32 into two f16's.
This changes the behavior in test/CodeGen/XCore/fneg.ll because that
testcase now triggers a DAGCombine which converts the fneg into an integer
operation. If someone is interested, it's probably possible to tweak
the test to generate an actual fneg.
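For reference, a sketch of the integer form such an fneg can be turned into by
that DAGCombine (shown for f32; the function is hand-written for illustration):
define float @neg(float %x) {
  ; Negation performed in the integer domain by flipping the sign bit.
  %i = bitcast float %x to i32
  %f = xor i32 %i, -2147483648
  %r = bitcast i32 %f to float
  ret float %r
}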
llvm-svn: 72162
function, this could be many, many times. We don't want to re-add variables to
that DIE each time. We just want to add them once. Check to make sure that
we haven't added them already.
llvm-svn: 72047