the operands have pointer type, so that the resulting type matches
the original SCEV type, and so that unnecessary ptrtoints are
avoided in common cases.
llvm-svn: 75680
For now this only computes the allocated size of the memory pointed to by a
pointer, and the offset of a pointer from the allocated pointer.
The actual checkLimits part will come later, after another round of review.
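As a concrete illustration of the two quantities involved (a hypothetical
example, not code from the patch):

// Hypothetical illustration: for a known allocation, the two facts being
// computed are the allocated size and the pointer's offset into it.
#include <cstdio>

int main() {
  int buf[10];       // allocated size: 10 * sizeof(int) = 40 bytes (4-byte int)
  int *p = &buf[3];  // offset from the allocated pointer: 12 bytes
  std::printf("size=%zu offset=%td\n", sizeof(buf),
              (char *)p - (char *)buf);
  return 0;
}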
llvm-svn: 75657
additional bug fixes:
1. The bug that everyone hit was a problem in the asmprinter where it
would remove $stub but keep the L prefix on a name when emitting the
indirect symbol. This is easy to fix by keeping the name of the stub
and the name of the symbol in a StringMap instead of just keeping a
StringSet and trying to reconstruct it late (a sketch follows below).
2. There was a problem printing the personality function. The current
logic to print out the personality function from the DWARF information
is a bit of a cesspool right now that duplicates a bunch of other
logic in the asm printer. The short version of it is that it depends
on emitting both the L and _ prefix for symbols (at least on darwin)
and until I can untangle it, it is best to switch the mangler back to
emitting both prefixes.
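A minimal sketch of the data-structure change from fix 1 (illustrative; the
names are hypothetical and the API spelling is modern, not the actual
asmprinter fields):

// Keeping the stub name and the symbol name paired up front avoids the
// lossy late reconstruction that dropped "$stub" but kept the "L" prefix.
#include "llvm/ADT/StringMap.h"
#include "llvm/ADT/StringRef.h"
#include <string>

llvm::StringMap<std::string> FnStubs;  // stub name -> symbol name

void recordStub(llvm::StringRef Stub, llvm::StringRef Symbol) {
  FnStubs[Stub] = Symbol.str();  // remember the exact pairing
}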
llvm-svn: 75646
unbreaking llvm-gcc (on Darwin).
--- Reverse-merging r75620 into '.':
U include/llvm/Support/Mangler.h
--- Reverse-merging r75610 into '.':
U test/CodeGen/X86/loop-hoist.ll
G include/llvm/Support/Mangler.h
U lib/Target/X86/AsmPrinter/X86ATTAsmPrinter.cpp
U lib/VMCore/Mangler.cpp
llvm-svn: 75636
to symbols instead of doing it with "printSuffixedName". This gets us to the point
where there is a real separation between computing a symbol name and printing it,
something I need for MC printer stuff.
This patch also fixes a corner case bug where unnamed private globals wouldn't get
the private label prefix.
Next up, rename all uses of getValueName -> getMangledName for better greppability,
and then tackle the ppc/arm backends to eliminate "printSuffixedName".
llvm-svn: 75610
indicates whether the label is private or not, instead of taking
prefix stuff. One effect of this is that symbols will be generated
with *just* the private prefix, instead of both the private prefix
*and* the user-label-prefix, but this doesn't matter as long as it
is consistent. For example we'll now get "Lfoo" instead of "L_foo".
These are just assembler temporary labels anyway, so they never even
make it into the .o file.
llvm-svn: 75607
1) unique globals with the existing "Count" local in Mangler, not with
atomic nonsense (see the sketch below). Using atomics will give us
nondeterministic output from the compiler when using multiple threads,
which is bad.
2) Do not mangle an unknown global name with a type suffix. We don't
need this anymore now that llvm ir doesn't have type planes.
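A sketch of the deterministic uniquing from point 1 (hypothetical names;
the real code lives in Mangler):

// A plain per-Mangler counter gives deterministic names regardless of
// thread scheduling; a shared atomic counter would not.
#include <string>

struct ManglerSketch {
  unsigned Count = 0;
  std::string uniqueName(const std::string &Base) {
    return Base + std::to_string(Count++);
  }
};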
llvm-svn: 75541
(I think it's reasonably clear that we want to have a canonical form for
constructs like this; if anyone thinks that a select is not the best
canonical form, please tell me.)
llvm-svn: 75531
so that all code paths get it. PR4256 was about a case where the
phi translation loop would find all preds in the Visited cache, so
it could get by without re-sorting the NonLocalPointerDeps cache.
Fix this by re-sorting it earlier; there is no reason not to do this.
This patch inspired by Jakub Staszak's patch.
llvm-svn: 75476
of lea. It is better for code size (and presumably efficiency) to use:
movl $foo, %eax
rather than:
leal foo, eax
Both give a nice zero-extending "move immediate" instruction; the former is
just smaller. Note that global addresses should be handled differently by
the x86 backend, but I chose to follow the style already in place and add
more fixme's.
llvm-svn: 75403
Basically, using:
lea symbol(%rip), %rax
is not valid in -static mode, because the current RIP may not be
within 32-bits of "symbol" when an app is built partially pic and
partially static. The fix for this is to compile it to:
lea symbol, %rax
It would be better to codegen this as:
movq $symbol, %rax
but that will come next.
The hard part of fixing this bug was fixing abi-isel, which was actively
testing for the wrong behavior. Also, the RUN lines made it completely
impossible to understand what they were testing. To help with this, convert
the -static x86-64 codegen tests to use FileCheck. This is much more stable
and makes it
more clear what the codegen is expected to be.
llvm-svn: 75382
value. Adjust other code to deal with that correctly. Make
DAGTypeLegalizer::PromoteIntRes_EXTRACT_VECTOR_ELT take advantage of
this new flexibility to simplify the code and make it deal with unusual
vectors (like <4 x i1>) correctly. Fixes PR3037.
llvm-svn: 75176
registers based on dynamic conditions. For example, X86 EBP/RBP, when used
as the frame register, has to be spilled in the first fixed object. It
should inform PEI of this so it doesn't get allocated another stack object.
Also, it should not be spilled like other callee-saved registers; rather,
its spilling and restoring are handled by emitPrologue and emitEpilogue.
Avoid spilling it twice.
llvm-svn: 75116
as an (index,bool) pair. The bool flag records whether the kill is a
PHI kill or not. This code will be used to enable splitting of live
intervals containing PHI-kills.
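A sketch of the new representation (hypothetical names, not the actual
LiveInterval fields):

// A kill is now (instruction index, isPHIKill); the flag lets the live
// interval splitter treat kills at PHI joins specially.
#include <utility>

using VNKill = std::pair<unsigned, bool>;

VNKill phiKillAt(unsigned Index) { return {Index, true}; }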
A slight change to live interval weights introduced an extra spill
into lsr-code-insertion (outside the critical sections). The test
condition has been updated to reflect this.
llvm-svn: 75097
* remove some old code that was needed when we'd put ESP in the scale instead of
the base of some instructions.
* Fix a bug with the P modifier in inline asm that caused us to drop it.
llvm-svn: 75077
VSETCC must define all bits, which is different from what was documented
before. Since all targets that implement VSETCC already have this
behavior, and we don't optimize based on this, just change the
documentation. We now get nice code for vec_compare.ll
llvm-svn: 74978
Note, the isUndef marker must be placed even on an implicit_def's def
operand, or else the scavenger will not ignore it. This is necessary because
the -O0 path does not use LiveIntervalAnalysis; it treats implicit_def just
like any other def.
llvm-svn: 74601
allowed to be undefined when the expression is seen, we cannot enforce the
same-section requirement until the entire assembly file has been seen.
llvm-svn: 74565
Avoid unnecessary duplication of operand 0 of X86::FpSET_ST0_80. This duplication would
cause one register to remain on the stack at the function return.
llvm-svn: 74534
The register allocator, when it allocates a register to a virtual register
defined by an implicit_def, can allocate any physical register without
worrying about overlapping live ranges. It should mark all operands of that
virtual register so later passes will do the right thing.
This is not the best solution, but it should be a lot less fragile than
having the scavenger try to track what is defined by an implicit_def.
llvm-svn: 74518
an individual exhaustive evaluation reflects only the exit value
implied by an individual exit, which may differ from the actual
exit value of the loop if there are other exits. This fixes PR4477.
llvm-svn: 74447
After much back and forth, I decided to deviate from the ARM design and
split LDR into 4 instructions (r + imm12, r + imm8, r + r << imm12,
constantpool). The advantages of this are 1) it follows the latest ARM
technical manual, and 2) it makes it easier to reduce the width of the
instruction later. The downside is that this creates more inconsistency
between the two sub-targets. We should split the ARM LDR instruction in a
similar fashion later. I've added a README entry for this.
llvm-svn: 74420
when one of them can be converted to a trivial icmp and conditional
branch.
This addresses what is essentially a phase ordering problem.
SimplifyCFG knows how to do this transformation, but it doesn't do so
if the primary block has any instructions in it other than an icmp and
a branch. In the given testcase, the block contains other instructions,
however they are loop-invariant and can be hoisted. SimplifyCFG doesn't
have LoopInfo though, so it can't hoist them. And, it's important that
the blocks be merged before LoopRotation, as it doesn't support
multiple-exit loops.
llvm-svn: 74396
inserted to replace that value must dominate all of the basic
blocks associated with the uses of the value in the PHI, not just
one of them.
llvm-svn: 74376
implementation primarily differs from the former in that the asmprinter
doesn't make a zillion decisions about whether or not something will be
RIP relative or not. Instead, those decisions are made by isel lowering
and propagated through to the asm printer. To achieve this, we:
1. Represent RIP relative addresses by setting the base of the X86 addr
mode to X86::RIP.
2. When ISel Lowering decides that it is safe to use RIP, it lowers to
X86ISD::WrapperRIP. When it is unsafe to use RIP, it lowers to
X86ISD::Wrapper as before.
3. This removes isRIPRel from X86ISelAddressMode, representing it with
a basereg of RIP instead.
4. The addressing mode matching logic in isel is greatly simplified.
5. The asmprinter is greatly simplified, notably the "NotRIPRel" predicate
passed through various printoperand routines is gone now.
6. The various symbol printing routines in asmprinter now no longer infer
when to emit (%rip), they just print the symbol.
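A schematic of the decision in point 2 (illustrative only; the real logic
is in X86 isel lowering):

// RIP-relative addressing is representable only in 64-bit mode and only
// when the symbol is known to be reachable with a 32-bit displacement
// from RIP; otherwise keep the plain wrapper.
enum WrapperKind { Wrapper, WrapperRIP };

WrapperKind chooseWrapper(bool Is64Bit, bool ReachableFromRIP) {
  return (Is64Bit && ReachableFromRIP) ? WrapperRIP : Wrapper;
}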
I think this is a big improvement over the previous situation. It does have
two small caveats though:
1. I implemented a horrible "no-rip" modifier for the inline asm "P"
constraint modifier. This is a short-term hack; there is a much better, but
more involved, solution.
2. I had to xfail an -aggressive-remat testcase because it isn't handling
the use of RIP in the constant-pool reading instruction. This specific test
is easy to fix without -aggressive-remat, which I intend to do next.
llvm-svn: 74372
test suite. Remove documentation for --with-f2c, which
is no longer supported. Remove information about obtaining
tcl/expect, which ship with Mac OS X by default since
10.4.
llvm-svn: 74271
terminator, instead of after the last phi. This fixes a bug
exposed by ScalarEvolution analyzing more kinds of loops.
This fixes PR4436.
llvm-svn: 74072
trip counts in more cases.
Generalize ScalarEvolution's isLoopGuardedByCond code to recognize
And and Or conditions, splitting the code out into an
isNecessaryCond helper function so that it can evaluate Ands and Ors
recursively, and make SCEVExpander much more aggressive about
hoisting instructions out of loops.
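A source-level example of the guard shape now recognized (a hypothetical
input, not the testcase):

// The loop is guarded by an And of two conditions; recognizing the And
// lets ScalarEvolution prove i < n holds throughout the loop body.
int sum(const int *a, int n, int m) {
  int s = 0;
  if (n > 0 && m > 0)
    for (int i = 0; i < n; ++i)
      s += a[i];
  return s;
}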
test/CodeGen/X86/pr3495.ll has an additional instruction now, but
it appears to be due to an arbitrary register allocation difference.
llvm-svn: 74048
generating LLVM IR; it is correct in the code as written
to use 8-byte-aligned operations to copy Key in bar. Formerly
the gcc inliner was run; now it isn't. I don't think it's
possible to preserve this as a pure FE test. Adding -O2 lets
the llvm optimizers get rid of the 8-byte-aligned stores, at least.
llvm-svn: 73981
generated code was apparently doing stores directly into the return value
aggregate; now, it's doing a copy from a compiler-generated static object.
That object is initialized using [4 x i8] which breaks the test. I believe
this change preserves the original point of the test.
Of course it would be better for the code to do stores directly into the
return aggregate, but that is not what happens at -O0; the llvm optimizers
seem to do that on x86 but not on ppc32, possibly because of the explicit
padding (which is unavoidable). I think it must previously have been done
by a gcc optimizer pass.
llvm-svn: 73972
blocks, and also exit blocks with multiple conditions (combined
with (bitwise) ands and ors). It's often infeasible to compute an
exact trip count in such cases, but a useful upper bound can often
be found.
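For example (a hypothetical source-level view of such a loop):

// Two exit conditions combined with &&: the loop exits when the sentinel
// is found or after 16 iterations, so 16 is a valid upper bound on the
// trip count even though the exact count depends on the data.
int find(const int *a) {
  int i = 0;
  while (i < 16 && a[i] != 0)
    ++i;
  return i;
}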
llvm-svn: 73866
a global that gets printed with the :mem modifier. All operands to lea's
should be handled with the lea32mem operand kind, and this allows the TLS stuff
to do this. There are several better ways to do this, but I went for the minimal
change since I can't really test this (beyond make check).
This also makes the use of EBX explicit in the operand list in the 32-bit
case, instead of implicit in the instruction.
llvm-svn: 73834
SCEVUnknowns with identical Instructions to be equal. This allows
it to analyze cases such as the attached testcase, where the front-end
has cloned the loop controlling expression. Along with r73805, this
lets IndVarSimplify eliminate all the sign-extend casts in the
loop in the attached testcase.
llvm-svn: 73807
expression in IVUsers, because in the case of a use of a non-linear
addrec outside of a loop, this causes the addrec to be evaluated as
a linear addrec.
llvm-svn: 73774
as if they were multiple uses of the same instruction. This interacts
well with the existing load PRE that jump threading does to open up many
new jump threads earlier.
llvm-svn: 73768
while experimenting. I'm reasonably sure this is correct, but please
tell me if these instructions have some strange property which makes this
change unsafe.
llvm-svn: 73746
casted induction variables in cases where the cast
isn't foldable. It ended up being a pessimization in
many cases. This could be fixed, but it would require
a bunch of complicated code in IVUsers' clients. The
advantages of this approach aren't visible enough to
justify it at this time.
llvm-svn: 73706
move loads back past a check that the load address
is valid, see new testcase. The test that went
in with 72661 has exactly this case, except that
the conditional it's moving past is checking
something else; I've settled for changing that
test to reference a global, not a pointer. It
may be possible to scan all the tests you pass and
make sure none of them are checking any component
of the address, but it's not trivial and I'm not
trying to do that here.
llvm-svn: 73632
obscuring what would otherwise be a low-bits mask. Use ComputeMaskedBits
to compute what ShrinkDemandedConstant knew in order to reconstruct a
low-bits mask value.
llvm-svn: 73540
TurnCopyIntoImpDef turns a copy into an implicit_def and removes the val#
defined by it. This causes a scavenger assertion later if the def reaches
other blocks. Disable the transformation if the value's live interval
extends beyond its def block.
llvm-svn: 73478
support for x86, and UMULO/SMULO for many architectures, including PPC
(PR4201), ARM, and Cell. The resulting expansion isn't perfect, but it's
not bad.
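For reference, the operation UMULO/SMULO represent, sketched at the source
level with a GCC/Clang builtin (the builtin is just for illustration and is
not part of this patch):

// What an SMULO-style node computes: the truncated product plus a flag
// saying whether the signed multiply overflowed.
#include <cstdint>

bool smulWithOverflow(int32_t a, int32_t b, int32_t *product) {
  return __builtin_mul_overflow(a, b, product);
}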
llvm-svn: 73477
failures.
To support this, add some utility functions to Type to help support
vector/scalar-independent code. Change ConstantInt::get and
ConstantFP::get to support vector types, and add an overload to
ConstantInt::get that uses a static IntegerType type, for
convenience.
Introduce a new getConstant method for ScalarEvolution, to simplify
common use cases.
llvm-svn: 73431
incoming chain of the RETURN node. The incoming chain must be the outgoing
chain of the CALL node. This caused the backend to identify as tail calls
some calls that are not actually tail calls. This patch fixes that.
llvm-svn: 73387
- Change the register allocation hint to a pair of unsigned integers. The
hint type is zero (which means prefer the register specified as the second
part of the pair) or entirely target-dependent (see the sketch below).
- Allow targets to specify alternative register allocation orders based on
the allocation hint.
Part 2.
- Use the register allocation hint system to implement more aggressive load / store multiple formation.
- Aggressively form LDRD / STRD. These are formed *before* register
allocation. It has to be done this way to shorten the live intervals of the
base and offset registers. e.g.
v1025 = LDR v1024, 0
v1026 = LDR v1024, 4
=>
v1025,v1026 = LDRD v1024, 0
If this transformation isn't done before allocation, v1024 will overlap
v1025, which makes it more difficult to allocate a register pair.
- Even with the register allocation hint, it may not be possible to get the
desired allocation. In that case, the post-allocation load / store multiple
pass must fix the ldrd / strd instructions. They can either become ldm / stm
instructions or revert to a pair of ldr / str instructions.
This is work in progress, not yet enabled.
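A sketch of the hint representation from the first bullet (hypothetical
names):

// (HintType, Data): type 0 means "prefer the concrete register in the
// second element"; nonzero types are interpreted by the target, which can
// also use them to select an alternative allocation order.
#include <utility>

using RegAllocHint = std::pair<unsigned, unsigned>;

RegAllocHint preferRegister(unsigned PhysReg) { return {0, PhysReg}; }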
llvm-svn: 73381
they contain multiplications of constants with add operations.
This helps simplify several kinds of things; in particular it
helps simplify expressions like ((-1 * (%a + %b)) + %a) to (-1 * %b),
as expressions like this often come up in loop trip count
computations.
llvm-svn: 73361
induction variable when the addrec to be expanded does not require
a wider type. This eliminates the need for IndVarSimplify to
micro-manage SCEV expansions, because SCEVExpander now
automatically expands them in the form that IndVarSimplify considers
to be canonical. (LSR still micro-manages its SCEV expansions,
because it's optimizing for the target, rather than for
other optimizations.)
Also, this uses the new getAnyExtendExpr, which has more clever
expression simplification logic than the IndVarSimplify code it
replaces, and this cleans up some ugly expansions in code such as
the included masked-iv.ll testcase.
llvm-svn: 73294
consecutive addresses together. This makes it easier for the post-allocation
pass
to form ldm / stm.
This is step 1. We are still missing a lot of ldm / stm opportunities
because register allocation is not done in the desired order. More
enhancements
coming.
llvm-svn: 73291
out of sync with regular cc.
The only difference between the tail call cc and the normal
cc was that one parameter register - R9 - was reserved for
calling functions through a function pointer. Over time the
tail call cc has gotten out of sync with the regular cc.
We can use R11 which is also caller saved but not used as
parameter register for potential function pointers and
remove the special tail call cc on x86-64.
llvm-svn: 73233
other operators. For the rare cases where a list type cannot be
deduced, provide a []<type> syntax, where <type> is the list element
type.
llvm-svn: 73078
on x86 to handle more cases. Fix a bug in said code that would cause it
to read past the end of an object. Rewrite the code in
SelectionDAGLegalize::ExpandBUILD_VECTOR to be a bit more general.
Remove PerformBuildVectorCombine, which is no longer necessary with
these changes. In addition to simplifying the code, with this change,
we can now catch a few more cases of consecutive loads.
llvm-svn: 73012
nodes for vectors with an i16 element type. Add an optimization for
building a vector which is all zeros/undef except for the bottom
element, where the bottom element is an i8 or i16.
llvm-svn: 72988
Update code generator to use this attribute and remove NoImplicitFloat target option.
Update llc to set this attribute when -no-implicit-float command line option is used.
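A sketch of setting the attribute on a single function (modern API and
header spellings, which postdate this commit; illustrative only):

// Mark F so the code generator avoids implicit floating-point usage for
// it, replacing the old global NoImplicitFloat target option.
#include "llvm/IR/Attributes.h"
#include "llvm/IR/Function.h"

void disableImplicitFloat(llvm::Function &F) {
  F.addFnAttr(llvm::Attribute::NoImplicitFloat);
}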
llvm-svn: 72959
build vectors with i64 elements will only appear on 32b x86 before legalize.
Since vector widening occurs during legalize, and produces i64 build_vector
elements, the dag combiner is never run on these before legalize splits them
into 32b elements.
Teach the build_vector dag combine in the x86 back end to recognize
consecutive
loads producing the low part of the vector.
Convert the two uses of TLI's consecutive load recognizer to pass LoadSDNodes
since that was required implicitly.
Add a testcase for the transform.
Old:
subl $28, %esp
movl 32(%esp), %eax
movl 4(%eax), %ecx
movl %ecx, 4(%esp)
movl (%eax), %eax
movl %eax, (%esp)
movaps (%esp), %xmm0
pmovzxwd %xmm0, %xmm0
movl 36(%esp), %eax
movaps %xmm0, (%eax)
addl $28, %esp
ret
New:
movl 4(%esp), %eax
pmovzxwd (%eax), %xmm0
movl 8(%esp), %eax
movaps %xmm0, (%eax)
ret
llvm-svn: 72957
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
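A minimal sketch with today's IRBuilder spellings (header path and
signatures postdate this commit; illustrative only):

// Integer and floating-point addition are now distinct opcodes, so the
// builder exposes them as separate calls.
#include "llvm/IR/IRBuilder.h"

llvm::Value *emitAdds(llvm::IRBuilder<> &B, llvm::Value *A, llvm::Value *C,
                      llvm::Value *X, llvm::Value *Y) {
  llvm::Value *ISum = B.CreateAdd(A, C);   // integer Add
  llvm::Value *FSum = B.CreateFAdd(X, Y);  // floating-point FAdd
  (void)ISum;                              // unused here; illustrative only
  return FSum;
}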
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
llvm-svn: 72897
Update code generator to use this attribute and remove DisableRedZone target option.
Update llc to set this attribute when -disable-red-zone command line option is used.
llvm-svn: 72894
RewriteStoreUserOfWholeAlloca deal with tail padding because
isSafeUseOfBitCastedAllocation expects them to. Otherwise, we crash
trying to erase the bitcast.
llvm-svn: 72688