(I think it's reasonably clear that we want to have a canonical form for
constructs like this; if anyone thinks that a select is not the best
canonical form, please tell me.)
llvm-svn: 75531
the changes are allowed by not calling this function for bitcasts.
The Instruction::AShr case is dead because
SimplifyDemandedInstructionBits handles that case.
llvm-svn: 75514
This involves temporarily hard wiring some parts to use the global context. This isn't ideal, but it's
the only way I could figure out to make this process vaguely incremental.
llvm-svn: 75445
Make llvm_unreachable take an optional string, thus moving the cerr <<
printing out of line.
LLVM_UNREACHABLE is now a simple wrapper that makes the message go away for
NDEBUG builds.
llvm-svn: 75379
failures.
To support this, add some utility functions to Type to help support
vector/scalar-independent code. Change ConstantInt::get and
ConstantFP::get to support vector types, and add an overload to
ConstantInt::get that uses a static IntegerType type, for
convenience.
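To give a rough picture (my own illustration, not code from the patch), the
vector constants these now create correspond to splat constants in the .ll
syntax:

  @vi = constant <4 x i32> <i32 7, i32 7, i32 7, i32 7>   ; splat of i32 7
  @vf = constant <2 x double> <double 1.0, double 1.0>    ; splat of double 1.0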
Introduce a new getConstant method for ScalarEvolution, to simplify
common use cases.
llvm-svn: 73431
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.
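As a minimal before/after sketch (illustrative, not taken from the patch),
floating-point addition now has its own spelling in the .ll syntax:

  define double @sum(double %a, double %b) {
    ; previously written as:  %r = add double %a, %b
    %r = fadd double %a, %b
    ret double %r
  }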
For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.
This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt
llvm-svn: 72897
instcombine doesn't know when it's safe. To partially compensate
for this, introduce new code to do this transformation in
dagcombine, which can use UnsafeFPMath.
llvm-svn: 72872
Instcombine to be more aggressive about using SimplifyDemandedBits
on shift nodes. This allows a shift to be simplified to zero in the
included test case.
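A hypothetical reduction of the kind of case this catches (not the included
test itself): the trunc only demands the low 8 bits of %s, and the shift has
moved every bit of %x above bit 7, so the whole expression folds to zero:

  define i8 @f(i32 %x) {
    %s = shl i32 %x, 8
    %t = trunc i32 %s to i8   ; simplifies to 0
    ret i8 %t
  }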
llvm-svn: 72204
integer types, unless they are already strange. This prevents it from
turning the code produced by SROA into crazy libcalls and stuff that
the code generator can't handle. In the attached example, the result
was an i96 multiply that caused the x86 backend to assert.
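A hypothetical reduction of the kind of IR involved (not the original test
case): a multiply on a wider-than-legal, non-power-of-two integer type that
the backend has no good way to lower:

  define i96 @f(i96 %a, i96 %b) {
    %m = mul i96 %a, %b
    ret i96 %m
  }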
Note that if TargetData had an idea of what the legal types are for
a target, this could be used to stop instcombine from introducing
i64 muls, as Scott wanted.
llvm-svn: 68598
to/from integer types that are not intptr_t to convert to intptr_t
then do an integer conversion to the dest type. This exposes the
cast to the optimizer.
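A sketch of the resulting form, assuming a target where intptr_t is i64 (my
own illustration, not from the commit): instead of a direct
'ptrtoint i8* %p to i32', the pointer is converted to intptr_t and the
integer conversion is left explicit where the optimizer can see it:

  define i32 @f(i8* %p) {
    %w = ptrtoint i8* %p to i64
    %v = trunc i64 %w to i32
    ret i32 %v
  }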
llvm-svn: 67638
1. Make instcombine always canonicalize trunc x to i1 into an icmp(x&1). This
exposes the AND to other instcombine xforms and is more of what the code
generator expects (see the sketch after this list).
2. Rewrite the remaining trunc pattern match to use 'match', which
simplifies it a lot.
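A sketch of the canonicalization from point 1 (illustrative operand types,
not the actual test case):

  ; before:  %b = trunc i8 %x to i1
  define i1 @f(i8 %x) {
    %t = and i8 %x, 1
    %b = icmp ne i8 %t, 0
    ret i1 %b
  }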
llvm-svn: 67635
linkage: the value may be replaced with something
different at link time. (Frontends that want to
allow values to be loaded out of weak constants can
give their constants weak_odr linkage).
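For example (my own illustration, not from the commit): a load of @c below
cannot be folded to 42, because a different definition may be picked at link
time, while @d promises every definition is equivalent, so folding is safe:

  @c = weak constant i32 42
  @d = weak_odr constant i32 42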
llvm-svn: 67407
allocations. Apparently the assumption is that there is an
instruction (terminator?) following the allocation, so I am
making the same assumption.
llvm-svn: 66716
use, check also for the case where it has two uses,
the other being a llvm.dbg.declare. This is needed so
debug info doesn't affect codegen.
llvm-svn: 65970
memcpy to match the alignment of the destination. It isn't necessary
for making loads and stores handled like the SSE loadu/storeu
intrinsics, and it was causing a performance regression in
MultiSource/Applications/JM/lencod.
The problem appears to have been a memcpy that copies from some
highly aligned array into an alloca; the alloca was then being
assigned a large alignment, which required codegen to perform
dynamic stack-pointer re-alignment, which forced the enclosing
function to have a frame pointer, which led to increased spilling.
llvm-svn: 65289
as legality. Make load sinking and gep sinking more careful: we only
do it when it won't pessimize loads from the stack. This has the added
benefit of not producing code that is unanalyzable by SROA.
llvm-svn: 65209
Enhance instcombine to use the preferred field of
GetOrEnforceKnownAlignment in more cases, so that regular IR operations are
optimized in the same way that the intrinsics currently are.
llvm-svn: 64623
turn icmp eq a+x, b+x into icmp eq a, b if a+x or b+x has other uses. This
may have been increasing register pressure leading to the bzip2 slowdown.
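A sketch of the transform in question (hypothetical operands): when %l and
%r below have no other uses, the compare can be rewritten as
'icmp eq i32 %a, %b' and the adds deleted; when they do have other uses, the
adds stay live alongside the new compare, which is where the extra register
pressure can come from:

  define i1 @f(i32 %a, i32 %b, i32 %x) {
    %l = add i32 %a, %x
    %r = add i32 %b, %x
    %c = icmp eq i32 %l, %r
    ret i1 %c
  }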
llvm-svn: 63487
improvements to the EvaluateInDifferentType code. This code works
by just inserting a bunch of new code and then seeing if it is
useful. Instcombine is not allowed to do this: it can only insert
new code if it is useful, and only when it is converging to a more
canonical fixed point. Now that we iterate when DCE makes progress,
this causes an infinite loop when the code ends up not being used.
llvm-svn: 63483
simplifydemandedbits to simplify instructions with *multiple
uses* in contexts where it can get away with it. This allows
it to simplify the code in multi-use-or.ll into a single 'add
double'.
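A simplified illustration of the multi-use idea (not the actual
multi-use-or.ll test): %v below has two uses, but both of them demand only
its low bits, so the 'or' that fills in the high half can still be stripped
and %v replaced by %a:

  define i32 @f(i32 %a, i32 %b) {
    %hi = shl i32 %b, 16
    %v  = or i32 %a, %hi
    %lo = and i32 %v, 65535     ; demands only the low 16 bits
    %t  = trunc i32 %v to i8    ; demands only the low 8 bits
    %z  = zext i8 %t to i32
    %r  = add i32 %lo, %z
    ret i32 %r
  }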
This change is particularly interesting because it will cover
up for some common codegen bugs with large integers created due
to the recent SROA patch. When working on fixing those bugs,
this should be disabled.
llvm-svn: 63481
Now, if it detects that "V" is the same as some other value,
SimplifyDemandedBits returns the new value instead of RAUW'ing it immediately.
This has two benefits:
1) simpler code in the recursive SimplifyDemandedBits routine.
2) it allows future fun stuff in instcombine where an operation has multiple
uses and can be simplified in one context, but not all.
#2 isn't implemented yet, so this patch should have no functionality change.
llvm-svn: 63479