Changing the signedness of the predicate when comparing the base pointers
would introduce all sorts of unexpected behavior; for example, it would turn:
%gep.i = getelementptr inbounds [1 x i8]* %a, i32 0, i32 0
%gep2.i = getelementptr inbounds [1 x i8]* %b, i32 0, i32 0
%cmp.i = icmp ult i8* %gep.i, %gep2.i
%cmp.i1 = icmp ult [1 x i8]* %a, %b
%cmp = icmp ne i1 %cmp.i, %cmp.i1
ret i1 %cmp
into:
%cmp.i = icmp slt [1 x i8]* %a, %b
%cmp.i1 = icmp ult [1 x i8]* %a, %b
%cmp = xor i1 %cmp.i, %cmp.i1
ret i1 %cmp
By preserving the original signedness, we now get:
ret i1 false
This fixes PR16483.
llvm-svn: 185259
Real-world code sometimes has the denominator of a 'udiv' be a
'select'. LLVM can handle such cases, but only when the 'select'
operands are symmetric in structure (both select operands are a constant
power of two or a left shift, etc.). This falls apart if we are dealt a
'udiv' where the code is not symmetric or if the select operands lead us
to more select instructions.
Instead, we should treat the LHS and each select operand as a distinct
divide operation and try to optimize them independently. If we can
simplify each operation, then we can replace the 'udiv' with, say, a
'lshr' whose shift amount is a new select built from the simplified
operands.
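For illustration (a sketch with made-up names and constants, not taken from
the original change), a 'udiv' whose select operands are a constant power of
two and a left shift:
%shl = shl i32 1, %y
%den = select i1 %c, i32 8, i32 %shl
%div = udiv i32 %x, %den
can become a 'lshr' whose shift amount is a new select:
%amt = select i1 %c, i32 3, i32 %y
%div = lshr i32 %x, %amt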
llvm-svn: 185257
No functionality change.
It should suffice to check the type of a debug info metadata node instead of
calling Verify. For cases where we know the type of a DI metadata node, use
an assert.
llvm-svn: 185249
We may, after other optimizations, find ourselves with IR that looks
like:
%shl = shl i32 1, %y
%cmp = icmp ult i32 %shl, 32
Instead, we should just compare the shift count:
%cmp = icmp ult i32 %y, 5
llvm-svn: 185242
This fixes PR16418, which reports that a function calling
__builtin_unwind_init() asserts. The cause is that this generates a
spill/restore for VRSAVE, and we support that only on Darwin (because VRSAVE is
only really used on Darwin).
The test case checks only that we don't crash. We can add correctness checks
once someone verifies what behavior the function is supposed to have.
llvm-svn: 185235
To support this, we have to insert 'extractelement' instructions to pick the right lane.
We had this functionality before, but I removed it when we moved to the multi-block design because it was too complicated.
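For example (hypothetical IR, not from the commit): if the vectorized tree
produces a <4 x i32> value and a user outside the tree needs the scalar from
lane 1, we emit:
%vsum = add <4 x i32> %va, %vb
%lane1 = extractelement <4 x i32> %vsum, i32 1
and rewrite the external user to read %lane1.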
llvm-svn: 185230
Change assert("text") to assert(0 && "text"). The first case is a const char *
to bool conversion, which always evaluates to true, never triggering the
assert. The second case will always trigger the assert.
llvm-svn: 185227
In this code we keep track of pointers that we are allowed to read from, namely those that are accessed by non-predicated blocks.
We use this list to allow vectorization of conditional loads in predicated blocks, because we know that these addresses don't segfault.
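A sketch of the pattern this enables (names invented for illustration):
for.body:
  %v = load i32* %p
  %cond = icmp sgt i32 %v, 0
  br i1 %cond, label %if.then, label %for.inc
if.then:
  %w = load i32* %p
  br label %for.inc
Because %p is read unconditionally in for.body, the conditional load in
if.then cannot introduce a new fault, so it is safe to vectorize.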
llvm-svn: 185214
- lit tests verify that each line of input LLVM IR gets a !dbg node and a
corresponding entry of metadata that contains the line number
- unit tests verify that DebugIR works as advertised in the interface
- refactored some useful IR generation functionality from the MCJIT unit tests
so it can be reused
llvm-svn: 185212
On OpenBSD, the stack-smash protection transform uses "__guard_local"
and "__stack_smash_handler" instead of "__stack_chk_guard" and
"__stack_chk_fail". However, CodeGen/PowerPC/stack-protector.ll
doesn't specify a target OS, so on OpenBSD it fails.
Add -mtriple=ppc32-unknown-linux to make the test host-OS agnostic. While
there, convert to FileCheck.
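For reference, the updated RUN line is roughly (a sketch; the exact CHECK
lines depend on the test contents):
; RUN: llc -mtriple=ppc32-unknown-linux < %s | FileCheck %s
; CHECK: __stack_chk_fail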
Patch by Matthew Dempsky.
llvm-svn: 185206
Based on GCC's output for TLS variables (OP_constNu, x@dtpoff,
OP_lo_user), this implements debug info support for TLS in ELF. Verified
that this output is correct/sufficient on Linux (using gold - if you're
using binutils-ld, you'll need something with the fix for
http://sourceware.org/bugzilla/show_bug.cgi?id=15685 in it).
Support on non-ELF is sort of "arbitrary" at the moment - if Apple folks
want to discuss (or just go ahead & implement) how this should work in
MachO, etc, I'm open.
llvm-svn: 185203
Under certain (evidently rare) circumstances, this code used to convert OR(a,
AND(x, y)) into OR(a, x). This was incorrect.
While there, I've added a comment to the code immediately above.
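To see why the old transform was wrong (illustrative values, not from the
commit): with a = 0, x = 1, y = 0, OR(a, AND(x, y)) = OR(0, 0) = 0, but
OR(a, x) = OR(0, 1) = 1, so the two are not equivalent.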
llvm-svn: 185201
- warn users when -debug-ir is used with the old JIT engine (only partial
debug info is available)
For example, to debug an IR file with GDB (that supports JIT registration), do:
$ gdb --args lli -use-mcjit -debug-ir testcase.ll
(gdb) break main
(gdb) run
<Process continues to lli main>
(gdb) continue
<Process continues to testcase.ll main()>
(gdb) step
<Now stepping through the LLVM IR in testcase.ll>
llvm-svn: 185197
- Build debug metadata for 'bare' Modules using DIBuilder
- DebugIR can be constructed either to generate an IR file (so a debugger can
see it) or to skip generation when the user already has an IR file on disk.
llvm-svn: 185193
ATOMIC_CMP_SWAP nodes should be expanded the same way that ATOMIC_SWAP nodes are.
Since ATOMIC_LOADs on some targets (e.g. older ARM variants) get legalized to
ATOMIC_CMP_SWAPs, the missing case had been causing i64 atomic loads to crash
during isel.
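A reproducer has roughly this shape (a sketch, assuming an older ARM target
without native 64-bit atomics):
%val = load atomic i64* %p seq_cst, align 8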
<rdar://problem/14074644>
llvm-svn: 185186
Allow a BlockFrequency to be divided by a non-zero BranchProbability
with saturating arithmetic. This will be used to compute the frequency
of a loop header given the probability of leaving the loop.
Our long division algorithm already saturates on overflow, so that was a
freebie.
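As a worked example (numbers chosen for illustration): if the loop's
preheader has a block frequency of 1024 and the probability of leaving the
loop on any iteration is 1/8, the header frequency is 1024 / (1/8) = 8192;
if the true result would overflow, the division saturates at the maximum
representable frequency instead of wrapping.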
llvm-svn: 185184