mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-22 20:43:44 +02:00
Commit Graph

1164 Commits

Author SHA1 Message Date
Duncan Sands
1f4e159b01 Add a testcase that would have noticed the typo fixed in commit 166475.
llvm-svn: 166547
2012-10-24 07:17:20 +00:00
Duncan Sands
6ce2ce7ed1 Transform code like this
 %V = mul i64 %N, 4
 %t = getelementptr i8* bitcast (i32* %arr to i8*), i32 %V
into
 %t1 = getelementptr i32* %arr, i32 %N
 %t = bitcast i32* %t1 to i8*
incorporating the multiplication into the getelementptr.
This happens all the time in dragonegg, for example for
  int foo(int *A, int N) {
    return A[N];
  }
because gcc turns this into byte pointer arithmetic before it hits the plugin:
  D.1590_2 = (long unsigned int) N_1(D);
  D.1591_3 = D.1590_2 * 4;
  D.1592_5 = A_4(D) + D.1591_3;
  D.1589_6 = *D.1592_5;
  return D.1589_6;
The D.1592_5 line is a POINTER_PLUS_EXPR, which is turned into a getelementptr
on a bitcast of A_4 to i8*, so this becomes exactly the kind of IR that the
transform fires on.

An analogous transform (with no testcases!) already existed for bitcasts of
arrays, so I rewrote it to share code with this one.

llvm-svn: 166474
2012-10-23 08:28:26 +00:00
Benjamin Kramer
f3ac3fa22b InstCombine: Fix an edge case where constant icmps could sneak into ConstantFoldInstOperands and crash.
Have to refactor the ConstantFolder interface one day to define away bugs like this. Fixes PR14131.

llvm-svn: 166374
2012-10-20 08:43:52 +00:00
Meador Inge
4cd6c97082 instcombine: Migrate strcpy optimizations
This patch migrates the strcpy optimizations from the simplify-libcalls pass
into the instcombine library call simplifier.  Note also that StrCpyChkOpt
has been updated with a few simplifications that were being done in the
simplify-libcalls version of StrCpyOpt, but not in the migrated implementation
of StrCpyOpt.  There is no reason to overload StrCpyOpt with fortified and
regular simplifications in the new model since there is already a dedicated
simplifier for __strcpy_chk.
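
As an illustration of the kind of simplification being migrated (my example,
not taken from the patch): a strcpy from a constant string of known length
can be turned into a fixed-size memcpy:

  strcpy(dst, "abc")  -->  memcpy(dst, "abc", 4)  (returning dst)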

llvm-svn: 166198
2012-10-18 18:12:40 +00:00
Michael Gottesman
b9205da3c6 [InstCombine] Teach InstCombine how to handle an obfuscated splat.
An obfuscated splat is where the frontend generates poor code for a splat,
using several different shuffles, i.e.,

  %A = load <4 x float>* %in_ptr, align 16
  %B = shufflevector <4 x float> %A, <4 x float> undef, <4 x i32> <i32 0, i32 0, i32 undef, i32 undef>
  %C = shufflevector <4 x float> %B, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 4, i32 undef>
  %D = shufflevector <4 x float> %C, <4 x float> %A, <4 x i32> <i32 0, i32 1, i32 2, i32 4>
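
Taken together the three shuffles just broadcast element 0 of %A, so the
chain can be collapsed into a single splat shuffle (a sketch of the expected
result, not quoted from the commit):

  %D = shufflevector <4 x float> %A, <4 x float> undef, <4 x i32> zeroinitializer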

llvm-svn: 166061
2012-10-16 21:29:38 +00:00
Meador Inge
a24293f9d8 instcombine: Migrate strcmp and strncmp optimizations
This patch migrates the strcmp and strncmp optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.
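
For reference, typical simplifications in this family include (illustrative
examples, not an exhaustive list from the patch):

  strcmp(x, x)      -->  0
  strcmp(x, "")     -->  *x
  strncmp(x, y, 0)  -->  0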

llvm-svn: 165915
2012-10-15 03:47:37 +00:00
Meador Inge
7fafebf1b4 instcombine: Migrate strchr and strrchr optimizations
This patch migrates the strchr and strrchr optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165875
2012-10-13 16:45:37 +00:00
Meador Inge
29f1f4f7b1 instcombine: Migrate strcat and strncat optimizations
This patch migrates the strcat and strncat optimizations from the
simplify-libcalls pass into the instcombine library call simplifier.

llvm-svn: 165874
2012-10-13 16:45:32 +00:00
Nick Lewycky
211cb06fc3 Don't crash when !tbaa.struct contents is invalid.
llvm-svn: 165693
2012-10-11 02:05:23 +00:00
Duncan Sands
448245e370 The alignment of an sret parameter is known: it must be at least the
alignment of the return type.  Teach the optimizers this.
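
A sketch of what this means in IR (illustrative types, mine):

  %struct.S = type { double, double }   ; 8-byte ABI alignment
  declare void @f(%struct.S* sret)      ; the sret pointer can be assumed
                                        ; to be at least 8-byte aligned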

llvm-svn: 165226
2012-10-04 13:36:31 +00:00
Benjamin Kramer
aa07e96212 Fix broken tests.
llvm-svn: 165019
2012-10-02 15:49:34 +00:00
Nick Lewycky
689f61680b Surprisingly, we missed a trivial case here. Fix that!
llvm-svn: 164814
2012-09-28 09:33:53 +00:00
Meador Inge
55db33f26d instcombine: Add more test cases for __strncpy_chk simplification
llvm-svn: 164800
2012-09-27 21:21:31 +00:00
Meador Inge
5e916a5b6f instcombine: Add more test cases for __strcpy_chk simplification
llvm-svn: 164799
2012-09-27 21:21:28 +00:00
Meador Inge
a7ed72c476 instcombine: Add more test cases for __memmove_chk simplification
llvm-svn: 164798
2012-09-27 21:21:25 +00:00
Meador Inge
89cd434f53 instcombine: Add more test cases for __memcpy_chk simplification
llvm-svn: 164797
2012-09-27 21:21:21 +00:00
Meador Inge
8fb8751a66 instcombine: Add more test cases for __memset_chk simplification
llvm-svn: 164796
2012-09-27 21:21:18 +00:00
Sylvestre Ledru
b77340e506 Revert 'Fix a typo 'iff' => 'if''. iff is an abbreviation of if and only if. See: http://en.wikipedia.org/wiki/If_and_only_if. This reverts commit 164767.
llvm-svn: 164768
2012-09-27 10:14:43 +00:00
Sylvestre Ledru
1c5e7904de Fix a typo 'iff' => 'if'
llvm-svn: 164767
2012-09-27 09:59:43 +00:00
Nick Lewycky
9ae46f5c91 Prefer shuffles to selects. Backends love shuffles!
llvm-svn: 164763
2012-09-27 08:33:56 +00:00
Richard Osborne
53dd900512 Add missing : in CHECK line.
llvm-svn: 164540
2012-09-24 17:22:43 +00:00
Richard Osborne
9be8a6db61 Add missing check for presence of target data.
This avoids a crash in visitAllocaInst when target data isn't available.

llvm-svn: 164539
2012-09-24 17:10:03 +00:00
Benjamin Kramer
3083ebd4de InstCombine: Make sure we use the pre-zext type when creating a constant of a value that is zext'd.
Fixes PR13250.

llvm-svn: 164377
2012-09-21 16:26:41 +00:00
Richard Osborne
552f578018 Fix instcombine to obey requested alignment when merging allocas.
llvm-svn: 164117
2012-09-18 09:31:44 +00:00
Dan Gohman
3abf8239e6 Handle the new !tbaa.struct metadata tags when converting a memcpy into scalar
loads and stores.

llvm-svn: 163844
2012-09-13 21:51:01 +00:00
Michael Gottesman
0a6d89b0cc [llvm] Updated the test fold-vector-select so that we test the vector selects exhaustively.
llvm-svn: 162953
2012-08-30 23:11:49 +00:00
Nadav Rotem
5848719c42 It is illegal to transform (sdiv (ashr X c1) c2) -> (sdiv X (2^c1 * c2)),
because sdiv (like division in C) always rounds towards zero, while ashr
rounds towards negative infinity.
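
A quick numeric check of the difference (my example, not from the commit):
take X = -3, c1 = 1, c2 = 1. The original computes ashr(-3, 1) = -2 (rounding
towards negative infinity), then sdiv(-2, 1) = -2; the transformed form
computes sdiv(-3, 2) = -1 (rounding towards zero).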

Thanks Dirk and Ben.

llvm-svn: 162899
2012-08-30 11:23:20 +00:00
Benjamin Kramer
bc139a63fc InstCombine: Guard the transform introduced in r162743 against large ints and non-const shifts.
llvm-svn: 162751
2012-08-28 13:08:13 +00:00
Nadav Rotem
d0cd39e7c9 Make sure that we don't call getZExtValue on values > 64 bits.
Thanks Benjamin for noticing this.

llvm-svn: 162749
2012-08-28 12:23:22 +00:00
Nadav Rotem
9582c96aef Teach InstCombine to canonicalize [SU]div+[AL]shr patterns.
For example:
  %1 = lshr i32 %x, 2
  %2 = udiv i32 %1, 100
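
A sketch of the expected canonical form (mine, not quoted from the commit):
the lshr is itself an unsigned division by 4, so the pair folds into a single
division:

  %1 = udiv i32 %x, 400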

rdar://12182093

llvm-svn: 162743
2012-08-28 10:01:43 +00:00
Chandler Carruth
b82f8d4af5 Port the global copy optimization from the SROA pass to InstCombine.
This optimization is really just replacing allocas wholesale with
globals; there is no scalarization.

The underlying motivation for this patch is to simplify the SROA pass
and focus it on splitting and promoting allocas.

llvm-svn: 162271
2012-08-21 08:39:44 +00:00
Benjamin Kramer
54d9d1a993 InstCombine: Fix a crasher when encountering a function pointer.
llvm-svn: 162180
2012-08-18 22:04:34 +00:00
Benjamin Kramer
1a05d12328 InstCombine: Add a couple of fabs identities for comparing with 0.0.
llvm-svn: 162174
2012-08-18 20:06:47 +00:00
Benjamin Kramer
4e9e4d1818 MemoryBuiltins: Properly guard ObjectSizeOffsetVisitor against cycles in the IR.
The previous fix only checked for simple cycles, use a set to catch longer
cycles too.

Drop the broken check from the ObjectSizeOffsetEvaluator. The BoundsChecking
pass doesn't have to deal with invalid IR like InstCombine does.

llvm-svn: 162120
2012-08-17 19:26:41 +00:00
Benjamin Kramer
d431f3a1f2 Guard MemoryBuiltins against self-looping GEPs, which can occur in unreachable code due to constant propagation.
Fixes PR13621.

llvm-svn: 162098
2012-08-17 14:16:37 +00:00
Michael Liao
cd290ba4fd fix infinite loop in instcombine with more than 4GB memcpy
- the memcpy size was wrongly truncated to 32 bits, treating an 8GB memcpy
  as a 0-sized memcpy
- as 0-sized memcpy/memset calls are already removed before SimplifyMemTransfer
  and SimplifyMemSet in visitCallInst, replace the 0 checks with
  assertions
- replace getZExtValue() with getLimitedValue(), as suggested by
  Eli Friedman

llvm-svn: 161923
2012-08-15 03:49:59 +00:00
Eli Friedman
449495cd62 The normal edge of an invoke is not allowed to branch to a block with a
landingpad.  Enforce it in the verifier, and fix the regression tests to match.

llvm-svn: 161697
2012-08-10 20:55:20 +00:00
Bob Wilson
51c50d44b7 Fix a serious typo in InstCombine's optimization of comparisons.
An unsigned value converted to floating-point will always be greater than
a negative constant.  Unfortunately InstCombine reversed the check so that
unsigned values were being optimized to always be greater than all positive
floating-point constants.  <rdar://problem/12029145>
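
A sketch of the pattern in question (illustrative operands, mine): a value
produced by uitofp is always non-negative, so comparing it against a negative
constant has a known result:

  %f = uitofp i32 %x to double
  %c = fcmp ogt double %f, -1.0   ; always true, folds to i1 true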

llvm-svn: 161452
2012-08-07 22:35:16 +00:00
Nadav Rotem
1fbb339620 When constant folding GEP expressions, keep the address space information of pointers.
Together with Ran Chachick <ran.chachick@intel.com>

llvm-svn: 160954
2012-07-30 07:25:20 +00:00
Nuno Lopes
a4d7ce1441 fix PR13390: do not loop forever with self-referencing select instructions
llvm-svn: 160876
2012-07-27 18:21:15 +00:00
Nuno Lopes
7ec9936cb2 fix infinite loop in instcombine in the presence of a (malformed) self-referencing select inst.
This can only happen if the instruction is unreachable. Instcombine does generate these unreachable malformed selects when doing RAUW.

llvm-svn: 160874
2012-07-27 18:03:57 +00:00
Pete Cooper
ddb89a91ca Simplify demanded bits of select sources where the condition is a constant vector
llvm-svn: 160835
2012-07-26 23:10:24 +00:00
Pete Cooper
8d971d19cb Teach SimplifyDemandedBits how to look through fpext and fptrunc to simplify their operand
llvm-svn: 160823
2012-07-26 22:37:04 +00:00
Duncan Sands
9f6bce9180 Don't perform an overaligned load in this test, since that's undefined
behaviour that might be exploited one day.

llvm-svn: 160714
2012-07-25 09:45:37 +00:00
Duncan Sands
8080fe449f When folding a load from a global constant, if the load started in the middle
of an array element (rather than at the beginning of the element) and extended
into the next element, then the load from the second element was being handled
wrong due to incorrect updating of the notion of which byte to load next.  This
fixes PR13442.  Thanks to Chris Smowton for reporting the problem, analyzing it
and providing a fix.

llvm-svn: 160711
2012-07-25 09:14:54 +00:00
Nuno Lopes
06ac861756 teach objectsize about strdup() and strndup()
llvm-svn: 160676
2012-07-24 16:28:13 +00:00
Evan Cheng
5e82ad04d5 Back out r160101 and instead implement a dag combine to recover from the instcombine transformation.
llvm-svn: 160387
2012-07-17 18:54:11 +00:00
Evan Cheng
e6c5349fcd Instcombine was transforming:
  %shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, -1
  %and = and i64 %sub, %shr
  ret i64 %and

to:
  %shr = lshr i64 %key, 3
  %0 = load i64* %val, align 8
  %sub = add i64 %0, 2305843009213693951
  %and = and i64 %sub, %shr
  ret i64 %and

The demanded bit optimization is actually a pessimization because add -1 would
be codegen'ed as a sub 1. Teach the demanded constant shrinking optimization
to check for negated constant to make sure it is actually reducing the width
of the constant.

rdar://11793464

llvm-svn: 160101
2012-07-12 01:45:35 +00:00
Nuno Lopes
c676931bb9 instcombine: merge the functions that remove dead allocas and dead mallocs/callocs/...
This patch removes ~70 lines in InstCombineLoadStoreAlloca.cpp and makes both functions a bit more aggressive than before :)
In theory, we can be more aggressive when removing an alloca than a malloc, because an alloca pointer should never escape, but we are not taking advantage of this anyway

llvm-svn: 159952
2012-07-09 18:38:20 +00:00
Nuno Lopes
f3ba9a4d21 teach instcombine to remove allocated buffers even if there are stores, memcpy/memmove/memset, and objectsize users.
This means we can do cheap DSE for heap memory.
Nothing is done if the pointer escapes or has a load.

The churn in the tests is mostly due to objectsize, since we want to make sure we
don't delete the malloc call before evaluating the objectsize (otherwise it becomes -1/0)

llvm-svn: 159876
2012-07-06 23:09:25 +00:00
Chandler Carruth
5d3a0ce4e5 Fix the remaining TCL-style quotes found in the testsuite. This is
another mechanical change accomplished through the power of terrible Perl
scripts.

I have manually switched some "s to 's to make escaping simpler.

While I started this to fix tests that aren't run in all configurations,
the massive number of tests is due to a really frustrating fragility of
our testing infrastructure: things like 'grep -v', 'not grep', and
'expected failures' can mask broken tests all too easily.

Essentially, I'm deeply disturbed that I can change the testsuite so
radically without causing any change in results for most platforms. =/

llvm-svn: 159547
2012-07-02 19:09:46 +00:00
Chandler Carruth
8a358b3669 Convert all tests using TCL-style quoting to use shell-style quoting.
This was done through the aid of a terrible Perl creation. I will not
paste any of the horrors here. Suffice to say, it required multiple
staged rounds of replacements, state carried between, and a few
nested-construct-parsing hacks that I'm not proud of. It happens, by
luck, to be able to deal with all the TCL-quoting patterns in evidence
in the LLVM test suite.

If anyone is maintaining large out-of-tree test trees, feel free to poke
me and I'll send you the steps I used to convert things, as well as
answer any painful questions etc. IRC works best for this type of thing
I find.

Once converted, switch the LLVM lit config to use ShTests the same as
Clang. In addition to being able to delete large amounts of Python code
from 'lit', this will also simplify the entire test suite and some of
lit's architecture.

Finally, the test suite runs 33% faster on Linux now. ;]
For my 16-hardware-thread (2x 4-core xeon e5520): 36s -> 24s

llvm-svn: 159525
2012-07-02 12:47:22 +00:00
Nuno Lopes
b0d4abe297 make instcombine produce calls to llvm.donothing instead of a random intrinsic
llvm-svn: 159384
2012-06-28 22:31:24 +00:00
Evan Cheng
9132bcf0e3 Remove an instcombine transform that (no longer?) makes sense:
// C - zext(bool) -> bool ? C - 1 : C
    if (ZExtInst *ZI = dyn_cast<ZExtInst>(Op1))
      if (ZI->getSrcTy()->isIntegerTy(1))
        return SelectInst::Create(ZI->getOperand(0), SubOne(C), C);

This ends up forming sext i1 instructions that codegen to terrible code. e.g.
int blah(_Bool x, _Bool y) {
  return (x - y) + 1;
}
=>
        movzbl  %dil, %eax
        movzbl  %sil, %ecx
        shll    $31, %ecx
        sarl    $31, %ecx
        leal    1(%rax,%rcx), %eax
        ret


Without the rule, llvm now generates:
        movzbl  %sil, %ecx
        movzbl  %dil, %eax
        incl    %eax
        subl    %ecx, %eax
        ret

It also helps with ARM (and pretty much any target that doesn't have a sext i1 :-).

The transformation was done as part of Eli's r75531. He has given the ok to
remove it.

rdar://11748024

llvm-svn: 159230
2012-06-26 22:03:13 +00:00
Duncan Sands
1770ae1ae4 Replacing zero-sized allocas with a null pointer is too aggressive; instead,
merge all zero-sized allocas into one, fixing c43204g from the Ada ACATS
conformance testsuite.  What happened there was that a variable sized object
was being allocated on the stack, "alloca i8, i32 %size".  It was then being
passed to another function, which tested that the address was not null (raising
an exception if it was) then manipulated %size bytes in it (load and/or store).
The optimizers cleverly managed to deduce that %size was zero (congratulations
to them, as it isn't at all obvious), which made the alloca zero size, causing
the optimizers to replace it with null, which then caused the check mentioned
above to fail, and the exception to be raised, wrongly.  Note that no loads
and stores were actually being done to the alloca (the loop that does them is
executed %size times, i.e. is not executed), only the not-null address check.

llvm-svn: 159202
2012-06-26 13:39:21 +00:00
Nuno Lopes
165c99b53d improve optimization of invoke instructions:
 - simplifycfg:  invoke undef/null -> unreachable
 - instcombine:  invoke new  -> invoke expect(0, 0)  (an arbitrary NOOP intrinsic;  only done if the allocated memory is unused, of course)
 - verifier:  allow invoke of intrinsics  (to make the previous step work)

llvm-svn: 159146
2012-06-25 17:11:47 +00:00
Jakob Stoklund Olesen
c970d61f6d Revert remaining part of r93200: "Disable folding sext(trunc(x)) -> x"
This fixes PR5997.

These transforms were disabled because codegen couldn't deal with other
uses of trunc(x). This is now handled by the peephole pass.

This causes no regressions on x86-64.

llvm-svn: 159003
2012-06-22 16:36:43 +00:00
Nuno Lopes
009e7f08aa instcombine: disable optimization of 'invoke null/undef'. I'll move this functionality to SimplifyCFG (since we cannot make changes to the CFG here).
Fixes the crashes with the attached test case

llvm-svn: 158951
2012-06-21 23:52:14 +00:00
Evan Cheng
404624ee4d Look past zext to strength reduce a udiv. Patch by David Majnemer. rdar://11721329
llvm-svn: 158946
2012-06-21 22:52:49 +00:00
Nuno Lopes
8baf9fdf84 Add support for invoke to the MemoryBuiltin analysis.
Update comments accordingly.

Make instcombine remove useless invokes to C++'s 'new' allocation function (test attached).

llvm-svn: 158937
2012-06-21 21:25:05 +00:00
Nuno Lopes
46de159c09 hopefully fix the buildbots: some tests have wrong definitions of malloc and were crashing this code on 64-bit machines
llvm-svn: 158923
2012-06-21 16:47:58 +00:00
Nuno Lopes
c9edab11db refactor the MemoryBuiltin analysis:
- provide a more extensive set of functions to detect library allocation functions (e.g., malloc, calloc, strdup, etc)
- provide an API to compute the size and offset of an object pointed to by a pointer

Move a few clients (GVN, AA, instcombine, ...) to the new API.
This implementation is a lot more aggressive than each of the custom implementations being replaced.

Patch reviewed by Nick Lewycky and Chandler Carruth, thanks.

llvm-svn: 158919
2012-06-21 15:45:28 +00:00
Manman Ren
e3471c0bdf InstCombine: fix a bug when combining (fcmp cc0 x, y) && (fcmp cc1 x, y).
uno && ueq was converted to ueq; it should be converted to uno.

llvm-svn: 158441
2012-06-14 05:57:42 +00:00
Benjamin Kramer
14e8b5eac3 InstCombine: Turn (zext A) == (B & (1<<X)-1) into A == (trunc B), narrowing the compare.
This saves a cast, and zext is more expensive on platforms with subreg support
than trunc is. This occurs in the BSD implementation of memchr(3), see PR12750.
On the synthetic benchmark from that bug stupid_memchr and bsd_memchr have the
same performance now when not inlining either function.
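
A sketch of the transform on concrete types (illustrative operands, mine):

  %m = and i32 %B, 255           ; (1 << 8) - 1
  %z = zext i8 %A to i32
  %c = icmp eq i32 %z, %m
becomes
  %t = trunc i32 %B to i8
  %c = icmp eq i8 %A, %t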

stupid_memchr: 323.0us
bsd_memchr: 321.0us
memchr: 479.0us

where memchr is the llvm-gcc compiled bsd_memchr from osx lion's libc. When
inlining is enabled bsd_memchr still regresses down to llvm-gcc memchr time,
I haven't fully understood the issue yet, something is grossly mangling the
loop after inlining.

llvm-svn: 158297
2012-06-10 20:35:00 +00:00
Nuno Lopes
4485a55890 canonicalize:
-%a + 42
into
42 - %a

previously we were emitting:
-(%a + 42)

This fixes the infinite loop in PR12338. The generated code is still not perfect, though.
Will work on that next.

llvm-svn: 158237
2012-06-08 22:30:05 +00:00
Nadav Rotem
e3db9cf2fd Fix a bug in FoldSelectOpOp. Bitcast ops may change the number of vector elements, which may disagree with the select condition type.
llvm-svn: 158166
2012-06-07 20:28:57 +00:00
Meador Inge
3822a47f16 Adding a missing -S to the opt invocation.
llvm-svn: 158128
2012-06-07 01:02:13 +00:00
Bill Wendling
9fe34c7293 Spell optimization name correctly.
llvm-svn: 158123
2012-06-06 23:53:23 +00:00
Bill Wendling
e89b370473 Another testcase for r156548.
<rdar://problem/10889741>

llvm-svn: 158121
2012-06-06 23:36:22 +00:00
Chad Rosier
fb2fc059af Fix combine of uno && ord -> false so that the ordering of the fcmps doesn't
matter.
rdar://11579835

llvm-svn: 158084
2012-06-06 17:22:40 +00:00
Chad Rosier
a81f430388 Remove extraneous CHECK-NOTs from previous commit and add a new test case.
llvm-svn: 158045
2012-06-06 02:12:17 +00:00
Chad Rosier
71dda0c580 FileCheckize this test.
llvm-svn: 158044
2012-06-06 01:38:32 +00:00
Benjamin Kramer
f5a7a0dcf1 InstCombine: Fix infinite loop when encountering switch on trivial icmp.
The test case feeds the following into InstCombine's visitSelect:
%tobool8 = icmp ne i32 0, 0
%phitmp = select i1 %tobool8, i32 3, i32 0
Then instcombine replaces the right side of the switch with 0, doesn't notice
that nothing changes and tries again indefinitely.

This fixes PR12897.

llvm-svn: 157587
2012-05-28 19:18:16 +00:00
Benjamin Kramer
1e46062a2b PR12967: Don't crash when trying to fold a shift that's larger than the type's size.
llvm-svn: 157548
2012-05-27 22:03:32 +00:00
Nuno Lopes
944814b41a revert my previous patches that introduced an additional parameter to the objectsize intrinsic.
After a lot of discussion, we realized it's not the best option for run-time bounds checking.

llvm-svn: 157255
2012-05-22 15:25:31 +00:00
Nuno Lopes
11d6ecb6db objectsize: add a few more tests and fix a bug
llvm-svn: 156625
2012-05-11 18:25:29 +00:00
Eli Friedman
1746bfc50e Fix a minor logic mistake transforming compares in instcombine. PR12514.
llvm-svn: 156600
2012-05-11 01:32:59 +00:00
Nuno Lopes
415911a5c7 objectsize: add support for GEPs with non-constant indexes
add an additional parameter to InstCombiner::EmitGEPOffset() to force it to *not* emit operations with NUW flag

llvm-svn: 156585
2012-05-10 23:17:35 +00:00
Nuno Lopes
3d7a8137ee objectsize:
refactor code a bit to enable future changes to support run-time information
add support to compute allocation sizes at run-time if penalty > 1 (e.g., malloc(x), calloc(x, y), and VLAs)

llvm-svn: 156515
2012-05-09 21:30:57 +00:00
Nuno Lopes
e8880a9916 change the objectsize intrinsic signature: add a 3rd parameter to denote the maximum runtime performance penalty that the user is willing to accept.
This commit only adds the parameter. Code taking advantage of it will follow.

llvm-svn: 156473
2012-05-09 15:52:43 +00:00
Stepan Dyatkovskiy
469935e0ae Small fix in InstCombineCasts.cpp. Restored "alloca + bitcast" reduction for the case when the alloca's size is calculated within an "add/sub/... nsw".
Also added a fix to the 2011-06-13-nsw-alloca.ll test.

llvm-svn: 156231
2012-05-05 07:09:40 +00:00
Nuno Lopes
2762496a1a remove calls to calloc if the allocated memory is not used (it was already being done for malloc)
fix a few typos found by Chad in my previous commit

llvm-svn: 156110
2012-05-03 22:08:19 +00:00
Nuno Lopes
26239aeb99 add support for calloc to objectsize lowering
llvm-svn: 156102
2012-05-03 21:19:58 +00:00
Lang Hames
26d71c9d0a Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. Fixes
<rdar://problem/11291436>.

This is a second attempt at a fix for this, the first was r155468. Thanks
to Chandler, Bob and others for the feedback that helped me improve this.

llvm-svn: 155866
2012-05-01 00:20:38 +00:00
Duncan Sands
8fa4166239 Just mark the sign bit as known zero, rather than any other irrelevant bits
known zero in the LHS.  Fixes PR12541.

llvm-svn: 155818
2012-04-30 11:56:58 +00:00
Dan Gohman
25a863dcf7 Reapply r155682, making constant folding more consistent, with a fix to work
properly with how the code handles all-undef PHI nodes.

llvm-svn: 155721
2012-04-27 17:50:22 +00:00
Chad Rosier
f3d4646377 Add instcombine patterns for the following transformations:
  (x & y) | (x ^ y) -> x | y
  (x & y) + (x ^ y) -> x | y

Patch by Manman Ren.
rdar://10770603

llvm-svn: 155674
2012-04-26 23:29:14 +00:00
Chandler Carruth
8821a79947 Actually delete now-empty file.
llvm-svn: 155532
2012-04-25 02:30:00 +00:00
Lang Hames
7f69fbca29 Reverting r155468. Chris and Chandler have convinced me that it's dangerous and
in poor taste.

Talking through some alternate solutions with Chandler.

llvm-svn: 155530
2012-04-25 02:16:54 +00:00
Nadav Rotem
3c817bb807 ConstantFoldSelectInstruction swapped the operands of the select.
Fix PR12592. Patch by Matt Pharr.

llvm-svn: 155480
2012-04-24 20:18:49 +00:00
Lang Hames
08eb5f2340 Add support for llvm.arm.neon.vmull* intrinsics to InstCombine. This fixes
<rdar://problem/11291436>.

llvm-svn: 155468
2012-04-24 18:58:36 +00:00
Jakob Stoklund Olesen
6c1440cf27 Reapply r155136 after fixing PR12599.
Original commit message:

Defer some shl transforms to DAGCombine.

The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.

llvm-svn: 155362
2012-04-23 17:39:52 +00:00
Jakob Stoklund Olesen
3d22f26e88 Revert r155136 "Defer some shl transforms to DAGCombine."
While the patch was perfect and defect free, it exposed a really nasty
bug in X86 SelectionDAG that caused an llc crash when compiling lencod.

I'll put the patch back in after fixing the SelectionDAG problem.

llvm-svn: 155181
2012-04-20 00:38:45 +00:00
Jakob Stoklund Olesen
1507d20c57 Defer some shl transforms to DAGCombine.
The shl instruction is used to represent multiplication by a constant
power of two as well as bitwise left shifts. Some InstCombine
transformations would turn an shl instruction into a bit mask operation,
making it difficult for later analysis passes to recognize the
constant multiplication.

Disable those shl transformations, deferring them to DAGCombine time.
An 'shl X, C' instruction is now treated mostly the same way as 'mul X, C'.

These transformations are deferred:

  (X >>? C) << C   --> X & (-1 << C)  (When X >> C has multiple uses)
  (X >>? C1) << C2 --> X << (C2-C1) & (-1 << C2)   (When C2 > C1)
  (X >>? C1) << C2 --> X >>? (C1-C2) & (-1 << C2)  (When C1 > C2)

The corresponding exact transformations are preserved, just like
div-exact + mul:

  (X >>?,exact C) << C   --> X
  (X >>?,exact C1) << C2 --> X << (C2-C1)
  (X >>?,exact C1) << C2 --> X >>?,exact (C1-C2)

The disabled transformations could also prevent the instruction selector
from recognizing rotate patterns in hash functions and cryptographic
primitives. I have a test case for that, but it is too fragile.

llvm-svn: 155136
2012-04-19 16:46:26 +00:00
Jakob Stoklund Olesen
0ad7ee539b FileCheckize
llvm-svn: 155010
2012-04-18 17:01:26 +00:00
Jakob Stoklund Olesen
330da655a7 Nobody likes shifty instructions, but that was a bit strong.
llvm-svn: 155009
2012-04-18 16:44:44 +00:00
Chandler Carruth
b3fb4be360 Teach InstCombine to nuke a common alloca pattern -- an alloca which has
GEPs, bit casts, and stores reaching it but no other instructions. These
often show up during the iterative processing of the inliner, SROA, and
DCE. Once we hit this point, we can completely remove the alloca. These
were actually showing up in the final, fully optimized code in a bunch
of inliner tests I've been working on, and notably they show up after
LLVM finishes optimizing away all function calls involved in
hash_combine(a, b).

llvm-svn: 154285
2012-04-08 14:36:56 +00:00
Rafael Espindola
88a1aeb123 Always compute all the bits in ComputeMaskedBits.
This allows us to keep passing reduced masks to SimplifyDemandedBits, but
know about all the bits if SimplifyDemandedBits fails. This allows instcombine
to simplify cases like the one in the included testcase.

llvm-svn: 154011
2012-04-04 12:51:34 +00:00
Chandler Carruth
0bba49050f Filecheck-ize this test so that it actually tests something reasonable.
llvm-svn: 153697
2012-03-29 22:01:41 +00:00
Nick Lewycky
92a7d87ceb Factor out the multiply analysis code in ComputeMaskedBits and apply it to the
overflow checking multiply intrinsic as well.

Add a test for this, updating the test from grep to FileCheck.

llvm-svn: 153028
2012-03-18 23:28:48 +00:00
Bill Wendling
9343ed10c6 Revert r152907.
llvm-svn: 152935
2012-03-16 18:20:54 +00:00
Bill Wendling
3c44ed8385 The pointer operand of the store instruction may itself have an
alignment. If that's the case, then we want to make sure that we don't increase
the alignment of the store instruction, because if we increase it to be "more
aligned" than the pointer, code-gen may use instructions which require a greater
alignment than the pointer guarantees.
<rdar://problem/11043589>

llvm-svn: 152907
2012-03-16 07:40:08 +00:00
Eli Friedman
0763584d78 In InstCombiner::visitOr, make sure we reverse the operand swap used for checking for or-of-xor operations after those checks; a later check expects that any constant will be in Op1. PR12234.
llvm-svn: 152884
2012-03-16 00:52:42 +00:00
Benjamin Kramer
dbfa526afc Don't try to filecheck bitcode.
llvm-svn: 152498
2012-03-10 18:07:46 +00:00
Bill Wendling
5f16e35eed Make this transformation slightly less aggressive and more correct.
The 'CmpInst::isFalseWhenEqual' function returns 'false' for values other than
simply equality. For instance, it returns 'false' for <= or >=. This isn't the
correct behavior for this transformation, which is checking for strict equality
and non-equality. It was causing the gcc.c-torture/execute/frame-address.c test
to fail because it would completely (and incorrectly) optimize a whole function
into a 'ret i32 0'.

llvm-svn: 152497
2012-03-10 17:56:03 +00:00
Bill Wendling
690b72d2b3 Testcase for r151691.
llvm-svn: 151694
2012-02-29 01:53:13 +00:00
Nick Lewycky
a93c874757 Reinstate the optimization from r151449 with a fix to not turn 'gep %x' into
'gep null' when the icmp predicate is unsigned (or is signed without inbounds).

llvm-svn: 151467
2012-02-26 02:09:49 +00:00
Nick Lewycky
849715d31f Roll these back to r151448 until I figure out how they're breaking
MultiSource/Applications/lua.

llvm-svn: 151463
2012-02-25 23:01:19 +00:00
Nick Lewycky
94be1c7d95 Teach instsimplify to be more aggressive when analyzing comparisons of pointers
by using llvm::isIdentifiedObject. Also teach it to handle GEPs that have
the same base pointer and constant operands. Fixes PR11238!

llvm-svn: 151449
2012-02-25 19:07:42 +00:00
Benjamin Kramer
dacc2e8edb InstCombine: Don't transform a signed icmp of two GEPs into a signed compare of the indices.
This transformation is not safe in some pathological cases (signed icmp of pointers should be an
extremely rare thing, but it's valid IR!). Add an explanatory comment.

Kudos to Duncan for pointing out this edge case (and not giving up explaining it until I finally got it).

llvm-svn: 151055
2012-02-21 13:31:09 +00:00
Benjamin Kramer
64719820cf Test case for r150978.
llvm-svn: 150979
2012-02-20 19:00:28 +00:00
Benjamin Kramer
9ade8e4d79 InstCombine: When comparing two GEPs that were derived from the same base pointer but use different types, expand the offset calculation and do the compare on the offset if profitable.
This came up in SmallVector code.

llvm-svn: 150962
2012-02-20 15:07:47 +00:00
Benjamin Kramer
3d87f26b44 InstCombine: Make OptimizePointerDifference more aggressive.
- Ignore pointer casts.
- Also expand GEPs that aren't constantexprs when they have one use or only constant indices.

- We now compile "&foo[i] - &foo[j]" into "i - j".

llvm-svn: 150961
2012-02-20 14:34:57 +00:00
Eli Bendersky
4afdeeb682 Replace all instances of dg.exp file with lit.local.cfg, since all tests are run with LIT now and not DejaGNU. dg.exp is no longer needed.
Patch reviewed by Daniel Dunbar. It will be followed by additional cleanup patches.

llvm-svn: 150664
2012-02-16 06:28:33 +00:00
Devang Patel
7f07d60411 Check against umin while converting fcmp into an icmp.
llvm-svn: 150425
2012-02-13 23:05:18 +00:00
Jim Grosbach
bc7e9b3c96 Revert "Disable InstCombine unsafe folding bitcasts of calls w/ varargs."
This reverts commit d0e277d272d517ca1cda368267d199f0da7cad95.

llvm-svn: 149647
2012-02-03 00:00:50 +00:00
Jim Grosbach
6186319c3f Disable InstCombine unsafe folding bitcasts of calls w/ varargs.
Changing arguments from being passed as fixed to varargs is unsafe, as
the ABI may require they be handled differently (stack vs. register, for
example).

Remove two tests which rely on the bitcast being folded into the direct
call, which is exactly the transformation that's unsafe.

llvm-svn: 149457
2012-02-01 00:08:17 +00:00
Rafael Espindola
7bddde2b49 Add r149110 back with a fix for when the vector and the int have the same
width.

llvm-svn: 149151
2012-01-27 23:33:07 +00:00
Rafael Espindola
7800e62486 Revert r149110 and add a testcase that was crashing since that revision.
Unfortunately I also had to disable constant-pool-sharing.ll, since the code it tests has been
updated to use the IL logic.

llvm-svn: 149148
2012-01-27 22:42:48 +00:00
Chris Lattner
929f66cdfa enhance constant folding to be able to constant fold bitcast of
ConstantVector's to integer type.

llvm-svn: 149110
2012-01-27 01:44:03 +00:00
Duncan Sands
b545beb54e Don't try to create a GEP when the pointee type is unsized (such GEPs
are invalid).  Fixes a crash on array1.C from the GCC testsuite when
compiled with dragonegg.

llvm-svn: 147946
2012-01-11 12:20:08 +00:00
Benjamin Kramer
f9cefbfed0 InstCombine: Teach foldLogOpOfMaskedICmpsHelper that sign bit tests are bit tests.
This subsumes several other transforms while enabling us to catch more cases.
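
The underlying identity (an illustrative example, mine): a sign test is just
a test of the top bit, so it can participate in the masked-icmp folds like
any other bit test:

  icmp slt i32 %x, 0   <=>   (%x & 0x80000000) != 0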

llvm-svn: 147777
2012-01-09 17:23:27 +00:00
Benjamin Kramer
e1321329f4 Tweak my last commit to be less conservative about uses.
We still save an instruction when just the "and" part is replaced.
Also change the code to match comments more closely.

llvm-svn: 147753
2012-01-08 21:12:51 +00:00
Benjamin Kramer
e94856c8c4 InstCombine: If we have a bit test and a sign test anded/ored together, merge the sign bit into the bit test.
This is common in bit field code, e.g. checking if the first or the last bit of a bit field is set.

llvm-svn: 147749
2012-01-08 18:32:24 +00:00
Benjamin Kramer
e5589bccdd FileCheck hygiene.
llvm-svn: 147580
2012-01-05 00:43:34 +00:00
Nick Lewycky
d6260dc3cb Teach instcombine all sorts of great stuff about shifts that have exact, nuw or
nsw bits on them.

llvm-svn: 147528
2012-01-04 09:28:29 +00:00
Nick Lewycky
c7e12f7dbf Make use of the exact bit when optimizing '(X >>exact 3) << 1' to eliminate the
'and' that would zero out the trailing bits, and to produce an exact shift
ourselves.

llvm-svn: 147391
2011-12-31 21:30:22 +00:00
Chandler Carruth
a012c64ced Add an explicit test that we now fold cttz.i32(..., true) >> 5 -> 0.
This is a result of Benjamin's work on ValueTracking.
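
Why the fold is valid (a brief sketch): with the flag set to true the input
is assumed non-zero, so cttz.i32 returns a value in [0, 31], and shifting
that right by 5 always yields 0:

  %t = call i32 @llvm.cttz.i32(i32 %x, i1 true)
  %s = lshr i32 %t, 5      ; always 0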

llvm-svn: 147259
2011-12-24 22:34:15 +00:00
Benjamin Kramer
94f07f8c2c InstCombine: Add a combine that turns (2^n)-1 ^ x back into (2^n)-1 - x iff x is smaller than 2^n and it fuses with a following add.
This was intended to undo the sub canonicalization in cases where it's not profitable, but it also
finds some cases on its own.
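
The identity being exploited (a quick illustration, mine): when x < 2^n,
subtracting x from the all-ones value (2^n)-1 borrows nothing, so it equals
the bitwise complement, e.g. 255 - x == 255 ^ x for any x in [0, 255].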

llvm-svn: 147256
2011-12-24 17:31:53 +00:00
Benjamin Kramer
b5e584392b ComputeMaskedBits: Make knownzero computation more aggressive for ctlz with undef zero.
unsigned foo(unsigned x) { return 31 - __builtin_clz(x); }
now compiles into a single "bsrl" instruction on x86.

llvm-svn: 147255
2011-12-24 17:31:46 +00:00
Benjamin Kramer
0b4d2e3d2a InstCombine: Canonicalize (2^n)-1 - x into (2^n)-1 ^ x iff x is known to be smaller than 2^n.
This has the obvious advantage of being commutable and is always a win on x86 because
const - x wastes a register there. On less weird architectures this may lead to
a regression because other arithmetic doesn't fuse with it anymore. I'll address that
problem in a followup.

llvm-svn: 147254
2011-12-24 17:31:38 +00:00
Pete Cooper
550b96ab46 Added InstCombine for "select cond, ~cond, x" type patterns
These can be reduced to "~cond & x" or "~cond | x"

llvm-svn: 146624
2011-12-15 00:56:45 +00:00
Chandler Carruth
2bedf185c9 Manually upgrade the test suite to specify the flag to cttz and ctlz.
I followed three heuristics for deciding whether to set 'true' or
'false':

- Everything target independent got 'true' as that is the expected
  common output of the GCC builtins.
- If the target arch only has one way of implementing this operation,
  set the flag in the way that exercises the most of codegen. For most
  architectures this is also the likely path from a GCC builtin, with
  'true' being set. It will (eventually) require lowering away that
  difference, and then lowering to the architecture's operation.
- Otherwise, set the flag differently depending on which target
  operation should be tested.

Let me know if anyone has any issue with this pattern or would like
specific tests of another form. This should allow the x86 codegen to
just iteratively improve as I teach the backend how to differentiate
between the two forms, and everything else should remain exactly the
same.

llvm-svn: 146370
2011-12-12 11:59:10 +00:00
Nadav Rotem
1a91e4381d Add support for vectors of pointers.
llvm-svn: 145801
2011-12-05 06:29:09 +00:00
Pete Cooper
c708e83499 Improved fix for abs(val) != 0 to check the other similar case. Also fixed style issues and a confusing comment.
llvm-svn: 145618
2011-12-01 19:13:26 +00:00
Pete Cooper
d4569610df Removed use of grep from test and moved it to be with other icmp tests
llvm-svn: 145570
2011-12-01 04:35:26 +00:00
Pete Cooper
7e03b7250d Added instcombine pattern to spot comparing -val or val against 0.
(val != 0) == (-val != 0) so "abs(val) != 0" becomes "val != 0"

Fixes <rdar://problem/10482509>

llvm-svn: 145563
2011-12-01 03:58:40 +00:00
Chad Rosier
c5fa9f413a Add support for sqrt, sqrtl, and sqrtf in TargetLibraryInfo. Disable
(fptrunc (sqrt (fpext x))) -> (sqrtf x) transformation if -fno-builtin is 
specified.
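
The C-level pattern being matched looks roughly like this (an illustrative
example, not taken from the commit):

  float f(float x) {
    return (float)sqrt((double)x);   /* folded to sqrtf(x) when sqrt is a builtin */
  }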
rdar://10466410

llvm-svn: 145460
2011-11-29 23:57:10 +00:00
Duncan Sands
97cc6da56c Fix a theoretical problem (not seen in the wild): if different instances of a
weak variable are compiled by different compilers, such as GCC and LLVM, while
LLVM may increase the alignment to the preferred alignment there is no reason to
think that GCC will use anything more than the ABI alignment.  Since it is the
GCC version that might end up in the final program (as the linkage is weak), it
is wrong to increase the alignment of loads from the global up to the preferred
alignment as the alignment might only be the ABI alignment.

Increasing alignment up to the ABI alignment might be OK, but I'm not totally
convinced that it is.  It seems better to just leave the alignment of weak
globals alone.

llvm-svn: 145413
2011-11-29 18:26:38 +00:00
Eli Friedman
bc47555417 Add a missing safety check to ProcessUGT_ADDCST_ADD. Fixes PR11438.
llvm-svn: 145316
2011-11-28 23:32:19 +00:00
Eli Friedman
473a76a0df Make SelectionDAG::InferPtrAlignment use llvm::ComputeMaskedBits instead of duplicating the logic for globals. Make llvm::ComputeMaskedBits handle GlobalVariables slightly more aggressively, to match what InferPtrAlignment knew how to do.
llvm-svn: 145304
2011-11-28 22:48:22 +00:00
Chris Lattner
84bf52737a remove autoupgrade support for old forms of llvm.prefetch and the old
trampoline forms.  Both of these were correct in LLVM 3.0, and we don't
need to support LLVM 2.9 and earlier in mainline.

llvm-svn: 145174
2011-11-27 07:42:04 +00:00
Chris Lattner
9d1e8420ff Upgrade syntax of tests using volatile instructions to use 'load volatile' instead of 'volatile load', which is archaic.
llvm-svn: 145171
2011-11-27 06:54:59 +00:00
Bill Wendling
a855903bda Convert to the new EH model.
llvm-svn: 144050
2011-11-08 00:23:01 +00:00
Eli Friedman
676558ae92 Make sure we use the right insertion point when instcombine replaces a PHI with another instruction. (Specifically, don't insert an arbitrary instruction before a PHI.) Fixes PR11275.
llvm-svn: 143437
2011-11-01 04:49:29 +00:00
Eli Friedman
fb0b9216e1 Extend instcombine's shufflevector simplification to handle more cases where the input and output vectors have different sizes. Patch by Xiaoyi Guo.
llvm-svn: 142671
2011-10-21 19:06:29 +00:00
Bill Wendling
2c5486d770 Add support for the Objective-C personality function to the instruction
combining of the landingpad instruction. The ObjC personality function acts
almost identically to the C++ personality function. In particular, it uses
"null" as a "catch-all" value.

llvm-svn: 142256
2011-10-17 21:20:24 +00:00
Chandler Carruth
9c33ff8a8b Add a routine to swap branch instruction operands, and update any
profile metadata at the same time. Use it to preserve metadata attached
to a branch when re-writing it in InstCombine.

Add metadata to the canonicalize_branch InstCombine test, and check that
it is transformed correctly.

Reviewed by Nick Lewycky!

llvm-svn: 142168
2011-10-17 01:11:57 +00:00
Lang Hames
386b01379a Added a testcase for r141599, rdar://problem/10063881.
llvm-svn: 141628
2011-10-11 01:32:10 +00:00
Jim Grosbach
254b9ed208 Revert 141203. InstCombine is looping on unit tests.
llvm-svn: 141209
2011-10-05 20:44:29 +00:00