mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-01 08:23:21 +01:00
Commit Graph

838 Commits

Author SHA1 Message Date
Chris Lattner
b61cf1e296 handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Chris Lattner
c70b0c0ee7 optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 abi.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Chris Lattner
08d2f26030 tidy up test.
llvm-svn: 112321
2010-08-27 23:15:21 +00:00
Chris Lattner
3f880c2097 Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
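
A minimal IR sketch of the pattern this now handles (illustrative function, not from the commit); the zext is what proves the would-be-shifted-in bits are zero:

  define i64 @shiftpair(i16 %t) {
    %x = zext i16 %t to i64        ; upper 48 bits of %x are known zero
    %a = shl i64 %x, 42
    %b = lshr i64 %a, 38
    ret i64 %b
  }

  ; no set bit is ever shifted out and back in, so the pair collapses to:
  ;   %b = shl i64 %x, 4
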
Chris Lattner
80632e5fd9 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
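
The base form of the (x << c) >> c case, as an illustrative IR sketch (the new framework additionally walks through and/or/xor instructions sitting between the two shifts):

  define i32 @lowbits(i32 %x) {
    %s = shl i32 %x, 8
    %t = lshr i32 %s, 8
    ret i32 %t
  }

  ; the shift pair only clears the top 8 bits, so it becomes:
  ;   %t = and i32 %x, 16777215      ; 0x00FFFFFF
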
Chris Lattner
1a15c898b9 merge and filecheckize test
llvm-svn: 112289
2010-08-27 20:44:45 +00:00
Chris Lattner
a571568019 merge two tests
llvm-svn: 112288
2010-08-27 20:42:10 +00:00
Chris Lattner
866b888095 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext that doesn't
have to be exactly equal to the truncation result type.

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
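
An illustrative sketch (names made up): the chain below is fed by zexts from i8, so even though those casts don't produce the trunc's result type, the whole computation can be evaluated directly in i16:

  define i16 @truncchain(i8 %a, i8 %b) {
    %x = zext i8 %a to i32
    %y = zext i8 %b to i32
    %s = add i32 %x, %y
    %t = trunc i32 %s to i16
    ret i16 %t
  }

  ; becomes:
  ;   %x16 = zext i8 %a to i16
  ;   %y16 = zext i8 %b to i16
  ;   %t   = add i16 %x16, %y16
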
Chris Lattner
69a9143584 Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening, now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.

llvm-svn: 112278
2010-08-27 18:31:05 +00:00
Chris Lattner
e9dafffae3 filecheckize
llvm-svn: 112235
2010-08-26 22:23:39 +00:00
Chris Lattner
1efc631212 rename test.
llvm-svn: 112234
2010-08-26 22:20:47 +00:00
Chris Lattner
d5d68438c1 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but needs another xform to
expose this.

llvm-svn: 112232
2010-08-26 22:14:59 +00:00
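
The shape of the pattern being targeted, as an illustrative sketch (little-endian element layout assumed, as on x86-64):

  define i32 @extractmid(<4 x i32> %v) {
    %i = bitcast <4 x i32> %v to i128
    %s = lshr i128 %i, 64
    %t = trunc i128 %s to i32
    ret i32 %t
  }

  ; bits 64..95 hold element 2 on a little-endian target, so this is just:
  ;   %t = extractelement <4 x i32> %v, i32 2
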
Chris Lattner
19a5dc488b optimize bitcast(trunc(bitcast(x))) where the result is a float and 'x'
is a vector to be a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.

llvm-svn: 112227
2010-08-26 21:55:42 +00:00
Chris Lattner
d1a8743984 filecheckize
llvm-svn: 112225
2010-08-26 21:51:41 +00:00
Chris Lattner
3113ee607c rename test
llvm-svn: 112224
2010-08-26 21:50:56 +00:00
Owen Anderson
678fd04aa5 Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Owen Anderson
7c1b4fbd3b Previous revert failed to remove this file.
llvm-svn: 111582
2010-08-19 23:45:15 +00:00
Owen Anderson
0e57acb623 Revert r111568 to unbreak clang self-host.
llvm-svn: 111571
2010-08-19 23:25:16 +00:00
Owen Anderson
7f2852ba2d When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to only overwrite the affected bytes.

llvm-svn: 111568
2010-08-19 22:15:40 +00:00
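
A rough sketch of the idea (illustrative names, little-endian layout assumed): the masking only changes the low byte of the value, so only that byte needs to be stored back.

  define void @setlowbyte(i32* %p) {
    %old = load i32* %p
    %clr = and i32 %old, -256        ; clear the low byte
    %new = or i32 %clr, 5            ; write 5 into the low byte
    store i32 %new, i32* %p
    ret void
  }

  ; can be narrowed on a little-endian target to:
  ;   %b = bitcast i32* %p to i8*
  ;   store i8 5, i8* %b
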
Eric Christopher
08e9f0250a Temporarily revert r110987 as it's causing some miscompares in
vector heavy code.  I'll re-enable when we've tracked down the problem.

llvm-svn: 111318
2010-08-17 22:55:27 +00:00
Nate Begeman
e57074fc48 Reapply this transformation now that it is passing the external test which it previously failed.
llvm-svn: 110987
2010-08-13 00:17:53 +00:00
Eric Christopher
34acdf57df Temporarily revert 110737 and 110734, they were causing failures
in an external testsuite.

llvm-svn: 110905
2010-08-12 07:01:22 +00:00
Nate Begeman
36e284c2be Add test for recent instcombine vector shuffle enhancement
llvm-svn: 110737
2010-08-10 21:58:00 +00:00
Eli Friedman
7197d66ff1 PR7853: fix a silly mistake introduced in r101899, and add a test to make sure
it doesn't regress again.

llvm-svn: 110597
2010-08-09 20:49:43 +00:00
Dan Gohman
a80f89dbc7 Make instcombine set explicit alignments on load or store
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.

llvm-svn: 110128
2010-08-03 18:20:32 +00:00
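
For example (a sketch, assuming the TargetData gives i32 a 4-byte ABI alignment): a load written with no alignment gets the ABI alignment spelled out explicitly:

  %v = load i32* %p              ; alignment 0, i.e. "whatever the ABI says"
  ; becomes
  %v = load i32* %p, align 4     ; explicit, so later passes need not ask TargetData
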
Owen Anderson
e957c57ebb Re-apply the infamous r108614, with a fix pointed out by Dirk Steinke.
llvm-svn: 110036
2010-08-02 09:32:13 +00:00
Daniel Dunbar
f2be238c99 Speculatively revert r108614, "Another attempt at getting the clang self-host to
like my instcombine patch.", in an attempt to fix Clang i386 bootstrap.
 - Also PR7719.

llvm-svn: 109953
2010-07-31 19:51:11 +00:00
Owen Anderson
f66e1873ea Testcase for r108687.
llvm-svn: 108689
2010-07-19 08:14:26 +00:00
Owen Anderson
c8dc055b5e Another attempt at getting the clang self-host to like my instcombine patch.
llvm-svn: 108614
2010-07-17 06:56:35 +00:00
Eric Christopher
5eef314caf Also revert 108422, it's causing some test failures.
Working on testcases for Owen.

llvm-svn: 108494
2010-07-16 01:36:12 +00:00
Owen Anderson
01a2992a91 Reapply r108378, with bugfixes, testcase, and improved comment formatting.
This now passes LIT, the nightly test, and llvm-gcc bootstrap on my machine.

llvm-svn: 108422
2010-07-15 15:00:23 +00:00
Chris Lattner
38e6ecd9f1 revert r108320, I see the failures now...
llvm-svn: 108322
2010-07-14 06:16:35 +00:00
Chris Lattner
5822d6d579 reapply benjamin's instcombine patch, I don't see anything wrong with it and can't repro any problems with a manual self-host.
llvm-svn: 108320
2010-07-14 05:59:13 +00:00
Benjamin Kramer
cf8ad46899 Nope, still breaks the release selfhost bots :(
llvm-svn: 108153
2010-07-12 16:38:48 +00:00
Benjamin Kramer
e391789246 Reapply the "or" half of r108136, which seems to be less problematic.
llvm-svn: 108152
2010-07-12 16:15:48 +00:00
Benjamin Kramer
98c95e7743 Revert r108141 again, sigh.
llvm-svn: 108148
2010-07-12 14:42:04 +00:00
Benjamin Kramer
c4f46375d3 Reapply 108136 with an ugly pasto fixed.
llvm-svn: 108141
2010-07-12 13:44:00 +00:00
Benjamin Kramer
d9bf737e62 Revert r108136 until I figure out why it broke selfhost.
llvm-svn: 108139
2010-07-12 12:35:49 +00:00
Benjamin Kramer
f00a49ceff instcombine: fold (x & y) | (~x & z) and (x & y) ^ (~x & z) into ((y ^ z) & x) ^ z which is one instruction shorter. (PR6773)
before:
  %and = and i32 %y, %x
  %neg = xor i32 %x, -1
  %and4 = and i32 %z, %neg
  %xor = xor i32 %and4, %and

after:
  %xor1 = xor i32 %z, %y
  %and2 = and i32 %xor1, %x
  %xor = xor i32 %and2, %z

llvm-svn: 108136
2010-07-12 11:54:45 +00:00
Chris Lattner
59bffe35a1 fix PR7311 by avoiding breaking casts when a bitcast from scalar->vector
is involved.

llvm-svn: 108117
2010-07-12 01:19:22 +00:00
Chris Lattner
d8288040c3 fix PR7429, a crash turning a load from a string into a float.
llvm-svn: 108113
2010-07-12 00:22:51 +00:00
Chris Lattner
68f5ec0fa2 convert to filecheck
llvm-svn: 108112
2010-07-12 00:21:10 +00:00
Chris Lattner
64eeea9044 merge two tests.
llvm-svn: 108111
2010-07-12 00:19:47 +00:00
Benjamin Kramer
27eb255a70 Teach instcombine to transform
(X >s -1) ? C1 : C2 and (X <s  0) ? C2 : C1
into ((X >>s 31) & (C2 - C1)) + C1, avoiding the conditional.

This optimization could be extended to take non-const C1 and C2 but we better
stay conservative to avoid code size bloat for now.

for
int sel(int n) {
     return n >= 0 ? 60 : 100;
}

we now generate
  sarl  $31, %edi
  andl  $40, %edi
  leal  60(%rdi), %eax

instead of
  testl %edi, %edi
  movl  $60, %ecx
  movl  $100, %eax
  cmovnsl %ecx, %eax

llvm-svn: 107866
2010-07-08 11:39:10 +00:00
Dan Gohman
50fffcaea3 Constant fold x == undef to undef.
llvm-svn: 107074
2010-06-28 21:30:07 +00:00
Rafael Espindola
d7a63bead9 Remove arm_apcscc from the test files. It is the default and doing this
matches what llvm-gcc and clang now produce.

llvm-svn: 106221
2010-06-17 15:18:27 +00:00
Dan Gohman
22d22caaed Teach instcombine to promote alloca array sizes.
llvm-svn: 104945
2010-05-28 15:09:00 +00:00
Dan Gohman
bab79afa29 Add a testcase for getelementptr index promotion.
llvm-svn: 104944
2010-05-28 15:07:59 +00:00
Duncan Sands
32d3986765 Teach instCombine to remove malloc+free if malloc's only uses are comparisons
to null.  Patch by Matti Niemenmaa.

llvm-svn: 104871
2010-05-27 19:09:06 +00:00
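
An illustrative sketch of the pattern (made-up function; the sketch assumes the dead allocation is treated as if it had succeeded, so the null test folds to false):

  declare noalias i8* @malloc(i32)
  declare void @free(i8*)

  define i1 @onlynullcheck() {
    %m = call noalias i8* @malloc(i32 16)
    %isnull = icmp eq i8* %m, null   ; the only use of %m besides the free
    call void @free(i8* %m)
    ret i1 %isnull
  }

  ; the malloc/free pair goes away and, under the assumption above,
  ; %isnull becomes false.
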
Chris Lattner
0b442d35da Teach instcombine to transform a bitcast/(zext|trunc)/bitcast sequence
with a vector input and output into a shuffle vector.  This sort of 
sequence happens when the input code stores with one type and reloads
with another type and then SROA promotes to i96 integers, which make
everyone sad.

This fixes rdar://7896024

llvm-svn: 103354
2010-05-08 21:50:26 +00:00
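
An illustrative sketch of the sequence (little-endian layout assumed):

  define <2 x float> @lowhalf(<4 x float> %v) {
    %i = bitcast <4 x float> %v to i128
    %t = trunc i128 %i to i64
    %r = bitcast i64 %t to <2 x float>
    ret <2 x float> %r
  }

  ; the trunc keeps the low 64 bits, i.e. elements 0 and 1, so this becomes:
  ;   %r = shufflevector <4 x float> %v, <4 x float> undef,
  ;                      <2 x i32> <i32 0, i32 1>
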
Nick Lewycky
c639c07492 Fix declarations in a few more tests.
llvm-svn: 101676
2010-04-17 21:29:25 +00:00
Eric Christopher
6b38179ee2 Verify function prototypes before trying to optimize functions. We also
need TargetData, just return false if we don't have it.

Update testcases accordingly.

Fixes PR6807.

llvm-svn: 101011
2010-04-12 04:48:00 +00:00
Dan Gohman
5bf62639ed Print empty structs as {} rather than { }.
llvm-svn: 100787
2010-04-08 18:03:05 +00:00
Chris Lattner
23334439e9 add newlines at the end of files.
llvm-svn: 100705
2010-04-07 22:53:17 +00:00
Mon P Wang
484bbe6aa9 Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added a isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100304
2010-04-04 03:10:48 +00:00
Mon P Wang
0ccf050ca3 Revert r100191 since it breaks objc in clang
llvm-svn: 100199
2010-04-02 18:43:02 +00:00
Mon P Wang
a01350755e Reapply address space patch after fixing an issue in MemCopyOptimizer.
Added support for address spaces and added a isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)

llvm-svn: 100191
2010-04-02 18:04:15 +00:00
Bob Wilson
aae933cc81 Revert Mon Ping's change 99928, since it broke all the llvm-gcc buildbots.
llvm-svn: 99948
2010-03-30 22:27:04 +00:00
Mon P Wang
9351ea594a Added support for address spaces and added a isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
A update of langref will occur in a subsequent checkin.

llvm-svn: 99928
2010-03-30 20:55:56 +00:00
Evan Cheng
a544f02286 Fix incorrect logic causing instcombine to miss some _chk -> non-chk transformations.
llvm-svn: 99263
2010-03-23 06:06:09 +00:00
Evan Cheng
34c5c9af6f Fix a typo in ValueTracking that's causing instcombine to delete needed shift instructions.
llvm-svn: 98416
2010-03-13 02:20:29 +00:00
Duncan Sands
01532e804a When constant folding GEP of GEP, do not crash if an index of
the inner GEP is not a ConstantInt.

llvm-svn: 98359
2010-03-12 17:55:20 +00:00
Evan Cheng
db47eab2a3 Re-commit 97860 with fix. getMallocAllocatedType may return null.
llvm-svn: 98000
2010-03-08 22:54:36 +00:00
Eric Christopher
a671e4f7aa Migrate _chk call lowering from SimplifyLibCalls to InstCombine. Stub
out the remainder of the calls that we should lower in some way and
move the tests to the new correct directory. Fix up tests that are now
optimized more than they were before by -instcombine.

llvm-svn: 97875
2010-03-06 10:50:38 +00:00
Eric Christopher
b3f0926f84 Temporarily revert:
Log:
Transform @llvm.objectsize to integer if the argument is a result of malloc of known size.

Modified:
   llvm/trunk/lib/Transforms/InstCombine/InstCombineCalls.cpp
   llvm/trunk/test/Transforms/InstCombine/objsize.ll

It appears to be causing swb and nightly test failures.

llvm-svn: 97866
2010-03-06 03:11:35 +00:00
Evan Cheng
071e007ce6 Transform @llvm.objectsize to integer if the argument is a result of malloc of known size.
llvm-svn: 97860
2010-03-06 01:01:42 +00:00
Evan Cheng
782183fe4a Instcombine should turn llvm.objectsize of an alloca with static size into an integer.
llvm-svn: 97827
2010-03-05 20:47:23 +00:00
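
For example (a sketch using the intrinsic as it was spelled at the time; function name made up):

  declare i32 @llvm.objectsize.i32(i8*, i1)

  define i32 @bufsize() {
    %buf = alloca [32 x i8]
    %p = getelementptr inbounds [32 x i8]* %buf, i32 0, i32 0
    %n = call i32 @llvm.objectsize.i32(i8* %p, i1 false)
    ret i32 %n
  }

  ; the alloca's size is statically known, so the call folds away:
  ;   ret i32 32
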
Chris Lattner
789121d6e2 fix PR6512, a case where instcombine would incorrectly merge loads
from different addr spaces.

llvm-svn: 97813
2010-03-05 18:53:28 +00:00
Chris Lattner
617f774b4e Fix PR6503. This turned into a much more interesting and nasty bug. Various
parts of the cmp|cmp and cmp&cmp folding logic weren't prepared for vectors
(unrelated to the bug but noticed while in the code) and the code was 
*definitely* not safe to use by the (cast icmp)|(cast icmp) handling logic
that I added in r95855.  Fix all this up by changing the various routines
to more consistently use IRBuilder and not pass in the I which had the wrong 
type.

llvm-svn: 97801
2010-03-05 08:46:26 +00:00
Chris Lattner
f44ffcbc2c make these less sensitive to temporary naming.
llvm-svn: 97799
2010-03-05 08:43:33 +00:00
Chris Lattner
3d5ab3df06 remove this testcase, it isn't clear what it was testing and it is subsumed by or.ll
llvm-svn: 97798
2010-03-05 08:43:06 +00:00
Nick Lewycky
58ab63e179 Make the 'icmp pred trunc(ext(X)), CST --> icmp pred X, ext(trunc(CST))'
transformation much more careful. Truncating binary '01' to '1' sounds like it's
safe until you realize that it switched from positive to negative under a signed
interpretation, and that depends on the icmp predicate.

Also a few miscellaneous cleanups.

llvm-svn: 97721
2010-03-04 06:54:10 +00:00
Chris Lattner
9e230fb6b2 fix incorrect folding of icmp with undef, PR6481.
llvm-svn: 97659
2010-03-03 19:46:03 +00:00
Bill Wendling
d1f658563d This test case:
long test(long x) { return (x & 123124) | 3; }

Currently compiles to:

_test:
        orl     $3, %edi
        movq    %rdi, %rax
        andq    $123127, %rax
        ret

This is because instruction and DAG combiners canonicalize

  (or (and x, C), D) -> (and (or, D), (C | D))

However, this is only profitable if (C & D) != 0. It gets in the way of the
3-addressification because the input bits are known to be zero.

llvm-svn: 97616
2010-03-03 00:35:56 +00:00
Dan Gohman
37bf232609 Floating-point add, sub, and mul are now spelled fadd, fsub, and fmul,
respectively.

llvm-svn: 97531
2010-03-02 01:11:08 +00:00
Dan Gohman
5e58ab0b56 LLVM instruction syntax doesn't have trailing semicolons.
llvm-svn: 97456
2010-03-01 17:53:15 +00:00
John McCall
69bc985550 Teach APFloat how to create both QNaNs and SNaNs and with arbitrary-width
payloads.  APFloat's internal folding routines always make QNaNs now,
instead of sometimes making QNaNs and sometimes SNaNs depending on the
type.

llvm-svn: 97364
2010-02-28 02:51:25 +00:00
Dan Gohman
b3bf4992c5 Don't do (X != Y) ? X : Y -> X for floating-point values; it doesn't
handle NaN properly.

Do (X une Y) ? X : Y  -> X if one of X and Y is not zero.

llvm-svn: 96955
2010-02-23 17:17:57 +00:00
Dan Gohman
5a6b9ad7db Remove the code which constant-folded ptrtoint(inttoptr(x)+c) to
getelementptr. Despite only doing so in the case where x is a known
array object and c can be converted to an index within range, this
could still be invalid if c is actually the address of an object
allocated outside of LLVM. Also, SCEVExpander, the original motivation
for this code, has since been improved to avoid inttoptr+ptroint in
more cases.

llvm-svn: 96950
2010-02-23 16:35:41 +00:00
Dan Gohman
0891078790 Convert this test to FileCheck and add a testcase for PR3574.
llvm-svn: 96851
2010-02-23 01:28:09 +00:00
Evan Cheng
d9816ef946 Instcombine constant folding can normalize a gep with a negative index into an index with a large offset. When the instcombine objsize checking transformation sees these geps, where the offset seemingly points out of bounds, it should just return "I don't know" rather than asserting.
llvm-svn: 96825
2010-02-22 23:34:00 +00:00
Dan Gohman
fa70dbe3f7 Add a test for canonicalizing ConstantExpr operands.
llvm-svn: 96820
2010-02-22 23:07:52 +00:00
Dan Gohman
dcc3634e46 Constant-fold certain comparisons with infinity and negative infinity.
llvm-svn: 96777
2010-02-22 04:06:03 +00:00
Dan Gohman
2a4208c74e Fold bswap(undef) to undef.
llvm-svn: 96432
2010-02-17 00:54:58 +00:00
Eric Christopher
96f3c4222f Fix a problem where we had bitcasted operands that gave us
odd offsets since the bitcasted pointer size and the offset pointer
size are going to be different types for the GEP vs base object.

llvm-svn: 96134
2010-02-13 23:38:01 +00:00
Eric Christopher
2e0201ee18 Make sure that ConstantExpr offsets also aren't off of extern
symbols.

Thanks to Duncan Sands for the testcase!

llvm-svn: 95877
2010-02-11 17:44:04 +00:00
Chris Lattner
a59eb7c09c Rename ValueRequiresCast to ShouldOptimizeCast, to better reflect
what it does.  Enhance it to return false to optimizing vector
sign extensions from vector comparisions, which is the idiom used
to get a splatted vector for a vector comparison.

Doing this breaks vector-casts.ll, add some compensating 
transformations to handle the important case they cover without
depending on this canonicalization.

This fixes rdar://7434900 a serious pessimization of vector compares.

llvm-svn: 95855
2010-02-11 06:26:33 +00:00
Chris Lattner
ef91e752a6 convert to filecheck.
llvm-svn: 95854
2010-02-11 06:24:37 +00:00
Eric Christopher
9516309f55 Add ConstantExpr handling to Intrinsic::objectsize lowering.
Update testcase accordingly now that we can optimize another
section.

llvm-svn: 95846
2010-02-11 01:48:54 +00:00
Eric Christopher
6691a59247 Move Intrinsic::objectsize lowering back to InstCombineCalls and
enable constant 0 offset lowering.

llvm-svn: 95691
2010-02-09 21:24:27 +00:00
Eric Christopher
871cf7bce2 Pull these back out, they're a little too aggressive and time
consuming for a simple optimization.

llvm-svn: 95671
2010-02-09 17:29:18 +00:00
Chris Lattner
26b712379f fix PR6193, only considering sign extensions *from i1* for this
xform.

llvm-svn: 95642
2010-02-09 01:12:41 +00:00
Eric Christopher
428b385575 Add a new pass to do llvm.objsize lowering using SCEV.
Initial skeleton and SCEVUnknown lowering implemented,
the rest should come relatively quickly.  Move testcase
to new directory.

Move pass to right before SimplifyLibCalls - which is
moved down a bit so we can take advantage of a few opts.

llvm-svn: 95628
2010-02-09 00:35:38 +00:00
Chris Lattner
44965f1107 fix logical-select to invoke filecheck right, and fix the instcombine
xform it is checking to actually pass.  There is no need to match
m_SelectCst<0, -1> since instcombine canonicalizes that into not(sext).

Add matches for sext(not(x)) in addition to not(sext(x)).

llvm-svn: 95420
2010-02-05 19:53:02 +00:00
Eric Christopher
f89979ce6a Remove this code for now. I have a better idea and will rewrite with
that in mind.

llvm-svn: 95402
2010-02-05 19:04:06 +00:00
Eric Christopher
ee4a176739 Temporarily revert this since it appears to have caused a build
failure.

llvm-svn: 95294
2010-02-04 06:41:27 +00:00
Eric Christopher
9b3e42f09e Rework constant expr and array handling for objectsize instcombining.
Fix bugs where we would compute out of bounds as in bounds, and where
we couldn't know that the linker could override the size of an array.

Add a few new testcases, change existing testcase to use a private
global array instead of extern.

llvm-svn: 95283
2010-02-04 02:55:34 +00:00
Eric Christopher
fe6ab1518e If we're dealing with a zero-length array, don't lower to any
particular size, we just don't know what the length is yet.

llvm-svn: 95266
2010-02-03 23:56:07 +00:00
Eric Christopher
ac28e14b77 Recommit this, looks like it wasn't the cause.
llvm-svn: 95165
2010-02-03 00:21:58 +00:00
Eric Christopher
f070aae6f7 Hopefully temporarily revert this.
llvm-svn: 95154
2010-02-02 23:01:31 +00:00
Eric Christopher
575fe8690d Re-add strcmp and known size object size checking optimization.
Passed bootstrap and nightly test run here.

llvm-svn: 95145
2010-02-02 22:10:43 +00:00
Chris Lattner
9f50341a96 don't turn (A & (C0?-1:0)) | (B & ~(C0?-1:0)) -> C0 ? A : B
for vectors.  Codegen is generating awful code or segfaulting
in various cases (e.g. PR6204).

llvm-svn: 95058
2010-02-02 02:43:51 +00:00
Chris Lattner
18e6b4eb6b fix PR6195, a bug constant folding scalar -> vector compares.
llvm-svn: 94997
2010-02-01 20:04:40 +00:00
Dan Gohman
0b2c2769ba Generalize target-independent folding rules for sizeof to handle more
cases, and implement target-independent folding rules for alignof and
offsetof. Also, reassociate reassociative operators when it leads to
more folding.

Generalize ScalarEvolution's isOffsetOf to recognize offsetof on
arrays. Rename getAllocSizeExpr to getSizeOfExpr, and getFieldOffsetExpr
to getOffsetOfExpr, for consistency with analogous ConstantExpr routines.

Make the target-dependent folder promote GEP array indices to
pointer-sized integers, to make implicit casting explicit and exposed
to subsequent folding.

And add a bunch of testcases for this new functionality, and a bunch
of related existing functionality.

llvm-svn: 94987
2010-02-01 18:27:38 +00:00
Chris Lattner
5f10919836 fix rdar://7590304, a miscompilation of objc apps on arm. The caller
of objc message send was getting marked arm_apcscc, but the prototype
isn't.  This is fine at runtime because objcmsgsend is implemented in
assembly.  Only turn a mismatched caller and callee into 'unreachable'
if the callee is a definition.

llvm-svn: 94986
2010-02-01 18:11:34 +00:00
Chris Lattner
a336497d3f fix rdar://7590304, an infinite loop in instcombine. In the invoke
case, instcombine can't zap the invoke for fear of changing the CFG.
However, we have to do something to prevent the next iteration of
instcombine from inserting another store -> undef before the invoke
thereby getting into infinite iteration between dead store elim and
store insertion.

Just zap the callee to null, which will prevent the next iteration
from doing anything.

llvm-svn: 94985
2010-02-01 18:04:58 +00:00
Eli Friedman
0babc63336 Remove test which is no longer relevant.
llvm-svn: 94944
2010-01-31 04:40:45 +00:00
Eli Friedman
19c5c57885 Simplify/generalize the xor+add->sign-extend instcombine.
llvm-svn: 94943
2010-01-31 04:29:12 +00:00
Eli Friedman
58c7936637 Add a small transform: transform -(X<<Y) to (-X<<Y) when the shift has a single
use and X is free to negate.

llvm-svn: 94941
2010-01-31 02:30:23 +00:00
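
An illustrative sketch: here the shifted value is a constant, so it is free to negate and the sub disappears:

  define i32 @negshift(i32 %y) {
    %shl = shl i32 7, %y
    %neg = sub i32 0, %shl
    ret i32 %neg
  }

  ; negation distributes over the left shift, so this becomes:
  ;   %neg = shl i32 -7, %y
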
Bob Wilson
ccd1585ba8 Remove ARM-specific calling convention from this test. Target data is
needed for this test, but otherwise, there's nothing ARM-specific about
it and no need to specify the calling convention.

llvm-svn: 94862
2010-01-30 00:40:23 +00:00
Eric Christopher
47d90f7adb Revert my last couple of patches. They appear to have broken bison.
llvm-svn: 94841
2010-01-29 21:16:24 +00:00
Bob Wilson
f897b7b37e Improve isSafeToLoadUnconditionally to recognize that GEPs with constant
indices are safe if the result is known to be within the bounds of the
underlying object.

llvm-svn: 94829
2010-01-29 19:19:08 +00:00
Eric Christopher
7d74af1824 Add constant support to object size handling and remove default
lowering. We'll either figure it out, or not and be lowered by
SelectionDAGBuild.

Add test.

llvm-svn: 94775
2010-01-29 01:09:57 +00:00
Duncan Sands
a3395c61b5 Fix PR6165. The bug was that LHSKnownZero was being and'd with DemandedMask
when it should have been and'd with LowBits.  Fix that and while there beef
up the logic in the case of a negative LHS.

llvm-svn: 94745
2010-01-28 17:22:42 +00:00
Chris Lattner
91fafbd4e8 change the canonical form of "cond ? -1 : 0" to be
"sext cond" instead of a select.  This simplifies some instcombine
code, matches the policy for zext (cond ? 1 : 0 -> zext), and allows
us to generate better code for a testcase on ppc.

llvm-svn: 94339
2010-01-24 00:09:49 +00:00
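
In IR terms (a sketch of the two canonical forms):

  %r = select i1 %cond, i32 -1, i32 0
  ; is now canonicalized to
  %r = sext i1 %cond to i32

  ; mirroring the existing zext rule:
  ;   select i1 %cond, i32 1, i32 0  -->  zext i1 %cond to i32
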
Chris Lattner
8909d5aca5 implement a simple instcombine xform that has been in the
readme forever.

llvm-svn: 94318
2010-01-23 18:49:30 +00:00
Mon P Wang
b7fce13b78 InstCombine should not fold sext/zext of a vector and a bitcast to a scalar to a sext/zext
llvm-svn: 94280
2010-01-23 04:35:57 +00:00
Chris Lattner
e0124f19f9 optimize ~(~X >>s Y) --> (X >>s Y), patch by Edmund Grimley
Evans!

llvm-svn: 93884
2010-01-19 18:16:19 +00:00
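
As an IR sketch (function name made up):

  define i32 @notashrnot(i32 %x, i32 %y) {
    %notx = xor i32 %x, -1
    %shr  = ashr i32 %notx, %y
    %r    = xor i32 %shr, -1
    ret i32 %r
  }

  ; ashr commutes with bitwise not (the sign bits it shifts in are
  ; complemented too), so the two xors cancel:
  ;   %r = ashr i32 %x, %y
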
Chris Lattner
6cd7a81f86 my instcombine transformations to make extension elimination more
aggressive changed the canonical form from sext(trunc(x)) to ashr(lshr(x)),
make sure to transform a couple more things into that canonical form,
and catch a case where we missed turning zext/shl/ashr into a single sext.

llvm-svn: 93787
2010-01-18 22:19:16 +00:00
Chris Lattner
6020407d78 filecheckize this.
llvm-svn: 93776
2010-01-18 22:00:46 +00:00
Chris Lattner
882bcda7ca remove a redundant test, filecheckize another.
llvm-svn: 93774
2010-01-18 21:55:43 +00:00
Bill Wendling
615508be92 Reduce fsub-fadd.ll and merge it into fsub-fsub.ll. Rename fsub-fsub.ll to
fsub.ll and FileCheckify it.

llvm-svn: 93669
2010-01-17 00:21:21 +00:00
Bill Wendling
488a7187b4 When the visitSub method was split into visitSub and visitFSub, this xform was
added to the FSub version. However, the original version of this xform guarded
against doing this for floating point (!Op0->getType()->isFPOrFPVector()).

This is causing LLVM to perform incorrect xforms for code like:

void func(double *rhi, double *rlo, double xh, double xl, double yh, double yl){
  double mh, ml;
  double c = 134217729.0;
  double up, u1, u2, vp, v1, v2;
        
  up = xh*c;
  u1 = (xh - up) + up;
  u2 = xh - u1;
        
  vp = yh*c;
  v1 = (yh - vp) + vp;
  v2 = yh - v1;
        
  mh = xh*yh;
  ml = (((u1*v1 - mh) + (u1*v2)) + (u2*v1)) + (u2*v2);
  ml += xh*yl + xl*yh;
        
  *rhi = mh + ml;
  *rlo = (mh - (*rhi)) + ml;
}

The last line was optimized away, but *rlo is intended to be the difference
between the infinitely precise result of mh + ml and that result after it has been rounded
to double precision.

llvm-svn: 93369
2010-01-13 23:23:17 +00:00
Chris Lattner
1549da6af4 disable this testcase, PR5997
llvm-svn: 93206
2010-01-11 23:18:33 +00:00
Chris Lattner
85a6f02b94 add one more bitfield optimization, allowing clang to generate
good code on PR4216:

_test_bitfield:                                             ## @test_bitfield
	orl	$32962, %edi
	movl	$4294941946, %eax
	andq	%rdi, %rax
	ret

instead of:

_test_bitfield:
        movl    $4294941696, %ecx
        movl    %edi, %eax
        orl     $194, %edi
        orl     $32768, %eax
        andq    $250, %rdi
        andq    %rax, %rcx
        movq    %rdi, %rax
        orq     %rcx, %rax
        ret

Evan is looking into the remaining andq+imm -> andl optimization.

llvm-svn: 93147
2010-01-11 06:55:24 +00:00
Chris Lattner
16e36659f5 Extend CanEvaluateZExtd to handle and/or/xor more aggressively in the
BitsToClear case.  This allows it to promote expressions which have an
and/or/xor after the lshr, promoting cases like test2 (from PR4216) 
and test3 (a random example extracted from a SPEC benchmark).

clang now compiles the code in PR4216 into:

_test_bitfield:                                             ## @test_bitfield
	movl	%edi, %eax
	orl	$194, %eax
	movl	$4294902010, %ecx
	andq	%rax, %rcx
	orl	$32768, %edi
	andq	$39936, %rdi
	movq	%rdi, %rax
	orq	%rcx, %rax
	ret

instead of:

_test_bitfield:                                             ## @test_bitfield
	movl	%edi, %eax
	orl	$194, %eax
	movl	$4294902010, %ecx
	andq	%rax, %rcx
	shrl	$8, %edi
	orl	$128, %edi
	shlq	$8, %rdi
	andq	$39936, %rdi
	movq	%rdi, %rax
	orq	%rcx, %rax
	ret

which is still not great, but is progress.

llvm-svn: 93145
2010-01-11 04:05:13 +00:00
Chris Lattner
f2ba85eedc Remove the dead TD argument to CanEvaluateZExtd, and add a
new BitsToClear result which allows us to start promoting
expressions that end with a lshr-by-constant.  This is
conservatively correct and better than what we had before
(see testcases) but still needs to be extended further.

llvm-svn: 93144
2010-01-11 03:32:00 +00:00
Chris Lattner
18d753e05f teach sext optimization to handle truncs from types that are not
the dest of the sext.

llvm-svn: 93128
2010-01-10 20:30:41 +00:00
Chris Lattner
ca53de1ab7 teach zext optimization how to deal with truncs that don't come from
the zext dest type.  This allows us to handle test52/53 in cast.ll,
and allows llvm-gcc to generate much better code for PR4216 in -m64
mode:

_test_bitfield:                                             ## @test_bitfield
	orl	$32962, %edi
	movl	%edi, %eax
	andl	$-25350, %eax
	ret

This also fixes a bug handling vector extends, ensuring that the
mask produced is a vector constant, not an integer constant.

llvm-svn: 93127
2010-01-10 20:25:54 +00:00
Chris Lattner
e68d6e61b1 now that the cost model has changed, we can always consider
elimination of a sign extend to be a win, which simplifies 
the client of CanEvaluateSExtd, and allows us to eliminate
more casts (examples taken from real code).

llvm-svn: 93109
2010-01-10 07:40:50 +00:00
Chris Lattner
e86619ca01 change the preferred canonical form for a sign extension to be
lshr+ashr instead of trunc+sext.  We want to avoid type 
conversions whenever possible; it is easier to codegen expressions
without truncates and extensions.

llvm-svn: 93107
2010-01-10 07:08:30 +00:00
Chris Lattner
1106f03886 two changes:
1) don't try to optimize a sext or zext that is only used by a trunc, let
   the trunc get optimized first.  This avoids some pointless effort in
   some common cases since instcombine scans down a block in the first pass.
2) Change the cost model for zext elimination to consider an 'and' cheaper
   than a zext.  This allows us to do it more aggressively, and for the next
   patch to simplify the code quite a bit.

llvm-svn: 93097
2010-01-10 02:39:31 +00:00
Chris Lattner
a04ed0659f enhance CanEvaluateZExtd to handle shift left and sext, allowing
more expressions to be promoted and casts eliminated.

llvm-svn: 93096
2010-01-10 02:22:12 +00:00
Chris Lattner
05ae88cc8f teach instcombine to delete sign extending shift pairs (sra(shl X, C), C) when
the input is already sign extended.

llvm-svn: 93019
2010-01-08 19:04:21 +00:00
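
An illustrative sketch: the shift pair below re-does a sign extension from 8 bits that the sext already performed, so it can simply be dropped:

  define i32 @redundant(i8 %b) {
    %x   = sext i8 %b to i32
    %shl = shl i32 %x, 24
    %sra = ashr i32 %shl, 24
    ret i32 %sra
  }

  ; %x already has at least 24 sign bits, so the result is just %x.
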
Chris Lattner
944f9c4ac1 teach ComputeNumSignBits to look through PHI nodes.
llvm-svn: 92964
2010-01-07 23:44:37 +00:00
Chris Lattner
b8ec2bccaf filecheckize
llvm-svn: 92963
2010-01-07 23:42:23 +00:00
Chris Lattner
db8fa82914 Enhance instcombine to reason more strongly about promoting computation
that feeds into a zext, similar to the patch I did yesterday for sext.
There is a lot of room for extension beyond this patch.

llvm-svn: 92962
2010-01-07 23:41:00 +00:00
Chris Lattner
0b73344d8a Teach instcombine's sext elimination logic to be more aggressive.
Previously, instcombine would only promote an expression tree to
the larger type if doing so eliminated two casts.  This is because
of the need to manually do the sign extend after the promoted expression
tree with two shifts.  Now, we keep track of whether the result of
the computation is going to be properly sign extended already.  If
so, we can unconditionally promote the expression, which allows us
to zap more sext's.

This implements rdar://6598839 (aka gcc pr38751)

llvm-svn: 92815
2010-01-06 01:56:21 +00:00
Chris Lattner
53b9ed70ee more rearrangement and cleanup, fix my test failure.
llvm-svn: 92792
2010-01-05 22:21:18 +00:00
Chris Lattner
2f69f6a822 remove two trunc xforms that are subsumed by EvaluateInDifferentType.
The only difference is that EvaluateInDifferentType checks to ensure
they are profitable before doing them :)

llvm-svn: 92788
2010-01-05 22:01:41 +00:00
Chris Lattner
96e30cb44f merge some tests.
llvm-svn: 92786
2010-01-05 21:54:09 +00:00
Chris Lattner
fd23a9b6dd merge cast2 into cast.ll
llvm-svn: 92784
2010-01-05 21:48:13 +00:00
Chris Lattner
24e500eb45 remove useless test.
llvm-svn: 92782
2010-01-05 21:46:22 +00:00
Chris Lattner
61de3ae41a another example.
llvm-svn: 92781
2010-01-05 21:43:08 +00:00
Chris Lattner
65d5ec781a remove a useless negative test, add a rdar # to an xfail that I'm working on.
llvm-svn: 92777
2010-01-05 21:37:44 +00:00
Chris Lattner
3e7dbaf22d clean up tests.
llvm-svn: 92776
2010-01-05 21:32:59 +00:00
Chris Lattner
13293b9738 just remove this xform which is subsumed by others.
llvm-svn: 92775
2010-01-05 21:16:30 +00:00
Chris Lattner
f457542506 optimize comparisons against cttz/ctlz/ctpop, patch by Alastair Lynn!
llvm-svn: 92745
2010-01-05 18:09:56 +00:00
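
One example of the kind of fold this covers (an illustrative sketch, not the commit's full list):

  declare i32 @llvm.ctpop.i32(i32)

  define i1 @iszero(i32 %x) {
    %pop = call i32 @llvm.ctpop.i32(i32 %x)
    %cmp = icmp eq i32 %pop, 0
    ret i1 %cmp
  }

  ; a population count of zero means no bits are set, so this is simply:
  ;   %cmp = icmp eq i32 %x, 0
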
Dan Gohman
5fa04f2707 Delete useless trailing semicolons.
llvm-svn: 92740
2010-01-05 17:55:26 +00:00
Chris Lattner
491e03b6ef optimize cttz and ctlz when we can prove something about the
leading/trailing bits.  Patch by Alastair Lynn!

llvm-svn: 92706
2010-01-05 07:23:56 +00:00
Chris Lattner
3b060b2d41 Truncate GEP indexes larger than the pointer size down to pointer size
when doing this transform if the GEP is not inbounds.  No testcase because
it is very difficult to trigger this: instcombine already canonicalizes
GEP indices to pointer size, so it relies on specific permutations of the
instcombine worklist.

Thanks to Duncan for pointing this possible problem out.

llvm-svn: 92495
2010-01-04 18:57:15 +00:00
Chris Lattner
ce3f5f3448 implement an instcombine xform needed by clang's codegen
on the example in PR4216.  This doesn't trigger in the testsuite,
so I'd really appreciate someone scrutinizing the logic for
correctness.

llvm-svn: 92458
2010-01-04 06:03:59 +00:00
Chris Lattner
647c629ee4 generalize the previous transformation to handle indexing into
arrays of structs and other arrays, so long as all the subsequent
indexes are constants.  This triggers frequently for stuff like:

@divisions = internal constant [29 x [2 x i32]] [[2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 1], [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 2], [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] zeroinitializer, [2 x i32] [i32 0, i32 2], [2 x i32] [i32 0, i32 1], [2 x i32] zeroinitializer, [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 0], [2 x i32] [i32 1, i32 1], [2 x i32] [i32 1, i32 2], [2 x i32] [i32 1, i32 2]], align 32 ; <[29 x [2 x i32]]*> [#uses=50]

	  %623 = getelementptr inbounds [29 x [2 x i32]]* @divisions, i64 0, i64 %619, i64 0 ; <i32*> [#uses=1]
	   %684 = icmp eq i32 %683, 999 

also for the "my_defs" table in 'gs', etc.

llvm-svn: 92444
2010-01-03 03:03:27 +00:00
Chris Lattner
acb0c133ec teach instcombine to optimize idioms like A[i]&42 == 0. This
occurs in 403.gcc in mode_mask_array, in safe-ctype.c (which
is copied in multiple apps) in _sch_istable, etc.

llvm-svn: 92427
2010-01-02 22:08:28 +00:00
Chris Lattner
4af67af013 Teach the table lookup optimization to generate range compares
when a consecutive sequence of elements all satisfy the 
predicate.  Like the double compare case, this generates better
code than the magic constant case and generalizes to more than
32/64 element array lookups.

Here are some examples where it triggers.  From 403.gcc, most
accesses to the rtx_class array are handled, e.g.:

@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=547]
   %142 = icmp eq i8 %141, 105
@rtx_class = constant [153 x i8] c"xxxxxmmmmmmmmxxxxxxxxxxxxmxxxxxxiiixxxxxxxxxxxxxxxxxxxooxooooooxxoooooox3x2c21c2222ccc122222ccccaaaaaa<<<<<<<<<<<<<<<<<<111111111111bbooxxxxxxxxxxcc2211x", align 32 ; <[153 x i8]*> [#uses=543]
	   %165 = icmp eq i8 %164, 60      

Also, most of the 59-element arrays (mode_class/rid_to_yy, etc) 
optimized before are actually range compares.  This lets 32-bit
machines optimize them.

400.perlbmk has stuff like this:

400.perlbmk: PL_regkind, even for 32-bit:
@PL_regkind = constant [62 x i8] c"\00\00\02\02\02\06\06\06\06\09\09\0B\0B\0D\0E\0E\0E\11\12\12\14\14\16\16\18\18\1A\1A\1C\1C\1E\1F !!!$$&'((((,-.///88886789:;8$", align 32 ; <[62 x i8]*> [#uses=4]
	   %811 = icmp ne i8 %810, 33 

@PL_utf8skip = constant [256 x i8] c"\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\01\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\02\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\03\04\04\04\04\04\04\04\04\05\05\05\05\06\06\07\0D", align 32 ; <[256 x i8]*> [#uses=94]
	   %12 = icmp ult i8 %10, 2
           
etc.

llvm-svn: 92426
2010-01-02 21:50:18 +00:00
Nick Lewycky
cda0109ec5 Fix logic error in previous commit. The != case needs to become an or, not an
and.

llvm-svn: 92419
2010-01-02 16:14:56 +00:00
Nick Lewycky
3cc8fe073a Optimize pointer comparison into the typesafe form, now that the backends will
handle them efficiently. This is the opposite direction of the transformation
we used to have here.

llvm-svn: 92418
2010-01-02 15:25:44 +00:00
Chris Lattner
e1a2489017 Generalize the previous xform to handle cases where exactly
two elements match or don't match with two comparisons.  For
example, the testcase compiles into:

define i1 @test5(i32 %X) {
  %1 = icmp eq i32 %X, 2                          ; <i1> [#uses=1]
  %2 = icmp eq i32 %X, 7                          ; <i1> [#uses=1]
  %R = or i1 %1, %2                               ; <i1> [#uses=1]
  ret i1 %R
}

This generalizes the previous xforms when the array is larger than
64 elements (and this case matches) and generates better code for
cases where it overlaps with the magic bitshift case.

This generalizes more cases than you might expect.  For example,
400.perlbmk has:

@PL_utf8skip = constant [256 x i8] c"\01\01\01\...
%15 = icmp ult i8 %7, 7

403.gcc has:
@rid_to_yy = internal constant [114 x i16] [i16 259, i16 260, ...
%18 = icmp eq i16 %16, 295 

and xalancbmk has a bunch of examples, such as 
_ZN11xercesc_2_5L15gCombiningCharsE and _ZN11xercesc_2_5L10gBaseCharsE.

llvm-svn: 92417
2010-01-02 09:35:17 +00:00
Chris Lattner
1cdc77b8da enhance the compare/load/index optimization to work on *any* load
from a global with 32/64 elements or less (depending on whether
i64 is native on the target), generating a bitshift idiom to 
determine the result.  For example, on test4 we produce:

define i1 @test4(i32 %X) {
  %1 = lshr i32 933, %X                           ; <i32> [#uses=1]
  %2 = and i32 %1, 1                              ; <i32> [#uses=1]
  %R = icmp ne i32 %2, 0                          ; <i1> [#uses=1]
  ret i1 %R
}

This triggers in a number of interesting cases, for example, here's an
fp case:
@A.3255 = internal constant [4 x double] [double 4.100000e+00, double -3.900000e+00, double -1.000000e+00, double 1.000000e+00], align 32 ; <[4 x double]*> [#uses=7]
...
	   %7 = fcmp olt double %3, 0.000000e+00

In this case we make the slen2_tab global dead, which is nice:
@slen2_tab = internal constant [16 x i32] [i32 0, i32 1, i32 2, i32 3, i32 0, i32 1, i32 2, i32 3, i32 1, i32 2, i32 3, i32 1, i32 2, i32 3, i32 2, i32 3], align 32 ; <[16 x i32]*> [#uses=1]
...
	   %204 = icmp eq i32 %46, 0     

Perl has a bunch of these, also on the 'Perl_regkind' array:
@Perl_yygindex = internal constant [51 x i16] [i16 0, i16 0, i16 0, i16 0, i16 374, i16 351, i16 0, i16 -12, i16 0, i16 946, i16 413, i16 -83, i16 0, i16 0, i16 0, i16 -311, i16 -13, i16 4007, i16 2893, i16 0, i16 0, i16 0, i16 0, i16 0, i16 372, i16 -8, i16 0, i16 0, i16 246, i16 -131, i16 43, i16 86, i16 208, i16 -45, i16 -169, i16 987, i16 0, i16 0, i16 0, i16 0, i16 308, i16 0, i16 -271, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0, i16 0], align 32 ; <[51 x i16]*> [#uses=1]
...
  %1364 = icmp eq i16 %1361, 0

186.crafty really likes this on 64-bit machines, because it triggers on a bunch of globals like this:
@white_outpost = internal constant [64 x i8] c"\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\00\02\02\00\00\00\00\00\04\05\05\04\00\00\00\00\03\06\06\03\00\00\00\00\00\01\01\00\00\00\00\00\00\00\00\00\00\00", align 32 ; <[64 x i8]*> [#uses=2]

However the big winner is 403.gcc, which triggers hundreds of times, eliminating all the accesses to the 57-element arrays 'mode_class', mode_unit_size, mode_bitsize, regclass_map, etc.

go 64-bit machines :)

llvm-svn: 92415
2010-01-02 08:56:52 +00:00
Chris Lattner
59136ba5ad enhance the previous optimization to work with fcmp in addition
to icmp.

llvm-svn: 92412
2010-01-02 08:20:51 +00:00
Chris Lattner
f3f6c10218 Teach instcombine to fold compares of loads from constant
arrays with variable indices into a comparison of the index
with a constant.  The most common occurrence of this that
I see by far is stuff like:

if ("foobar"[i] == '\0') ...

which we compile into: if (i == 6), saving a load and 
materialization of the global address.  This also exposes 
loop trip count information to later passes in many cases.

This triggers hundreds of times in xalancbmk, which is where I first
noticed it, but it also triggers in many other apps.  Here are a few 
interesting ones from various apps:

@must_be_connected_without = internal constant [8 x i8*] [i8* getelementptr inbounds ([3 x i8]* @.str64320, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str27283, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str71327, i64 0, i64 0), i8* getelementptr inbounds ([4 x i8]* @.str72328, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str18274, i64 0, i64 0), i8* getelementptr inbounds ([6 x i8]* @.str11267, i64 0, i64 0), i8* getelementptr inbounds ([3 x i8]* @.str32288, i64 0, i64 0), i8* null], align 32 ; <[8 x i8*]*> [#uses=2]
  %scevgep.i = getelementptr [8 x i8*]* @must_be_connected_without, i64 0, i64 %indvar.i ; <i8**> [#uses=1]
  %17 = load ...
  %18 = icmp eq i8* %17, null                     ; <i1> [#uses=1]
-> icmp eq i64 %indvar.i, 7 


@yytable1095 = internal constant [84 x i8] c"\12\01(\05\06\07\08\09\0A\0B\0C\0D\0E1\0F\10\11266\1D: \10\11,-,0\03'\10\11B6\04\17&\18\1945\05\06\07\08\09\0A\0B\0C\0D\0E\1E\0F\10\11*\1A\1B\1C$3+>#%;<IJ=ADFEGH9KL\00\00\00C", align 32 ; <[84 x i8]*> [#uses=2]
  %57 = getelementptr inbounds [84 x i8]* @yytable1095, i64 0, i64 %56 ; <i8*> [#uses=1]
   %mode.0.in = getelementptr inbounds [9 x i32]* @mb_mode_table, i64 0, i64 %.pn ; <i32*> [#uses=1]
load ...
   %64 = icmp eq i8 %58, 4                         ; <i1> [#uses=1]
-> icmp eq i64 %.pn, 35             ; <i1> [#uses=0]


@gsm_DLB = internal constant [4 x i16] [i16 6554, i16 16384, i16 26214, i16 32767]
%scevgep.i = getelementptr [4 x i16]* @gsm_DLB, i64 0, i64 %indvar.i ; <i16*> [#uses=1]
%425 = load %scevgep.i
%426 = icmp eq i16 %425, -32768                 ; <i1> [#uses=0]
-> false

llvm-svn: 92411
2010-01-02 08:12:04 +00:00
Chris Lattner
cf784992da remove the instcombine transformations that are inserting nasty
pointer to int casts that confuse later optimizations.  See PR3351
for details.

This improves but doesn't completely fix 483.xalancbmk because llvm-gcc
does this xform in GCC's "fold" routine as well.  Clang++ will do
better I guess.

llvm-svn: 92408
2010-01-02 00:31:05 +00:00
Chris Lattner
ef4fba933d add a simple instcombine xform, simplify another one to use hasAllZeroIndices()
instead of hand rolling a loop.

llvm-svn: 92403
2010-01-01 23:09:08 +00:00
Chris Lattner
feb7b1af69 generalize the pointer difference optimization to handle
a constantexpr gep on the 'base' side of the expression.
This completes comment #4 in PR3351, which comes from
483.xalancbmk.

llvm-svn: 92402
2010-01-01 22:42:29 +00:00
Chris Lattner
89b1b63bdf teach instcombine to optimize pointer difference idioms involving constant
expressions.  This is a step towards comment #4 in PR3351.

llvm-svn: 92401
2010-01-01 22:29:12 +00:00
Chris Lattner
ce7717e168 implement the transform requested in PR5284
llvm-svn: 92398
2010-01-01 18:34:40 +00:00
Chris Lattner
e5f5e4b151 add a few trivial instcombines for llvm.powi.
llvm-svn: 92383
2010-01-01 01:52:15 +00:00
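
The sort of trivial cases meant here (a sketch; the exact set of folds is per the commit itself):

  declare double @llvm.powi.f64(double, i32)

  define double @powi_trivial(double %x) {
    %a = call double @llvm.powi.f64(double %x, i32 0)   ; folds to 1.0
    %b = call double @llvm.powi.f64(double %x, i32 1)   ; folds to %x
    %r = fadd double %a, %b
    ret double %r
  }
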
Chris Lattner
ae95fddf98 add check lines for min/max tests.
llvm-svn: 91816
2009-12-21 06:08:50 +00:00
Chris Lattner
013b88ee59 really convert this to filecheck.
llvm-svn: 91815
2009-12-21 06:06:10 +00:00
Chris Lattner
c9bfe8679e give instcombine some helper functions for matching MIN and MAX, and
implement some optimizations for MIN(MIN()) and MAX(MAX()) and 
MIN(MAX()) etc.  This substantially improves the code in PR5822 but
doesn't kick in much elsewhere.  2 max's were optimized in 
pairlocalalign and one in smg2000.

llvm-svn: 91814
2009-12-21 06:03:05 +00:00
Chris Lattner
d5bdc7876d filecheckize
llvm-svn: 91813
2009-12-21 05:53:13 +00:00
Chris Lattner
f1474e1761 enhance x-(-A) -> x+A to preserve NUW/NSW.
Use the presence of NSW/NUW to fold "icmp (x+cst), x" to a constant in
cases where it would otherwise be undefined behavior.

Surprisingly (to me at least), this triggers hundreds of times in
a few benchmarks: lencode, ldecode, and 466.h264ref seem to *really*
like this.

llvm-svn: 91812
2009-12-21 04:04:05 +00:00
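
A sketch of the icmp half (illustrative): with nsw on the add, signed wrap would be undefined behavior, so the compare has a known answer:

  define i1 @alwaystrue(i32 %x) {
    %add = add nsw i32 %x, 1
    %cmp = icmp sgt i32 %add, %x
    ret i1 %cmp
  }

  ; folds to "ret i1 true"; without nsw this would be false for INT_MAX.
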
Chris Lattner
d34eb29977 Optimize all cases of "icmp (X+Cst), X" to something simpler. This triggers
a bunch in lencode, ldecod, spass, 176.gcc, 252.eon, among others.  It is 
also the first part of PR5822

llvm-svn: 91811
2009-12-21 03:19:28 +00:00
Chris Lattner
072f39fd3a convert to filecheck
llvm-svn: 91810
2009-12-21 03:11:05 +00:00
Chris Lattner
d9bf69f1a5 fix PR5827 by disabling the phi slicing transformation in a case
where instcombine would have to split a critical edge due to a
phi node of an invoke.  Since instcombine can't change the CFG,
it has to bail out from doing the transformation.

llvm-svn: 91763
2009-12-19 07:01:15 +00:00
Eli Friedman
c8ab298dbd Optimize icmp of null and select of two constants even if the select has
multiple uses.  (The construct in question was found in gcc.)

llvm-svn: 91675
2009-12-18 08:22:35 +00:00
Eli Friedman
9543d05079 Allow instcombine to combine "sext(a) >u const" to "a >u trunc(const)".
llvm-svn: 91631
2009-12-17 22:42:29 +00:00
Eli Friedman
8afea2d095 Make the ptrtoint comparison simplification work if one side is a global.
llvm-svn: 91624
2009-12-17 21:27:47 +00:00
Eli Friedman
ff5c248066 Slightly generalize transformation of memmove(a,a,n) so that it also applies
to memcpy. (Such a memcpy is technically illegal, but in practice is safe
and is generated by struct self-assignment in C code.)

llvm-svn: 91621
2009-12-17 21:07:31 +00:00
Eli Friedman
5c7e38b936 Aggressively flip compare constant expressions where appropriate; constant
folding in particular expects null to be on the RHS.

llvm-svn: 91587
2009-12-17 06:07:04 +00:00
Benjamin Kramer
6cd9b2ba74 Fix some CHECK lines which were ignored by accident.
llvm-svn: 91214
2009-12-12 09:25:50 +00:00
Nick Lewycky
10693e2bb0 Generalize this optimization to work on equality comparisons between any two
integers that are constant except for a single bit (the same n-th bit in each).

llvm-svn: 90646
2009-12-05 05:00:00 +00:00
Chris Lattner
851aea6ce2 fix PR5673 by being more careful about pointers to functions.
llvm-svn: 90369
2009-12-03 01:05:45 +00:00
Chris Lattner
2d3554c3d9 merge sext-2 into sext.ll
llvm-svn: 90293
2009-12-02 05:34:35 +00:00
Chris Lattner
3781027d07 rename test
llvm-svn: 90292
2009-12-02 05:32:33 +00:00
Chris Lattner
2c2a69cd14 filecheckize
llvm-svn: 90291
2009-12-02 05:32:16 +00:00
Mon P Wang
91ac05d480 Fixed an assertion failure for tracking sext of a vector of integers
llvm-svn: 90290
2009-12-02 04:59:58 +00:00
Nick Lewycky
116b145b02 Teach ConstantFolding to do a better job when folding gep(bitcast).
This permits the devirtualization of llvm.org/PR3100#c9 when compiled by clang.

llvm-svn: 90099
2009-11-29 21:40:55 +00:00
Chris Lattner
cd6fed25d5 add testcases for the foo_with_overflow op xforms added recently and
fix bugs exposed by the tests.  Testcases from Alastair Lynn!

llvm-svn: 90056
2009-11-29 02:57:29 +00:00
Chris Lattner
d48ff7ea6a Implement PR5634.
llvm-svn: 90046
2009-11-29 00:51:17 +00:00
Chris Lattner
cf7665b0c8 Fix PR5471 by removing an instcombine xform. Some pieces of the code
generate store to undef and some generate store to null as the idiom
for undefined behavior.  Since simplifycfg zaps both, don't remove the
undefined behavior in instcombine.

llvm-svn: 89971
2009-11-26 22:04:42 +00:00
Dan Gohman
58bb87921b Make ConstantFoldConstantExpression recursively visit the entire
ConstantExpr, not just the top-level operator. This allows it to
fold many more constants.

Also, make GlobalOpt call ConstantFoldConstantExpression on
GlobalVariable initializers.

llvm-svn: 89659
2009-11-23 16:22:21 +00:00
Nick Lewycky
9d1ee635e3 Reapply r88830 with a bugfix: this transform only applies to icmp eq/ne. This
fixes part of PR5438.

llvm-svn: 89639
2009-11-23 03:17:33 +00:00
Nick Lewycky
f05946faff Revert r88830 and r88831 which appear to have caused a selfhost buildbot some
grief. I suspect this patch merely exposed a bug elsewhere.

llvm-svn: 88841
2009-11-15 07:47:32 +00:00
Nick Lewycky
62f00c17eb Correct typo.
llvm-svn: 88831
2009-11-15 06:16:57 +00:00
Nick Lewycky
14a2122db3 Teach instcombine to look for booleans in wider integers when it encounters a
zext(icmp). It may be able to optimize that away. This fixes one of the cases
in PR5438.

llvm-svn: 88830
2009-11-15 05:55:17 +00:00
Duncan Sands
f0d9823d0b Don't trivially delete unused calls to llvm.invariant.start. This allows
llvm.invariant.start to be used without necessarily being paired with a call
to llvm.invariant.end.  If you run the entire optimization pipeline then such
calls are in fact deleted (adce does it), but that's actually a good thing since
we probably do want them to be zapped late in the game.  There should really be
an integration test that checks that the llvm.invariant.start call lasts long
enough that all passes that do interesting things with it get to do their stuff
before it is deleted.  But since no passes do anything interesting with it yet
this will have to wait for later.

llvm-svn: 86840
2009-11-11 15:34:13 +00:00
Chris Lattner
562cc40dbb unify the code that determines whether it is a good idea to change the type
of a computation.  This fixes some infinite loops when dealing with TD that
has no native types.

llvm-svn: 86670
2009-11-10 07:23:37 +00:00
Chris Lattner
f2b3c795fd if a 'with overflow' intrinsic just has the normal result used, simplify
it to a normal binop.  Patch by Alastair Lynn, testcase by me.

llvm-svn: 86524
2009-11-09 07:07:56 +00:00
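
An illustrative sketch: only the arithmetic result of the intrinsic is used, so it degrades to a plain add:

  declare { i32, i1 } @llvm.sadd.with.overflow.i32(i32, i32)

  define i32 @justadd(i32 %a, i32 %b) {
    %res = call { i32, i1 } @llvm.sadd.with.overflow.i32(i32 %a, i32 %b)
    %sum = extractvalue { i32, i1 } %res, 0    ; the overflow flag is never read
    ret i32 %sum
  }

  ; becomes:
  ;   %sum = add i32 %a, %b
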
Chris Lattner
5a3a41a757 enhance PHI slicing to handle the case when a sliceable PHI is being
used by a chain of other PHIs.

llvm-svn: 86503
2009-11-09 01:38:00 +00:00