mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-11-25 12:12:47 +01:00
Commit Graph

1871 Commits

Author SHA1 Message Date
Chris Lattner
c70b0c0ee7 optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 ABI.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454
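
For illustration, a minimal IR sketch of the pattern (function names are
invented and a little-endian layout is assumed; not taken from the commit):

  define <2 x float> @build_via_int(float %a, float %b) {
    %ai = bitcast float %a to i32
    %bi = bitcast float %b to i32
    %alo = zext i32 %ai to i64
    %bhi = zext i32 %bi to i64
    %bshl = shl i64 %bhi, 32
    %packed = or i64 %bshl, %alo              ; pieces packed into an i64
    %vec = bitcast i64 %packed to <2 x float> ; large-integer bitcast
    ret <2 x float> %vec
  }

  ; roughly what the transform can produce instead: direct element insertion
  define <2 x float> @build_via_insert(float %a, float %b) {
    %v0 = insertelement <2 x float> undef, float %a, i32 0
    %v1 = insertelement <2 x float> %v0, float %b, i32 1
    ret <2 x float> %v1
  }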

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Owen Anderson
dc4703bcd5 Add a prototype of a new peephole optimizing pass that uses LazyValue info to simplify PHIs and select's.
This pass addresses the missed optimizations from PR2581 and PR4420.
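
As a rough sketch (invented names), the kind of select such a pass can fold,
since the select's condition is already known true along the taken edge:

  define i32 @sel_known(i32 %x) {
  entry:
    %nz = icmp ne i32 %x, 0
    br i1 %nz, label %then, label %out
  then:                                   ; %nz is known true here,
    %sel = select i1 %nz, i32 %x, i32 7   ; so the select folds to %x
    ret i32 %sel
  out:
    ret i32 0
  }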

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
Chris Lattner
08d2f26030 tidy up test.
llvm-svn: 112321
2010-08-27 23:15:21 +00:00
Chris Lattner
3f880c2097 Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.
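
Concretely, an illustrative sketch (not the commit's testcase) where the
would-be-shifted-in bits are provably zero:

  define i64 @before(i16 %x16) {
    %x = zext i16 %x16 to i64     ; bits 16..63 of %x are known zero
    %a = shl i64 %x, 42
    %b = lshr i64 %a, 38
    ret i64 %b
  }

  ; the shift pair collapses to one small shift, no mask needed
  define i64 @after(i16 %x16) {
    %x = zext i16 %x16 to i64
    %s = shl i64 %x, 4
    ret i64 %s
  }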

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
Chris Lattner
80632e5fd9 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.
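
The classic form of that xform, as a small sketch (invented names):

  define i32 @before(i32 %x) {
    %a = shl i32 %x, 8
    %b = lshr i32 %a, 8
    ret i32 %b
  }

  ; equivalent to masking off the bits the shift pair destroys
  define i32 @after(i32 %x) {
    %m = and i32 %x, 16777215     ; 0x00FFFFFF
    ret i32 %m
  }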

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
Chris Lattner
1a15c898b9 merge and filecheckize test
llvm-svn: 112289
2010-08-27 20:44:45 +00:00
Chris Lattner
a571568019 merge two tests
llvm-svn: 112288
2010-08-27 20:42:10 +00:00
Chris Lattner
866b888095 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext whose source
type doesn't have to exactly match the truncation result type.
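
Roughly, an invented example of such a chain:

  define i16 @before(i8 %x) {
    %ext = zext i8 %x to i32      ; source type i8 != trunc type i16
    %add = add i32 %ext, 5
    %shl = shl i32 %add, 1
    %t   = trunc i32 %shl to i16
    ret i16 %t
  }

  ; the whole chain can be evaluated directly in i16
  define i16 @after(i8 %x) {
    %ext = zext i8 %x to i16
    %add = add i16 %ext, 5
    %shl = shl i16 %add, 1
    ret i16 %shl
  }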

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
Chris Lattner
69a9143584 Add an instcombine to clean up a common pattern produced
by the SRoA "promote to large integer" code, eliminating
some type conversions like this:

   %94 = zext i16 %93 to i32                       ; <i32> [#uses=2]
   %96 = lshr i32 %94, 8                           ; <i32> [#uses=1]
   %101 = trunc i32 %96 to i8                      ; <i8> [#uses=1]

This also unblocks other xforms from happening, now clang is able to compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	pshufd	$1, %xmm0, %xmm2
	addss	%xmm0, %xmm2
	movdqa	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	pshufd	$1, %xmm1, %xmm0
	addss	%xmm3, %xmm0
	ret

on x86-64, instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

This seems pretty close to optimal to me, at least without
using horizontal adds.  This also triggers in lots of other
code, including SPEC.
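
For reference, a small sketch (invented names) of what the triple above
boils down to: doing the shift in the narrow type.

  define i8 @extract_high_byte(i16 %v) {
    ; before: %wide = zext i16 %v to i32
    ;         %shift = lshr i32 %wide, 8
    ;         %byte = trunc i32 %shift to i8
    ; after:
    %shift = lshr i16 %v, 8
    %byte  = trunc i16 %shift to i8
    ret i8 %byte
  }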

llvm-svn: 112278
2010-08-27 18:31:05 +00:00
Owen Anderson
35ff7a208e Use LVI to eliminate conditional branches where we've tested a related condition previously. Update tests for this change.
This fixes PR5652.
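
A small illustrative sketch (not the commit's testcase): the second branch
tests a condition implied by the first, so it can be eliminated:

  define i32 @related(i32 %x) {
  entry:
    %gt10 = icmp sgt i32 %x, 10
    br i1 %gt10, label %bb1, label %out
  bb1:                                    ; %x > 10 here, so %x > 0 is
    %gt0 = icmp sgt i32 %x, 0             ; known true and this branch
    br i1 %gt0, label %bb2, label %out    ; always goes to %bb2
  bb2:
    ret i32 1
  out:
    ret i32 0
  }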

llvm-svn: 112270
2010-08-27 17:12:29 +00:00
Chris Lattner
e9dafffae3 filecheckize
llvm-svn: 112235
2010-08-26 22:23:39 +00:00
Chris Lattner
1efc631212 rename test.
llvm-svn: 112234
2010-08-26 22:20:47 +00:00
Chris Lattner
d5d68438c1 optimize "integer extraction out of the middle of a vector" as produced
by SRoA.  This is part of rdar://7892780, but needs another xform to
expose this.
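
In IR terms the pattern looks roughly like this sketch (invented names;
little-endian element layout assumed):

  define i32 @before(<4 x i32> %v) {
    %big   = bitcast <4 x i32> %v to i128
    %shift = lshr i128 %big, 32          ; skip element 0
    %elt   = trunc i128 %shift to i32    ; grab element 1
    ret i32 %elt
  }

  ; becomes a plain element extract
  define i32 @after(<4 x i32> %v) {
    %elt = extractelement <4 x i32> %v, i32 1
    ret i32 %elt
  }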

llvm-svn: 112232
2010-08-26 22:14:59 +00:00
Chris Lattner
19a5dc488b optimize bitcast(trunc(bitcast(x))) where the result is a float and 'x'
is a vector to be a vector element extraction.  This allows clang to
compile:

struct S { float A, B, C, D; };
float foo(struct S A) { return A.A + A.B+A.C+A.D; }

into:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movapd	%xmm1, %xmm3
	addss	%xmm2, %xmm3
	movd	%xmm1, %rax
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm3, %xmm0
	ret

instead of:

_foo:                                   ## @foo
## BB#0:                                ## %entry
	movd	%xmm0, %rax
	movd	%eax, %xmm0
	shrq	$32, %rax
	movd	%eax, %xmm2
	addss	%xmm0, %xmm2
	movd	%xmm1, %rax
	movd	%eax, %xmm1
	addss	%xmm2, %xmm1
	shrq	$32, %rax
	movd	%eax, %xmm0
	addss	%xmm1, %xmm0
	ret

... eliminating half of the horribleness.
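
The matched shape, as a sketch (invented names; little-endian layout assumed):

  define float @before(<4 x float> %x) {
    %big = bitcast <4 x float> %x to i128
    %lo  = trunc i128 %big to i32        ; low 32 bits = element 0
    %f   = bitcast i32 %lo to float
    ret float %f
  }

  ; becomes
  define float @after(<4 x float> %x) {
    %f = extractelement <4 x float> %x, i32 0
    ret float %f
  }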

llvm-svn: 112227
2010-08-26 21:55:42 +00:00
Chris Lattner
d1a8743984 filecheckize
llvm-svn: 112225
2010-08-26 21:51:41 +00:00
Chris Lattner
3113ee607c rename test
llvm-svn: 112224
2010-08-26 21:50:56 +00:00
Owen Anderson
77fcf53657 Make JumpThreading smart enough to properly thread StrSwitch when it's compiled with clang++.
llvm-svn: 112198
2010-08-26 17:40:24 +00:00
Devang Patel
05becf3ac5 DIGlobalVariable can be used to encode debug info for globals that are directly folded into a constant by the frontend.
llvm-svn: 112072
2010-08-25 18:52:02 +00:00
Owen Anderson
e0cdfa265a In the default address space, any GEP off of null results in a trap value if you try to load it. Thus,
any load in the default address space that completes implies that the base value that it GEP'd from
was not null.
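
An illustrative sketch (not the commit's test; modern IR syntax) of the
inference this enables:

  define i32 @nonnull_after_load(ptr %base) {
  entry:
    %gep = getelementptr i32, ptr %base, i64 4
    %v = load i32, ptr %gep           ; if this completes, %base != null
    %isnull = icmp eq ptr %base, null
    br i1 %isnull, label %dead, label %live   ; foldable to 'br %live'
  dead:
    ret i32 0
  live:
    ret i32 %v
  }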

llvm-svn: 112015
2010-08-25 01:16:47 +00:00
Owen Anderson
678fd04aa5 Re-apply r111568 with a fix for the clang self-host.
llvm-svn: 111665
2010-08-20 18:24:43 +00:00
Owen Anderson
7c1b4fbd3b Previous revert failed to remove this file.
llvm-svn: 111582
2010-08-19 23:45:15 +00:00
Owen Anderson
0e57acb623 Revert r111568 to unbreak clang self-host.
llvm-svn: 111571
2010-08-19 23:25:16 +00:00
Owen Anderson
7f2852ba2d When a set of bitmask operations, typically from a bitfield initialization, only modifies the low bytes of a value,
we can narrow the store to only overwrite the affected bytes.
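
A rough sketch of the idea (invented names; little-endian byte order and
modern IR syntax assumed):

  define void @before(ptr %p) {
    %old = load i32, ptr %p
    %clr = and i32 %old, -256     ; clear only the low byte
    %new = or i32 %clr, 42        ; set the low byte
    store i32 %new, ptr %p        ; rewrites all four bytes
    ret void
  }

  ; after: only the affected byte is stored
  define void @after(ptr %p) {
    store i8 42, ptr %p
    ret void
  }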

llvm-svn: 111568
2010-08-19 22:15:40 +00:00
Kenneth Uildriks
69cdd103c0 Fixed and reactivated a partial specialization test
llvm-svn: 111516
2010-08-19 12:42:38 +00:00
Chris Lattner
ab876b6ce8 Fix PR7755: knowing something about an incoming value for a predecessor
from the LHS should disable reconsidering that predecessor on the
RHS.  However, knowing something about the predecessor on the RHS
shouldn't prevent subsequent additions on the RHS from happening.

llvm-svn: 111349
2010-08-18 03:14:36 +00:00
Eric Christopher
08e9f0250a Temporarily revert r110987 as it's causing some miscompares in
vector heavy code.  I'll re-enable when we've tracked down the problem.

llvm-svn: 111318
2010-08-17 22:55:27 +00:00
Dan Gohman
e26025ddd0 When rotating loops, put the original header at the bottom of the
loop, making the resulting loop significantly less ugly.  Also, zap
its trivial PHI nodes, since it's easy.

llvm-svn: 111255
2010-08-17 17:39:21 +00:00
Dan Gohman
9178d0792f Instead, teach SimplifyCFG to trim non-address-taken blocks from
indirectbr destination lists.

llvm-svn: 111122
2010-08-16 14:41:14 +00:00
Dan Gohman
afb3db46d2 LoopSimplify shouldn't split loop backedges that use indirectbr. PR7867.
llvm-svn: 111061
2010-08-14 00:43:09 +00:00
Dan Gohman
d04a608a73 Teach SimplifyCFG how to simplify indirectbr instructions.
- Eliminate redundant successors.
- Convert an indirectbr with one successor into a direct branch.

Also, generalize SimplifyCFG to be able to be run on a function entry block.
It knows quite a few simplifications which are applicable to the entry
block, and it only needs a few checks to avoid trouble with the entry block.
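
For example, a sketch (invented names) covering both cases:

  define i32 @before(ptr %dest) {
  entry:
    indirectbr ptr %dest, [label %only, label %only]   ; redundant successor
  only:
    ret i32 42
  }

  ; after removing the duplicate and converting the single-successor
  ; indirectbr into a direct branch
  define i32 @after(ptr %dest) {
  entry:
    br label %only
  only:
    ret i32 42
  }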

llvm-svn: 111060
2010-08-14 00:29:42 +00:00
Nate Begeman
e57074fc48 Reapply this transformation now that it is passing the external test which it previously failed.
llvm-svn: 110987
2010-08-13 00:17:53 +00:00
Chris Lattner
fd40059e71 fix PR7876: If ipsccp decides that a function's address is taken
before it rewrites the code, we need to use that in the post-rewrite pass.

llvm-svn: 110962
2010-08-12 22:25:23 +00:00
Eric Christopher
34acdf57df Temporarily revert 110737 and 110734, they were causing failures
in an external testsuite.

llvm-svn: 110905
2010-08-12 07:01:22 +00:00
Nate Begeman
36e284c2be Add test for recent instcombine vector shuffle enhancement
llvm-svn: 110737
2010-08-10 21:58:00 +00:00
Eli Friedman
7197d66ff1 PR7853: fix a silly mistake introduced in r101899, and add a test to make sure
it doesn't regress again.

llvm-svn: 110597
2010-08-09 20:49:43 +00:00
Dan Gohman
d108d2b2f8 Move x86-specific tests out of test/Transforms/LoopStrengthReduce and
into test/CodeGen/X86, so that they aren't run when the x86 target is
not enabled.

Fix uglygep.ll to not be x86-specific.

llvm-svn: 110343
2010-08-05 17:04:15 +00:00
Dan Gohman
a80f89dbc7 Make instcombine set explicit alignments on load or store
instructions with alignment 0, so that subsequent passes don't
need to bother checking the TargetData ABI size manually.

llvm-svn: 110128
2010-08-03 18:20:32 +00:00
Peter Collingbourne
10c4f9d6bd Add an atomic lowering pass
llvm-svn: 110113
2010-08-03 16:19:16 +00:00
Owen Anderson
e957c57ebb Re-apply the infamous r108614, with a fix pointed out by Dirk Steinke.
llvm-svn: 110036
2010-08-02 09:32:13 +00:00
Daniel Dunbar
f2be238c99 Speculatively revert r108614, "Another attempt at getting the clang self-host to
like my instcombine patch.", in an attempt to fix Clang i386 bootstrap.
 - Also PR7719.

llvm-svn: 109953
2010-07-31 19:51:11 +00:00
Owen Anderson
647ac93b7d Fix a test with malformed IR. Not sure why this didn't fail before.
llvm-svn: 109422
2010-07-26 18:44:56 +00:00
Dan Gohman
9e0ae022d2 Fix SCEVExpander::visitAddRecExpr so that it remembers the induction variable
it inserted rather than using LoopInfo::getCanonicalInductionVariable to
rediscover it, since that doesn't work on non-canonical loops. This fixes
infinite recursion on such loops; PR7562.

llvm-svn: 109419
2010-07-26 18:28:14 +00:00
Dan Gohman
48bddf693c Avoid depending on LCSSA implicitly pulling in LoopSimplify.
llvm-svn: 109410
2010-07-26 18:00:43 +00:00
Owen Anderson
f66e1873ea Testcase for r108687.
llvm-svn: 108689
2010-07-19 08:14:26 +00:00
Owen Anderson
c8dc055b5e Another attempt at getting the clang self-host to like my instcombine patch.
llvm-svn: 108614
2010-07-17 06:56:35 +00:00
Nick Lewycky
1b4a83430b Arrays and vectors with different numbers of elements are not equivalent.
llvm-svn: 108517
2010-07-16 06:31:12 +00:00
Tobias Grosser
9c86be4570 LoopSimplify does not update domfrontier correctly.
This fixes PR7649.

llvm-svn: 108513
2010-07-16 05:59:45 +00:00
Eric Christopher
5eef314caf Also revert 108422, it's causing some test failures.
Working on testcases for Owen.

llvm-svn: 108494
2010-07-16 01:36:12 +00:00
Dan Gohman
39c67ace89 Fix this test.
llvm-svn: 108491
2010-07-16 01:28:45 +00:00
Dan Gohman
1705ac2740 Fix the order that SCEVExpander considers add operands in so that
it doesn't miss an opportunity to form a GEP, regardless of the
relative loop depths of the operands. This fixes rdar://8197217.

llvm-svn: 108475
2010-07-15 23:38:13 +00:00
Owen Anderson
01a2992a91 Reapply r108378, with bugfixes, testcase, and improved comment formatting.
This now passes LIT, the nightly tests, and llvm-gcc bootstrap on my machine.

llvm-svn: 108422
2010-07-15 15:00:23 +00:00
Chris Lattner
2b2265a9c5 Fix PR7647, handling the case when 'To' ends up being
mutated by recursive simplification.  This also enhances
ReplaceAndSimplifyAllUses to actually do a real RAUW
at the end of it, which updates any value handles
pointing to "From" to start pointing to "To".  This
seems useful for debug info and random other VH users.

llvm-svn: 108415
2010-07-15 06:36:08 +00:00
Chris Lattner
38e6ecd9f1 revert r108320, I see the failures now...
llvm-svn: 108322
2010-07-14 06:16:35 +00:00
Chris Lattner
5822d6d579 reapply benjamin's instcombine patch, I don't see anything wrong with it and can't repro any problems with a manual self-host.
llvm-svn: 108320
2010-07-14 05:59:13 +00:00
Duncan Sands
8864383748 Handle the case of a tail recursion in which the tail call is followed
by a return that returns a constant, while elsewhere in the function
another return instruction returns a different constant.  This is a
special case of accumulator recursion, so just generalize the existing
logic a bit.

llvm-svn: 108241
2010-07-13 15:41:41 +00:00
Benjamin Kramer
cf8ad46899 Nope, still breaks the release selfhost bots :(
llvm-svn: 108153
2010-07-12 16:38:48 +00:00
Benjamin Kramer
e391789246 Reapply the "or" half of r108136, which seems to be less problematic.
llvm-svn: 108152
2010-07-12 16:15:48 +00:00
Benjamin Kramer
98c95e7743 Revert r108141 again, sigh.
llvm-svn: 108148
2010-07-12 14:42:04 +00:00
Benjamin Kramer
c4f46375d3 Reapply 108136 with an ugly pasto fixed.
llvm-svn: 108141
2010-07-12 13:44:00 +00:00
Benjamin Kramer
d9bf737e62 Revert r108136 until I figure out why it broke selfhost.
llvm-svn: 108139
2010-07-12 12:35:49 +00:00
Benjamin Kramer
f00a49ceff instcombine: fold (x & y) | (~x & z) and (x & y) ^ (~x & z) into ((y ^ z) & x) ^ z which is one instruction shorter. (PR6773)
before:
  %and = and i32 %y, %x
  %neg = xor i32 %x, -1
  %and4 = and i32 %z, %neg
  %xor = xor i32 %and4, %and

after:
  %xor1 = xor i32 %z, %y
  %and2 = and i32 %xor1, %x
  %xor = xor i32 %and2, %z

llvm-svn: 108136
2010-07-12 11:54:45 +00:00
Chris Lattner
59bffe35a1 fix PR7311 by avoiding breaking casts when a bitcast from scalar->vector
is involved.

llvm-svn: 108117
2010-07-12 01:19:22 +00:00
Chris Lattner
baef771d17 if jump threading is able to infer interesting values on both
the LHS and RHS of an and/or instruction, don't add known
predecessor values multiple times.  This fixes the crash on the
testcase from PR7498

llvm-svn: 108114
2010-07-12 00:47:34 +00:00
Chris Lattner
d8288040c3 fix PR7429, a crash turning a load from a string into a float.
llvm-svn: 108113
2010-07-12 00:22:51 +00:00
Chris Lattner
68f5ec0fa2 convert to filecheck
llvm-svn: 108112
2010-07-12 00:21:10 +00:00
Chris Lattner
64eeea9044 merge two tests.
llvm-svn: 108111
2010-07-12 00:19:47 +00:00
Benjamin Kramer
27eb255a70 Teach instcombine to transform
(X >s -1) ? C1 : C2 and (X <s  0) ? C2 : C1
into ((X >>s 31) & (C2 - C1)) + C1, avoiding the conditional.

This optimization could be extended to take non-const C1 and C2 but we better
stay conservative to avoid code size bloat for now.

for
int sel(int n) {
     return n >= 0 ? 60 : 100;
}

we now generate
  sarl  $31, %edi
  andl  $40, %edi
  leal  60(%rdi), %eax

instead of
  testl %edi, %edi
  movl  $60, %ecx
  movl  $100, %eax
  cmovnsl %ecx, %eax
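
In IR terms, the rewrite for sel() above looks roughly like this sketch:

  define i32 @sel(i32 %n) {
    %sign = ashr i32 %n, 31       ; 0 when n >= 0, -1 when n < 0
    %mask = and i32 %sign, 40     ; C2 - C1 = 100 - 60
    %res  = add i32 %mask, 60     ; + C1
    ret i32 %res
  }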

llvm-svn: 107866
2010-07-08 11:39:10 +00:00
Chris Lattner
bf009b527a Fix the second half of PR7437: scalarrepl wasn't preserving
address spaces when SRoA'ing memcpy's.

llvm-svn: 107846
2010-07-08 00:27:05 +00:00
Dale Johannesen
e7117f93f1 Prevent test from hanging waiting for input.
llvm-svn: 107446
2010-07-01 22:57:11 +00:00
Devang Patel
a0fa700f3c Debugging information is encoded in LLVM IR using metadata. This is designed
in such a way that debug info for symbols is preserved even if the symbols are
optimized away by the optimizer.

Add new special pass to remove debug info for such symbols.

llvm-svn: 107416
2010-07-01 19:49:20 +00:00
Devang Patel
7962349c46 Remove all debug info related named mdnodes.
llvm-svn: 107323
2010-06-30 21:29:00 +00:00
Dan Gohman
32534063c7 Fix ScalarEvolution's tripcount computation for chains of loops
where each loop's induction variable's start value is the exit
value of a preceding loop.

llvm-svn: 107224
2010-06-29 23:43:06 +00:00
Dan Gohman
50fffcaea3 Constant fold x == undef to undef.
llvm-svn: 107074
2010-06-28 21:30:07 +00:00
Chris Lattner
93a4f87f9c this test is failing nondeterministically and blaming me, just disable
it for now.

llvm-svn: 106960
2010-06-26 22:08:30 +00:00
Benjamin Kramer
8101054d77 Fix test weirdness.
llvm-svn: 106959
2010-06-26 22:06:50 +00:00
Benjamin Kramer
d02a62bee2 Fix some tests that didn't test anything.
llvm-svn: 106954
2010-06-26 20:05:06 +00:00
Kenneth Uildriks
e4adab665c Partial specialization test should not depend on the order of specialization operations or the names assigned to the specialized functions
llvm-svn: 106953
2010-06-26 18:47:40 +00:00
Duncan Sands
68f39e00a4 Fix PR7328: when turning a tail recursion into a loop, we need to preserve
the returned value after the tail call if it differs from other return
values.  The optimal thing to do would be to introduce a phi node for
the return value, but for the moment just fix the miscompile.

llvm-svn: 106947
2010-06-26 12:53:31 +00:00
Dan Gohman
e375e96f0d Disable indvars on loops when LoopSimplify form is not available.
This fixes PR7333.

llvm-svn: 106267
2010-06-18 01:35:11 +00:00
Rafael Espindola
d7a63bead9 Remove arm_apcscc from the test files. It is the default and doing this
matches what llvm-gcc and clang now produce.

llvm-svn: 106221
2010-06-17 15:18:27 +00:00
Rafael Espindola
6ccafc9391 Make sure that simplify libcalls does not replace a call with one calling
convention with a new call with a different calling convention.

llvm-svn: 106134
2010-06-16 19:34:01 +00:00
Benjamin Kramer
f600dc2eed simplify-libcalls: fold strncmp(x, y, 1) -> memcmp(x, y, 1)
The memcmp will be optimized further and even the pathological case
'strstr(x, "x") == x' generates optimal code now.

llvm-svn: 106097
2010-06-16 10:30:29 +00:00
Benjamin Kramer
21a49e9375 simplify-libcalls: fold strstr(a, b) == a -> strncmp(a, b, strlen(b)) == 0
llvm-svn: 106047
2010-06-15 21:34:25 +00:00
Rafael Espindola
ab5183047b Remove the arm_aapcscc marker from the tests. It is the default
for the linux targets.

llvm-svn: 106029
2010-06-15 19:04:29 +00:00
Chris Lattner
88d51b0f4c jump threading can't split a critical edge from an indirectbr. This
fixes PR7356.

llvm-svn: 105950
2010-06-14 19:45:43 +00:00
Benjamin Kramer
443f74025b Test case for r105914.
llvm-svn: 105915
2010-06-13 16:16:54 +00:00
Kenneth Uildriks
73367eb575 Partial specialization was not checking the callsite to make sure it was using the same constants as the specialization, leading to calls to the wrong specialization. Patch by Takumi Nakamura!
llvm-svn: 105528
2010-06-05 14:50:21 +00:00
Devang Patel
8bf4434e6e Copy location info for the current function argument from dbg.declare if the respective store instruction does not have any location info.
llvm-svn: 105490
2010-06-04 22:27:30 +00:00
Duncan Sands
da677f56f2 Fix PR7272: when inlining through a callsite with byval arguments,
the newly created allocas may be used by inlined calls, so these
need to have their tail call flags cleared.  Fixes PR7272.

llvm-svn: 105255
2010-05-31 21:00:26 +00:00
Nick Lewycky
418d80e555 The memcpy intrinsic only takes i8* for %src and %dst, so cast them to that
first. Fixes PR7265.

llvm-svn: 105206
2010-05-31 06:16:35 +00:00
Dale Johannesen
6b20aa3751 Add missing space; works for me.
llvm-svn: 104992
2010-05-28 18:45:59 +00:00
Dan Gohman
22d22caaed Teach instcombine to promote alloca array sizes.
llvm-svn: 104945
2010-05-28 15:09:00 +00:00
Dan Gohman
bab79afa29 Add a testcase for getelementptr index promotion.
llvm-svn: 104944
2010-05-28 15:07:59 +00:00
Devang Patel
8d4eb26f24 Do not drop location info for inlined function args.
llvm-svn: 104884
2010-05-27 20:25:04 +00:00
Duncan Sands
32d3986765 Teach instCombine to remove malloc+free if malloc's only uses are comparisons
to null.  Patch by Matti Niemenmaa.

llvm-svn: 104871
2010-05-27 19:09:06 +00:00
Benjamin Kramer
0acbf37982 Properly promote operands when optimizing a single-character memcmp.
llvm-svn: 104648
2010-05-25 22:53:43 +00:00
Nick Lewycky
fc4c30e9e3 Actually run the test. Thanks Daniel Dunbar!
llvm-svn: 103720
2010-05-13 17:41:06 +00:00
Nick Lewycky
38e49fbf52 Add testcase for r103653.
llvm-svn: 103699
2010-05-13 06:00:14 +00:00
Chris Lattner
e74d980a02 make simplifycfg insert an llvm.trap before the 'unreachable' it introduces
when it detects undefined behavior.  llvm.trap generally codegens into something
really small (e.g. a 2-byte ud2 instruction on x86) and debugging this
sort of thing is "nontrivial".  For example, we now compile:

void foo() { *(int*)0 = 42; }

into:

_foo:
	pushl	%ebp
	movl	%esp, %ebp
	ud2

Some may even claim that this is a security hole, though that seems dubious
to me.  This addresses rdar://7958343 - Optimizing away null dereference 
potentially allows arbitrary code execution
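
At the IR level, the inserted sequence is roughly this sketch:

  declare void @llvm.trap()

  define void @foo() {
  entry:
    call void @llvm.trap()    ; inserted just before the unreachable
    unreachable
  }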

llvm-svn: 103356
2010-05-08 22:15:59 +00:00
Chris Lattner
0b442d35da Teach instcombine to transform a bitcast/(zext|trunc)/bitcast sequence
with a vector input and output into a shuffle vector.  This sort of 
sequence happens when the input code stores with one type and reloads
with another type and then SROA promotes to i96 integers, which make
everyone sad.

This fixes rdar://7896024
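
A minimal sketch of the pattern (invented names; little-endian layout assumed):

  define <4 x float> @before(<2 x float> %v) {
    %int  = bitcast <2 x float> %v to i64
    %wide = zext i64 %int to i128
    %vec  = bitcast i128 %wide to <4 x float>
    ret <4 x float> %vec
  }

  ; becomes a shuffle that pads with zero elements
  define <4 x float> @after(<2 x float> %v) {
    %vec = shufflevector <2 x float> %v, <2 x float> zeroinitializer, <4 x i32> <i32 0, i32 1, i32 2, i32 3>
    ret <4 x float> %vec
  }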

llvm-svn: 103354
2010-05-08 21:50:26 +00:00