mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-21 20:12:56 +02:00
Commit Graph

201 Commits

Author SHA1 Message Date
Cameron Zwarich
68f72463eb Add native integer type TargetData to some existing tests.
llvm-svn: 127717
2011-03-16 00:13:40 +00:00
Cameron Zwarich
573c518b3d Add a test case for r127320.
llvm-svn: 127321
2011-03-09 08:11:02 +00:00
Cameron Zwarich
e0b4705b03 Add support to scalar replacement for partial vector accesses of an alloca, e.g.
a union of a float, <2 x float>, and <4 x float>. This mostly comes up with the
use of vector intrinsics, especially in NEON when programmers know the layout of
the register file. This enables codegen to eliminate a lot of the subregister
traffic it would otherwise generate.

This commit only enables this for a small number of floating-point cases, but a
lot more integer cases. I assume this is okay for all ports, but I did not do
extensive testing of the quality of code involving i512 vectors and the like. If
there is a use case where this generates worse code than before, let me know and
we can scale it back.

This fixes <rdar://problem/9036264>.

llvm-svn: 127317
2011-03-09 05:43:05 +00:00
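A minimal sketch of the union-style access pattern described above (hypothetical test input, not taken from the commit; names are invented). The alloca is written with the wide <4 x float> type but read back as <2 x float> through a bitcast, which scalar replacement can now handle without falling back to an integer type:

define <2 x float> @partial_vector_access(<4 x float> %v) {
entry:
  ; union-like alloca: stored with the wide vector type
  %a = alloca <4 x float>
  store <4 x float> %v, <4 x float>* %a
  ; ...and reloaded through a narrower vector view of the same memory
  %p = bitcast <4 x float>* %a to <2 x float>*
  %lo = load <2 x float>* %p
  ret <2 x float> %lo
}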
Chris Lattner
8f4ed7d057 merge all the "crash tests" into crash.ll
llvm-svn: 124101
2011-01-24 03:37:34 +00:00
Chris Lattner
f33b65080c fix PR9017, a bug where we'd assert when promoting in unreachable
code.

llvm-svn: 124100
2011-01-24 03:29:07 +00:00
Chris Lattner
3609e5afb4 enhance SRoA to promote allocas that are used by PHI nodes. This often
occurs because instcombine sinks loads and inserts phis.  This kicks in 
on such apps as 175.vpr, eon, 403.gcc, xalancbmk and a bunch of times in
spec2006 in some app that uses std::deque.

This resolves the last of rdar://7339113.

llvm-svn: 124090
2011-01-24 01:07:11 +00:00
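A hedged sketch of the phi-of-allocas shape this refers to (hypothetical IR, not the commit's test case):

define i32 @phi_of_allocas(i1 %c, i32 %x, i32 %y) {
entry:
  %a = alloca i32
  %b = alloca i32
  store i32 %x, i32* %a
  store i32 %y, i32* %b
  br i1 %c, label %then, label %join

then:
  br label %join

join:
  ; the kind of phi instcombine introduces when it sinks two loads
  %p = phi i32* [ %a, %entry ], [ %b, %then ]
  %v = load i32* %p
  ret i32 %v
}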
Chris Lattner
50288f8e54 Enhance SRoA to promote allocas that are used by selects in some
common cases.  This triggers a surprising number of times in SPEC2K6
because min/max idioms end up doing this.  For example, code from the
STL ends up looking like this to SRoA:

  %202 = load i64* %__old_size, align 8, !tbaa !3
  %203 = load i64* %__old_size, align 8, !tbaa !3
  %204 = load i64* %__n, align 8, !tbaa !3
  %205 = icmp ult i64 %203, %204
  %storemerge.i = select i1 %205, i64* %__n, i64* %__old_size
  %206 = load i64* %storemerge.i, align 8, !tbaa !3

We can now promote both the __n and the __old_size allocas.

This addresses another chunk of rdar://7339113, poor codegen on
stringswitch.

llvm-svn: 124088
2011-01-23 22:04:55 +00:00
Chris Lattner
2ef605b499 Enhance SRoA to be more aggressive about scalarization of aggregate allocas
that have PHI or select uses of their element pointers.  This can often happen
when instcombine sinks two loads into a successor, inserting a phi or select.

With this patch, we can scalarize the alloca, but the pinned elements are not
yet promoted.  This is still a win for large aggregates where only one element
is used.  This fixes rdar://8904039 and part of rdar://7339113 (poor codegen
on stringswitch).

llvm-svn: 124070
2011-01-23 08:27:54 +00:00
Chris Lattner
ba66871643 remove an old hack that avoided creating MMX datatypes. The
X86 backend has been fixed.

llvm-svn: 124064
2011-01-23 06:40:33 +00:00
Chris Lattner
29f339f87c if an alloca is only ever accessed as a unit, and is accessed with load/store instructions,
then don't try to decimate it into its individual pieces.  This will just make a mess of the
IR and is pointless if none of the elements are individually accessed.  This was generating
really terrible code for std::bitset (PR8980) because it happens to be lowered by clang
as an {[8 x i8]} structure instead of {i64}.

The testcase now is optimized to:

define i64 @test2(i64 %X) {
  br label %L2

L2:                                               ; preds = %0
  ret i64 %X
}

before we generated:

define i64 @test2(i64 %X) {
  %sroa.store.elt = lshr i64 %X, 56
  %1 = trunc i64 %sroa.store.elt to i8
  %sroa.store.elt8 = lshr i64 %X, 48
  %2 = trunc i64 %sroa.store.elt8 to i8
  %sroa.store.elt9 = lshr i64 %X, 40
  %3 = trunc i64 %sroa.store.elt9 to i8
  %sroa.store.elt10 = lshr i64 %X, 32
  %4 = trunc i64 %sroa.store.elt10 to i8
  %sroa.store.elt11 = lshr i64 %X, 24
  %5 = trunc i64 %sroa.store.elt11 to i8
  %sroa.store.elt12 = lshr i64 %X, 16
  %6 = trunc i64 %sroa.store.elt12 to i8
  %sroa.store.elt13 = lshr i64 %X, 8
  %7 = trunc i64 %sroa.store.elt13 to i8
  %8 = trunc i64 %X to i8
  br label %L2

L2:                                               ; preds = %0
  %9 = zext i8 %1 to i64
  %10 = shl i64 %9, 56
  %11 = zext i8 %2 to i64
  %12 = shl i64 %11, 48
  %13 = or i64 %12, %10
  %14 = zext i8 %3 to i64
  %15 = shl i64 %14, 40
  %16 = or i64 %15, %13
  %17 = zext i8 %4 to i64
  %18 = shl i64 %17, 32
  %19 = or i64 %18, %16
  %20 = zext i8 %5 to i64
  %21 = shl i64 %20, 24
  %22 = or i64 %21, %19
  %23 = zext i8 %6 to i64
  %24 = shl i64 %23, 16
  %25 = or i64 %24, %22
  %26 = zext i8 %7 to i64
  %27 = shl i64 %26, 8
  %28 = or i64 %27, %25
  %29 = zext i8 %8 to i64
  %30 = or i64 %29, %28
  ret i64 %30
}

In this case, instcombine was able to eliminate the nonsense, but in PR8980 enough
PHIs are in play that instcombine backs off.  It's better to not generate this stuff
in the first place.

llvm-svn: 123571
2011-01-16 06:18:28 +00:00
Bob Wilson
3b0197489e Extend SROA to handle arrays accessed as homogeneous structs and vice versa.
This is a minor extension of SROA to handle a special case that is
important for some ARM NEON operations.  Some of the NEON intrinsics
return multiple values, which are handled as struct types containing
multiple elements of the same vector type.  The corresponding return
types declared in the arm_neon.h header have equivalent arrays.  We
need SROA to recognize that it can split up those arrays and structs
into separate vectors, even though they are not always accessed with
the same type.  SROA already handles loads and stores of an entire
alloca by using insertvalue/extractvalue to access the individual
pieces, and that code works the same regardless of whether the type
is a struct or an array.  So, all that needs to be done is to check
for compatible arrays and homogeneous structs.

llvm-svn: 123381
2011-01-13 17:45:11 +00:00
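A rough sketch of the array/struct mismatch described above (hypothetical IR, loosely modeled on NEON-style multi-vector return values; not the commit's test case). The alloca is an array of vectors, but the whole-alloca store uses an equivalent homogeneous struct type:

define <4 x float> @array_vs_struct({ <4 x float>, <4 x float> } %pair) {
entry:
  %a = alloca [2 x <4 x float>]
  ; whole-alloca store through the struct view of the same memory
  %p = bitcast [2 x <4 x float>]* %a to { <4 x float>, <4 x float> }*
  store { <4 x float>, <4 x float> } %pair, { <4 x float>, <4 x float> }* %p
  ; element access through the array view
  %e1 = getelementptr [2 x <4 x float>]* %a, i32 0, i32 1
  %v = load <4 x float>* %e1
  ret <4 x float> %v
}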
Bob Wilson
9f8d730f9b Make SROA more aggressive with allocas containing padding.
SROA only splits up structs and arrays one level at a time, so padding can
only cause trouble if it is located between the struct or array elements.

llvm-svn: 123380
2011-01-13 17:45:08 +00:00
Nick Lewycky
f6fa6b29f4 Treat a call of a function pointer like a load of the pointer when considering
whether the pointer can be replaced with the global variable it is a copy of.
Fixes PR8680.

llvm-svn: 120126
2010-11-24 22:04:20 +00:00
Chris Lattner
6048697a30 allow eliminating an alloca that is just copied from a constant global
if it is passed as a byval argument.  The byval argument will just be a
read, so it is safe to read from the original global instead.  This allows
us to promote away the %agg.tmp alloca in PR8582

llvm-svn: 119686
2010-11-18 06:41:51 +00:00
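A minimal sketch of the %agg.tmp pattern mentioned above (hypothetical struct type, callee, and sizes; the 5-argument memcpy spelling matches the one used elsewhere in this log). Because the callee only reads its byval argument, the memcpy'd alloca can be dropped and @G read directly:

%T = type { i32, [32 x i8] }
@G = constant %T { i32 7, [32 x i8] zeroinitializer }

declare void @use(%T* byval)
declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1) nounwind

define void @caller() {
  %agg.tmp = alloca %T
  %dst = bitcast %T* %agg.tmp to i8*
  %src = bitcast %T* @G to i8*
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %dst, i8* %src, i64 36, i32 4, i1 false)
  ; byval makes its own copy, so the callee can just as well copy from @G
  call void @use(%T* byval %agg.tmp)
  ret void
}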
Chris Lattner
791e914b1b enhance the "alloca is just a memcpy from constant global"
to ignore calls that obviously can't modify the alloca
because they are readonly/readnone.

llvm-svn: 119683
2010-11-18 06:26:49 +00:00
Chris Lattner
44ccd4643d fix a small oversight in the "eliminate memcpy from constant global"
optimization.  If the alloca that is "memcpy'd from constant" also has
a memcpy from *it*, ignore it: it is a load.  We now optimize the testcase to:

define void @test2() {
  %B = alloca %T
  %a = bitcast %T* @G to i8*
  %b = bitcast %T* %B to i8*
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %b, i8* %a, i64 124, i32 4, i1 false)
  call void @bar(i8* %b)
  ret void
}

previously we would generate:

define void @test() {
  %B = alloca %T
  %b = bitcast %T* %B to i8*
  %G.0 = getelementptr inbounds %T* @G, i32 0, i32 0
  %tmp3 = load i8* %G.0, align 4
  %G.1 = getelementptr inbounds %T* @G, i32 0, i32 1
  %G.15 = bitcast [123 x i8]* %G.1 to i8*
  %1 = bitcast [123 x i8]* %G.1 to i984*
  %srcval = load i984* %1, align 1
  %B.0 = getelementptr inbounds %T* %B, i32 0, i32 0
  store i8 %tmp3, i8* %B.0, align 4
  %B.1 = getelementptr inbounds %T* %B, i32 0, i32 1
  %B.12 = bitcast [123 x i8]* %B.1 to i8*
  %2 = bitcast [123 x i8]* %B.1 to i984*
  store i984 %srcval, i984* %2, align 1
  call void @bar(i8* %b)
  ret void
}

llvm-svn: 119682
2010-11-18 06:20:47 +00:00
Chris Lattner
c1e63bb987 filecheckize
llvm-svn: 119681
2010-11-18 06:16:43 +00:00
Chris Lattner
b1c861e28c deepen my MMX/SRoA hack to avoid hurting non-x86 codegen.
llvm-svn: 112763
2010-09-01 23:09:27 +00:00
Chris Lattner
9759e898f0 add a gross hack to work around a problem that Argiris reported
on llvmdev: SRoA is introducing MMX datatypes like <1 x i64>,
which then cause random problems because the X86 backend is
producing mmx stuff without inserting proper emms calls.

In the short term, force off MMX datatypes.  In the long term,
the X86 backend should not select generic vector types to MMX
registers.  This is being worked on, but won't be done in time
for 2.8.  rdar://8380055

llvm-svn: 112696
2010-09-01 05:14:33 +00:00
Chris Lattner
c7ae149253 filecheckize
llvm-svn: 112695
2010-09-01 05:10:14 +00:00
Chris Lattner
bf009b527a Fix the second half of PR7437: scalarrepl wasn't preserving
address spaces when SRoA'ing memcpy's.

llvm-svn: 107846
2010-07-08 00:27:05 +00:00
Rafael Espindola
d7a63bead9 Remove arm_apcscc from the test files. It is the default and doing this
matches what llvm-gcc and clang now produce.

llvm-svn: 106221
2010-06-17 15:18:27 +00:00
Rafael Espindola
ab5183047b Remove the arm_aapcscc marker from the tests. It is the default
for the linux targets.

llvm-svn: 106029
2010-06-15 19:04:29 +00:00
Chris Lattner
16e1226366 move comment.
llvm-svn: 101433
2010-04-16 01:05:52 +00:00
Chris Lattner
f90092185c fix PR6832: we were using the alignment of a pointer when we
wanted the alignment of the pointee.

llvm-svn: 101432
2010-04-16 01:05:38 +00:00
Devang Patel
9f858ad942 Remove tests that checks @llvm.dbg.stoppoint handling.
llvm-svn: 97493
2010-03-01 20:33:48 +00:00
Bob Wilson
e94d77854f Fix a crash in scalarrepl for memcpy/memmove where the source and destination
are the same.  I had already fixed a similar problem where the source and
destination were different bitcasts derived from the same alloca, but the
previous fix still did not handle the case where both operands are exactly
the same value.  Radar 7552893.

llvm-svn: 93848
2010-01-19 04:32:48 +00:00
Dan Gohman
5fa04f2707 Delete useless trailing semicolons.
llvm-svn: 92740
2010-01-05 17:55:26 +00:00
Bob Wilson
0dc93264b1 Generalize SROA to allow the first index of a GEP to be non-zero. Add a
missing check that an array reference doesn't go past the end of the array,
and remove some redundant checks for in-bounds array and vector references
that are no longer needed.

llvm-svn: 91897
2009-12-22 06:57:14 +00:00
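A short sketch of a GEP whose first index is non-zero but still stays inside the alloca (hypothetical test input, not from the commit):

define i32 @nonzero_first_index(i32 %x) {
entry:
  %a = alloca [4 x i32]
  %base = bitcast [4 x i32]* %a to i32*
  ; the first (and only) GEP index is non-zero
  %p = getelementptr i32* %base, i32 2
  store i32 %x, i32* %p
  ; the usual zero-first-index form of the same element
  %q = getelementptr [4 x i32]* %a, i32 0, i32 2
  %v = load i32* %q
  ret i32 %v
}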
Bob Wilson
03b6955c7f Reapply 91459 with a simple fix for the problem that broke the x86_64-darwin
bootstrap.  This also replaces the WeakVH references that Chris objected to
with normal Value references.

llvm-svn: 91711
2009-12-18 20:14:40 +00:00
Bob Wilson
9d4b46c0e6 Re-revert 91459. It's breaking the x86_64 darwin bootstrap.
llvm-svn: 91607
2009-12-17 18:34:24 +00:00
Daniel Dunbar
c4abbc0ab6 Reapply r91459; it was only unmasking the bug, and since TOT is still broken, having it reverted does no good.
llvm-svn: 91559
2009-12-16 20:09:53 +00:00
Daniel Dunbar
929f303477 Revert "Reapply 91184 with fixes and an addition to the testcase to cover the
problem", this broke llvm-gcc bootstrap for release builds on
x86_64-apple-darwin10.

This reverts commit db22309800b224a9f5f51baf76071d7a93ce59c9.

llvm-svn: 91534
2009-12-16 10:56:17 +00:00
Bob Wilson
8505aa648d Reapply 91184 with fixes and an addition to the testcase to cover the problem
found last time.  Instead of trying to modify the IR while iterating over it,
I've changed it to keep a list of WeakVH references to dead instructions, and
then delete those instructions later.  I also added some special case code to
detect and handle the situation when both operands of a memcpy intrinsic are
referencing the same alloca.

llvm-svn: 91459
2009-12-15 22:00:51 +00:00
Shantonu Sen
9782dd21e0 Remove empty file completely
llvm-svn: 91277
2009-12-14 14:15:15 +00:00
Chris Lattner
6603d21f13 revert r91184, because it causes a crash on a .bc file I just
sent to Bob.

llvm-svn: 91268
2009-12-14 05:11:02 +00:00
Bob Wilson
8486cae4ce Revise scalar replacement to be more flexible about handling bitcasts and GEPs.
While scanning through the uses of an alloca, keep track of the current offset
relative to the start of the alloca, and check memory references to see if
the offset & size correspond to a component within the alloca.  This has the
nice benefit of unifying much of the code from isSafeUseOfAllocation,
isSafeElementUse, and isSafeUseOfBitCastedAllocation.  The code to rewrite
the uses of a promoted alloca, after it is determined to be safe, is
reorganized in the same way.

Also, when rewriting GEP instructions, mark them as "in-bounds" since all the
indices are known to be safe.

llvm-svn: 91184
2009-12-11 23:47:40 +00:00
Chris Lattner
cdfa9dadf1 fix PR5436 by making the 'simple' case of SRoA not promote out of range
array indexes.  The "complex" case of SRoA still handles them correctly.

This fixes a weirdness where we'd correctly avoid transforming A[0][42] if
the 42 was too large, but we'd only do it if it was one gep, not two separate
ones.

llvm-svn: 90007
2009-11-27 16:37:41 +00:00
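A sketch of the two shapes this distinguishes (hypothetical test, not the commit's): the same out-of-range element written as a single GEP with both indices and as two separate GEPs; after this fix the "simple" SRoA path declines to promote either form:

define i32 @out_of_range_index() {
entry:
  %A = alloca [1 x [8 x i32]]
  ; one GEP carrying both indices; 42 is past the end of the inner array
  %p1 = getelementptr [1 x [8 x i32]]* %A, i32 0, i32 0, i32 42
  %v1 = load i32* %p1
  ; the same access split across two GEPs
  %row = getelementptr [1 x [8 x i32]]* %A, i32 0, i32 0
  %p2 = getelementptr [8 x i32]* %row, i32 0, i32 42
  %v2 = load i32* %p2
  %s = add i32 %v1, %v2
  ret i32 %s
}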
Chris Lattner
02211273c7 filecheckize
llvm-svn: 90006
2009-11-27 16:31:59 +00:00
Kenneth Uildriks
e711736014 Make opt default to not adding a target data string and update tests that depend on target data to supply it within the test
llvm-svn: 85900
2009-11-03 15:29:06 +00:00
Dan Gohman
205b641954 Change tests from "opt %s" to "opt < %s" so that opt doesn't see the
input filename and doesn't print it in the output, which keeps grep lines
in the tests from unintentionally matching strings in the input filename.

llvm-svn: 81537
2009-09-11 18:01:28 +00:00
Dan Gohman
c95df8b6d8 Use opt -S instead of piping bitcode output through llvm-dis.
llvm-svn: 81257
2009-09-08 22:34:10 +00:00
Dan Gohman
8d84372836 Change these tests to feed the assembly files to opt directly, instead
of using llvm-as, now that opt supports this.

llvm-svn: 81226
2009-09-08 16:50:01 +00:00
Nick Lewycky
e791519ea3 Don't crash trying to promote VLAs.
llvm-svn: 79226
2009-08-17 05:37:31 +00:00
Dan Gohman
5f6f8101d5 Split the Add, Sub, and Mul instruction opcodes into separate
integer and floating-point opcodes, introducing
FAdd, FSub, and FMul.

For now, the AsmParser, BitcodeReader, and IRBuilder all preserve
backwards compatibility, and the Core LLVM APIs preserve backwards
compatibility for IR producers. Most front-ends won't need to change
immediately.

This implements the first step of the plan outlined here:
http://nondot.org/sabre/LLVMNotes/IntegerOverflow.txt

llvm-svn: 72897
2009-06-04 22:49:04 +00:00
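A small sketch of the opcode split (hedged; this is simply what the resulting IR looks like, with invented values): floating-point arithmetic uses the new FAdd/FSub/FMul spellings, while add/sub/mul remain integer-only:

define float @mixed_arith(float %a, float %b, i32 %i, i32 %j) {
entry:
  ; floating-point now uses the dedicated opcodes
  %fsum = fadd float %a, %b
  %fprod = fmul float %fsum, %b
  ; integer arithmetic keeps the original opcodes
  %isum = add i32 %i, %j
  %iprod = mul i32 %isum, %j
  %conv = sitofp i32 %iprod to float
  %r = fadd float %fprod, %conv
  ret float %r
}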
Eli Friedman
2b0edc3327 PR4286: Make RewriteLoadUserOfWholeAlloca and
RewriteStoreUserOfWholeAlloca deal with tail padding because 
isSafeUseOfBitCastedAllocation expects them to.  Otherwise, we crash 
trying to erase the bitcast.

llvm-svn: 72688
2009-06-01 09:14:32 +00:00
Chris Lattner
0fd5aea274 fix RewriteStoreUserOfWholeAlloca to use the correct type size
method, fixing a crash on PR4146.  While the store will 
ultimately overwrite the "padded size" number of bits in memory,
the stored value may be a subset of this size.  This function
only wants to handle the case where all bits are stored.

llvm-svn: 71224
2009-05-08 15:54:41 +00:00
Chris Lattner
95aad4d625 fix a crash on a pointless but valid zero-length memset, rdar://6808691
llvm-svn: 69680
2009-04-21 16:52:12 +00:00
Zhou Sheng
90fc23d03d Fix a bug.
If I->use_empty(), this method should return false.

llvm-svn: 67180
2009-03-18 07:56:13 +00:00
Chris Lattner
f05ebf0849 teach SROA to promote vector allocas that have a memset into them to
a vector type instead of an integer type.

llvm-svn: 66368
2009-03-08 04:17:04 +00:00
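A minimal sketch of a vector alloca with a memset into it (hypothetical; it uses the later 5-argument memset spelling rather than the intrinsic form current in 2009). After this change SROA keeps the <4 x float> type instead of rewriting the alloca as a wide integer:

declare void @llvm.memset.p0i8.i64(i8*, i8, i64, i32, i1) nounwind

define <4 x float> @memset_into_vector(<4 x float> %v) {
entry:
  %a = alloca <4 x float>
  store <4 x float> %v, <4 x float>* %a
  %p = bitcast <4 x float>* %a to i8*
  ; zero all 16 bytes of the vector
  call void @llvm.memset.p0i8.i64(i8* %p, i8 0, i64 16, i32 16, i1 false)
  %r = load <4 x float>* %a
  ret <4 x float> %r
}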
Chris Lattner
54d2292fe5 Enhance SROA to "promote to scalar" allocas which are
memcpy/memmove'd into or out of.  This fixes a serious
perf issue that Nate ran into.

llvm-svn: 66366
2009-03-08 04:04:21 +00:00
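A hedged sketch of the "promote to scalar" memcpy pattern (hypothetical struct and names; 5-argument memcpy spelling as used elsewhere in this log). The alloca is only ever filled and drained by memcpy, so it can be rewritten as a single scalar value:

%pair = type { i32, i32 }

declare void @llvm.memcpy.p0i8.p0i8.i64(i8*, i8*, i64, i32, i1) nounwind

define void @copy_through(%pair* %src, %pair* %dst) {
entry:
  %tmp = alloca %pair
  %t = bitcast %pair* %tmp to i8*
  %s = bitcast %pair* %src to i8*
  %d = bitcast %pair* %dst to i8*
  ; copy in, then copy out: the alloca never needs to live in memory
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %t, i8* %s, i64 8, i32 4, i1 false)
  call void @llvm.memcpy.p0i8.p0i8.i64(i8* %d, i8* %t, i64 8, i32 4, i1 false)
  ret void
}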
Devang Patel
12e9aa7629 While converting an aggregate to a scalar, ignore and remove the aggregate's debug info.
llvm-svn: 66262
2009-03-06 07:03:54 +00:00
Chris Lattner
5051e7afde Fix PR3720 by properly propagating alignment information from memcpy/memmove
onto element accesses.

llvm-svn: 66053
2009-03-04 19:20:50 +00:00
Chris Lattner
76fd170cbc adjust for asmprinter change.
llvm-svn: 65741
2009-03-01 00:26:51 +00:00
Devang Patel
7377e7aa89 Enable scalar replacement of an AllocaInst when one of its users is dbg info.
llvm-svn: 64207
2009-02-10 07:00:59 +00:00
Chris Lattner
5118081112 fix PR3489, use bits instead of bytes.
llvm-svn: 63916
2009-02-06 04:34:07 +00:00
Chris Lattner
4d41e7d461 teach "convert from scalar" to handle loads of fca's.
llvm-svn: 63659
2009-02-03 21:08:45 +00:00
Chris Lattner
eb3d568867 make scalar conversion handle stores of first class
aggregate values.  Loads are not yet handled (coming
soon to an sroa near you).

llvm-svn: 63649
2009-02-03 19:30:11 +00:00
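A compact sketch covering the two entries above (hypothetical IR, not the commits' tests): a first-class aggregate is stored into, and later loaded back out of, an alloca that the "convert to scalar" code views as a single integer:

define { i32, i32 } @fca_via_scalar({ i32, i32 } %in, i32 %x) {
entry:
  %mem = alloca i64
  %p = bitcast i64* %mem to { i32, i32 }*
  ; store of a first-class aggregate through a bitcast of the alloca
  store { i32, i32 } %in, { i32, i32 }* %p
  %lo = bitcast i64* %mem to i32*
  store i32 %x, i32* %lo
  ; load of a first-class aggregate from the same memory
  %out = load { i32, i32 }* %p
  ret { i32, i32 } %out
}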
Chris Lattner
5f3116636b Make SROA produce a vector only when the alloca is actually
accessed at least once as a vector.  This prevents it from
compiling the example in not-a-vector into:

define double @test(double %A, double %B) {
	%tmp4 = insertelement <7 x double> undef, double %A, i32 0
	%tmp = insertelement <7 x double> %tmp4, double %B, i32 4
	%tmp2 = extractelement <7 x double> %tmp, i32 4
	ret double %tmp2
}

instead, producing the integer code.  Producing vectors when they
aren't otherwise in the program is dangerous because a lot of other
code treats them carefully and doesn't want to break them down.
OTOH, many things want to break down tasty i448's.

llvm-svn: 63638
2009-02-03 18:15:05 +00:00
Chris Lattner
028861e55b this produces an undefined result, just check that the alloca is gone
and that sroa doesn't crash.

llvm-svn: 63637
2009-02-03 18:13:00 +00:00
Chris Lattner
447b5517bc add another case of undefined behavior without crashing, PR3466.
llvm-svn: 63620
2009-02-03 07:08:57 +00:00
Chris Lattner
b47738daab Teach ConvertUsesToScalar to handle memset, allowing it to handle
crazy cases like:

struct f {  int A, B, C, D, E, F; };
short test4() {
  struct f A;
  A.A = 1;
  memset(&A.B, 2, 12);
  return A.C;
}

llvm-svn: 63596
2009-02-03 02:01:43 +00:00
Chris Lattner
2dae393299 rearrange how SRoA handles promotion of allocas to vectors.
With the new world order, it can handle cases where the first
store into the alloca is an element of the vector, instead of
requiring the first analyzed store to have the vector type 
itself.  This allows us to un-xfail 
test/CodeGen/X86/vec_ins_extract.ll.

llvm-svn: 63590
2009-02-03 01:30:09 +00:00
Chris Lattner
7f52743cca this test produces an undefined value, we don't care
what it is, but we do want the alloca promoted.

llvm-svn: 63587
2009-02-03 01:13:52 +00:00
Chris Lattner
5c43f87c53 update test
llvm-svn: 63532
2009-02-02 18:12:58 +00:00
Chris Lattner
ce09ac0c3d Fix a bug which caused us to miscompile a couple of Ada
tests.  Thanks for the beautiful reduced testcase Duncan!

llvm-svn: 63529
2009-02-02 18:02:59 +00:00
Chris Lattner
235913be77 Simplify and generalize the SROA "convert to scalar" transformation to
be able to handle *ANY* alloca that is poked by loads and stores of 
bitcasts and GEPs with constant offsets.  Before the code had a number
of annoying limitations and caused it to miss cases such as storing into
holes in structs and complex casts (as in bitfield-sroa) where we had
unions of bitfields etc.  This also handles a number of important cases
that are exposed due to the ABI lowering stuff we do to pass stuff by
value.

One case that is pretty great is that we compile 
2006-11-07-InvalidArrayPromote.ll into:

define i32 @func(<4 x float> %v0, <4 x float> %v1) nounwind {
	%tmp10 = call <4 x i32> @llvm.x86.sse2.cvttps2dq(<4 x float> %v1)
	%tmp105 = bitcast <4 x i32> %tmp10 to i128
	%tmp1056 = zext i128 %tmp105 to i256	
	%tmp.upgrd.43 = lshr i256 %tmp1056, 96
	%tmp.upgrd.44 = trunc i256 %tmp.upgrd.43 to i32	
	ret i32 %tmp.upgrd.44
}

which turns into:

_func:
	subl	$28, %esp
	cvttps2dq	%xmm1, %xmm0
	movaps	%xmm0, (%esp)
	movl	12(%esp), %eax
	addl	$28, %esp
	ret

Which is pretty good code all things considering :).

One effect of this is that SROA will start generating arbitrary bitwidth 
integers that are a multiple of 8 bits.  In the case above, we got a 
256 bit integer, but the codegen guys assure me that it can handle the 
simple and/or/shift/zext stuff that we're doing on these operations.

This addresses rdar://6532315

llvm-svn: 63469
2009-01-31 02:28:54 +00:00
Chris Lattner
f9dd07a3c3 Fix some issues with volatility, move "CanConvertToScalar" check
after the others.

llvm-svn: 63227
2009-01-28 20:16:43 +00:00
Chris Lattner
2712dbe282 strengthen this test.
llvm-svn: 63222
2009-01-28 19:29:30 +00:00
Chris Lattner
1219b4e6bc Fix PR3304
llvm-svn: 61995
2009-01-09 18:18:43 +00:00
Chris Lattner
60a03a2f36 This implements the second half of the fix for PR3290, handling
loads from allocas that cover the entire aggregate.  This handles
some memcpy/byval cases that are produced by llvm-gcc.  This triggers
a few times in kc++ (with std::pair<std::_Rb_tree_const_iterator
<kc::impl_abstract_phylum*>,bool>) and once in 176.gcc (with %struct..0anon).

llvm-svn: 61915
2009-01-08 05:42:05 +00:00
Chris Lattner
8adf14ea21 Implement the first half of PR3290: if there is a store of an
integer to a (transitive) bitcast of the alloca and if that integer
has the full size of the alloca, then it clobbers the whole thing.
Handle this by extracting pieces out of the stored integer and 
filing them away in the SROA'd elements.

This triggers fairly frequently because the CFE uses integers to
pass small structs by value and the inliner exposes these.  For 
example, in kimwitu++, I see a bunch of these with i64 stores to
"%struct.std::pair<std::_Rb_tree_const_iterator<kc::impl_abstract_phylum*>,bool>"

In 176.gcc I see a few i32 stores to "%struct..0anon".

In the testcase, this is a difference between compiling test1 to:

_test1:
	subl	$12, %esp
	movl	20(%esp), %eax
	movl	%eax, 4(%esp)
	movl	16(%esp), %eax
	movl	%eax, (%esp)
	movl	(%esp), %eax
	addl	4(%esp), %eax
	addl	$12, %esp
	ret

vs:

_test1:
	movl	8(%esp), %eax
	addl	4(%esp), %eax
	ret

The second half of this will be to handle loads of the same form.

llvm-svn: 61853
2009-01-07 08:11:13 +00:00
Matthijs Kooijman
12cd5d041d Allow scalarrepl to treat an all-zero GEP just like a bitcast.
This includes not marking a GEP involving a vector as unsafe, but only when it
has all zero indices. This allows scalarrepl to work in a few more cases.

llvm-svn: 57177
2008-10-06 16:23:31 +00:00
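A minimal sketch of an all-zero-index GEP that can now be treated like a bitcast of the alloca (hypothetical test input; the commit's vector-indexing case is only hinted at by the vector element type here):

define <4 x float> @all_zero_gep(<4 x float> %v) {
entry:
  %a = alloca { <4 x float> }
  ; all indices are zero, so this GEP is equivalent to a bitcast of the alloca
  %p = getelementptr { <4 x float> }* %a, i32 0, i32 0
  store <4 x float> %v, <4 x float>* %p
  %r = load <4 x float>* %p
  ret <4 x float> %r
}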
Matthijs Kooijman
9fb8d5f5b1 Add a testcase showing that scalarrepl supports first class structs.
I originally wrote this testcase to show that scalarrepl didn't support them, but
it turned out it does.  Better to add the testcase anyway.

llvm-svn: 56781
2008-09-29 10:42:13 +00:00
Chris Lattner
72ee6ebb0a Fix PR2423 by checking all indices for out of range access, not only
indices that start with an array subscript.  x->field[10000] is just 
as bad as (*X)[14][10000].

llvm-svn: 55226
2008-08-23 05:21:06 +00:00
Chris Lattner
d80c865a09 Fix PR2369 by making scalarrepl more careful about promoting
structures.  Its default threshold is to promote things that are
smaller than 128 bytes, which is sane.  However, it is not sane
to do this for things that turn into 128 *registers*.  Add a cap
on the number of registers introduced, defaulting to 128/4=32.

llvm-svn: 52611
2008-06-22 17:46:21 +00:00
Evan Cheng
66ce588b87 Fix some tests.
llvm-svn: 52245
2008-06-12 21:23:38 +00:00
Matthijs Kooijman
e8fb62fb3c Fix some escaping and quoting in RUN lines, mainly involving { and <. In two
cases quoting of <{ didn't work out, so I changed the grep to check for }>
instead.

This fixes 7 testcases that were not properly running before.

llvm-svn: 52182
2008-06-10 16:04:47 +00:00
Matthijs Kooijman
6e1c286f53 Teach ScalarReplAggregates how stores and loads of first-class aggregates
work and how to replace them with individual values. Also, when trying to
replace an aggregate that is used by a load or store with a single (large)
integer, don't crash (but don't replace the aggregate either).

Also adds a testcase for both structs and arrays.

llvm-svn: 51997
2008-06-05 12:51:53 +00:00
Gabor Greif
807c2df887 sabre brings to my attention that the 'tr' suffix is also obsolete
llvm-svn: 51349
2008-05-20 21:00:03 +00:00
Gabor Greif
d8a4dbb5da Rename the last test with a .llx extension to .ll, and resolve a duplicate test by renaming it to isnan2. Now that no test has the .llx ending, there is no need to search for it from dg.exp either.
llvm-svn: 51328
2008-05-20 19:52:04 +00:00
Tanya Lattner
9bd47b05dd Upgrade tests to not use llvm-upgrade.
llvm-svn: 48484
2008-03-18 04:14:37 +00:00
Chris Lattner
75f5d14574 fix a bug Anders ran into where scalarrepl would crash when promoting
a union containing a vector and an array whose elements were smaller than
the vector elements.  This means we need to compile the load of the
array elements into an extractelement plus a truncate.

llvm-svn: 47752
2008-02-29 07:12:06 +00:00
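A hedged sketch of the union-of-vector-and-array case (hypothetical IR with invented types): the array view has narrower elements than the vector, so the load of one array element has to become an extractelement plus a truncate:

define i16 @union_vec_and_array(<4 x i32> %v) {
entry:
  %u = alloca <4 x i32>
  store <4 x i32> %v, <4 x i32>* %u
  ; the same memory viewed as an array of elements narrower than i32
  %ap = bitcast <4 x i32>* %u to [8 x i16]*
  %ep = getelementptr [8 x i16]* %ap, i32 0, i32 2
  ; after the fix this load is compiled to an extractelement plus a truncate
  %e = load i16* %ep
  ret i16 %e
}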
Chris Lattner
83227e350d Fix a bug where scalarrepl would discard the offset if the type matched.
In practice this can only happen on code with already undefined behavior, 
but this is still a good thing to handle correctly.

llvm-svn: 46539
2008-01-30 00:39:15 +00:00
Duncan Sands
662fb070a7 Change uses of getTypeSize to getABITypeSize, getTypeStoreSize
or getTypeSizeInBits as appropriate in ScalarReplAggregates.
The right change to make was not always obvious, so it would
be good to have an sroa guru review this.  While there I noticed
some bugs, and fixed them: (1) arrays of x86 long double have
holes due to alignment padding, but this wasn't being spotted
by HasStructPadding (renamed to HasPadding).  The same goes
for arrays of oddly sized ints.  Vectors also suffer from this,
in fact the problem for vectors is much worse because basic
vector assumptions seem to be broken by vectors of type with
alignment padding.   I didn't try to fix any of these vector
problems.  (2) The code for extracting smaller integers from
larger ones (in the "int union" case) was wrong on big-endian
machines for integers with size not a multiple of 8, like i1.
Probably this is impossible to hit via llvm-gcc, but I fixed
it anyway while there and added a testcase.  I also got rid of
some trailing whitespace and changed a function name which
had an obvious typo in it.

llvm-svn: 43672
2007-11-04 14:43:57 +00:00
John Criswell
57e5ed4b5a Convert .cvsignore files
llvm-svn: 37801
2007-06-29 16:35:07 +00:00
Chris Lattner
282776d211 Testcase for PR1421
llvm-svn: 37357
2007-05-30 06:10:46 +00:00
Chris Lattner
5743ab17d4 testcase for PR1446
llvm-svn: 37325
2007-05-24 18:42:47 +00:00
Chris Lattner
dfc8ec6660 Move Mem2Reg/DifferingTypes.ll -> ScalarRepl/DifferingTypes.ll. -scalarrepl
implements this xform.

llvm-svn: 36804
2007-05-05 22:22:03 +00:00
Chris Lattner
c9f600b6a4 new testcase, should be able to eliminate the alloca and memcpy
llvm-svn: 36428
2007-04-25 06:29:34 +00:00
Reid Spencer
df17fa8ef9 For PR1319:
Remove && from the end of the lines to prevent tests from throwing run
lines into the background. Also, clean up places where the same command
is run multiple times by using a temporary file.

llvm-svn: 36142
2007-04-16 17:36:08 +00:00
Reid Spencer
11d48d6932 For PR1319:
Upgrade to use new Tcl exec based test harness. This exposes 3 bugs that
were previously not being reported:
test/Transforms/GlobalDCE/2002-08-17-FunctionDGE.ll
test/Transforms/GlobalOpt/memset.ll
test/Transforms/IndVarsSimplify/exit_value_tests.llx

llvm-svn: 36065
2007-04-15 09:21:47 +00:00
Reid Spencer
56b310ae49 Make the llvm-runtest function much more amenable by eliminating all the
global variables that needed to be passed in. This makes it possible to
add new global variables with only a couple changes (Makefile and llvm-dg.exp)
instead of touching every single dg.exp file.

llvm-svn: 35918
2007-04-11 19:56:59 +00:00
Reid Spencer
50ee6b8557 Remove use of implementation keyword.
llvm-svn: 35412
2007-03-28 02:38:26 +00:00
Chris Lattner
d95a054122 new testcase
llvm-svn: 35397
2007-03-28 01:31:33 +00:00
Chris Lattner
e13d458bd7 add a testcase the recent patches fail on.
llvm-svn: 35167
2007-03-19 18:25:48 +00:00
Chris Lattner
9623388b12 add PR#
llvm-svn: 35151
2007-03-19 00:17:19 +00:00
Chris Lattner
bafeb87ca6 add pr#
llvm-svn: 35149
2007-03-19 00:15:43 +00:00
Chris Lattner
a068bd03cf new testcase
llvm-svn: 35148
2007-03-19 00:11:30 +00:00
Chris Lattner
6e0b0d4a3a testcase for SROA with memset etc
llvm-svn: 35147
2007-03-19 00:09:00 +00:00
Reid Spencer
4572ce85b0 The regression is gone; don't try to find it on a clean target.
llvm-svn: 33296
2007-01-17 07:59:14 +00:00