Commit Graph

63962 Commits

Author SHA1 Message Date
NAKAMURA Takumi
415a2a0a9b Add me to the "blame list"!
And it is my 1st test commit.

llvm-svn: 112384
2010-08-28 20:24:43 +00:00
Dan Gohman
d499f5cbb9 Remove obsolete keywords which are no longer relevant.
llvm-svn: 112382
2010-08-28 20:14:05 +00:00
Dan Gohman
d8226c08e0 Remove unions from the vim syntax highlighting.
llvm-svn: 112381
2010-08-28 20:11:28 +00:00
Chris Lattner
8cb4abbc0e fix the buildvector->insertp[sd] logic to not always create a redundant
insertp[sd] $0, which is a noop.  Before:

_f32:                                   ## @f32
	pshufd	$1, %xmm1, %xmm2
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm2, %xmm3
	addss	%xmm1, %xmm0
                                        ## kill: XMM0<def> XMM0<kill> XMM0<def>
	insertps	$0, %xmm0, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

after:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movdqa	%xmm2, %xmm0
	insertps	$16, %xmm3, %xmm0
	ret

The extra movs are due to a random (poor) scheduling decision.

llvm-svn: 112379
2010-08-28 17:59:08 +00:00
Chris Lattner
c3b630d64b fix the BuildVector -> unpcklps logic to not do pointless shuffles
when the top elements of a vector are undefined.  This happens all
the time for X86-64 ABI stuff because only the low 2 elements of
a 4-element vector are defined.  For example, on:

_Complex float f32(_Complex float A, _Complex float B) {
  return A+B;
}

We used to produce (with SSE2, SSE4.1+ uses insertps):

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$16, %xmm2, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm1
	movdqa	%xmm2, %xmm0
	unpcklps	%xmm1, %xmm0
	ret

We now produce:

_f32:                                   ## @f32
	movdqa	%xmm0, %xmm2
	addss	%xmm1, %xmm2
	pshufd	$1, %xmm1, %xmm1
	pshufd	$1, %xmm0, %xmm3
	addss	%xmm1, %xmm3
	movaps	%xmm2, %xmm0
	unpcklps	%xmm3, %xmm0
	ret

This implements rdar://8368414

llvm-svn: 112378
2010-08-28 17:28:30 +00:00
Chris Lattner
7fa5fa1207 improve comments in the unpcklps generating logic, introduce
a new EltStride variable instead of reusing the NumElems variable
for a non-obvious purpose.  No functionality change.

llvm-svn: 112377
2010-08-28 17:15:43 +00:00
Michael J. Spencer
5cb73a9fb8 Don't cast Win32 FILETIME structs to int64. Patch by Dimitry Andric!
According to the Microsoft documentation here:
http://msdn.microsoft.com/en-us/library/ms724284%28VS.85%29.aspx

this cast used in lib/System/Win32/Path.inc:

__int64 ft = *reinterpret_cast<__int64*>(&fi.ftLastWriteTime);

should not be done.  The documentation says: "Do not cast a pointer to a
FILETIME structure to either a ULARGE_INTEGER* or __int64* value because
it can cause alignment faults on 64-bit Windows."
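
For reference, a minimal sketch of the documented-safe alternative, copying the
two 32-bit halves into a ULARGE_INTEGER and reading its QuadPart (not
necessarily the exact code committed):

ULARGE_INTEGER ull;
ull.LowPart  = fi.ftLastWriteTime.dwLowDateTime;
ull.HighPart = fi.ftLastWriteTime.dwHighDateTime;
__int64 ft = static_cast<__int64>(ull.QuadPart);  // no pointer cast, no alignment fault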

llvm-svn: 112376
2010-08-28 16:39:32 +00:00
Chris Lattner
d16c80e27f remove the MSIL backend. It isn't maintained, is buggy, has no testcases
and hasn't kept up with ToT.  Approved by Anton.

llvm-svn: 112375
2010-08-28 16:33:36 +00:00
Benjamin Kramer
92e13eeec0 Update ocaml test.
llvm-svn: 112364
2010-08-28 10:29:41 +00:00
Benjamin Kramer
4c15ccc237 Remove unions from the ocaml bindings.
llvm-svn: 112363
2010-08-28 09:47:42 +00:00
Bob Wilson
956e07b985 Use pseudo instructions for VST1 and VST2.
llvm-svn: 112357
2010-08-28 05:12:57 +00:00
Chris Lattner
ecf276b787 remove unions from LLVM IR. They are severely buggy and not
being actively maintained, improved, or extended.

llvm-svn: 112356
2010-08-28 04:09:24 +00:00
Chris Lattner
4b49ada02c remove the ABCD and SSI passes. They don't have any clients that
I'm aware of, aren't maintained, and LVI will be replacing their value.
nlewycky approved this on irc.

llvm-svn: 112355
2010-08-28 03:51:24 +00:00
Chris Lattner
e7afb6fbb0 remove dead proto
llvm-svn: 112354
2010-08-28 03:45:03 +00:00
Chris Lattner
7a86b354fc more dead thing zapping.
llvm-svn: 112353
2010-08-28 03:43:50 +00:00
Chris Lattner
faae20ef23 zap dead method
llvm-svn: 112352
2010-08-28 03:42:45 +00:00
Chris Lattner
fc1da78d16 for completeness, allow undef also.
llvm-svn: 112351
2010-08-28 03:36:51 +00:00
Chris Lattner
b2dbdbc795 squish dead code.
llvm-svn: 112350
2010-08-28 03:21:03 +00:00
Chris Lattner
fcf6250d57 zap dead code
llvm-svn: 112349
2010-08-28 03:18:45 +00:00
Bruno Cardoso Lopes
1052e6d5d9 Clean up the logic of vector shuffles -> vector shifts.
Also teach this logic how to handle target-specific shuffles if
needed; this is necessary while searching recursively for zeroed
scalar elements in vector shuffle operands.

llvm-svn: 112348
2010-08-28 02:46:39 +00:00
Chris Lattner
b61cf1e296 handle the constant case of vector insertion. For something
like this:

struct S { float A, B, C, D; };

struct S g;
struct S bar() { 
  struct S A = g;
  ++A.B;
  A.A = 42;
  return A;
}

we now generate:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	pshufd	$16, %xmm2, %xmm2
	movss	LCPI1_1(%rip), %xmm0
	pshufd	$16, %xmm0, %xmm0
	unpcklps	%xmm2, %xmm0
	ret

instead of:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	12(%rax), %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	unpcklps	%xmm0, %xmm1
	addss	LCPI1_0(%rip), %xmm2
	movd	%xmm2, %eax
	shlq	$32, %rax
	addq	$1109917696, %rax       ## imm = 0x42280000
	movd	%rax, %xmm0
	ret

llvm-svn: 112345
2010-08-28 01:50:57 +00:00
Duncan Sands
604b706b87 Straighten out any triple strings passed on the command line before
they hit the rest of the system.
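
A sketch of the kind of canonicalization involved, assuming Triple::normalize
is the relevant entry point (illustrative only, not the call site touched
here):

#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Triple.h"
#include "llvm/Support/raw_ostream.h"

// Components given out of order or with pieces missing are shuffled into the
// canonical arch-vendor-os[-environment] order before the rest of the system
// sees them.
void printNormalized(llvm::StringRef TripleStr) {
  llvm::outs() << llvm::Triple::normalize(TripleStr) << "\n";
}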

llvm-svn: 112344
2010-08-28 01:30:02 +00:00
Chris Lattner
c70b0c0ee7 optimize bitcasts from large integers to vector into vector
element insertion from the pieces that feed into the vector.
This handles a pattern that occurs frequently due to code
generated for the x86-64 ABI.  We now compile something like
this:

struct S { float A, B, C, D; };
struct S g;
struct S bar() { 
  struct S A = g;
  ++A.A;
  ++A.C;
  return A;
}

into all nice vector operations:

_bar:                                   ## @bar
## BB#0:                                ## %entry
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	pshufd	$16, %xmm0, %xmm0
	movss	4(%rax), %xmm2
	movss	12(%rax), %xmm3
	pshufd	$16, %xmm2, %xmm2
	unpcklps	%xmm2, %xmm0
	addss	8(%rax), %xmm1
	pshufd	$16, %xmm1, %xmm1
	pshufd	$16, %xmm3, %xmm2
	unpcklps	%xmm2, %xmm1
	ret

instead of icky integer operations:

_bar:                                   ## @bar
	movq	_g@GOTPCREL(%rip), %rax
	movss	LCPI1_0(%rip), %xmm1
	movss	(%rax), %xmm0
	addss	%xmm1, %xmm0
	movd	%xmm0, %ecx
	movl	4(%rax), %edx
	movl	12(%rax), %esi
	shlq	$32, %rdx
	addq	%rcx, %rdx
	movd	%rdx, %xmm0
	addss	8(%rax), %xmm1
	movd	%xmm1, %eax
	shlq	$32, %rsi
	addq	%rax, %rsi
	movd	%rsi, %xmm1
	ret

This resolves rdar://8360454

llvm-svn: 112343
2010-08-28 01:20:38 +00:00
Dan Gohman
507f5a8ae7 Completely disable tail calls when fast-isel is enabled, as fast-isel
doesn't currently support dealing with this.

llvm-svn: 112341
2010-08-28 00:51:03 +00:00
Dan Gohman
ffab4c6a7d Trim a #include.
llvm-svn: 112340
2010-08-28 00:49:13 +00:00
Dan Gohman
aa972a1ee3 Fix an index calculation thinko.
llvm-svn: 112337
2010-08-28 00:39:27 +00:00
Bob Wilson
abdcae7f20 We don't need to custom-select VLDMQ and VSTMQ anymore.
llvm-svn: 112336
2010-08-28 00:20:11 +00:00
Benjamin Kramer
edb09ef2df Update CMake build. Add newline at end of file.
llvm-svn: 112332
2010-08-28 00:11:12 +00:00
Bob Wilson
412a170b04 When merging Thumb2 loads/stores, do not give up when the offset is one of
the special values that for ARM would be used with IB or DA modes.  Fall
through and consider materializing a new base address if it would be
profitable.

llvm-svn: 112329
2010-08-27 23:57:52 +00:00
Owen Anderson
dc4703bcd5 Add a prototype of a new peephole optimizing pass that uses LazyValueInfo to simplify PHIs and selects.
This pass addresses the missed optimizations from PR2581 and PR4420.
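
A hypothetical source-level pattern of the kind such a pass targets
(illustrative only, not taken from those PRs):

// Inside the 'x > 10' block, lazy value info knows x is positive, so the
// select feeding the return simplifies to plain 'x'.
int clampExample(int x) {
  if (x > 10)
    return x > 0 ? x : -x;  // always 'x' on this path
  return 0;
}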

llvm-svn: 112325
2010-08-27 23:31:36 +00:00
Owen Anderson
f2255ee253 Improve the precision of getConstant().
llvm-svn: 112323
2010-08-27 23:29:38 +00:00
Bob Wilson
31d487d235 Change ARM VFP VLDM/VSTM instructions to use addressing mode #4, just like
all the other LDM/STM instructions.  This fixes asm printer crashes when
compiling with -O0.  I've changed one of the NEON tests (vst3.ll) to run
with -O0 to check this in the future.

Prior to this change VLDM/VSTM used addressing mode #5, but not really.
The offset field was used to hold a count of the number of registers being
loaded or stored, and the AM5 opcode field was expanded to specify the IA
or DB mode, instead of the standard ADD/SUB specifier.  Much of the backend
was not aware of these special cases.  The crashes occurred when rewriting
a frameindex caused the AM5 offset field to be changed so that it did not
have a valid submode.  I don't know exactly what changed to expose this now.
Maybe we've never done much with -O0 and NEON.  Regardless, there's no longer
any reason to keep a count of the VLDM/VSTM registers, so we can use
addressing mode #4 and clean things up in a lot of places.

llvm-svn: 112322
2010-08-27 23:18:17 +00:00
Chris Lattner
08d2f26030 tidy up test.
llvm-svn: 112321
2010-08-27 23:15:21 +00:00
Chris Lattner
79f7f9e3f8 no really, fix the test.
llvm-svn: 112317
2010-08-27 23:05:54 +00:00
Chris Lattner
1cdec6a76b fix this test. It's not clear what it's really testing.
llvm-svn: 112316
2010-08-27 23:05:27 +00:00
Chris Lattner
3f880c2097 Enhance the shift propagator to handle the case when you have:
A = shl x, 42
...
B = lshr ..., 38

which can be transformed into:
A = shl x, 4
...

iff we can prove that the would-be-shifted-in bits
are already zero.  This eliminates two shifts in the testcase
and allows elimination of the whole i128 chain in the real example.
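
A hypothetical source pattern of this shape, with a logical op sitting between
the two shifts (not the committed testcase):

// The shl/lshr pair collapses to a single shl by the net amount (42 - 38 = 4)
// when the bits the lshr would shift in can be proven to already be zero.
unsigned long long propagateShifts(unsigned long long x, unsigned long long m) {
  unsigned long long a = x << 42;  // A = shl x, 42
  unsigned long long b = a & m;    // intervening logical node
  return b >> 38;                  // B = lshr ..., 38
}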

llvm-svn: 112314
2010-08-27 22:53:44 +00:00
Devang Patel
eb68981283 Simplify.
llvm-svn: 112305
2010-08-27 22:25:51 +00:00
Chris Lattner
80632e5fd9 Implement a pretty general logical shift propagation
framework, which is good at ripping through bitfield
operations.  This generalizes a bunch of the existing
xforms that instcombine does, such as 
  (x << c) >> c -> and
to handle intermediate logical nodes.  This is useful for
ripping up the "promote to large integer" code produced by
SRoA.
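
A concrete instance of the classic xform above, for unsigned 32-bit x and
c = 8 (illustrative only):

// Shifting left then logically right by the same amount clears the top 8 bits,
// so this is the same as masking with 0x00FFFFFF.
unsigned maskViaShifts(unsigned x) { return (x << 8) >> 8; }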

llvm-svn: 112304
2010-08-27 22:24:38 +00:00
Bob Wilson
d107dd5cd2 Fix a comment typo.
llvm-svn: 112302
2010-08-27 21:56:59 +00:00
Bob Wilson
09b040a386 Unsigned value cannot be < 0.
llvm-svn: 112300
2010-08-27 21:44:35 +00:00
Dan Gohman
ee0a450648 When merging adjacent operands, scan ahead and merge all equal
adjacent operands at once, instead of just two at a time.
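
A sketch of the scanning strategy only (not the actual ScalarEvolution code),
assuming the operand list is already sorted so that equal operands sit next to
each other:

#include <utility>
#include <vector>

// Collapse each run of equal adjacent operands in one forward scan, recording
// (operand, multiplicity), instead of folding a single pair per pass.
std::vector<std::pair<int, unsigned>> collapseRuns(const std::vector<int> &Ops) {
  std::vector<std::pair<int, unsigned>> Out;
  for (size_t i = 0, e = Ops.size(); i != e;) {
    size_t j = i + 1;
    while (j != e && Ops[j] == Ops[i])
      ++j;                                     // scan ahead over the whole run
    Out.push_back({Ops[i], unsigned(j - i)});  // merge the run at once
    i = j;
  }
  return Out;
}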

llvm-svn: 112299
2010-08-27 21:39:59 +00:00
Eric Christopher
67801775eb Fix a couple of typos.
Patch by Cameron Esfahani!

llvm-svn: 112297
2010-08-27 21:38:11 +00:00
Chris Lattner
5ed3d56ced remove some special shift cases that have been subsumed into the
more general simplify demanded bits logic.

llvm-svn: 112291
2010-08-27 21:04:34 +00:00
Dan Gohman
f950201d01 Make the {A,+,B}<L> + {C,+,D}<L> --> Other + {A+C,+,B+D}<L>
transformation collect all the addrecs with the same loop
and combine them at once rather than starting everything over
at the first chance.
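
For example, {0,+,2}<L> + {5,+,3}<L> + {1,+,4}<L> now folds to {6,+,9}<L> in a
single step, since addrecs over the same loop add componentwise.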

llvm-svn: 112290
2010-08-27 20:45:56 +00:00
Chris Lattner
1a15c898b9 merge and filecheckize test
llvm-svn: 112289
2010-08-27 20:44:45 +00:00
Chris Lattner
a571568019 merge two tests
llvm-svn: 112288
2010-08-27 20:42:10 +00:00
Bill Wendling
09a19ea0bf Remove now unneeded command line flag that enables 'optimize compares.'
llvm-svn: 112287
2010-08-27 20:39:09 +00:00
Owen Anderson
a1a80a3acd Fix typos in comments.
llvm-svn: 112286
2010-08-27 20:32:56 +00:00
Chris Lattner
866b888095 teach the truncation optimization that an entire chain of
computation can be truncated if it is fed by a sext/zext whose type doesn't
have to exactly match the truncation result type.
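
A hypothetical example of the pattern (not the committed testcase), where the
operands are extended from i8 but the result is truncated to i16:

// The arithmetic is widened to 64 bits and truncated to 16 bits at the end;
// the whole chain could instead be evaluated in 16 bits even though the
// feeding zexts start from 8-bit values rather than 16-bit ones.
unsigned short narrowChain(unsigned char a, unsigned char b) {
  unsigned long long x = (unsigned long long)a * 3 + b;
  return (unsigned short)x;
}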

llvm-svn: 112285
2010-08-27 20:32:06 +00:00
Dan Gohman
636c57c5de Switch ScalarEvolution's main Value*->SCEV* map from std::map
to DenseMap.

llvm-svn: 112281
2010-08-27 18:55:03 +00:00