mirror of https://github.com/RPCS3/llvm-mirror.git
Commit Graph

504 Commits

Author SHA1 Message Date
Evan Cheng
f824b47188 Use movq to move low half of XMM register and zero-extend the rest.
llvm-svn: 50874
2008-05-08 22:35:02 +00:00
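For illustration, a minimal C sketch of the pattern this commit selects (hypothetical example, not from the commit; _mm_move_epi64 is the SSE2 intrinsic that compiles to this zero-extending movq form):

#include <emmintrin.h>

/* movq xmm,xmm: keep the low 64 bits of v, zero the upper 64 */
__m128i low_half(__m128i v) {
    return _mm_move_epi64(v);
}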
Evan Cheng
f97e716511 Handle vector move / load which zero the destination register top bits (i.e. movd, movq, movss (addr), movsd (addr)) with X86 specific dag combine.
llvm-svn: 50838
2008-05-08 00:57:18 +00:00
Evan Cheng
7ff000c175 Add nounwind.
llvm-svn: 50837
2008-05-07 22:59:08 +00:00
Evan Cheng
c86c035346 Yet another nasty spiller bug.
%ecx = op
store %cl<kill>, (addr)
(addr) = op %al

It's not safe to unfold the last operand and eliminate the store even though %cl is marked kill: it's a sub-register use, which means one of its super-registers may still be used below.

llvm-svn: 50794
2008-05-07 00:49:28 +00:00
Anton Korobeynikov
a9ff11ea5f Use target triple in tests, not 'realign-stack=0' option. Per request.
llvm-svn: 50778
2008-05-06 23:09:29 +00:00
Evan Cheng
a84ed06284 Fix PR2287. In 64-bit mode, Darwin passes MMX values in registers; Linux does not.
llvm-svn: 50716
2008-05-06 07:23:50 +00:00
Mon P Wang
84a269e023 Added additional atomic intrinsics: and, or, xor, min, and max.
llvm-svn: 50663
2008-05-05 19:05:59 +00:00
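A hedged C sketch of the kind of source that lowers to these intrinsics (hypothetical example; the __sync builtins map onto the llvm.atomic.* family, and min/max have no __sync equivalent, so only 'or' is shown):

/* atomically p |= v, returning the old value */
int fetch_or(int *p, int v) {
    return __sync_fetch_and_or(p, v);
}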
Chris Lattner
ca94848f66 no need for eh info
llvm-svn: 50658
2008-05-05 18:24:33 +00:00
Dan Gohman
c860d9c77c Add AsmPrinter support for emitting a directive to declare that
the code being generated does not require an executable stack.

Also, add target-specific code to make use of this on Linux
on x86. 

llvm-svn: 50634
2008-05-05 00:28:39 +00:00
Evan Cheng
a7747df955 Select vector shift with non-immediate i32 shift amount operand by first moving the operand into the right register.
llvm-svn: 50619
2008-05-04 09:15:50 +00:00
Evan Cheng
c1c2adbfc6 Add separate intrinsics for MMX / SSE shifts with i32 integer operands. This allows us to simplify the horribly complicated matching code.
llvm-svn: 50601
2008-05-03 00:52:09 +00:00
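A small C sketch of the case the two shift commits above address (hypothetical example): when the shift amount is a runtime i32 rather than an immediate, the backend must first move it into an XMM register before it can select the vector shift instruction.

#include <emmintrin.h>

/* shift four 32-bit lanes left by a runtime amount n */
__m128i shift_lanes(__m128i v, int n) {
    return _mm_slli_epi32(v, n);   /* n is not a compile-time constant */
}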
Chris Lattner
8cc3e89b87 specify an arch for non-x86 hosts.
llvm-svn: 50576
2008-05-02 15:11:58 +00:00
Chris Lattner
e9bbe8e6b6 don't randomly miscompile seto/setuo just because we are in
-ffast-math mode. This fixes rdar://5902801, a miscompilation
of gcc.dg/builtins-8.c.

Bill, please pull this into Tak.

llvm-svn: 50523
2008-05-01 07:26:11 +00:00
Arnold Schwaighofer
29a654857a Really commit the test checking the argument lowering behaviour on x86-64 :).
llvm-svn: 50478
2008-04-30 09:19:47 +00:00
Chris Lattner
0f63b8fecc make the vector conversion magic handle multiple results.
We now compile test2/test3 to:

_test2:
	## InlineAsm Start
	set %xmm0, %xmm1
	## InlineAsm End
	addps	%xmm1, %xmm0
	ret
_test3:
	## InlineAsm Start
	set %xmm0, %xmm1
	## InlineAsm End
	paddd	%xmm1, %xmm0
	ret

as expected.

llvm-svn: 50389
2008-04-29 04:48:56 +00:00
Chris Lattner
e75d09711d add support for multiple return values in inline asm. This is a step
towards PR2094.  It now compiles the attached .ll file to:

_sad16_sse2:
	movslq	%ecx, %rax
	## InlineAsm Start
	%ecx %rdx %rax %rax %r8d %rdx %rsi
	## InlineAsm End
	## InlineAsm Start
	set %eax
	## InlineAsm End
	ret

which is pretty decent for a 3 output, 4 input asm.

llvm-svn: 50386
2008-04-29 04:29:54 +00:00
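A minimal C sketch of inline asm with more than one output operand, the shape this commit adds support for (hypothetical x86 example using GNU-style constraints):

/* two outputs (%0, %1) computed from two inputs (%2, %3) */
static int add_and_sub(int a, int b, int *diff) {
    int sum;
    __asm__("movl %2, %0\n\t"
            "addl %3, %0\n\t"
            "movl %2, %1\n\t"
            "subl %3, %1"
            : "=&r"(sum), "=&r"(*diff)
            : "r"(a), "r"(b));
    return sum;
}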
Evan Cheng
381b094a2b Another extract_subreg coalescing bug.
e.g.
vr1024<2> extract_subreg vr1025, 2
If vr1024 does not have the same register class as vr1025, it's not safe to coalesce this away. For example, vr1024 might be a GPR32 while vr1025 might be a GPR64.

llvm-svn: 50385
2008-04-29 01:41:44 +00:00
Evan Cheng
8696eb18c1 Add -march=x86.
llvm-svn: 50380
2008-04-28 23:31:41 +00:00
Evan Cheng
3167c716f2 Test case.
llvm-svn: 50377
2008-04-28 22:14:34 +00:00
Chris Lattner
39a4281deb Implement a significant optimization for inline asm:
When choosing between constraints with multiple options,
like "ir", test to see if we can use the 'i' constraint and
go with that if possible.  This produces more optimal ASM in
all cases (sparing a register and an instruction to load it),
and fixes inline asm like this:

void test () {
  asm volatile (" %c0 %1 " : : "imr" (42), "imr"(14));
}

Previously we would dump "42" into a memory location (which
is ok for the 'm' constraint) which would cause a problem
because the 'c' modifier is not valid on memory operands.

Isn't it great how inline asm turns 'missed optimization'
into 'compile failed'??

Incidentally, this was the todo in 
PowerPC/2007-04-24-InlineAsm-I-Modifier.ll

Please do NOT pull this into Tak.

llvm-svn: 50315
2008-04-27 00:37:18 +00:00
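A hedged sketch of code that only assembles once 'i' is preferred for constant operands (hypothetical example; %c0 prints the constant without the immediate punctuation, which is invalid on a memory operand):

void emit_constant(void) {
    /* with "imr" resolved to 'i', %c0 becomes the bare constant 42 */
    asm volatile ("movl $%c0, %%eax" : : "imr" (42) : "eax");
}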
Nate Begeman
1723f6af2b Feedback from Chris
llvm-svn: 50305
2008-04-25 21:47:35 +00:00
Nate Begeman
acd7e1c464 Add a testcase for the recent "handle variable vector insert elt in mem" patch
llvm-svn: 50303
2008-04-25 21:26:59 +00:00
Evan Cheng
db1497fa77 Update tests.
llvm-svn: 50293
2008-04-25 20:13:47 +00:00
Evan Cheng
11f101a800 Special handling for MMX values being passed in either GPR64 or lower 64-bits of XMM registers.
llvm-svn: 50289
2008-04-25 19:11:04 +00:00
Evan Cheng
39ae78cadb MMX argument passing fixes:
On Darwin / Linux x86-32, v8i8, v4i16, v2i32 values are passed in MM[0-2].
On Darwin / Linux x86-32, v1i64 values are passed in memory.
On Darwin x86-64, v8i8, v4i16, v2i32 values are passed in XMM[0-7].
On Darwin x86-64, v1i64 values are passed in 64-bit GPRs.

llvm-svn: 50257
2008-04-25 07:56:45 +00:00
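For illustration, a C-level view of the affected signatures (hypothetical example; __m64 parameters are the v8i8/v4i16/v2i32/v1i64 cases classified above):

#include <mmintrin.h>

/* x86-32: a, b arrive in MM0/MM1; Darwin x86-64: in XMM0/XMM1 */
__m64 add_bytes(__m64 a, __m64 b) {
    return _mm_add_pi8(a, b);
}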
Chris Lattner
8c9f6c929a Loosen up an assertion to allow intrinsics. I really have no
idea what this code (findNonImmUse) does, so I'm only guessing 
that this is the right thing.  It would be really really nice
if this had comments and perhaps switched to SmallPtrSet
(hint hint) :)

This fixes rdar://5886601, a crash on gcc.target/i386/sse4_1-pblendw.c

llvm-svn: 50252
2008-04-25 05:13:01 +00:00
Evan Cheng
484060ba4a Fix bug in x86 memcpy / memset lowering. If there are trailing bytes not handled by rep instructions, a new memcpy / memset is introduced for them. However, since source / destination addresses are already adjusted, their offsets should be zero.
llvm-svn: 50239
2008-04-25 00:26:43 +00:00
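A conceptual C sketch of the lowering this fixes (not the actual SelectionDAG code): once the bulk has been copied with rep instructions, the source and destination pointers are already advanced, so the residual copy must use offset zero rather than the original offsets.

void copy_sketch(char *dst, const char *src, unsigned long n) {
    unsigned long bulk = n & ~7UL;            /* handled by rep;movsq */
    __builtin_memcpy(dst, src, bulk);
    dst += bulk;                              /* addresses already adjusted */
    src += bulk;
    __builtin_memcpy(dst, src, n - bulk);     /* trailing bytes: offset 0 */
}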
Anton Korobeynikov
244a615291 Disable stack realignment for these tests
llvm-svn: 50172
2008-04-23 18:25:44 +00:00
Anton Korobeynikov
1898ce20e2 Fix test because ABI stack alignment dropped to 'normal' value
llvm-svn: 50171
2008-04-23 18:25:16 +00:00
Anton Korobeynikov
15c5a2ce26 Fix test: instruction count is valid only if the stack is not realigned
llvm-svn: 50170
2008-04-23 18:24:48 +00:00
Dan Gohman
93b5be1824 Implement an x86-64 ABI detail of passing structs by hidden first
argument. The x86-64 ABI requires the incoming value of %rdi to
be copied to %rax on exit from a function that is returning a
large C struct.

Also, add a README-X86-64 entry detailing the missed optimization
opportunity and proposing an alternative approach.

llvm-svn: 50075
2008-04-21 23:59:07 +00:00
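A C example of the ABI detail being implemented (standard SysV x86-64 behavior; the names are hypothetical):

struct big { long a, b, c, d; };    /* too large to return in registers */

/* the caller passes a hidden pointer to the return slot in %rdi;
   the callee must hand that same pointer back in %rax */
struct big make_big(long x) {
    struct big r = { x, x, x, x };
    return r;
}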
Chris Lattner
2c5b96fbee A better fix for my previous patch, MOVZQI2PQIrr just requires SSE2.
llvm-svn: 49986
2008-04-20 05:52:46 +00:00
Chris Lattner
8503e2d236 Not all x86-64 machines have sse3 apparently.
llvm-svn: 49985
2008-04-20 05:47:56 +00:00
Chris Lattner
c310b1f1f3 rename *.llx -> *.ll
llvm-svn: 49970
2008-04-19 22:29:10 +00:00
Evan Cheng
073659986f Be more careful with insert_subreg and extract_subreg where either the source or destination operand has already been coalesced with another register that's defined by an insert_subreg or extract_subreg.
llvm-svn: 49843
2008-04-17 07:58:04 +00:00
Evan Cheng
7c2c3333ca Fix a sub-register index propagation bug.
llvm-svn: 49832
2008-04-17 00:06:42 +00:00
Evan Cheng
e2e899b5c2 Don't forget about sub-register indices when rematting instructions.
llvm-svn: 49830
2008-04-16 23:44:44 +00:00
Evan Cheng
4b16ea6247 Really test what's intended.
llvm-svn: 49802
2008-04-16 18:21:55 +00:00
Evan Cheng
6d05ce493b Rewrite LiveVariable liveness computation. The new implementation is much simpler: it eliminates the nasty recursive routines and removes the partial def / use bookkeeping. There is also potential for performance improvement by replacing the conservative handling of partial physical register definitions. The code is currently disabled until live interval analysis is taught about the new scheme.
This patch also fixed a couple of nasty corner cases.

llvm-svn: 49784
2008-04-16 09:46:40 +00:00
Dan Gohman
be8f2b452b Add support for the form of the SSE41 extractps instruction that
puts its result in a 32-bit GPR.

llvm-svn: 49762
2008-04-16 02:32:24 +00:00
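A short C sketch of the new form (hypothetical example; _mm_extract_ps is the SSE4.1 intrinsic whose result lands in a 32-bit GPR):

#include <smmintrin.h>

/* extractps: the bit pattern of lane 1 goes straight to a GPR */
int lane1_bits(__m128 v) {
    return _mm_extract_ps(v, 1);
}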
Dan Gohman
cf79877623 Recreate the size SDNode instead of reusing the old one in the x86
memcpy lowering code; this ensures that the size node has the desired
result type. This fixes a regression from r49572 with @llvm.memcpy.i64
on x86-32.

llvm-svn: 49761
2008-04-16 01:32:32 +00:00
Dan Gohman
7d27552962 Add movd instructions to move from MMX registers
to 64-bit GPRs on x86-64.

llvm-svn: 49757
2008-04-15 23:55:07 +00:00
Dan Gohman
3b99b3c807 Treat EntryToken nodes as "passive" so that they aren't added to the
ScheduleDAG; they don't correspond to any actual instructions so they
don't need to be scheduled.

This fixes a bug where the EntryToken was being scheduled multiple
times in some cases, though it ended up not causing any trouble because 
EntryToken doesn't expand into anything. With this fixed the schedulers
reliably schedule the expected number of units, so we can check this
with an assertion.

This requires a tweak to test/CodeGen/X86/loop-hoist.ll because it
ends up getting scheduled differently in a trivial way, though it was
enough to fool the prcontext+grep that the test does.

llvm-svn: 49701
2008-04-15 01:22:18 +00:00
Dale Johannesen
d9a9c746d8 Remove -unwind-tables-optional everywhere, since
this is now the default.

llvm-svn: 49667
2008-04-14 17:56:54 +00:00
Arnold Schwaighofer
82af0e6a43 This patch corrects the handling of byval arguments for tailcall
optimized x86-64 (and x86) calls so that they work (... at least for
my test cases).

Should fix the following problems:

Problem 1: When I introduced the optimized handling of arguments for
tail called functions (using a sequence of copyto/copyfrom virtual
registers instead of always lowering to the top of the stack), I did not
handle byval arguments correctly, e.g. they did not work at all :).

Problem 2: On x86-64, after the arguments of the tail called function
are moved to their registers (which include ESI/RSI etc.), tail call
optimization performs byval lowering, which causes the xSI, xDI, xCX
registers to be overwritten. This patch handles that by moving the
arguments to virtual registers first; after the byval lowering, the
arguments are moved from those virtual registers back to RSI/RDI/RCX.

llvm-svn: 49584
2008-04-12 18:11:06 +00:00
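A hedged C sketch of the byval-through-tail-call shape being fixed (hypothetical example, assuming the opt-in guaranteed tail call mode of the era):

struct payload { char buf[64]; };   /* passed byval at the IR level */

int consume(struct payload p);

/* forwarding a byval argument through a tail-call candidate; the fix
   keeps the byval copy from clobbering RSI/RDI/RCX during lowering */
int forward(struct payload p) {
    return consume(p);
}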
Dan Gohman
15edbf989f Drop ISD::MEMSET, ISD::MEMMOVE, and ISD::MEMCPY, which are not Legal
on any current target and aren't optimized in DAGCombiner. Instead
of using intermediate nodes, expand the operations, choosing between
simple loads/stores, target-specific code, and library calls,
immediately.

Previously, the code to emit optimized code for these operations
was only used at initial SelectionDAG construction time; now it is
used at all times. This fixes some cases where rep;movs was being
used for small copies where simple loads/stores would be better.

This also cleans up code that checks for alignments less than 4;
let the targets make that decision instead of doing it in
target-independent code. This allows x86 to use rep;movs in
low-alignment cases.

Also, this fixes a bug that resulted in the use of rep;stos for
memsets of 0 with non-constant memory size when the alignment was
at least 4. It's better to use the library in this case, which
can be significantly faster when the size is large.

This also preserves more SourceValue information when memory
intrinsics are lowered into simple loads/stores.

llvm-svn: 49572
2008-04-12 04:36:06 +00:00
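For illustration, a small fixed-size copy of the kind now expanded directly to loads and stores (hypothetical example):

struct pair { int a, b; };

/* 8 bytes: expands to a simple load/store pair rather than an
   intermediate ISD::MEMCPY node or a library call */
void copy_pair(struct pair *d, const struct pair *s) {
    __builtin_memcpy(d, s, sizeof *d);
}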
Dan Gohman
41f9d24d52 Fix a bug that prevented x86-64 from using rep.movsq for
8-byte-aligned data.

llvm-svn: 49571
2008-04-12 02:35:39 +00:00
Chris Lattner
3b289289a7 Fix the x86-64 side of PR2108 by adding a v2f64 version of
MOVZQI2PQIrr.  This would be better handled as a dag combine 
(with the goal of eliminating the bitconvert) but I don't know
how to do that safely.  Thoughts welcome.

llvm-svn: 49463
2008-04-10 05:13:43 +00:00
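A C sketch of the v2f64 pattern involved (hypothetical example; the casts below are the bitconvert the commit mentions wanting to eliminate):

#include <emmintrin.h>

/* keep the low double, zero the upper lane, via an integer movq */
__m128d low_double(__m128d v) {
    return _mm_castsi128_pd(_mm_move_epi64(_mm_castpd_si128(v)));
}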
Evan Cheng
1803e20a62 Teach the branch folding pass about implicit_def instructions. Unfortunately we can't just eliminate them, since the register scavenger expects every register use to be defined. However, we can delete them when there are no intra-block uses; carefully removing some implicit_defs enables more blocks to be optimized away.
llvm-svn: 49461
2008-04-10 02:32:10 +00:00
Evan Cheng
def576f9e6 - More aggressively coalescing away copies whose source is defined by an implicit_def.
- Added insert_subreg coalescing support.

llvm-svn: 49448
2008-04-09 20:57:25 +00:00