FunctionLoweringInfo, as it isn't SelectionDAG-specific. This isn't
completely natural, as PHI node state is not per-function but rather
per-basic-block; however, there's currently no other convenient
per-basic-block state to group it with.
llvm-svn: 102109
user-defined operations that use MMX register types, but
the compiler shouldn't generate them on its own. This adds
a Synthesizable abstraction to represent this, and changes
the vector widening computation so it won't produce MMX types.
(The motivation is to remove noise from the ABI compatibility
part of the gcc test suite, which has some breakage right now.)
llvm-svn: 101951
into SelectionDAGBuilder. This avoids a separate pass over the
instructions, and has the side effect of providing debug location
information to the copy.
llvm-svn: 101906
const_casts, and it reinforces the design intent that the Target classes
are immutable.
SelectionDAGISel::IsLegalToFold is now a static member function, because
PIC16 uses it in an unconventional way. There is more room for API
cleanup here.
And PIC16's AsmPrinter no longer uses TargetLowering.
llvm-svn: 101635
with a fix for self-hosting
rotate CallInst operands, i.e. move callee to the back
of the operand array
The motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
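Roughly, this changes the operand layout from [callee, arg0, ..., argN-1]
to [arg0, ..., argN-1, callee]. A minimal sketch of what that means for
clients follows; the accessor spellings (getCalledValue, getArgOperand,
getNumArgOperands) are the intended replacements for raw operand indexing,
though header paths and exact names vary by LLVM version:

  // Sketch only: clients should go through accessors, not raw indices.
  #include "llvm/IR/Instructions.h"  // newer trees; older ones use llvm/Instructions.h
  using namespace llvm;

  static void inspectCall(CallInst *CI) {
    // New layout: arguments occupy operands 0..N-1, the callee is last.
    Value *Callee = CI->getCalledValue();            // was operand 0, now the final operand
    for (unsigned i = 0, e = CI->getNumArgOperands(); i != e; ++i) {
      Value *Arg = CI->getArgOperand(i);             // equals CI->getOperand(i) after the rotation
      (void)Arg;
    }
    (void)Callee;
  }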
llvm-svn: 101465
with a fix
rotate CallInst operands, i.e. move callee to the back
of the operand array
The motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101397
of the operand array
The motivation for this patch is laid out in my mail to llvm-commits:
more efficient access to operands and callee, faster callgraph-construction,
smaller compiler binary
llvm-svn: 101364
This doesn't occur much at all; it only seems to be formed in the case
when the trunc optimization kicks in due to phase ordering. In that
case it saves a few bytes on x86-32.
llvm-svn: 101350
a load/or/and/store sequence into a narrower store when it is
safe. Daniel tells me that clang will start producing this sort
of thing with bitfields, and this does trigger a few dozen times
on 176.gcc produced by llvm-gcc even now.
This compiles code like CodeGen/X86/2009-05-28-DAGCombineCrash.ll
into:
movl %eax, 36(%rdi)
instead of:
movl $4294967295, %eax ## imm = 0xFFFFFFFF
andq 32(%rdi), %rax
shlq $32, %rcx
addq %rax, %rcx
movq %rcx, 32(%rdi)
and each of the testcases into a single store. Each of them used
to compile into craziness like this:
_test4:
movl $65535, %eax ## imm = 0xFFFF
andl (%rdi), %eax
shll $16, %esi
addl %eax, %esi
movl %esi, (%rdi)
ret
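For reference, the kind of source that produces this load/or/and/store
shape is a bitfield assignment; a hypothetical example (struct layout and
field names are illustrative only):

  // Two 32-bit bitfields packed into one 64-bit word. Writing one of them
  // is emitted as: load the whole i64, mask with and, merge with or, store
  // the whole i64. The combine narrows that to a single 32-bit store when
  // it can prove the untouched bits are preserved.
  struct S {
    unsigned long long lo : 32;
    unsigned long long hi : 32;
  };

  void setLo(S *s, unsigned v) {
    s->lo = v;  // ideally one 32-bit store instead of a 64-bit read-modify-write
  }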
llvm-svn: 101343
so the user at least knows which inline asm is the problem. For example:
error: inline asm not supported yet: don't know how to handle tied indirect register inputs
pr8788-1.c:14:10: note: generated from here
asm ("\n" : "+r" (stack->regs)
^
Instead of:
fatal error: error in backend: Don't know how to handle tied indirect register inputs yet!
llvm-svn: 100731
1. Introduce some enums and accessors in the InlineAsm class
that eliminate a ton of magic numbers when handling inline
asm SDNodes (the general pattern is sketched below).
2. Add a new MDNodeSDNode selection dag node type that holds
an MDNode (shocking!)
3. Add a new argument to ISD::INLINEASM nodes that holds !srcloc
metadata, propagating it to the instruction emitter, which
drops it.
No functionality change.
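For point 1, the shape of the change is the usual flag-word pattern: pack
a kind and an operand count into one word and read it back through named
accessors. A self-contained mock of that pattern follows; the names
OpFlagKind, makeOperandFlag, getOperandKind, and getNumRegs are
hypothetical, not the actual InlineAsm API:

  #include <cassert>
  #include <cstdio>

  enum OpFlagKind { Kind_RegUse = 1, Kind_RegDef = 2, Kind_Imm = 3, Kind_Mem = 4 };

  // Pack a kind and an operand count into one flag word (low bits = kind).
  static unsigned makeOperandFlag(OpFlagKind K, unsigned NumOps) {
    return unsigned(K) | (NumOps << 3);
  }
  static OpFlagKind getOperandKind(unsigned Flag) { return OpFlagKind(Flag & 7); }
  static unsigned getNumRegs(unsigned Flag) { return Flag >> 3; }

  int main() {
    unsigned Flag = makeOperandFlag(Kind_RegDef, 2);
    assert(getOperandKind(Flag) == Kind_RegDef && getNumRegs(Flag) == 2);
    std::printf("flag word = 0x%x\n", Flag);
    return 0;
  }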
llvm-svn: 100605
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
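For illustration, emitting the new form from C++ might look like the
following; the IRBuilder::CreateMemCpy helper and its signature shown here
come from later LLVM releases, so treat it purely as a sketch of the extra
i1 isVolatile operand and the address-space-qualified pointer types:

  #include "llvm/IR/IRBuilder.h"
  using namespace llvm;

  static void emitVolatileCopy(IRBuilder<> &B, Value *Dst, Value *Src) {
    // Copy 16 bytes; the trailing 'true' ends up as the new i1 isVolatile
    // argument of the mangled llvm.memcpy intrinsic (pointer types carry
    // their address spaces in the name).
    B.CreateMemCpy(Dst, MaybeAlign(4), Src, MaybeAlign(4), 16, /*isVolatile=*/true);
  }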
llvm-svn: 100304
representation. This eliminates the 'DILocation' MDNodes for
file/line/col tuples from -O0 -g codegen.
This removes the old DebugLoc class, making DebugLoc a typedef for
NewDebugLoc; I'll rename NewDebugLoc next.
I didn't update the JIT to use the new APIs, so it will continue to
work, but be as slow as before. Someone should eventually do this
or, better yet, rip out the JIT debug info stuff and build the JIT
on top of MC.
llvm-svn: 100209
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100191
1. Make it possible to lower with floating point loads and stores.
2. Avoid unaligned loads / stores unless it's fast.
3. Fix a memcpy lowering logic bug related to when to optimize a
load from a constant string into a constant.
4. Adjust the x86 memcpy lowering threshold to make it more sane.
5. Fix the x86 target hook so it uses vector and floating point memory
ops more effectively.
rdar://7774704
llvm-svn: 100090
instructions. In addition to being a convenience,
they are faster than the old APIs, particularly when
not starting from an MDKindID (as people should be
doing).
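A small usage sketch of the direct accessors; header paths follow today's
tree layout and the metadata kind name here is arbitrary:

  #include "llvm/IR/Instruction.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Metadata.h"
  using namespace llvm;

  // Fast form: resolve the kind name to an ID, then query instructions by ID.
  static MDNode *getNote(Instruction *I) {
    unsigned NoteID = I->getContext().getMDKindID("my.note");
    return I->getMetadata(NoteID);
  }

  // Convenience form: query directly by name; handy, but does the name
  // lookup on every call.
  static MDNode *getNoteByName(Instruction *I) {
    return I->getMetadata("my.note");
  }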
llvm-svn: 99982
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent checkin.
llvm-svn: 99928
and those derived from them. These are obnoxious because
they were written as: PatLeaf<(bitconvert). Not having an
argument was getting in the way of adding better type checking
that the operand count matches what is required (in this case,
bitconvert always requires an operand!).
llvm-svn: 99759
the custom insertion hook deletes the instruction, then we try to set dead
flags on it. Neither the code that I added nor the code that was there
before was safe.
llvm-svn: 99538
bytes instead of one byte. This is important because
we're running up against too many opcodes to fit in a byte,
and it is aggravated by FIRST_TARGET_MEMORY_OPCODE
making the numbering sparse. This just bites the
bullet and bloats out the table. In practice, this
increases the size of the x86 isel table from 74.5K
to 76K. I think we'll cope :)
This fixes rdar://7791648
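As a rough illustration of the table change (not the actual matcher-table
encoding, whose byte order and layout aren't spelled out here), an opcode
entry now spans two adjacent bytes of the table instead of one:

  // Hypothetical decoder: read a 16-bit opcode stored in two consecutive
  // bytes of a byte table (shown little-endian), advancing the cursor.
  static unsigned readOpcode(const unsigned char *Table, unsigned &Idx) {
    unsigned Opc = Table[Idx] | (unsigned(Table[Idx + 1]) << 8);
    Idx += 2;
    return Opc;
  }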
llvm-svn: 99494
happening.
Enhance scheduling to set the DEAD flag on implicit defs
more aggressively. Before, we'd set an implicit def operand
to dead if it were present in the SDNode corresponding to
the machineinstr but had no use. Now we do it in this case
AND if the implicit def does not exist in the SDNode at all.
This exposes a couple of problems: one is the FIXME, which
causes a live intervals crash on CodeGen/X86/sibcall.ll.
The second is that it makes machinecse and licm more
aggressive (which is a good thing) but also exposes a case
where licm hoists a set0 and then it doesn't get resunk.
Talking to codegen folks about both of these issues, but I need
this patch in, in the meantime.
llvm-svn: 99485
Here is a theoretical example that illustrates why the placement is important.
tmp1 =
store tmp1 -> x
...
tmp2 = add ...
...
call
...
store tmp2 -> x
Now mem2reg comes along:
tmp1 =
dbg_value (tmp1 -> x)
...
tmp2 = add ...
...
call
...
dbg_value (tmp2 -> x)
When the debugger examines the value of x after the add instruction but before the call, it should have the value of tmp1.
Furthermore, dbg_values that reference constants should not be emitted at the beginning of the block (since they do not have "producers").
This patch also cleans up how SDISel manages DbgValue nodes. It allows an SDNode to be referenced by multiple SDDbgValue nodes. When an SDNode is deleted, that information is used to find the associated SDDbgValues and invalidate them. They are not deleted until the corresponding SelectionDAG is destroyed.
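A toy model of that ownership scheme, for illustration only (names like
DbgValueHandle and ToyDAG are hypothetical, not the actual SelectionDAG
classes): several debug-value records may point at one node; deleting the
node marks them invalid, and the records themselves are only freed when
the whole DAG-like container goes away.

  #include <memory>
  #include <unordered_map>
  #include <vector>

  struct DbgValueHandle {
    bool Invalidated = false;      // set when the referenced node dies
  };

  class ToyDAG {
    // Map from a node id to every debug-value record that references it.
    std::unordered_map<int, std::vector<DbgValueHandle *>> NodeDbgInfo;
    std::vector<std::unique_ptr<DbgValueHandle>> AllDbgValues;  // owned until the DAG dies

  public:
    DbgValueHandle *addDbgValue(int NodeId) {
      AllDbgValues.push_back(std::make_unique<DbgValueHandle>());
      DbgValueHandle *H = AllDbgValues.back().get();
      NodeDbgInfo[NodeId].push_back(H);
      return H;
    }
    void deleteNode(int NodeId) {
      // Invalidate, don't delete: the records stay alive until ~ToyDAG().
      for (DbgValueHandle *H : NodeDbgInfo[NodeId])
        H->Invalidated = true;
      NodeDbgInfo.erase(NodeId);
    }
  };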
llvm-svn: 99469