large for the testsuite) took over six minutes to compile on my Mac.
The patched LLVM-GCC compiles that testcase in three seconds (GCC
takes less than one second). This hash function is more complex
(about 35 instructions on x86) than what Chris wanted, but I expect it
will be well-behaved with arbitrary inputs.
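For illustration, a multiply/rotate mixing round in roughly this cost
range might look like the sketch below. This is NOT the committed hash,
just an example of the style of function described (the constants are
the well-known MurmurHash3 mixing constants):

  #include <cstdint>

  // Illustrative only: one multiply/rotate mixing round. A short key fed
  // through a couple of such rounds plus finalization lands in the
  // "a few dozen x86 instructions" cost range mentioned above.
  uint32_t mixRound(uint32_t h, uint32_t k) {
    k *= 0xcc9e2d51u;            // scramble the input word
    k = (k << 15) | (k >> 17);   // rotate to spread high bits down
    k *= 0x1b873593u;
    h ^= k;                      // fold into the running hash
    h = (h << 13) | (h >> 19);
    return h * 5u + 0xe6546b64u;
  }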
Thank you to everyone who responded to my previous request for advice.
llvm-svn: 66962
- Required some extra makefile tweaks to introduce a new flag var
which only goes to the compile/link tools but not the relink step;
otherwise we get a copy of libgcov in the relinked .o files.
- No configure magic for this.
llvm-svn: 66945
by inserting explicit zero extensions where necessary. Included
is a testcase where SelectionDAG produces a virtual register
holding an i1 value which FastISel previously mistakenly assumed
to be zero-extended.
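Restated in C++ terms (illustrative only, not the testcase): a register
nominally holding an i1 may carry garbage in its upper bits, so a
consumer must zero-extend explicitly rather than assume the extension
already happened.

  #include <cstdint>

  // Illustrative only: keep just the i1 payload with an explicit
  // zero extension instead of trusting the register's upper bits.
  uint32_t consumeI1(uint32_t VRegBits) {
    return VRegBits & 1u;
  }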
llvm-svn: 66941
changes.
For InvokeInst, all arguments now begin at op_begin().
The Callee, Cont and Fail operands are now faster to access,
at fixed offsets relative to op_end().
This patch introduces some temporary ugliness in CallSite.
Next I'll bring CallInst up to a similar scheme, and then
the ugliness will magically vanish.
This patch also exposes how much the libraries rely
on InvokeInst's operand ordering. I am thinking of taking
care of that too.
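As a toy model of the new layout (illustrative names and ordering, not
LLVM's actual classes or accessors):

  struct Use { void *Val; };

  // Arguments come first, so they begin at op_begin(); the three
  // special operands are stored last and sit at fixed negative
  // offsets from op_end(), independent of the argument count.
  struct InvokeToy {
    Use *Ops;        // [arg0, arg1, ..., Callee, Cont, Fail]
    unsigned NumOps;

    Use *op_begin() { return Ops; }
    Use *op_end()   { return Ops + NumOps; }

    Use &getCallee() { return *(op_end() - 3); }
    Use &getCont()   { return *(op_end() - 2); }
    Use &getFail()   { return *(op_end() - 1); }
  };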
llvm-svn: 66920
errors when thrown. This gets us nice errors like this from tblgen:
CMOVL32rr: (set GR32:i32:$dst, (X86cmov GR32:$src1, GR32:$src2))
/Users/sabre/llvm/Debug/bin/tblgen: error:
Included from X86.td:116:
Parsing X86InstrInfo.td:922: In CMOVL32rr: X86cmov node requires exactly 4 operands!
def CMOVL32rr : I<0x4C, MRMSrcReg, // if <s, GR32 = GR32
^
instead of just:
CMOVL32rr: (set GR32:i32:$dst, (X86cmov GR32:$src1, GR32:$src2))
/Users/sabre/llvm/Debug/bin/tblgen: In CMOVL32rr: X86cmov node requires exactly 4 operands!
This is all I plan to do with this, but it should be easy enough to improve if anyone
cares (e.g. keeping more loc info in "dag" expr records in tblgen).
llvm-svn: 66898
Problems:
1. The ConstantPoolSDNode alignment field holds the log2 of the alignment requirement. This is not consistent with other SDNode variants.
2. The MachineConstantPool alignment field is also a log2 value.
3. However, some places create ConstantPoolSDNodes with raw alignment values rather than log2 values. This creates entries with artificially large alignments, e.g. 256 for SSE vector values.
4. Constant pool entry offsets are computed when the entries are created, but the asm printer groups entries by section, so those offsets are no longer valid. Yet the asm printer still uses them to determine the size of padding between entries.
5. The asm printer uses an expensive data structure, a multimap, to track constant pool entries by section.
6. The asm printer iterates over a SmallPtrSet when emitting constant pool entries. This is non-deterministic.
Solutions:
1. The ConstantPoolSDNode alignment field is changed to keep a non-log2 value.
2. The MachineConstantPool alignment field is also changed to keep a non-log2 value.
3. Functions that create ConstantPool nodes now pass in non-log2 alignments.
4. MachineConstantPoolEntry no longer keeps an offset field; it's replaced with an alignment field. Offsets are no longer computed when constant pool entries are created; they are computed on the fly in the asm printer and JIT.
5. The asm printer uses a cheaper data structure to group constant pool entries.
6. The asm printer computes entry offsets after grouping is done.
7. JIT code is changed to compute entry offsets on the fly (see the sketch after this list).
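A minimal sketch of the on-the-fly offset computation (hypothetical
names, not the actual AsmPrinter/JIT code), assuming each entry records
its size and a non-log2, power-of-two alignment in bytes:

  #include <vector>

  struct Entry { unsigned Size; unsigned Align; }; // Align in bytes, not log2

  // Walk the (already grouped) entries, rounding the running offset up
  // to each entry's alignment; the padding falls out of the rounding.
  std::vector<unsigned> computeOffsets(const std::vector<Entry> &Entries) {
    std::vector<unsigned> Offsets;
    unsigned Offset = 0;
    for (const Entry &E : Entries) {
      Offset = (Offset + E.Align - 1) & ~(E.Align - 1); // round up
      Offsets.push_back(Offset);
      Offset += E.Size;
    }
    return Offsets;
  }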
llvm-svn: 66875
for i32/i64 expressions (we could also do i16 on CPUs where
i16 lea is fast, but I didn't add this). On the example, we now
generate:
_test:
movl 4(%esp), %eax
cmpl $42, (%eax)
setl %al
movzbl %al, %eax
leal 4(%eax,%eax,8), %eax
ret
instead of:
_test:
movl 4(%esp), %eax
cmpl $41, (%eax)
movl $4, %ecx
movl $13, %eax
cmovg %ecx, %eax
ret
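Reading the assembly back, the testcase presumably resembles the
following (a plausible reconstruction inferred from the asm above; the
actual testcase may differ):

  // Both versions compute (*P < 42) ? 13 : 4; the new code materializes
  // the comparison as 0/1 and folds the select into lea: 4 + 9*cond.
  int test(int *P) {
    return *P < 42 ? 13 : 4;
  }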
llvm-svn: 66869
operands can't both be fully folded at the same time. For example,
in the included testcase, a global variable is added to
the result of an add of two values. The global variable wants
RIP-relative addressing, so it can't share the address with another
base register, but it's still possible to fold the initial add.
llvm-svn: 66865
right; did the wrong thing when there were exactly 11
non-debug instructions followed by debug info.
Remove a FIXME since it's apparently been fixed along the way.
llvm-svn: 66840
in the Ada testcase. Reverting this only covers up
the real problem, which is a nasty conceptual difficulty
in the phi elimination pass: when eliminating phi nodes
in landing pads, the register copies need to come before
the invoke, not at the end of the basic block, which is
too late... See PR3784.
llvm-svn: 66826
refers to the "prefix" directory, i.e., one level above "bin". LLVMGCCPATH
is used as the directory containing the llvm-gcc executable, so add a "/bin"
suffix to get from LLVMGCCDIR to LLVMGCCPATH.
llvm-svn: 66823
access each with a fixed negative index from op_end().
This has two important implications:
- getUser() will work faster, because there are fewer iterations
for the waymarking algorithm to perform. This is important
when running various analyses that want to determine callers
of basic blocks.
- getSuccessor() now runs faster, because the indirection via OperandList
  is not necessary: the Uses corresponding to the successors sit at a
  fixed offset from "this".
The price we pay is the slightly more complicated logic in User's
operator delete, as it has to figure out whether it is freeing the
memory of an originally unconditional BranchInst or of a BranchInst
that started out conditional but has been shortened to unconditional.
I was not able to come up with a nicer solution to this problem. (And
rest assured, I tried *a lot*).
Similar reorderings will follow for InvokeInst and CallInst. After that
some optimizations to pred_iterator and CallSite will fall out naturally.
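A much-simplified toy of the co-allocation scheme (illustrative, not
LLVM's real User/Use machinery) shows both the fixed-offset successor
access and why operator delete must recover the original operand count
from the object:

  #include <cstddef>
  #include <cstdlib>

  struct Use { void *Val; };

  struct BranchToy {
    bool WasConditional; // remembers the original allocation: 3 uses vs 1

    // Uses are co-allocated immediately *before* the object, so op_end()
    // is simply "this" and each successor is at a fixed negative offset.
    Use *op_end() { return reinterpret_cast<Use *>(this); }
    Use &getSuccessor0() { return *(op_end() - 1); }

    static void *operator new(std::size_t Size, unsigned NumUses) {
      char *Raw =
          static_cast<char *>(std::malloc(NumUses * sizeof(Use) + Size));
      return Raw + NumUses * sizeof(Use);
    }
    static void operator delete(void *Ptr) {
      // Free the original allocation even if a conditional branch
      // (3 uses) has since been shortened to unconditional (1 use).
      unsigned N = static_cast<BranchToy *>(Ptr)->WasConditional ? 3 : 1;
      std::free(static_cast<char *>(Ptr) - N * sizeof(Use));
    }
  };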
llvm-svn: 66815