Chris Lattner
76e0e3fe40
move a failing testcase from bit-tracking.ll to narrow.ll, and move the
...
xfail marker with it
llvm-svn: 26129
2006-02-12 02:02:43 +00:00
Chris Lattner
54a36a05b3
Revert my last patch. It too breaks stuff
...
llvm-svn: 26128
2006-02-12 01:59:10 +00:00
Chris Lattner
d8c54dc866
Make these tests fail if opt crashes.
...
llvm-svn: 26127
2006-02-12 01:32:58 +00:00
Chris Lattner
7c61638c1b
Fix for my previously reverted patch
...
llvm-svn: 26126
2006-02-11 21:24:54 +00:00
Chris Lattner
e6b05ffc0f
Update comments to be actually accurate
...
llvm-svn: 26124
2006-02-11 09:37:07 +00:00
Chris Lattner
5267440902
This is implemented by the simplify-libcalls pass, not instcombine
...
llvm-svn: 26123
2006-02-11 09:33:28 +00:00
Chris Lattner
9b092ebf17
Port the recent innovations in ComputeMaskedBits to SimplifyDemandedBits.
...
This allows us to simplify in cases where bits are not known, but they
are not demanded either! This also fixes a couple of bugs in
ComputeMaskedBits that were exposed during this work.
In the future, swaths of instcombine should be removed, as this code
subsumes a bunch of ad-hockery.
llvm-svn: 26122
2006-02-11 09:31:47 +00:00
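The demanded-bits idea in the entry above can be pictured with a small standalone example. This is a toy sketch in plain C++, not the LLVM implementation: when the user of a value only looks at certain bits, a sub-expression that affects only the other bits can be removed even though those bits are unknown.

    #include <cassert>
    #include <cstdint>
    #include <initializer_list>

    // (X | 0xFF) & 0xFF00: the AND demands only bits 8..15, so the OR that
    // touches bits 0..7 is dead, and the expression folds to X & 0xFF00,
    // even though nothing is known about the value of X itself.
    uint32_t original(uint32_t X)   { return (X | 0xFFu) & 0xFF00u; }
    uint32_t simplified(uint32_t X) { return X & 0xFF00u; }

    int main() {
      for (uint32_t X : {0u, 1u, 0xABCDu, 0xFFFFFFFFu})
        assert(original(X) == simplified(X));
      return 0;
    }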
Chris Lattner
05b7c448f3
revert my previous change, it exposed other problems.
...
llvm-svn: 26121
2006-02-11 08:47:47 +00:00
Duraid Madina
81b9a8f8f7
fix storing booleans (grawp missed this one)
...
llvm-svn: 26120
2006-02-11 07:33:17 +00:00
Duraid Madina
a514d8d8a6
now short immediates will get matched (previously constants were all
...
triggering the fat 64-bit-immediate movl instruction)
llvm-svn: 26119
2006-02-11 07:32:15 +00:00
Chris Lattner
c859e1d2b5
Make this check stricter. Disallow loop exit blocks from being shared by
...
loops and their subloops.
llvm-svn: 26118
2006-02-11 02:13:17 +00:00
Evan Cheng
1b8029264a
Prevent certain nodes that have already been selected from being folded into
...
X86 addressing mode. Currently we do not allow any node whose target node
produces a chain, nor any node that is at the root of the addressing
mode expression tree.
llvm-svn: 26117
2006-02-11 02:05:36 +00:00
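The legality check described above can be sketched roughly as follows; the type and helper names here (NodeInfo, canFoldIntoAddressingMode) are hypothetical, not the X86 instruction-selection code.

    // Illustrative guard, loosely modeled on the restriction described above.
    struct NodeInfo {
      bool AlreadySelected;   // node has already been turned into a target node
      bool ProducesChain;     // its target form produces a chain result
      bool IsAddressingRoot;  // node is the root of the addressing-mode tree
    };

    // Return true if the node may be folded into the X86 addressing mode.
    bool canFoldIntoAddressingMode(const NodeInfo &N) {
      if (N.AlreadySelected && N.ProducesChain)
        return false;          // selected nodes producing chains stay separate
      if (N.IsAddressingRoot)
        return false;          // never fold the root of the expression tree
      return true;
    }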
Chris Lattner
95fccc4772
remove dead expr
...
llvm-svn: 26116
2006-02-11 01:43:37 +00:00
Jim Laskey
55467f611d
Reorg for integration with gcc4. Old style debug info will not be passed through
...
to SelIDAG.
llvm-svn: 26115
2006-02-11 01:01:30 +00:00
Chris Lattner
f7cf05d3a2
implement unswitching of loops with switch stmts and selects in them
...
llvm-svn: 26114
2006-02-11 00:43:37 +00:00
Chris Lattner
ec24272c6f
Update PHI nodes in successors of exit blocks.
...
llvm-svn: 26113
2006-02-10 23:26:14 +00:00
Chris Lattner
672c45a3c0
Reform the unswitching code in terms of edge splitting, not block splitting.
...
llvm-svn: 26112
2006-02-10 23:16:39 +00:00
Evan Cheng
eaaafb36c8
Nicer code. :-)
...
llvm-svn: 26111
2006-02-10 22:46:26 +00:00
Evan Cheng
d141f84c34
Added X86 isel debugging stuff.
...
llvm-svn: 26110
2006-02-10 22:24:32 +00:00
Chris Lattner
0d03d576e4
Remove a level of indirection.
...
llvm-svn: 26109
2006-02-10 21:32:11 +00:00
Chris Lattner
ada27b9dba
Fix a case where UnswitchTrivialCondition broke critical edges with
...
phi's in the successors
llvm-svn: 26108
2006-02-10 19:08:15 +00:00
Chris Lattner
f8458d73bb
Use the auto-generated call matcher. Remove a broken impl of the frameaddr/returnaddr
...
intrinsics.
Autogen frameindex matcher
llvm-svn: 26107
2006-02-10 07:35:42 +00:00
Chris Lattner
0dc11e140e
Update to new-style flags usage, simplifying the .td file
...
llvm-svn: 26106
2006-02-10 06:58:25 +00:00
Evan Cheng
53c409e694
Remove a completed entry; add a new entry about fisttp op
...
llvm-svn: 26105
2006-02-10 05:48:15 +00:00
Chris Lattner
8d28872523
add some notes, move some code around. Implement unswitching of loops
...
with branches on partially invariant computations.
llvm-svn: 26104
2006-02-10 02:30:37 +00:00
Chris Lattner
e51f996026
Move code around to be more logical, no functionality change.
...
llvm-svn: 26103
2006-02-10 02:01:22 +00:00
Chris Lattner
4432cd73f2
When unswitching a trivial loop, do admit we are doing it! :)
...
llvm-svn: 26102
2006-02-10 01:36:35 +00:00
Chris Lattner
eef362d63e
Implement unconditional unswitching of 'trivial' loops, those loops that contain
...
branches in their entry block that control whether or not the loop is a noop.
llvm-svn: 26101
2006-02-10 01:24:09 +00:00
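The "trivial" case can be pictured with a hand-written C++ analogue (not compiler output): the branch in the loop header is loop-invariant and one of its arms makes the loop a no-op, so the test can be hoisted out of the loop entirely.

    // Before: every iteration re-tests the invariant condition.
    void before(bool Cond, int *A, int N) {
      for (int i = 0; i < N; ++i) {
        if (!Cond)
          break;        // the whole loop is a no-op when !Cond
        A[i] = 0;
      }
    }

    // After unswitching: the invariant test runs once, outside the loop.
    void after(bool Cond, int *A, int N) {
      if (Cond)
        for (int i = 0; i < N; ++i)
          A[i] = 0;
    }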
Chris Lattner
602fc20596
Simplify control flow a bit, note that unswitch preserves canonical loop form
...
llvm-svn: 26098
2006-02-09 22:15:42 +00:00
Evan Cheng
d491020c15
Match tblgen change.
...
llvm-svn: 26096
2006-02-09 22:12:53 +00:00
Evan Cheng
e8a0e56482
Call InsertISelMapEntry rather than map insertion operator to prevent overly
...
aggressive inlining. This reduces Select_store frame size from 24k to 10k.
llvm-svn: 26095
2006-02-09 22:12:27 +00:00
Evan Cheng
7441e03b0b
Added SelectionDAG::InsertISelMapEntry(). This is used to work around the gcc
...
problem where it inlines the map insertion call too aggressively. Before this
change it was producing a frame size of 24k for Select_store(), now it's down
to 10k (by calling this method rather than calling the map insertion operator).
llvm-svn: 26094
2006-02-09 22:11:03 +00:00
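The shape of the workaround in these two entries can be sketched as below; the class and member names are illustrative, not the real SelectionDAG interface. The point is that the std::map insertion lives behind an out-of-line call, so its machinery is not inlined into every large selection routine and the caller's stack frame stays small.

    #include <map>

    class SelectionCache {
      std::map<int, void *> ISelMap;

    public:
      // Declared here, defined out of line (in a .cpp file in real code), so
      // callers only pay for a call instruction, not the inlined insertion.
      void InsertISelMapEntry(int Key, void *Val);
    };

    void SelectionCache::InsertISelMapEntry(int Key, void *Val) {
      ISelMap[Key] = Val;  // the heavy map code is compiled exactly once, here
    }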
Chris Lattner
19acace8c0
Make the threshold a parameter
...
llvm-svn: 26093
2006-02-09 20:15:48 +00:00
Chris Lattner
4e9d16a0ad
Done
...
llvm-svn: 26091
2006-02-09 20:00:19 +00:00
Chris Lattner
5a10d24e67
Enable LSR by default for SPARC: it is a clear win.
...
llvm-svn: 26090
2006-02-09 19:59:55 +00:00
Chris Lattner
4ae405113a
Simplify the loop-unswitch pass, by not even trying to unswitch loops with
...
uses of loop values outside the loop. We need loop-closed SSA form to do
this right, or to use SSA rewriting if we really care.
llvm-svn: 26089
2006-02-09 19:14:52 +00:00
Chris Lattner
08e2215a18
Fix 80-column violations
...
llvm-svn: 26088
2006-02-09 07:41:14 +00:00
Chris Lattner
dcb3481968
Enhance MVIZ (MaskedValueIsZero) in three ways:
...
1. Teach it new tricks: in particular how to propagate through signed shr and sexts.
2. Teach it to return a bitset of known-1 and known-0 bits, instead of just zero.
3. Teach instcombine (AND X, C) to fold when we know all C bits of X.
This implements Regression/Transforms/InstCombine/bittest.ll, and allows
future things to be simplified.
llvm-svn: 26087
2006-02-09 07:38:58 +00:00
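The known-bits representation and the (AND X, C) fold from point 3 can be modelled with a small self-contained example; these structures are a toy, not the LLVM ones.

    #include <cassert>
    #include <cstdint>

    // For a value X, track which bits are known to be one and which are known
    // to be zero; the remaining bits are unknown.
    struct KnownBits {
      uint32_t One;   // bits known to be 1
      uint32_t Zero;  // bits known to be 0
    };

    // Fold (AND X, C) when every bit selected by C is known in X: the result
    // is then just the known-one bits masked by C.
    bool tryFoldAnd(KnownBits X, uint32_t C, uint32_t &Result) {
      uint32_t Known = X.One | X.Zero;
      if ((C & Known) != C)
        return false;           // some demanded bit of X is still unknown
      Result = X.One & C;       // constant-fold the AND
      return true;
    }

    int main() {
      // Suppose the low byte of X is known to be 0x2A and nothing else is known.
      KnownBits X{0x2Au, 0xD5u};
      uint32_t R;
      assert(tryFoldAnd(X, 0xFFu, R) && R == 0x2Au); // low byte fully known
      assert(!tryFoldAnd(X, 0x100u, R));             // bit 8 unknown: no fold
      return 0;
    }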
Chris Lattner
1ad0bfc2f0
new testcase
...
llvm-svn: 26086
2006-02-09 07:38:30 +00:00
Evan Cheng
6bd0f9c4ba
Match getTargetNode() changes (now return SDNode* instead of SDOperand).
...
llvm-svn: 26085
2006-02-09 07:17:49 +00:00
Evan Cheng
fd9315eaaa
Match getTargetNode() changes (now returns SDNode* instead of SDOperand).
...
llvm-svn: 26084
2006-02-09 07:16:09 +00:00
Evan Cheng
3486292b7d
More changes to reduce frame size.
...
Move all getTargetNode() out of SelectionDAG.h into SelectionDAG.cpp. This
prevents them from being inlined.
Change the getTargetNode() overloads so they return SDNode * instead of
SDOperand to prevent copying. It should also help compilation speed.
llvm-svn: 26083
2006-02-09 07:15:23 +00:00
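Roughly, the interface change looks like the sketch below (types heavily simplified; this is not the real SelectionDAG header): the factory method is declared in the header, defined out of line, and hands back the node pointer itself, so callers neither copy a handle nor pull the construction code inline.

    struct SDNode { /* opcode, operands, ... */ };

    struct SDOperand {   // old-style handle: node pointer + result number
      SDNode *Node;
      unsigned ResNo;
    };

    class SelectionDAG {
    public:
      // Old: SDOperand getTargetNode(unsigned Opcode, ...);  (inline in the .h)
      // New: declared here, defined in SelectionDAG.cpp, returns SDNode*.
      SDNode *getTargetNode(unsigned Opcode);
    };

    // Out-of-line definition (placeholder body for this sketch).
    SDNode *SelectionDAG::getTargetNode(unsigned Opcode) {
      (void)Opcode;
      return nullptr;  // real code would create and intern the target node
    }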
Chris Lattner
d033cbb057
this apparently passes on linux
...
llvm-svn: 26082
2006-02-09 07:12:13 +00:00
Chris Lattner
8eae5f68d5
add an option to turn on LSR.
...
llvm-svn: 26080
2006-02-09 05:06:36 +00:00
Chris Lattner
e976315aec
simplify this code now that each constant pool entry is not separately allocated
...
llvm-svn: 26079
2006-02-09 04:49:59 +00:00
Chris Lattner
c81f375f21
Adjust to MachineConstantPool interface change: instead of keeping a
...
value/alignment pair for each constant, keep a value/offset pair.
llvm-svn: 26078
2006-02-09 04:46:04 +00:00
Chris Lattner
7775687364
instead of keeping track of Constant/alignment pairs, actually compute the
...
offset of each entry from the start of the constant pool.
llvm-svn: 26077
2006-02-09 04:44:32 +00:00
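The offset computation these two constant-pool changes describe amounts to a running layout pass over the entries; a minimal sketch follows, with simplified types rather than the MachineConstantPool API.

    #include <vector>

    struct PoolEntry {
      unsigned Size;       // size of the constant in bytes
      unsigned Alignment;  // required alignment in bytes (power of two)
      unsigned Offset;     // filled in: offset from the start of the pool
    };

    // Round the running size up to each entry's alignment, record that as the
    // entry's offset, then advance by the entry's size.
    void layoutConstantPool(std::vector<PoolEntry> &Entries) {
      unsigned Offset = 0;
      for (PoolEntry &E : Entries) {
        Offset = (Offset + E.Alignment - 1) & ~(E.Alignment - 1); // align up
        E.Offset = Offset;
        Offset += E.Size;
      }
    }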
Chris Lattner
1111a4b76d
rename fields of constant pool entries
...
llvm-svn: 26076
2006-02-09 04:22:52 +00:00
Chris Lattner
10af703b36
Use a MachineConstantPoolEntry struct instead of a pair to hold
...
constant pool entries.
llvm-svn: 26075
2006-02-09 04:21:49 +00:00
Chris Lattner
837a816470
Simplify code, alignment must be specified now.
...
llvm-svn: 26074
2006-02-09 02:26:04 +00:00