This is the last MVTSDNode.
This allows us to eliminate a bunch of special case code for handling
MVTSDNodes.
Also, remove some uses of dyn_cast that should really be cast<> (which is
cheaper in a release build; see the sketch below).
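For reference, a minimal sketch of the distinction (illustrative code using
today's header paths, not the code this commit touched):

    #include "llvm/IR/Instructions.h"
    #include "llvm/Support/Casting.h"
    using namespace llvm;

    void knownType(Instruction *I) {
      // cast<> is for when the type is already known to be right: it only
      // asserts in debug builds, so a release build pays nothing for it.
      auto *BO = cast<BinaryOperator>(I);
      (void)BO;
    }

    void uncertainType(Instruction *I) {
      // dyn_cast<> is for when the type is genuinely uncertain: it always
      // does a runtime check and returns null on mismatch.
      if (auto *LI = dyn_cast<LoadInst>(I))
        (void)LI;
    }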
llvm-svn: 22368
1. Pass Value*'s into lowering methods so that the proper pointers can be
added to loads/stores from the valist.
2. Intrinsics that return void should only return a token chain, not a
token chain/retval pair.
3. Rename LowerVAArgNext -> LowerVAArg, because VANext is long gone.
4. Now that we have Value*'s available in the lowering methods, pass them
into any loads/stores from the valist that are emitted (see the sketch
below).
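A rough sketch of what (1) and (4) amount to, with era-style SelectionDAG
calls (names and signatures approximate, not the actual patch):

    // Hypothetical lowering helper: the IR-level Value* for the valist now
    // travels alongside the SDOperand pointer.
    SDOperand LowerVAArg(SDOperand Chain, SDOperand VAListPtr,
                         Value *VAListV, SelectionDAG &DAG) {
      // Wrap the Value* so it can ride on the load as an extra operand.
      SDOperand SV = DAG.getSrcValue(VAListV);
      // The load from the valist now records which IR object it accesses.
      return DAG.getLoad(MVT::i64, Chain, VAListPtr, SV);
    }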
llvm-svn: 22339
(TRUNC)Stores and (EXT|ZEXT|SEXT)Loads have an extra SDOperand, a
SrcValueSDNode, which contains the Value*. Note that if the operation is
introduced by the backend, it will still have the operand, but the Value*
will be null.
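Consuming that operand in a selector looks roughly like this (a sketch with
era-style API, not exact code):

    // The SrcValue operand is always present on these loads/stores, but
    // the wrapped Value* may be null for nodes the backend introduced.
    SDOperand SV = N->getOperand(N->getNumOperands() - 1);
    const Value *V = cast<SrcValueSDNode>(SV)->getValue();
    if (V) {
      // A real IR-level source object is available (e.g. for alias info).
    }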
llvm-svn: 21599
Lower multiplies by integer constants into shifts, adds and
subtracts. This is a very rough and nasty implementation of Lefevre's
"pattern finding" algorithm. With a few small changes though, it should
end up beating most other methods in common use, regardless of the size
of the constant (currently, it's often one or two shifts worse).
TODO:
* rewrite it so it's not hideously ugly (this is a translation from
  Perl, which doesn't help ;)
* bypass most of it for multiplies by 2^n+1
* (eventually) teach it that some combinations of shift+add are
  cheaper than others (e.g. shladd on ia64, scaled adds on alpha)
* get it to try multiple Booth encodings in search of the cheapest
  routine
* make it work for negative constants
This is hacked up as a DAG->DAG transform, so once I clean it up I hope
it'll be pulled out of here and put somewhere else. The only thing backends
should really have to worry about for now is where to draw the line
between using this code vs. going ahead and doing an integer multiply
anyway.
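For flavor, here is a much simpler stand-in for what the transform does
(plain binary decomposition, not Lefevre's pattern finder; the function is
purely illustrative):

    #include <cstdint>

    // Decompose x * C into shifts and adds by walking the set bits of C.
    // Lefevre's algorithm finds cheaper patterns; this is the naive
    // baseline it competes against.
    uint64_t mulByConst(uint64_t x, uint64_t C) {
      uint64_t Result = 0;
      for (unsigned Bit = 0; C != 0; ++Bit, C >>= 1)
        if (C & 1)
          Result += x << Bit; // one shift + one add per set bit
      return Result;
    }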
llvm-svn: 21560
* fold left shifts of 1, 2, 3 or 4 bits into adds
This doesn't save much now, but should get a serious workout once
multiplies by constants get converted to shift/add/sub sequences.
Hold on! :)
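(For context: ia64's shladd computes (a << count) + b with count limited to
1-4, so a bare shift by 1-4 is just shladd against r0, which always reads
as zero. A sketch of the match, with modern accessor names:)

    // Match (shl x, C) with 1 <= C <= 4 so it can be emitted as
    //   shladd dst = x, C, r0   ;; == (x << C) + 0
    if (auto *CN = dyn_cast<ConstantSDNode>(N->getOperand(1))) {
      uint64_t Amt = CN->getZExtValue();
      if (Amt >= 1 && Amt <= 4) {
        // emit the shladd form instead of a real shift
      }
    }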
llvm-svn: 21282
If the constant being AND'ed with has the form

    0x00000..00FFF..FF

i.e. any number of 0's followed by some number of 1's,
then we use dep.z to just paste zeros over the input. For the special
cases where this is zxt1/zxt2/zxt4, we use those instructions instead,
because we're all about readability!!!
that's what it's about!! readability!
*twitch* ;D
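For the curious, a standalone sketch of the mask test (illustrative code,
not the backend's):

    #include <cstdint>

    // A mask of the form 0...01...1 is exactly a value C where C+1 is a
    // power of two (or C is all ones).
    bool isLowBitsMask(uint64_t C, unsigned &Width) {
      if (C == 0 || (C & (C + 1)) != 0)
        return false;             // not any number of 0's then 1's
      Width = 0;
      for (uint64_t T = C; T != 0; T >>= 1)
        ++Width;                  // count the trailing 1 bits
      // Width 8/16/32 gets zxt1/zxt2/zxt4; anything else gets dep.z.
      return true;
    }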
llvm-svn: 21279
This causes things like this:

    mov r9 = 65535;;
    and r8 = r8, r9;;

to be emitted instead of:

    zxt2 r8 = r8;;
To get this back, the selector for ISD::AND should recognize this case.
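A sketch of the special case being asked for (illustrative, not the actual
selector):

    // When selecting ISD::AND, peel off the masks that are really
    // zero-extensions so zxt1/zxt2/zxt4 come out instead of mov+and.
    if (auto *CN = dyn_cast<ConstantSDNode>(N->getOperand(1))) {
      switch (CN->getZExtValue()) {
      case 0xFFu:       /* emit zxt1 */ break;
      case 0xFFFFu:     /* emit zxt2 */ break;
      case 0xFFFFFFFFu: /* emit zxt4 */ break;
      default: break;   // fall through to the generic and-immediate path
      }
    }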
llvm-svn: 21269
To avoid redundant 'mov out3=r44' type instructions, we need to
tell the register allocator the truth about out? registers.
FIXME: unfortunately, since the list of allocatable registers is immutable,
we can't simply 'delete r127' from the allocation order, say, if 'out0' is
used. The only correct thing we can do is have a linear order of regs:
out7, out6 ... out2, out1, out0, r32, r33, r34 ... r126, r127
and slide a 'window' of 96 registers along this line, depending on how many
of the out? regs a function actually uses. The only downside of this is
that the out? registers will be allocated _first_, which makes the
resulting assembly ugly. :( Note this in the README. Hope this gets fixed
soon. :) (note the 3rd person speech there)
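A toy model of the sliding window (names hypothetical, purely
illustrative): the linear order has 8 + 96 = 104 entries, and a function
using N of the out? registers gets the 96 entries starting at offset 8 - N,
so it sees its own outs first, then r32 upward, and never sees the top N
stacked registers (which alias the outs).

    #include <string>
    #include <vector>

    // Full linear order: out7..out0, then r32..r127 (104 names total).
    std::vector<std::string> linearOrder() {
      std::vector<std::string> Order;
      for (int I = 7; I >= 0; --I)
        Order.push_back("out" + std::to_string(I));
      for (int R = 32; R <= 127; ++R)
        Order.push_back("r" + std::to_string(R));
      return Order;
    }

    // Slide the 96-register window: NumOuts of the out? regs in use means
    // the window starts NumOuts slots before r32.
    std::vector<std::string> allocationOrder(unsigned NumOuts) {
      std::vector<std::string> Order = linearOrder();
      auto Begin = Order.begin() + (8 - NumOuts);
      return std::vector<std::string>(Begin, Begin + 96);
    }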
llvm-svn: 21252
* clean up immediates (we use 14-, 22- and 64-bit immediates now. sane.)
* fold r0/f0/f1 registers into comparisons against 0/0.0/1.0
* fix nasty thinko: didn't use the two-address form of conditional add
  for extending bools to integers, so occasionally there would be
  garbage in the result. it's amazing how often zeros are just
  sitting around in registers ;) - this should fix a bunch of tests.
llvm-svn: 21221
* fix overallocation of integer (stacked) registers: we can't allocate
  registers for local use if they are required as output registers.
  This fixes 'toast' in the test suite, and all sorts of larger programs
  like bzip2 etc.
llvm-svn: 21178