Commit Graph

901 Commits

Author SHA1 Message Date
Adrian Prantl
1a2b6a92ae Bugfix for the debug intrinsic handling in InstCombiner:
Since we can't guarantee that the original dbg.declare intrinsic
is removed by LowerDbgDeclare(), we need to make sure that we are
not inserting the same dbg.value intrinsic over and over.
This removes tons of redundant DIEs when compiling optimized code.

rdar://problem/13056109

llvm-svn: 180615
2013-04-26 17:48:33 +00:00
Nadav Rotem
9360cf35a9 Add an option -vectorize-slp-aggressive for running the BB vectorizer. Make -fslp-vectorize run the slp-vectorizer.
llvm-svn: 179508
2013-04-15 05:39:58 +00:00
Nadav Rotem
4628b1562a Rename the slp-vectorizer clang/llvm flags. No functionality change.
llvm-svn: 179505
2013-04-15 04:54:42 +00:00
Alexey Samsonov
e5e655ef63 [ASan] Allow disabling init-order checks for globals by source file name.
llvm-svn: 179280
2013-04-11 13:20:00 +00:00
Nadav Rotem
96f8f45bd5 Add support for bottom-up SLP vectorization infrastructure.
This commit adds the infrastructure for performing bottom-up SLP vectorization (and other optimizations) on parallel computations.
The infrastructure has three potential users:

  1. The loop vectorizer needs to be able to vectorize AOS data structures such as (sum += A[i] + A[i+1]).

  2. The BB-vectorizer needs this infrastructure for bottom-up SLP vectorization, because bottom-up vectorization is faster to compute.

  3. A loop-roller needs to be able to analyze consecutive chains and roll them into a loop, in order to reduce code size. A loop roller does not need to create vector instructions, and this infrastructure separates the chain analysis from the vectorization.

This patch also includes a simple (100 LOC) bottom up SLP vectorizer that uses the infrastructure, and can vectorize this code:

void SAXPY(int *x, int *y, int a, int i) {
  x[i]   = a * x[i]   + y[i];
  x[i+1] = a * x[i+1] + y[i+1];
  x[i+2] = a * x[i+2] + y[i+2];
  x[i+3] = a * x[i+3] + y[i+3];
}

llvm-svn: 179117
2013-04-09 19:44:35 +00:00
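Conceptually, the bottom-up SLP vectorizer rolls the four isomorphic scalar statements above into one 4-wide operation. A minimal sketch of that result, written with Clang/GCC vector extensions purely for illustration (the pass itself emits LLVM IR; the v4si typedef and function name are inventions of this sketch):

typedef int v4si __attribute__((vector_size(16)));   // four 32-bit lanes

// Hypothetical vectorized equivalent of the scalar SAXPY body above:
// one wide multiply-add instead of four scalar ones.
void SAXPY_vec(int *x, int *y, int a, int i) {
  v4si xv, yv;
  __builtin_memcpy(&xv, &x[i], sizeof(xv));   // load x[i..i+3]
  __builtin_memcpy(&yv, &y[i], sizeof(yv));   // load y[i..i+3]
  v4si av = {a, a, a, a};                     // broadcast the scalar a
  xv = av * xv + yv;                          // element-wise a*x + y
  __builtin_memcpy(&x[i], &xv, sizeof(xv));   // store x[i..i+3]
}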
Alexey Samsonov
984e7940a4 [ASan] emit instrumentation for initialization order checking by default
llvm-svn: 177063
2013-03-14 12:38:58 +00:00
Nick Lewycky
d2ee2e0cd8 Refactor GCOV's six constructor arguments into a struct with a getter that
constructs default arguments. It can now take default arguments from
cl::opt'ions. Add a new -default-gcov-version=... option, and actually test it!

Sink the reverse-order of the version into GCOVProfiling, hiding it from our
users.

llvm-svn: 177002
2013-03-14 05:13:26 +00:00
Nick Lewycky
110760eb4d Fix typo in comment.
llvm-svn: 176997
2013-03-14 01:26:17 +00:00
Nick Lewycky
360f8f1e37 Switch from a version 4.2/4.4 switch to a four-byte version string to be put
into the actual gcov file.

Instead of using the bottom 4 bytes as the function identifier, use a counter.
This makes the identifier numbers stable across multiple runs.

llvm-svn: 176616
2013-03-07 08:28:49 +00:00
Nick Lewycky
7f68cef53a In GCC 4.7, function names are now forbidden from .gcda files. Support this by
passing a null pointer for the function name into GCDAProfiling, and add another
switch to GCOVProfiling.

llvm-svn: 176173
2013-02-27 06:22:56 +00:00
Pedro Artigas
149b81ab7e Enhance integer division emulation support to handle types smaller than 32 bits.
The enhancement is done the trivial way: by extending inputs and truncating outputs,
which is adequate for targets with little or no support for integer arithmetic
on integer types narrower than 32 bits.

llvm-svn: 176139
2013-02-26 23:33:20 +00:00
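A minimal C++ picture of the widening strategy described above, assuming an 8-bit signed divide on a target with no sub-32-bit division (names invented for illustration; the pass itself rewrites IR):

#include <cstdint>

// Sign-extend both operands to 32 bits, divide there, truncate the result.
int8_t sdiv8_via_i32(int8_t a, int8_t b) {
  int32_t wa = static_cast<int32_t>(a);   // sext i8 -> i32
  int32_t wb = static_cast<int32_t>(b);   // sext i8 -> i32
  int32_t wq = wa / wb;                   // 32-bit divide (or its expansion)
  return static_cast<int8_t>(wq);         // trunc i32 -> i8
}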
Hal Finkel
5e320e9019 BBVectorize: Cap the number of candidate pairs in each instruction group
For some basic blocks, it is possible to generate many candidate pairs for
relatively few pairable instructions. When many (tens of thousands) of these pairs
are generated for a single instruction group, the time taken to generate and
rank the different vectorization plans can become quite large. As a result, we now
cap the number of candidate pairs within each instruction group. This is done by
closing out the group once the threshold is reached (set now at 3000 pairs).

Although this will limit the overall compile-time impact, this may not be the best
way to achieve this result. It might be better, for example, to prune excessive
candidate pairs after the fact to prevent the generation of short but highly-connected
groups. We can experiment with this in the future.

This change reduces the overall compile-time slowdown of the csa.ll test case in
PR15222 to ~5x. If 5x is still considered too large, a lower limit can be
used as the default.

This represents a functionality change, but only for very large inputs
(thus, there is no regression test).

llvm-svn: 175251
2013-02-15 04:28:42 +00:00
Jakub Staszak
eb31170ee2 Remove unneeded #includes.
llvm-svn: 174806
2013-02-09 13:29:10 +00:00
Michael Gottesman
3d8ed99b1f Extracted ObjCARC.cpp into its own library libLLVMObjCARCOpts in preparation for refactoring the ARC Optimizer.
llvm-svn: 173647
2013-01-28 01:35:51 +00:00
Will Dietz
8f6c040948 Move Blacklist.h to include/ to enable use from clang.
llvm-svn: 172806
2013-01-18 11:29:21 +00:00
Alexey Samsonov
fdff1da8c4 ASan: add optional 'zero-based shadow' option to ASan passes. Always pass the values of shadow scale and offset to the runtime.
llvm-svn: 172709
2013-01-17 11:12:32 +00:00
Jakub Staszak
f1ea1a7f37 Fix include guards so they exactly match file names.
llvm-svn: 172025
2013-01-10 00:45:19 +00:00
Chandler Carruth
4c4c8c33e1 Move CallGraphSCCPass.h into the Analysis tree; that's where the
implementation lives already.

llvm-svn: 171746
2013-01-07 15:26:48 +00:00
Chandler Carruth
0e84971ae2 Switch the SCEV expander and LoopStrengthReduce to use
TargetTransformInfo rather than TargetLowering, removing one of the
primary instances of the layering violation of Transforms depending
directly on Target.

This is a really big deal because LSR used to be a "special" pass that
could only be tested fully using llc and by looking at the full output
of it. It also couldn't run with any other loop passes because it had to
be created by the backend. No longer is this true. LSR is now just
a normal pass and we should probably lift the creation of LSR out of
lib/CodeGen/Passes.cpp and into the PassManagerBuilder. =] I've not done
this, or updated all of the tests to use opt and a triple, because
I suspect someone more familiar with LSR would do a better job. This
change should be essentially without functional impact for normal
compilations, and only change the behavior of targetless compilations.

The conversion required changing all of the LSR code to refer to the TTI
interfaces, which fortunately are very similar to TargetLowering's
interfaces. However, it also allowed us to *always* expect to have some
implementation around. I've pushed that simplification through the pass,
and leveraged it to simplify code somewhat. It required some test
updates for one of two things: either we used to skip some checks
altogether but now we get the default "no" answer for them, or we used
to have no information about the target and now we do have some.

I've also started the process of removing AddrMode, as the TTI interface
doesn't use it any longer. In some cases this simplifies code, and in
others it adds some complexity, but I think it's not a bad tradeoff even
there. Subsequent patches will try to clean this up even further and use
other (more appropriate) abstractions.

Yet again, almost all of the formatting changes brought to you by
clang-format. =]

llvm-svn: 171735
2013-01-07 14:41:08 +00:00
Chandler Carruth
f8bff2bea0 Make SimplifyCFG simply depend upon TargetTransformInfo and pass it
through as a reference rather than a pointer. There is always *some*
implementation of this available, so this simplifies code by not having
to test for whether it is available or not.

Further, it turns out there were piles of places where SimplifyCFG was
recursing and not passing down either TD or TTI. These are fixed to be
more pedantically consistent even though I don't have any particular
cases where it would matter.

llvm-svn: 171691
2013-01-07 03:53:25 +00:00
Chandler Carruth
413d8d63a5 Sink the AddressingModeMatcher helper class into an anonymous namespace
next to its only user. This helper relies on TargetLowering information
that shouldn't be generally used throughout the Transforms library, and
so it made little sense as a generic utility.

This also consolidates the file where we need to remove the remaining
uses of TargetLowering in favor of the IR-layer abstract interface in
TargetTransformInfo.

llvm-svn: 171590
2013-01-05 02:09:22 +00:00
Chandler Carruth
4c1f3c24db Move all of the header files which are involved in modelling the LLVM IR
into their new header subdirectory: include/llvm/IR. This matches the
directory structure of lib, and begins to correct a long standing point
of file layout clutter in LLVM.

There are still more header files to move here, but I wanted to handle
them in separate commits to make tracking what files make sense at each
layer easier.

The only really questionable files here are the target intrinsic
tablegen files. But that's a battle I'd rather not fight today.

I've updated both CMake and Makefile build systems (I think, and my
tests think, but I may have missed something).

I've also re-sorted the includes throughout the project. I'll be
committing updates to Clang, DragonEgg, and Polly momentarily.

llvm-svn: 171366
2013-01-02 11:36:10 +00:00
Alexey Samsonov
3a437b2e64 Add proper support for -fsanitize-blacklist= flag for TSan and MSan. LLVM part.
llvm-svn: 171183
2012-12-28 09:30:44 +00:00
Evgeniy Stepanov
ea2d72253e [msan] Remove unreachable blocks before instrumenting a function.
llvm-svn: 170883
2012-12-21 11:18:49 +00:00
Evgeniy Stepanov
a0f0c84fb8 [msan] Add track-origins argument to the pass constructor.
llvm-svn: 170544
2012-12-19 13:55:51 +00:00
Nadav Rotem
2c25a05088 LoopVectorizer: Use the "optsize" attribute to decide if we are allowed to increase the function size.
llvm-svn: 170004
2012-12-12 19:29:45 +00:00
Alexey Samsonov
46f9cc5d51 Improve debug info generated with enabled AddressSanitizer.
When ASan replaces <alloca instruction> with
<offset into a common large alloca>, it should also patch
llvm.dbg.declare calls and replace debug info descriptors to mark
that we've replaced alloca with a value that stores an address
of the user variable, not the user variable itself.

See PR11818 for more context.

llvm-svn: 169984
2012-12-12 14:31:53 +00:00
Nadav Rotem
edcebe4904 LoopVectorizer: When -Os is used, vectorize only loops that don't require a tail loop. There is no testcase because I don't know of a way to initialize the loop vectorizer pass without adding an additional hidden flag.
llvm-svn: 169950
2012-12-12 01:11:46 +00:00
Rafael Espindola
03394e4dc6 Use an ArrayRef instead of a std::vector&.
llvm-svn: 169881
2012-12-11 16:36:02 +00:00
Bill Wendling
bb1f8f293a Don't use a red zone for code coverage if the user specified `-mno-red-zone'.
The `-mno-red-zone' flag wasn't being propagated to the functions that code
coverage generates. This allowed some of them to use the red zone when that
wasn't allowed.
<rdar://problem/12843084>

llvm-svn: 169754
2012-12-10 19:46:49 +00:00
Bill Wendling
3f153ce37b s/AttrListPtr/AttributeSet/g to better label what this class is going to be in the near future.
llvm-svn: 169651
2012-12-07 23:16:57 +00:00
Matt Beaumont-Gay
3e68d7d342 Add 'using' declarations to suppress -Woverloaded-virtual warnings.
llvm-svn: 169214
2012-12-04 05:41:27 +00:00
Nadav Rotem
7a8cea8699 minor renaming, documentation and cleanups.
llvm-svn: 169175
2012-12-03 22:57:09 +00:00
Alexey Samsonov
5084c52b4e ASan: add blacklist file to ASan pass options. Clang patch for this will follow.
llvm-svn: 169143
2012-12-03 19:09:26 +00:00
Chandler Carruth
ca305491f6 Sort the #include lines for the include/... tree with the script.
AKA: Recompile *ALL* the source code!

This one went much better. No manual edits here. I spot-checked for
silliness and grep-checked for really broken edits and everything seemed
good. It all still compiles. Yell if you see something that looks goofy.

llvm-svn: 169133
2012-12-03 17:02:12 +00:00
Chandler Carruth
a490793037 Use the new script to sort the includes of every file under lib.
Sooooo many of these had incorrect or strange main module includes.
I have manually inspected all of these, and fixed the main module
include to be the nearest plausible thing I could find. If you own or
care about any of these source files, I encourage you to take some time
and check that these edits were sensible. I can't have broken anything
(I strictly added headers, and reordered them, never removed), but they
may not be the headers you'd really like to identify as containing the
API being implemented.

Many forward declarations and missing includes were added to header
files to allow them to parse cleanly when included first. The main
module rule does in fact have its merits. =]

llvm-svn: 169131
2012-12-03 16:50:05 +00:00
Alexey Samsonov
cf993b7c27 Add options to AddressSanitizer passes to make them configurable by frontend.
llvm-svn: 168910
2012-11-29 18:14:24 +00:00
Evgeniy Stepanov
6d7e99f2ac Initial commit of MemorySanitizer.
Compiler pass only.

llvm-svn: 168866
2012-11-29 09:57:20 +00:00
Kostya Serebryany
133cb3c737 [asan] Split AddressSanitizer into two passes (FunctionPass, ModulePass), LLVM part. This requires a clang part which will follow.
llvm-svn: 168781
2012-11-28 10:31:36 +00:00
Joey Gouly
e5caa7eb3e Remove unused parameter Penalty from the BoundsChecking pass.
llvm-svn: 168511
2012-11-23 10:47:35 +00:00
Meador Inge
a191db7d99 instcombine: Migrate math library call simplifications
This patch migrates the math library call simplifications from the
simplify-libcalls pass into the instcombine library call simplifier.

I have typically migrated just one simplifier at a time, but the math
simplifiers are interdependent because:

   1. CosOpt, PowOpt, and Exp2Opt all depend on UnaryDoubleFPOpt.
   2. CosOpt, PowOpt, Exp2Opt, and UnaryDoubleFPOpt all depend on
      the option -enable-double-float-shrink.

These two factors made migrating each of these simplifiers individually
more of a pain than it would be worth.  So, I migrated them all together.

llvm-svn: 167815
2012-11-13 04:16:17 +00:00
Meador Inge
9ad3990ff0 Add method for replacing instructions to LibCallSimplifier
In some cases the library call simplifier may need to replace instructions
other than the library call being simplified.  In those cases it may be
necessary for clients of the simplifier to override how the replacements
are actually done.  As such, a new overrideable method for replacing
instructions was added to LibCallSimplifier.

A new subclass of LibCallSimplifier is also defined which overrides
the instruction replacement method.  This is because the instruction
combiner defines its own replacement method which updates the worklist
when instructions are replaced.

llvm-svn: 167681
2012-11-11 03:51:43 +00:00
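The override mechanism described above is an ordinary virtual hook; a hypothetical sketch with invented names (the real classes are LibCallSimplifier and the instruction combiner's subclass, which this does not reproduce):

#include <cstdio>

struct Inst { const char *name; };   // stand-in for an IR instruction

class Simplifier {
public:
  virtual ~Simplifier() {}
  // Clients may override how a replacement is actually carried out.
  virtual void replaceInst(Inst *from, Inst *to) {
    std::printf("replace %s with %s\n", from->name, to->name);
  }
};

class CombinerSimplifier : public Simplifier {
public:
  // The combiner's variant also records the change on its own worklist.
  void replaceInst(Inst *from, Inst *to) override {
    Simplifier::replaceInst(from, to);
    std::printf("  ...and push %s onto the combiner worklist\n", to->name);
  }
};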
Chandler Carruth
0a6b99ee2b Revert the majority of the next patch in the address space series:
r165941: Resubmit the changes to llvm core to update the functions to
         support different pointer sizes on a per address space basis.

Despite this commit log, this change primarily changed stuff outside of
VMCore, and those changes do not carry any tests for correctness (or
even plausibility), and we have consistently found questionable or flat
out incorrect cases in these changes. Most of them are probably correct,
but we need to devise a system that makes it more clear when we have
handled the address space concerns correctly, and ideally each pass that
gets updated would receive an accompanying test case that exercises that
pass specifically w.r.t. alternate address spaces.

However, from this commit, I have retained the new C API entry points.
Those were an orthogonal change that probably should have been split
apart, but they seem entirely good.

In several places the changes were very obvious cleanups with no actual
multiple address space code added; these I have not reverted when
I spotted them.

In a few other places there were merge conflicts due to a cleaner
solution being implemented later, often not using address spaces at all.
In those cases, I've preserved the new code which isn't address space
dependent.

This is part of my ongoing effort to clean out the partial address space
code which carries high risk and low test coverage, and not likely to be
finished before the 3.2 release looms closer. Duncan and I would both
like to see the above issues addressed before we return to these
changes.

llvm-svn: 167222
2012-11-01 09:14:31 +00:00
Chandler Carruth
76f7f4a33e Revert the series of commits starting with r166578 which introduced the
getIntPtrType support for multiple address spaces via a pointer type,
and also introduced a crasher bug in the constant folder reported in
PR14233.

These commits also contained several problems that should really be
addressed before they are re-committed. I have avoided reverting various
cleanups to the DataLayout APIs that are reasonable to have moving
forward in order to reduce the amount of churn, and minimize the number
of commits that were reverted. I've also manually updated merge
conflicts and manually arranged for the getIntPtrType function to stay
in DataLayout and to be defined in a plausible way after this revert.

Thanks to Duncan for working through this exact strategy with me, and
Nick Lewycky for tracking down the really annoying crasher this
triggered. (Test case to follow in its own commit.)

After discussing with Duncan extensively, and based on a note from
Micah, I'm going to continue to back out some more of the more
problematic patches in this series in order to ensure we go into the
LLVM 3.2 branch with a reasonable story here. I'll send a note to
llvmdev explaining what's going on and why.

Summary of reverted revisions:

r166634: Fix a compiler warning with an unused variable.
r166607: Add some cleanup to the DataLayout changes requested by
         Chandler.
r166596: Revert "Back out r166591, not sure why this made it through
         since I cancelled the command. Bleh, sorry about this!
r166591: Delete a directory that wasn't supposed to be checked in yet.
r166578: Add in support for getIntPtrType to get the pointer type based
         on the address space.
llvm-svn: 167221
2012-11-01 08:07:29 +00:00
Hans Wennborg
40eb1b4055 Use TargetTransformInfo to control switch-to-lookup table transformation
When the switch-to-lookup tables transform landed in SimplifyCFG, it
was pointed out that this could be inappropriate for some targets.
Since there was no way at the time for the pass to know anything about
the target, an awkward reverse-transform was added in CodeGenPrepare
that turned lookup tables back into switches for some targets.

This patch uses the new TargetTransformInfo to determine if a
switch should be transformed, and removes
CodeGenPrepare::ConvertLoadToSwitch.

llvm-svn: 167011
2012-10-30 11:23:25 +00:00
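For context, the transformation being gated above turns a dense switch over constant results into a bounds-checked table load; a rough source-level before/after sketch (illustrative only, not the pass's actual output):

// Before: dense switch selecting among constants.
int kindToWidth(int kind) {
  switch (kind) {
  case 0: return 8;
  case 1: return 16;
  case 2: return 32;
  case 3: return 64;
  default: return 0;
  }
}

// After: the same mapping as a lookup table with a range check.
int kindToWidthTable(int kind) {
  static const int Table[4] = {8, 16, 32, 64};
  return static_cast<unsigned>(kind) < 4 ? Table[kind] : 0;
}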
Nadav Rotem
f8e4e2b652 Rename the BB-vectorize flag to match the dragonegg name
llvm-svn: 166948
2012-10-29 18:01:14 +00:00
Nadav Rotem
24b8d6c6f1 Change the PassManagerBuilder (used by -O3) loop vectorizer flag from -vectorize to -vectorize-loops because we don't want to share the same flag as the bb-vectorizer.
llvm-svn: 166937
2012-10-29 16:36:25 +00:00
Rafael Espindola
4b51029c9e Change the internalize pass to internalize all symbols when given an empty
list of externals. This makes sense since a shared library with no symbols
can still be useful if it has static constructors.

llvm-svn: 166795
2012-10-26 18:47:48 +00:00
Micah Villmow
521311700f Add in support for getIntPtrType to get the pointer type based on the address space.
This checkin also adds in some tests that utilize these paths and updates some of the
clients.

llvm-svn: 166578
2012-10-24 15:52:52 +00:00
Nadav Rotem
13a468b929 revert r166264 because the LTO build is still failing
llvm-svn: 166340
2012-10-19 21:28:43 +00:00
Evgeniy Stepanov
3628bf2d69 Move SplitBlockAndInsertIfThen to BasicBlockUtils.
llvm-svn: 166278
2012-10-19 10:48:31 +00:00
Nadav Rotem
ac33a84388 recommit the patch that makes LSR and LowerInvoke use the TargetTransform interface.
llvm-svn: 166264
2012-10-19 04:27:49 +00:00
Chandler Carruth
7bfc26a7b4 Introduce a BarrierNoop pass, a hack designed to allow *some* control
over the implicitly-formed-and-nesting CGSCC pass manager and function
pass managers, especially when using them on the opt commandline or
using extension points in the module builder. The '-barrier' opt flag
(or the pass itself) will create a no-op module pass in the pipeline,
resetting the pass manager stack, and allowing the creation of a new
pipeline of function passes or CGSCC passes to be created that is
independent from any previous pipelines.

For example, this can be used to test running two CGSCC passes in
independent CGSCC pass managers as opposed to in the same CGSCC pass
manager. It also allows us to introduce a further hack into the
PassManagerBuilder to separate the O0 pipeline extension passes from the
always-inliner's CGSCC pass manager, which they likely do not want to
participate in... At the very least none of the Sanitizer passes want
this behavior.

This fixes a bug with ASan at O0 currently, and I'll commit the ASan
test which covers this pass. I'm happy to add a test case that this pass
exists and works, but not sure how much time folks would like me to
spend adding test cases for the details of its behavior of partition
pass managers.... The whole thing is just vile, and mostly intended to
unblock ASan, so I'm hoping to rip this all out in a brave new pass
manager world.

llvm-svn: 166172
2012-10-18 08:05:46 +00:00
Bob Wilson
b6adb70bdd Temporarily revert the TargetTransform changes.
The TargetTransform changes are breaking LTO bootstraps of clang.  I am
working with Nadav to figure out the problem, but I am reverting it for now
to get our buildbots working.

This reverts svn commits: 165665 165669 165670 165786 165787 165997
and I have also reverted clang svn 165741

llvm-svn: 166168
2012-10-18 05:43:52 +00:00
Nadav Rotem
8303c909c7 Add a loop vectorizer.
llvm-svn: 166112
2012-10-17 18:25:06 +00:00
Micah Villmow
272663afc2 Resubmit the changes to llvm core to update the functions to support different pointer sizes on a per address space basis.
llvm-svn: 165941
2012-10-15 16:24:29 +00:00
Kostya Serebryany
a6cd7ad8f2 [asan] make AddressSanitizer a FunctionPass instead of a ModulePass. This will simplify chaining other FunctionPasses with asan. Also some minor cleanup.
llvm-svn: 165936
2012-10-15 14:20:06 +00:00
Meador Inge
0d4d368820 Implement new LibCallSimplifier class
This patch implements the new LibCallSimplifier class as outlined in [1].
In addition to providing the new base library simplification infrastructure,
all the fortified library call simplifications were moved over to the new
infrastructure.  The rest of the library simplification optimizations will
be moved over with follow up patches.

NOTE: The original fortified library call simplifier located in the
SimplifyFortifiedLibCalls class was not removed because it is still
used by CodeGenPrepare.  This class will eventually go away too.

[1] http://lists.cs.uiuc.edu/pipermail/llvmdev/2012-August/052283.html

llvm-svn: 165873
2012-10-13 16:45:24 +00:00
Micah Villmow
4eb108750d Revert 165732 for further review.
llvm-svn: 165747
2012-10-11 21:27:41 +00:00
Micah Villmow
d8b76fdc50 Add in the first iteration of support for llvm/clang/lldb to allow variable per address space pointer sizes to be optimized correctly.
llvm-svn: 165726
2012-10-11 17:21:41 +00:00
Nadav Rotem
b82a3821f7 Add a new interface to allow IR-level passes to access codegen-specific information.
llvm-svn: 165665
2012-10-10 22:04:55 +00:00
Nadav Rotem
c94270cb4d Refactor the AddrMode class out of TLI to its own header file.
This class is used by LSR and a number of places in the codegen.
This is the first step in de-coupling LSR from TLI, and creating
a new interface in between them.

llvm-svn: 165455
2012-10-08 23:06:34 +00:00
Micah Villmow
bb1a25cd67 Move TargetData to DataLayout.
llvm-svn: 165402
2012-10-08 16:38:25 +00:00
Preston Gurd
0256511b5d This patch corrects commit 165126 by using an integer bit width instead of
a pointer to a type, in order to remove the uses of getGlobalContext().

Patch by Tyler Nowicki.

llvm-svn: 165255
2012-10-04 21:33:40 +00:00
Craig Topper
0cda5b9a3c Rename virtual table anchors from Anchor() to anchor() for consistency with the rest of the tree.
llvm-svn: 164666
2012-09-26 06:36:36 +00:00
Michael Ilseman
3e79cb01e7 Expansions for u/srem, using the udiv expansion. More unit tests for udiv and u/srem.
Fixed issue with Release build.

llvm-svn: 164654
2012-09-26 01:55:01 +00:00
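The remainder expansions lean on the identity rem(a, b) = a - (a / b) * b, with the divide standing in for the already-expanded udiv sequence; a hedged sketch of the unsigned case (illustrative C++ only, the real utility emits IR):

#include <cstdint>

// Expand urem in terms of udiv: a - (a / b) * b. The signed variant follows
// the same shape after working on magnitudes and restoring the sign.
uint32_t urem_via_udiv(uint32_t a, uint32_t b) {
  uint32_t q = a / b;   // placeholder for the expanded udiv sequence
  return a - q * b;     // remainder recovered with one multiply and subtract
}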
Chad Rosier
28c95ff726 Revert r164614 to appease the buildbots.
llvm-svn: 164627
2012-09-25 19:57:20 +00:00
Michael Ilseman
16fda61fbf Expansions for u/srem, using the udiv expansion. More unit tests for udiv and u/srem.
llvm-svn: 164614
2012-09-25 17:56:47 +00:00
Michael Ilseman
462b51585f Document the interface for integer expansion, using doxygen-style comments
llvm-svn: 164231
2012-09-19 16:03:57 +00:00
Michael Ilseman
c064f031ff Forward declarations
llvm-svn: 164230
2012-09-19 15:55:03 +00:00
Benjamin Kramer
96b6f5646b Remove unused and broken CloneFunction wrapper.
It converted the CodeInfo argument to bool implicitly.

llvm-svn: 164215
2012-09-19 13:03:01 +00:00
Michael Ilseman
2555741dad New utility for expanding integer division for targets that don't support it.
Implementation derived from compiler-rt's implementation of signed and unsigned integer division.

llvm-svn: 164173
2012-09-18 22:02:40 +00:00
Craig Topper
24f46609c1 Mark unimplemented copy constructors and copy assignment operators as LLVM_DELETED_FUNCTION.
llvm-svn: 164017
2012-09-17 07:16:40 +00:00
Chandler Carruth
93b1521a98 Port the SSAUpdater-based promotion logic from the old SROA pass to the
new one, and add support for running the new pass in that mode and in
that slot of the pass manager. With this the new pass can completely
replace the old one within the pipeline.

The strategy for enabling or disabling the SSAUpdater logic is to do it
by making the requirement of the domtree analysis optional. By default,
it is required and we get the standard mem2reg approach. This is usually
the desired strategy when run in stand-alone situations. Within the
CGSCC pass manager, we disable requiring of the domtree analysis and
consequentially trigger fallback to the SSAUpdater promotion.

In theory this would allow the pass to re-use a domtree if one happened
to be available even when run in a mode that doesn't require it. In
practice, it lets us have a single pass rather than two which was
simpler for me to wrap my head around.

There is a hidden flag to force the use of the SSAUpdater code path for
the purpose of testing. The primary testing strategy is just to run the
existing tests through that path. One notable difference is that it has
custom code to handle lifetime markers, and one of the tests has been
enhanced to exercise that code.

This has survived a bootstrap and the test suite without serious
correctness issues, however my run of the test suite produced *very*
alarming performance numbers. I don't entirely understand or trust them
though, so more investigation is on-going.

To aid my understanding of the performance impact of the new SROA now
that it runs throughout the optimization pipeline, I'm enabling it by
default in this commit, and will disable it again once the LNT bots have
picked up one iteration with it. I want to get those bots (which are
much more stable) to evaluate the impact of the change before I jump to
any conclusions.

NOTE: Several Clang tests will fail because they run -O3 and check the
result's order of output. They'll go back to passing once I disable it
again.

llvm-svn: 163965
2012-09-15 11:43:14 +00:00
Evan Cheng
ef1d563477 Stylistic and 80-col fixes
llvm-svn: 163940
2012-09-14 21:25:34 +00:00
Chandler Carruth
3be91908a4 Introduce a new SROA implementation.
This is essentially a ground up re-think of the SROA pass in LLVM. It
was initially inspired by a few problems with the existing pass:
- It is subject to the bane of my existence in optimizations: arbitrary
  thresholds.
- It is overly conservative about which constructs can be split and
  promoted.
- The vector value replacement aspect is separated from the splitting
  logic, missing many opportunities where splitting and vector value
  formation can work together.
- The splitting is entirely based around the underlying type of the
  alloca, despite this type often having little to do with the reality
  of how that memory is used. This is especially prevalent with unions
  and base classes where we tail-pack derived members.
- When splitting fails (often due to the thresholds), the vector value
  replacement (again because it is separate) can kick in for
  preposterous cases where we simply should have split the value. This
  results in forming i1024 and i2048 integer "bit vectors" that
  tremendously slow down subsequent IR optimizations (due to large
  APInts) and impede the backend's lowering.

The new design takes an approach that fundamentally is not susceptible
to many of these problems. It is the result of a discussion between
myself and Duncan Sands over IRC about how to preemptively avoid these
types of problems and how to do SROA in a more principled way. Since
then, it has evolved and grown, but this remains an important aspect: it
fixes real world problems with the SROA process today.

First, the transform of SROA actually has little to do with replacement.
It has more to do with splitting. The goal is to take an aggregate
alloca and form a composition of scalar allocas which can replace it and
will be most suitable to the eventual replacement by scalar SSA values.
The actual replacement is performed by mem2reg (and in the future
SSAUpdater).

The splitting is divided into four phases. The first phase is an
analysis of the uses of the alloca. This phase recursively walks uses,
building up a dense datastructure representing the ranges of the
alloca's memory actually used and checking for uses which inhibit any
aspects of the transform such as the escape of a pointer.

Once we have a mapping of the ranges of the alloca used by individual
operations, we compute a partitioning of the used ranges. Some uses are
inherently splittable (such as memcpy and memset), while scalar uses are
not splittable. The goal is to build a partitioning that has the minimum
number of splits while placing each unsplittable use in its own
partition. Overlapping unsplittable uses belong to the same partition.
This is the target split of the aggregate alloca, and it maximizes the
number of scalar accesses which become accesses to their own alloca and
candidates for promotion.

Third, we re-walk the uses of the alloca and assign each specific memory
access to all the partitions touched so that we have dense use-lists for
each partition.

Finally, we build a new, smaller alloca for each partition and rewrite
each use of that partition to use the new alloca. During this phase the
pass will also work very hard to transform uses of an alloca into a form
suitable for promotion, including forming vector operations, speculating
loads through PHI nodes and selects, etc.

After splitting is complete, each newly refined alloca that is
a candidate for promotion to a scalar SSA value is run through mem2reg.

There are lots of reasonably detailed comments in the source code about
the design and algorithms, and I'm going to be trying to improve them in
subsequent commits to ensure this is well documented, as the new pass is
in many ways more complex than the old one.

Some of this is still a WIP, but the current state is reasonably stable.
It has passed bootstrap, the nightly test suite, and Duncan has run it
successfully through the ACATS and DragonEgg test suites. That said, it
remains behind a default-off flag until the last few pieces are in
place, and full testing can be done.

Specific areas I'm looking at next:
- Improved comments and some code cleanup from reviews.
- SSAUpdater and enabling this pass inside the CGSCC pass manager.
- Some datastructure tuning and compile-time measurements.
- More aggressive FCA splitting and vector formation.

Many thanks to Duncan Sands for the thorough final review, as well as
Benjamin Kramer for lots of review during the process of writing this
pass, and Daniel Berlin for reviewing the data structures and algorithms
and general theory of the pass. Also, several other people on IRC, over
lunch tables, etc for lots of feedback and advice.

llvm-svn: 163883
2012-09-14 09:22:59 +00:00
Alex Rosenberg
ab6a7af449 Add a pass that renames everything with metasyntactic names. This works well after using bugpoint to reduce the confusion presented by the original names, which no longer mean what they used to.
llvm-svn: 163592
2012-09-11 02:46:18 +00:00
Andrew Trick
e06bae49d1 Remove unused declaration
llvm-svn: 163579
2012-09-11 00:39:12 +00:00
Benjamin Kramer
3d97b600fa Move bypassSlowDivision into the llvm namespace.
llvm-svn: 163503
2012-09-10 11:52:08 +00:00
Jakub Staszak
be574b61dd Remove unneeded code.
llvm-svn: 163160
2012-09-04 19:49:17 +00:00
Preston Gurd
c80dc7d214 Generic Bypass Slow Div
- CodeGenPrepare pass for identifying div/rem ops
- Backend specifies the type mapping using addBypassSlowDivType
- Enabled only for Intel Atom with O2 32-bit -> 8-bit
- Replace IDIV with instructions which test its value and use DIVB if the value
is positive and less than 256.
- In the case when the quotient and remainder of a divide are used, a DIV
and a REM instruction will be present in the IR. In the non-Atom case
they are both lowered to IDIVs and CSE removes the redundant IDIV instruction,
using the quotient and remainder from the first IDIV. However,
due to this optimization, CSE is not able to eliminate redundant
IDIV instructions because they are located in different basic blocks.
This is overcome by calculating both the quotient (DIV) and remainder (REM)
in each basic block that is inserted by the optimization and reusing the result
values when a subsequent DIV or REM instruction uses the same operands.
- Test cases check for the presence of the optimization when calculating
either the quotient, remainder, or both.

Patch by Tyler Nowicki!

llvm-svn: 163150
2012-09-04 18:22:17 +00:00
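The inserted fast path amounts to the following source-level shape, assuming the 32-bit to 8-bit bypass described above (illustrative C++ with invented names, not the emitted IR):

#include <cstdint>

// Take the cheap 8-bit divide (DIVB) when both operands fit in 8 bits;
// quotient and remainder are produced together in each inserted block so a
// later DIV or REM on the same operands can reuse the results.
void divRemBypass(uint32_t a, uint32_t b, uint32_t &q, uint32_t &r) {
  if ((a | b) < 256u) {              // both values are below 256
    q = (uint8_t)a / (uint8_t)b;     // fast 8-bit divide
    r = (uint8_t)a % (uint8_t)b;     // fast 8-bit remainder
  } else {
    q = a / b;                       // full-width divide
    r = a % b;
  }
}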
Benjamin Kramer
b92d13cc42 Make MemoryBuiltins aware of TargetLibraryInfo.
This disables malloc-specific optimization when -fno-builtin (or -ffreestanding)
is specified. This has been a problem for a long time but became more severe
with the recent memory builtin improvements.

Since the memory builtin functions are used everywhere, this required passing
TLI in many places. This means that functions that now have an optional TLI
argument, like RecursivelyDeleteTriviallyDeadFunctions, won't remove dead
mallocs anymore if the TLI argument is missing. I've updated most passes to do
the right thing.

Fixes PR13694 and probably others.

llvm-svn: 162841
2012-08-29 15:32:21 +00:00
Nuno Lopes
c3de8e176e add EmitStrNLen()
llvm-svn: 160741
2012-07-25 17:18:59 +00:00
Nuno Lopes
537a3395e5 make all Emit*() functions consult the TargetLibraryInfo information before creating a call to a library function.
Update all clients to pass the TLI information around.
Previous draft reviewed by Eli.

llvm-svn: 160733
2012-07-25 16:46:31 +00:00
Nuno Lopes
6147c101eb baby steps toward fixing some problems with inbounds GEPs that overflow, as discussed 2 months ago or so.
Make sure we do not emit index computations with NSW flags so that we don't get an undef value if the GEP overflows

llvm-svn: 160589
2012-07-20 23:07:40 +00:00
Nuno Lopes
66a3934c7a move the bounds checking pass to the instrumentation folder, where it belongs. I dunno why in the world I dropped it in the Scalar folder in the first place.
No functionality change.

llvm-svn: 160587
2012-07-20 22:39:33 +00:00
Chandler Carruth
4b51f99c87 Move llvm/Support/IRBuilder.h -> llvm/IRBuilder.h
This was always part of the VMCore library out of necessity -- it deals
entirely in the IR. The .cpp file in fact was already part of the VMCore
library. This is just a mechanical move.

I've tried to go through and re-apply the coding standard's preferred
header sort, but at 40-ish files, I may have gotten some wrong. Please
let me know if so.

I'll be committing the corresponding updates to Clang and Polly, and
Duncan has DragonEgg.

Thanks to Bill and Eric for giving the green light for this bit of cleanup.

llvm-svn: 159421
2012-06-29 12:38:19 +00:00
Hal Finkel
89ff4e2b47 Allow BBVectorize to form non-2^n-length vectors.
The original algorithm only used recursive pair fusion of equal-length
types. This is now extended to allow pairing of any types that share
the same underlying scalar type. Because we would still generally
prefer the 2^n-length types, those are formed first. Then a second
set of iterations form the non-2^n-length types.

Also, a call to SimplifyInstructionsInBlock has been added after each
pairing iteration. This takes care of DCE (and a few other things)
that make the following iterations execute somewhat faster. For the
same reason, some of the simple shuffle-combination cases are now
handled internally.

There is some additional refactoring work to be done, but I've had
many requests for this feature, so additional refactoring will come
soon in future commits (as will additional test cases).

llvm-svn: 159330
2012-06-28 05:42:42 +00:00
Eli Bendersky
5d45af3f75 The name (and the comment describing) of llvm::GetFirstDebugLocInBasicBlock no longer represents what the function does. Therefore, the function is removed and its functionality is folded into the only place in the code-base where it was being used.
llvm-svn: 159133
2012-06-25 10:13:14 +00:00
Hal Finkel
409cab2a0a Allow controlling vectorization of boolean values separately from other integer types.
These are used as the result of comparisons, and often handled differently from larger integer types.

llvm-svn: 159111
2012-06-24 13:28:01 +00:00
Hal Finkel
d0a65988d8 Allow BBVectorize to fuse compare instructions.
llvm-svn: 159088
2012-06-23 21:52:50 +00:00
Nadav Rotem
313b090606 Add a number of threshold arguments to the SRA pass.
A patch by Tom Stellard with minor changes.

llvm-svn: 158918
2012-06-21 13:44:31 +00:00
Nuno Lopes
114b8eaa9c add a new pass to instrument loads and stores for run-time bounds checking
move EmitGEPOffset from InstCombine to Transforms/Utils/Local.h

(a draft of this) patch reviewed by Andrew, thanks.

llvm-svn: 157261
2012-05-22 17:19:09 +00:00
Andrew Trick
c0a8fc3638 Remove a stale forward declaration.
llvm-svn: 156770
2012-05-14 18:03:19 +00:00
Eric Christopher
255767a1d7 Remove excess semi-colons to quiet warnings.
llvm-svn: 156416
2012-05-08 20:45:04 +00:00
Chandler Carruth
51819a2bcf Teach the code extractor how to extract a sequence of blocks from
RegionInfo's RegionNode. This mirrors the logic for automating the
extraction from a Loop.

llvm-svn: 156208
2012-05-04 21:33:30 +00:00
Chandler Carruth
8cdf727fc7 Factor the computation of input and output sets into a public interface
of the CodeExtractor utility. This allows speculatively computing input
and output sets to measure the likely size impact of the code
extraction.

These sets cannot be reused sadly -- we mutate the function prior to
forming the final sets used by the actual extraction.

The interface has been revamped slightly to make it easier to use
correctly by making the interface const and sinking the computation of
the number of exit blocks into the full extraction function and away
from the rest of this logic which just computed two output parameters.

llvm-svn: 156168
2012-05-04 11:20:27 +00:00
Chandler Carruth
67c334679c Move the CodeExtractor utility to a dedicated header file / source file,
and expose it as a utility class rather than as free function wrappers.

The simple free-function interface works well for the bugpoint-specific
pass's uses of code extraction, but in an upcoming patch for more
advanced code extraction, they simply don't expose a rich enough
interface. I need to expose various stages of the process of doing the
code extraction and query information to decide whether or not to
actually complete the extraction or give up.

Rather than build up a new predicate model and pass that into these
functions, just take the class that was actually implementing the
functions and lift it up into a proper interface that can be used to
perform code extraction. The interface is cleaned up and re-documented
to work better in a header. It also is now setup to accept the blocks to
be extracted in the constructor rather than in a method.

In passing this essentially reverts my previous commit here exposing
a block-level query for eligibility of extraction. That is no longer
necessary with the more rich interface as clients can query the
extraction object for eligibility directly. This will reduce the number
of walks of the input basic block sequence by quite a bit which is
useful if this enters the normal optimization pipeline.

llvm-svn: 156163
2012-05-04 10:18:49 +00:00
Chandler Carruth
a75274c657 Factor the logic for testing whether a basic block is viable for code
extraction into a public interface. Also clean it up and apply it more
consistently such that we check for landing pads *anywhere* in the
extracted code, not just in single-block extraction.

This will be used to guide decisions in passes that are planning to
eventually perform a round of code extraction.

llvm-svn: 156114
2012-05-03 22:26:53 +00:00
Bill Wendling
5a1a6421ca Second attempt at PR12573:
Allow the "SplitCriticalEdge" function to split the edge to a landing pad. If
the pass is *sure* that it thinks it knows what it's doing, then it may go ahead
and specify that the landing pad can have its critical edge split. The loop
unswitch pass is one of these passes. It will split the critical edges of all
edges coming from a loop to a landing pad not within the loop. Doing so will
retain important loop analysis information, such as loop simplify.

llvm-svn: 155817
2012-04-30 10:44:54 +00:00
Hal Finkel
c55edb7b35 Enhance BBVectorize to more-properly handle pointer values and vectorize GEPs.
llvm-svn: 154734
2012-04-14 07:32:43 +00:00
Hal Finkel
12b4c41203 Add support to BBVectorize for vectorizing selects.
llvm-svn: 154700
2012-04-13 20:45:45 +00:00
Hongbin Zheng
48758c581f Refactor: Use positive field names in VectorizeConfig.
llvm-svn: 154249
2012-04-07 03:56:23 +00:00
Hongbin Zheng
7a4e40f87f Introduce the VectorizeConfig class, with which we can control the behavior
of the BBVectorizePass without using command line options. As pointed out
by Hal, we can ask the TargetLoweringInfo for the architecture-specific
VectorizeConfig to perform vectorizing with architecture-specific
information.

llvm-svn: 154096
2012-04-05 15:46:55 +00:00
Hongbin Zheng
8d380b332d Add the function "vectorizeBasicBlock", which allows users to vectorize a
BasicBlock in other passes, e.g. we can call vectorizeBasicBlock in the
loop unroll pass right after the loop is unrolled.

llvm-svn: 154089
2012-04-05 08:05:16 +00:00
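A hedged usage sketch of the new entry point; the exact signature shown is an assumption based on the description above, so consult llvm/Transforms/Vectorize.h for the authoritative one:

#include "llvm/Transforms/Vectorize.h"

using namespace llvm;

// Hypothetical call site inside a loop-unrolling pass, right after the body
// of the loop has been unrolled into BB.
static bool tryVectorizeUnrolledBody(Pass *P, BasicBlock &BB) {
  // Run basic-block vectorization with the default VectorizeConfig; returns
  // true if any instructions were fused into vector operations.
  return vectorizeBasicBlock(P, BB);
}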
Bill Wendling
1db4186413 Add an option to turn off the expensive GVN load PRE part of GVN.
llvm-svn: 153902
2012-04-02 22:16:50 +00:00
Chandler Carruth
61d0735f00 Remove a bunch of empty, dead, and no-op methods from all of these
interfaces. These methods were used in the old inline cost system where
there was a persistent cache that had to be updated, invalidated, and
cleared. We're now doing more direct computations that don't require
this intricate dance. Even if we resume some level of caching, it would
almost certainly have a simpler and more narrow interface than this.

llvm-svn: 153813
2012-03-31 12:48:08 +00:00
Chandler Carruth
8cacff57bf Initial commit for the rewrite of the inline cost analysis to operate
on a per-callsite walk of the called function's instructions, in
breadth-first order over the potentially reachable set of basic blocks.

This is a major shift in how inline cost analysis works to improve the
accuracy and rationality of inlining decisions. A brief outline of the
algorithm this moves to:

- Build a simplification mapping based on the callsite arguments to the
  function arguments.
- Push the entry block onto a worklist of potentially-live basic blocks.
- Pop the first block off of the *front* of the worklist (for
  breadth-first ordering) and walk its instructions using a custom
  InstVisitor.
- For each instruction's operands, re-map them based on the
  simplification mappings available for the given callsite.
- Compute any simplification possible of the instruction after
  re-mapping, and store that back into the simplification mapping.
- Compute any bonuses, costs, or other impacts of the instruction on the
  cost metric.
- When the terminator is reached, replace any conditional value in the
  terminator with any simplifications from the mapping we have, and add
  any successors which are not proven to be dead from these
  simplifications to the worklist.
- Pop the next block off of the front of the worklist, and repeat.
- As soon as the cost of inlining exceeds the threshold for the
  callsite, stop analyzing the function in order to bound cost.

The primary goal of this algorithm is to perfectly handle dead code
paths. We do not want any code in trivially dead code paths to impact
inlining decisions. The previous metric was *extremely* flawed here, and
would always subtract the average cost of two successors of
a conditional branch when it was proven to become an unconditional
branch at the callsite. There was no handling of wildly different costs
between the two successors, which would cause inlining when the path
actually taken was too large, and no inlining when the path actually
taken was trivially simple. There was also no handling of the code
*path*, only the immediate successors. These problems vanish completely
now. See the added regression tests for the shiny new features -- we
skip recursive function calls, SROA-killing instructions, and high cost
complex CFG structures when dead at the callsite being analyzed.

Switching to this algorithm required refactoring the inline cost
interface to accept the actual threshold rather than simply returning
a single cost. The resulting interface is pretty bad, and I'm planning
to do lots of interface cleanup after this patch.

Several other refactorings fell out of this, but I've tried to minimize
them for this patch. =/ There is still more cleanup that can be done
here. Please point out anything that you see in review.

I've worked really hard to try to mirror at least the spirit of all of
the previous heuristics in the new model. It's not clear that they are
all correct any more, but I wanted to minimize the change in this single
patch, it's already a bit ridiculous. One heuristic that is *not* yet
mirrored is to allow inlining of functions with a dynamic alloca *if*
the caller has a dynamic alloca. I will add this back, but I think the
most reasonable way requires changes to the inliner itself rather than
just the cost metric, and so I've deferred this for a subsequent patch.
The test case is XFAIL-ed until then.

As mentioned in the review mail, this seems to make Clang run about 1%
to 2% faster in -O0, but makes its binary size grow by just under 4%.
I've looked into the 4% growth, and it can be fixed, but requires
changes to other parts of the inliner.

llvm-svn: 153812
2012-03-31 12:42:41 +00:00
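The bulleted algorithm above reduces to a breadth-first, threshold-bounded worklist walk; a toy, self-contained skeleton with invented types (the real code lives in the inline cost analysis, not here):

#include <deque>
#include <set>
#include <vector>

// Stand-ins for basic blocks and per-instruction cost impacts.
struct Block {
  std::vector<int> instCosts;      // bonuses/costs after argument simplification
  std::vector<Block*> liveSuccs;   // successors not proven dead at this callsite
};

bool worthInlining(Block *entry, int threshold) {
  std::deque<Block*> worklist;
  std::set<Block*> visited;
  worklist.push_back(entry);
  visited.insert(entry);
  int cost = 0;
  while (!worklist.empty()) {
    Block *bb = worklist.front();   // pop from the *front*: breadth-first order
    worklist.pop_front();
    for (int c : bb->instCosts) {
      cost += c;
      if (cost > threshold)
        return false;               // stop analyzing once the threshold is hit
    }
    for (Block *succ : bb->liveSuccs)
      if (visited.insert(succ).second)
        worklist.push_back(succ);
  }
  return true;
}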
Chandler Carruth
e507ddfe74 Switch to WeakVHs in the value mapper, and aggressively prune dead basic
blocks in the function cloner. This removes the last case of trivially
dead code that I've been seeing in the wild getting inlined, analyzed,
re-inlined, optimized, only to be deleted. Nukes a FIXME from the
cleanup tests.

llvm-svn: 153572
2012-03-28 08:38:27 +00:00
Kostya Serebryany
5fa635b5bf add EP_OptimizerLast extension point
llvm-svn: 153353
2012-03-23 23:22:59 +00:00
Andrew Trick
74bcc63b1b Remove unused simplifyIVUsers
llvm-svn: 153262
2012-03-22 17:47:30 +00:00
Chandler Carruth
1d0b0955da Start removing the use of an ad-hoc 'never inline' set and instead
directly query the function information which this set was representing.
This simplifies the interface of the inline cost analysis, and makes the
always-inline pass significantly more efficient.

Previously, always-inline would first make a single set of every
function in the module *except* those marked with the always-inline
attribute. It would then query this set at every call site to see if the
function was a member of the set, and if so, refuse to inline it. This
is quite wasteful. Instead, simply check the function attribute directly
when looking at the callsite.

The normal inliner also had similar redundancy. It added every function
in the module with the noinline attribute to its set to ignore, even
though inside the cost analysis function we *already tested* the
noinline attribute and produced the same result.

The only tricky part of removing this is that we have to be able to
correctly remove only the functions inlined by the always-inline pass
when finalizing, which requires a bit of a hack. Still, much less of
a hack than the set of all non-always-inline functions was. While I was
touching this function, I switched a heavy-weight set to a vector with
sort+unique. The algorithm already had a two-phase insert and removal
pattern, we were just needlessly paying the uniquing cost on every
insert.

This probably speeds up some compiles by a small amount (-O0 compiles
with lots of always-inline, so potentially heavy libc++ users), but I've
not tried to measure it.

I believe there is no functional change here, but yell if you spot one.
None are intended.

Finally, the direction this is going in is to greatly simplify the
inline cost query interface so that we can replace its implementation
with a much more clever one. Along the way, all the APIs get simplified,
so it seems incrementally good.

llvm-svn: 152903
2012-03-16 06:10:13 +00:00
Chandler Carruth
02f89bdb9c Remove the basic inliner. This was added in 2007, and hasn't really
changed since. No one was using it. It is yet another consumer of the
InlineCost interface that I'd like to change.

llvm-svn: 152769
2012-03-15 01:37:56 +00:00
Chad Rosier
096b8c0365 Add support for disabling llvm.lifetime intrinsics in the AlwaysInliner. These
are optimization hints, but at -O0 we're not optimizing.  This becomes a problem
when the alwaysinline attribute is abused.
rdar://10921594

llvm-svn: 151429
2012-02-25 02:56:01 +00:00
Kostya Serebryany
5cd1e1380f ThreadSanitizer, a race detector. First LLVM commit.
Clang patch (flags) will follow shortly.
The run-time library will also follow, but not immediately.

llvm-svn: 150423
2012-02-13 22:50:51 +00:00
Bill Wendling
8c63e349bc [unwind removal] Remove all of the code for the dead 'unwind' instruction. There
were no 'unwind' instructions being generated before this, so this is in effect
a no-op.

llvm-svn: 149906
2012-02-06 21:44:22 +00:00
Dan Gohman
d18622bd02 Fix SSAUpdaterImpl's RecordMatchingPHI to record exactly the
PHI nodes which were matched, rather than climbing up the
original PHI node's operands to rediscover PHI nodes for
recording, since the PHI nodes found that way are not
necessarily part of the matched set.
This fixes rdar://10589171.

llvm-svn: 149654
2012-02-03 01:07:01 +00:00
Hal Finkel
8cf5de5774 Add a basic-block autovectorization pass.
This is the initial checkin of the basic-block autovectorization pass along with some supporting vectorization infrastructure.
Special thanks to everyone who helped review this code over the last several months (especially Tobias Grosser).

llvm-svn: 149468
2012-02-01 03:51:43 +00:00
Dan Gohman
9b37a5592c Add a new ObjC ARC optimization pass to eliminate unneeded
autorelease push+pop pairs.

llvm-svn: 148330
2012-01-17 20:52:24 +00:00
Dan Gohman
45976c3482 Add a new PassManagerBuilder customization point,
EP_ModuleOptimizerEarly, to allow passes to be added before the
main ModulePass optimizers.

llvm-svn: 148329
2012-01-17 20:51:32 +00:00
Mon P Wang
e062e56cab When not destroying the source, the linker does not remap the types. Added support
to CloneFunctionInto to allow remapping for this case.

llvm-svn: 147217
2011-12-23 02:18:32 +00:00
David Blaikie
576aba04f1 Unweaken vtables as per http://llvm.org/docs/CodingStandards.html#ll_virtual_anch
llvm-svn: 146960
2011-12-20 02:50:00 +00:00
Pete Cooper
6627aaa416 Refactor code used in InstCombine::FoldAndOfICmps to new file.
This will be used by SimplifyCfg in a later commit.

llvm-svn: 146803
2011-12-17 01:20:32 +00:00
Kostya Serebryany
65849ee22a [asan] fix a bug (issue 19) where dlclose and the following mmap caused a false positive. compiler part.
llvm-svn: 146688
2011-12-15 21:59:03 +00:00
Jakub Staszak
4077c5b401 SplitBlockPredecessors uses ArrayRef instead of Data and Size.
llvm-svn: 146277
2011-12-09 21:19:53 +00:00
Andrew Trick
4f0b3bb42b Add -unroll-runtime for unrolling loops with run-time trip counts.
Patch by Brendon Cahoon!

This extends the existing LoopUnroll and LoopUnrollPass. Brendon
measured no regressions in the llvm test suite with -unroll-runtime
enabled. This implementation works by using the existing loop
unrolling code to unroll the loop by a power-of-two (default 8). It
generates an if-then-else sequence of code prior to the loop to
execute the extra iterations before entering the unrolled loop.

llvm-svn: 146245
2011-12-09 06:19:40 +00:00
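The generated structure looks roughly like this at the source level, assuming the default unroll factor of 8 (illustrative C++ only; the transform of course operates on IR):

// A prologue guarded by an 'if' runs the n % 8 leftover iterations first,
// then the main loop executes 8 copies of the body per iteration.
void runtimeUnrolled(int *a, int n) {
  int i = 0;
  int prologue = n % 8;
  if (prologue != 0)
    for (; i < prologue; ++i)      // remainder iterations before the main loop
      a[i] += 1;
  for (; i < n; i += 8) {          // unrolled-by-8 main loop
    a[i + 0] += 1; a[i + 1] += 1; a[i + 2] += 1; a[i + 3] += 1;
    a[i + 4] += 1; a[i + 5] += 1; a[i + 6] += 1; a[i + 7] += 1;
  }
}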
Eli Friedman
2e78501430 Remove reference to dead GEPSplitterPass. PR11506.
llvm-svn: 146195
2011-12-08 22:28:17 +00:00
Nick Lewycky
d59dcc5ddb Expose a switch for the new gcov format.
llvm-svn: 145880
2011-12-06 00:29:13 +00:00
Kostya Serebryany
aa616ec887 make asan work at -O0, llvm part. Patch by glider@google.com
llvm-svn: 145530
2011-11-30 22:19:26 +00:00
Eli Friedman
d02d82d355 Add support for custom names for library functions in TargetLibraryInfo. Add a custom name for fwrite and fputs on x86-32 OSX. Make SimplifyLibCalls honor the custom
names for fwrite and fputs.

Fixes <rdar://problem/9815881>.

llvm-svn: 144876
2011-11-17 01:27:36 +00:00
Kostya Serebryany
4105068ea9 AddressSanitizer, first commit (compiler module only)
llvm-svn: 144758
2011-11-16 01:35:23 +00:00
Benjamin Kramer
4a8534a158 StringRefize and simplify.
llvm-svn: 144675
2011-11-15 19:12:09 +00:00
Benjamin Kramer
6d85bbc486 Make headers standalone, move a virtual method out of line.
llvm-svn: 144536
2011-11-14 17:22:45 +00:00
Devang Patel
85caca9190 Add utility to append a function to the list of global constructors.
Patch by Kostya Serebryany.

llvm-svn: 143405
2011-10-31 23:58:51 +00:00
Devang Patel
8f9c569a13 svn mv Target/ARM/ARMGlobalMerge.cpp Transforms/Scalar/GlobalMerge.cpp
There is no reason to have simple IR level pass in lib/Target.

llvm-svn: 142200
2011-10-17 17:17:43 +00:00
Andrew Trick
c60e2addd9 LSR should avoid redundant edge splitting.
This handles the case in which LSR rewrites an IV user that is a phi and
splits critical edges originating from a switch.
Fixes <rdar://problem/6453893> LSR is not splitting edges "nicely"

llvm-svn: 141059
2011-10-04 03:50:44 +00:00
Andrew Trick
94d203b172 whitespace
llvm-svn: 141058
2011-10-04 03:34:49 +00:00
Bill Wendling
66d2eeb730 Use ArrayRef instead of an explicit 'const std::vector &'.
llvm-svn: 140172
2011-09-20 19:05:04 +00:00
Rafael Espindola
3bb0f9391c Remove the old tail duplication pass. It is not used and is unable to update
ssa, so it has to be run really early in the pipeline. Any replacement
should probably use the SSAUpdater.

llvm-svn: 138841
2011-08-30 23:03:45 +00:00
Bill Wendling
3079e1ccda Add SplitLandingPadPredecessors().
SplitLandingPadPredecessors is similar to SplitBlockPredecessors in that it
splits the current block and attaches a set of predecessors to the new basic
block. However, it differs from SplitBlockPredecessors in that it's specifically
designed to handle landing pad blocks.

Two new basic blocks are created: one that has the vector of predecessors as
its predecessors and one that has the remaining predecessors as its
predecessors. Those two new blocks then receive a cloned copy of the landingpad
instruction from the original block. The landingpad instructions are joined in a
PHI, etc. Like SplitBlockPredecessors, it updates the LLVM IR, AliasAnalysis,
DominatorTree, DominanceFrontier, LoopInfo, and LCCSA analyses.

llvm-svn: 138014
2011-08-19 00:05:40 +00:00
David Chisnall
cfbcd5cf21 Add a mechanism for optimisation plugins to register passes that all front ends can use without needing to be aware of the plugin (or the plugin be aware of the front end).
Before 3.0, I'd like to add a mechanism for automatically loading a set of plugins from a config file.  API suggestions welcome...

llvm-svn: 137717
2011-08-16 13:58:41 +00:00
Andrew Trick
f598747450 Cleanup. Make ScalarEvolution an explicit argument of the
SimplifyIndVar utility since it is required.

llvm-svn: 137202
2011-08-10 04:22:26 +00:00
Andrew Trick
b85da1c369 Added a SimplifyIndVar utility to simplify induction variable users
based on ScalarEvolution without changing the induction variable phis.

This utility is the main tool of IndVarSimplifyPass, but the pass also
restructures induction variables in strange ways that are sensitive to
pass ordering. This provides a way for other loop passes to simplify
new uses of induction variables created during transformation. The
utility may be used by any pass that preserves ScalarEvolution. Soon
LoopUnroll will use it.

The net effect in this checkin is to cleanup the IndVarSimplify pass
by factoring out the SimplifyIndVar algorithm into a standalone utility.

llvm-svn: 137197
2011-08-10 03:46:27 +00:00
Bill Wendling
fdea9930ac Remove the LowerSetJmp pass. It wasn't used effectively by any of the targets.
This is some of my original LLVM code. *wipes tear*

llvm-svn: 136821
2011-08-03 22:18:20 +00:00
Jay Foad
095dcdbd48 Use cast<> instead of a C-style cast to get some free assertions.
llvm-svn: 136771
2011-08-03 10:05:04 +00:00
Rafael Espindola
2b0d064f3b Move methods in PassManagerBuilder offline.
llvm-svn: 136727
2011-08-02 21:50:27 +00:00
Rafael Espindola
73efcf56f6 move PassManagerBuilder.h to IPO. This is a non intuitive place to put it,
but it solves a layering violation since things in Support are not supposed to
use things in Transforms.

llvm-svn: 136726
2011-08-02 21:50:24 +00:00
Jay Foad
fe750c827a Fix typo in comment.
llvm-svn: 136068
2011-07-26 09:36:52 +00:00
Andrew Trick
53ed0491c0 Move trip count discovery outside of the generic LoopUnroll helper. This
removes its dependence on canonical induction variables.

llvm-svn: 135829
2011-07-23 00:33:05 +00:00
Chris Lattner
e1fe7061ce land David Blaikie's patch to de-constify Type, with a few tweaks.
llvm-svn: 135375
2011-07-18 04:54:35 +00:00