Code-section alignment should be at least as high as the minimum
stub alignment. If the section alignment is lower, padding may be
emitted to reach the stub alignment, resulting in alignment errors
if the section is mapped at a higher alignment on the target.
E.g., if a text section with 4-byte alignment gets 4 bytes of
padding to guarantee 8-byte alignment for its stubs, but is re-mapped
at 8-byte alignment on the target, the 4 bytes of padding will push
the stubs down to 4-byte alignment, causing a crash.
No test case: There is currently no way to control host section
alignment in llvm-rtdyld. This could be made testable by adding
a custom memory manager. I'll look at that in a follow-up patch.
llvm-svn: 245031
This introduces the basic functionality to support "token types".
The motivation stems from the need to perform operations on a Value
whose provenance cannot be obscured.
There are several applications for such a type, but my immediate
motivation comes from WinEH. Our personality routine enforces a
single-entry, single-exit regime for cleanups. After several rounds of
optimizations, we may be left with a terminator whose "cleanup-entry
block" is not entirely clear because control flow has merged two
cleanups together. We have experimented with using labels as operands
inside non-terminator instructions to indicate where we came from, but
found that LLVM does not expect such exotic uses of BasicBlocks.
Instead, we can use this new type to clearly associate the "entry point"
and "exit point" of our cleanup. This is done by having the cleanuppad
yield a token that is then consumed by the cleanupret.
The token type makes it impossible to obscure or otherwise hide the
Value, making it trivial to track the relationship between the two
points.
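For illustration, here is roughly what the pairing looks like in
current LLVM IR syntax (a hedged sketch; the exact spelling of these
instructions has evolved since this commit):

define void @demo() personality ptr @__CxxFrameHandler3 {
entry:
  invoke void @may_throw()
          to label %done unwind label %cleanup
cleanup:
  ; The pad yields a token whose provenance cannot be obscured...
  %tok = cleanuppad within none []
  call void @do_cleanup() [ "funclet"(token %tok) ]
  ; ...and the matching ret consumes exactly that token, tying the
  ; cleanup's exit back to its entry.
  cleanupret from %tok unwind to caller
done:
  ret void
}

declare void @may_throw()
declare void @do_cleanup()
declare i32 @__CxxFrameHandler3(...)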
What is the burden on the optimizer? Well, it turns out we have already
paid down this cost by accepting that there are certain calls that we
are not permitted to duplicate; optimizations have to watch out for
such instructions anyway. There are additional places in the optimizer
that we will probably have to update, but early examination has given me
the impression that this will not be heroic.
Differential Revision: http://reviews.llvm.org/D11861
llvm-svn: 245029
its creation function.
This required shifting a bunch of method definitions to be out-of-line
so that we could leave most of the implementation guts in the .cpp file.
llvm-svn: 245021
I've used forward declarations and reordered the source code some to make
this reasonably clean and keep as much of the code as possible in the
source file, including all the stratified set details. Just the basic AA
interface and the create function are in the header file, and the header
file is now included in the relevant locations.
llvm-svn: 245009
the AA counter pass.
For pointsToConstantMemory, I think this is a "bug fix", as the code
as written would actually infloop if ever reached. For getModRefInfo,
this is a no-op change, but with a significantly simpler form.
llvm-svn: 245007
.cpp file to make the header much less noisy.
Also makes it easy to use a static helper rather than a public method
for printing lines of stats.
llvm-svn: 245006
pattern.
Also hoist the creation routine out of the generic header and into the
pass header now that we have one.
I've worked to not make any changes here, even formatting ones. I'll
clean up the formatting and other things in a follow-up patch now that
the code is in the right place.
llvm-svn: 245004
Summary:
This patch implements my promised optimization to reunite certain sexts from
operands after we extract the constant offset. See the header comment of
reuniteExts for its motivation.
One key building block that enables this optimization is Bjarke's poison value
analysis (D11212), which helps to prove that "a +nsw b" can't overflow.
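As a hedged sketch of the rewrite (illustrative types and names, not
taken from the patch): given that "%a +nsw %b" cannot overflow,

  %sa  = sext i32 %a to i64
  %sb  = sext i32 %b to i64
  %sum = add i64 %sa, %sb

can be reunited into

  %ab   = add nsw i32 %a, %b
  %sum2 = sext i32 %ab to i64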
Reviewers: broune
Subscribers: jholewinski, sanjoy, llvm-commits
Differential Revision: http://reviews.llvm.org/D12016
llvm-svn: 245003
AliasAnalysis in LoopIdiomRecognize.
The previous commit to LIR, r244879, exposed a scary bug in the loop
pass pipeline, with an assert failure that showed up on several bots.
This patch got reverted as part of getting that revision reverted, but
they're actually independent and unrelated. This patch has no functional
change and should be completely safe. It is also useful for my current
work on the AA infrastructure.
llvm-svn: 244993
This commit modifies the way the machine basic blocks are serialized: they are
now serialized using a custom syntax instead of relying on YAML primitives.
Instead of using YAML mappings to represent the individual
machine basic blocks in a machine function's body, the new syntax uses a single
YAML block scalar which contains all of the machine basic blocks and
instructions for that function.
This is an example of a function's body that uses the old syntax:
body:
  - id: 0
    name: entry
    instructions:
      - '%eax = MOV32r0 implicit-def %eflags'
      - 'RETQ %eax'
...
The same body is now written like this:
body: |
  bb.0.entry:
    %eax = MOV32r0 implicit-def %eflags
    RETQ %eax
...
This syntax change is motivated by the fact that the bundled machine
instructions didn't map that well to the old syntax, which used a single
YAML sequence to store all of the machine instructions in a block. The bundled
machine instructions internally use flags like BundledPred and BundledSucc to
determine the bundles, and serializing them as MI flags using the old syntax
would have had a negative impact on the readability and ease of editing
of MIR files. The new syntax allows me to serialize the bundled machine
instructions using a block construct without relying on the internal flags,
for example:
BUNDLE implicit-def dead %itstate, implicit-def %s1 ... {
  t2IT 1, 24, implicit-def %itstate
  %s1 = VMOVS killed %s0, 1, killed %cpsr, implicit killed %itstate
}
This commit also converts the MIR testcases to the new syntax. I developed
a script that can convert from the old syntax to the new one. I will post the
script on the llvm-commits mailing list in the thread for this commit.
llvm-svn: 244982
We used to just say "invalid type suffix for instruction", which is
misleading. This is because we fall back to the long-form matcher if the
short-form matcher fails, losing the error information on the way.
Save it, so that we can provide slightly better diagnostics when the
long-form matcher thinks a suffix is the cause of the error.
llvm-svn: 244955
Follow-up to D10947 - D9746 added general SMAX/SMIN/UMAX/UMIN pattern matching to SelectionDAGBuilder::visitSelect.
This patch removes the X86 implementation and improves the AVX1/AVX2 support to correctly lower 256-bit integer vectors.
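For reference, this is the kind of IR pattern that visitSelect now
recognizes as a signed max (a hedged sketch; <8 x i32> makes it a
256-bit vector):

  %c   = icmp sgt <8 x i32> %a, %b
  %max = select <8 x i1> %c, <8 x i32> %a, <8 x i32> %b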
Differential Revision: http://reviews.llvm.org/D12006
llvm-svn: 244949
If <src> is non-zero, we can safely set the flag to true; this results
in less generated code for, e.g., ffs(x) + 1 on FreeBSD.
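A hedged sketch of what this looks like at the IR level, assuming the
flag in question is the is_zero_undef operand of llvm.cttz (consistent
with the bsf codegen below):

  ; %x is known non-zero here, so the zero-input case is unreachable
  ; and the flag can safely be true:
  %tz = call i32 @llvm.cttz.i32(i32 %x, i1 true)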
Thanks to majnemer for suggesting the fix and reviewing.
Code generated before the patch was applied:
0: 0f bc c7 bsf %edi,%eax
3: b9 20 00 00 00 mov $0x20,%ecx
8: 0f 45 c8 cmovne %eax,%ecx
b: 83 c1 02 add $0x2,%ecx
e: b8 01 00 00 00 mov $0x1,%eax
13: 85 ff test %edi,%edi
15: 0f 45 c1 cmovne %ecx,%eax
18: c3 retq
Code generated after the patch was applied:
0: 0f bc cf bsf %edi,%ecx
3: 83 c1 02 add $0x2,%ecx
6: 85 ff test %edi,%edi
8: b8 01 00 00 00 mov $0x1,%eax
d: 0f 45 c1 cmovne %ecx,%eax
10: c3 retq
It seems we can still use cmove and save another 'test' instruction, but
that can be tackled separately.
Differential Revision: http://reviews.llvm.org/D11989
llvm-svn: 244947
This commit extracts the code that parses the memory operand's alignment into
a new method named 'parseAlignment' so that it can be reused when parsing the
basic block's alignment attribute.
llvm-svn: 244945
This commit renames the method 'diagFromLLVMAssemblyDiag' to
'diagFromBlockStringDiag'. This method will be used when converting diagnostics
from other YAML block strings, and not just the LLVM module block string, so
the new name should reflect that.
llvm-svn: 244943
We used to be over-conservative about preserving inbounds. Actually, the second
GEP (which applies the constant offset) can inherit the inbounds attribute of
the original GEP, because the resultant pointer is equivalent to that of the
original GEP. For example,
x = GEP inbounds a, i+5
=>
y = GEP a, i // inbounds removed
x = GEP inbounds y, 5 // inbounds preserved
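The same transformation in concrete IR (a hedged sketch using byte
addressing and present-day pointer syntax):

  %idx = add i64 %i, 5
  %x   = getelementptr inbounds i8, ptr %a, i64 %idx

becomes

  %y = getelementptr i8, ptr %a, i64 %i          ; inbounds removed
  %x = getelementptr inbounds i8, ptr %y, i64 5  ; inbounds preserved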
llvm-svn: 244937
After r244870, flush() will only compare two null pointers and return,
doing nothing but wasting run time. The call is no longer required, as
the stream and its SmallString are always in sync.
Thanks to David Blaikie for reviewing.
llvm-svn: 244928
This patch corresponds to review:
http://reviews.llvm.org/D11471
It improves the code generated for converting a scalar to a vector value. With
direct moves from GPRs to VSRs, we no longer require expensive stack operations
for this. Subsequent patches will handle the reverse case and more general
operations between vectors and their scalar elements.
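An example of the kind of conversion this affects (a hedged sketch; the
patch changes PPC instruction selection, not the IR itself):

  define <2 x i64> @scalar_to_vec(i64 %x) {
    %v = insertelement <2 x i64> undef, i64 %x, i32 0
    ret <2 x i64> %v
  }

With direct moves (e.g. mtvsrd on POWER8), %x can travel from a GPR
straight into a VSR instead of taking a store/reload round trip through
the stack.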
llvm-svn: 244921
This was my error. We've got f32 marked as legal because it's simulated using a v2f32 instruction, but there's no equivalent for f64.
This will get test coverage imminently when D12015 lands.
llvm-svn: 244916