A subtle bug was found where attempting to copy a non-const function_ref
lvalue would actually invoke the generic forwarding constructor (as it
was a closer match, being T& rather than the const T& of the implicit
copy constructor). In the particular case this led to a dangling
function_ref member, since it had referenced the function_ref passed by
value to its ctor rather than the outer function_ref that was still
alive.
SFINAE the converting constructor out of consideration when the copy
constructor applies, and demonstrate that this causes the copy to
refer to the original functor, not to the function_ref it was copied
from. (Without the code change, the test would fail as Y would be
referencing X, and Y() would see the result of the mutation to X, i.e. 2.)
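The shape of the fix, as a minimal sketch on a simplified function_ref
(the exact trait spelling here is illustrative, not the in-tree source):

#include <type_traits>
#include <utility>

template <typename Fn> class function_ref;

template <typename Ret, typename... Params>
class function_ref<Ret(Params...)> {
  Ret (*callback)(void *, Params...);
  void *callable;

public:
  // The enable_if drops this overload when Callable is function_ref
  // itself, so copying a non-const lvalue selects the implicit copy
  // constructor instead of wrapping (and later dangling on) the source.
  template <typename Callable,
            typename = typename std::enable_if<!std::is_same<
                typename std::remove_reference<Callable>::type,
                function_ref>::value>::type>
  function_ref(Callable &&f)
      : callback([](void *c, Params... params) -> Ret {
          using T = typename std::remove_reference<Callable>::type;
          return (*static_cast<T *>(c))(std::forward<Params>(params)...);
        }),
        callable(const_cast<void *>(static_cast<const void *>(&f))) {}

  Ret operator()(Params... params) const {
    return callback(callable, std::forward<Params>(params)...);
  }
};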
llvm-svn: 221753
With this patch MCDisassembler::getInstruction takes an ArrayRef<uint8_t>
instead of a MemoryObject.
Even on X86 there is a maximum size an instruction can have. Given
that, it seems much simpler and more efficient to pass an ArrayRef
to the disassembler than to pass a MemoryObject and have the
disassembler make a virtual call every time it wants some extra bytes.
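For reference, the updated hook now receives the bytes directly;
roughly (simplified from the MCDisassembler header):

// The caller hands the instruction bytes to the disassembler up front,
// instead of handing it a MemoryObject to query through virtual calls.
virtual DecodeStatus getInstruction(MCInst &Instr, uint64_t &Size,
                                    ArrayRef<uint8_t> Bytes,
                                    uint64_t Address, raw_ostream &VStream,
                                    raw_ostream &CStream) const = 0;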
llvm-svn: 221751
For historical reasons archives on mach-o have two possible names for the
file containing the table of contents for the archive: "__.SYMDEF SORTED"
and "__.SYMDEF". But the libObject archive reader only supported the former.
This patch fixes llvm::object::Archive to support both names.
llvm-svn: 221747
Currently, we have a type parameter mechanism for intrinsics. Rather than having to specify a separate intrinsic for each combination of argument and return types, we can specify a single intrinsic with one or more type parameters. These type parameters are passed explicitly to Intrinsic::getDeclaration or can be specified implicitly in the naming of the intrinsic function in an LL file.
Today, the types are limited to integer, floating point, and pointer types. With a goal of supporting symbolic targets for patchpoints and statepoints, this change adds support for function types. The change also includes support for first class aggregate types (named structures and arrays) since these appear in function types we've encountered.
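As a hedged illustration of the API shape (the intrinsic ID is left
abstract here, since which intrinsics take a function-type parameter
depends on their definitions):

#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/Module.h"
using namespace llvm;

// Request a declaration of an overloaded intrinsic, passing a
// pointer-to-function type as one of the explicit type parameters.
Function *getDeclWithFnType(Module &M, Intrinsic::ID ID) {
  LLVMContext &Ctx = M.getContext();
  FunctionType *FTy =
      FunctionType::get(Type::getVoidTy(Ctx), /*isVarArg=*/false);
  Type *Tys[] = {PointerType::getUnqual(FTy)};
  return Intrinsic::getDeclaration(&M, ID, Tys);
}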
Reviewed by: atrick, ributzka
Differential Revision: http://reviews.llvm.org/D4608
llvm-svn: 221742
We currently have two ways of informing the optimizer that the result of a load is never null: metadata and assume. This change converts the latter into the former. This avoids the need to implement optimizations using both forms.
We should probably extend this basic idea to metadata of other forms; in particular, range metadata. Our view is that assumes should be considered a "last resort" for when there isn't a more canonical way to represent something.
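For illustration, the metadata form can be attached directly to the
load (a minimal sketch, assuming LI loads a pointer that is provably
non-null):

#include "llvm/ADT/None.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Metadata.h"
using namespace llvm;

// Record "this load's result is never null" as !nonnull metadata on the
// load itself, rather than as a separate @llvm.assume on the loaded value.
void markLoadNonNull(LoadInst *LI) {
  LLVMContext &Ctx = LI->getContext();
  LI->setMetadata(LLVMContext::MD_nonnull, MDNode::get(Ctx, None));
}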
Reviewed by: Hal
Differential Revision: http://reviews.llvm.org/D5951
llvm-svn: 221737
Add API for specifying which `LLVMContext` each `lto_module_t` and
`lto_code_gen_t` is in.
In particular, this enables the following flow:
for (auto &File : Files) {
  lto_module_t M = lto_module_create_in_local_context(File...);
  querySymbols(M);
  lto_module_dispose(M);
}

lto_code_gen_t CG = lto_codegen_create_in_local_context();
for (auto &File : FilesToLink) {
  lto_module_t M = lto_module_create_in_codegen_context(File..., CG);
  lto_codegen_add_module(CG, M);
  lto_module_dispose(M);
}
lto_codegen_compile(CG);
lto_codegen_write_merged_modules(CG, ...);
lto_codegen_dispose(CG);
This flow has a few benefits.
- Only one module (two if you count the combined module in the code
  generator) is in memory at a time.
- Metadata (and constants) from files that are parsed to query symbols
  but not linked into the code generator don't pollute the global
  context.
- The first for loop can be parallelized, since each module is in its
  own context.
- When the code generator is disposed, the memory from LTO gets freed.
rdar://problem/18767512
llvm-svn: 221733
This is a reapplication of r221171, but we only perform the transformation
on expressions which include a multiplication. We do not transform rem/div
operations as this doesn't appear to be safe in all cases.
llvm-svn: 221721
Summary:
This change moves asan-coverage instrumentation
into a separate Module pass.
The other part of the change in clang introduces a new flag
-fsanitize-coverage=N.
Another small patch will update tests in compiler-rt.
With this patch no functionality change is expected except for the flag name.
The following changes will make the coverage instrumentation work with tsan/msan.
Test Plan: Run regression tests, chromium.
Reviewers: nlewycky, samsonov
Reviewed By: nlewycky, samsonov
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6152
llvm-svn: 221718
Instead, we're going to separate metadata from the Value hierarchy. See
PR21532.
This reverts commit r221375.
This reverts commit r221373.
This reverts commit r221359.
This reverts commit r221167.
This reverts commit r221027.
This reverts commit r221024.
This reverts commit r221023.
This reverts commit r220995.
This reverts commit r220994.
llvm-svn: 221711
What would happen before this commit is that the SDDbgValues associated with
a deallocated SDNode would be marked Invalidated, but SDDbgInfo would keep
a map entry keyed by the SDNode pointer pointing to this list of invalidated
SDDbgValues. As the memory got reused, the list might get wrongly associated
with another new SDNode. As the SDDbgValues are cloned when they are
transferred, this can lead to an exponential number of SDDbgValues being
produced during DAGCombine, as in http://llvm.org/bugs/show_bug.cgi?id=20893
Note that the previous behavior wasn't really buggy, as the invalidation made
sure that the SDDbgValues wouldn't be used. This commit can be considered a
memory optimization and as such is really hard to validate in a unit test.
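The shape of the fix, sketched with stand-in types (the real change
hooks SDNode deallocation to erase the SDDbgInfo map entry):

#include <map>
#include <vector>

struct SDNodeStub;     // stand-in for SDNode (sketch only)
struct SDDbgValueStub; // stand-in for SDDbgValue (sketch only)

class SDDbgInfoSketch {
  std::map<const SDNodeStub *, std::vector<SDDbgValueStub *>> DbgValMap;

public:
  // Called from the node-deallocation path: dropping the entry keyed by
  // the dying node means a recycled pointer can never pick up the stale
  // (already invalidated) SDDbgValues.
  void erase(const SDNodeStub *Node) { DbgValMap.erase(Node); }
};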
llvm-svn: 221709
This commit adds a new pass that can inject checks before indirect calls to
make sure that these calls target known locations. It supports three types of
checks and, at compile time, it can take the name of a custom function to call
when an indirect call check fails. The default failure function ignores the
error and continues.
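A rough illustration of what an injected check amounts to (all names
and the range test here are hypothetical; the real pass offers three
check variants):

#include <cstdint>
#include <cstdio>

using FnPtr = void (*)();

// Hypothetical bounds of a jump-instruction table of valid call targets.
static const std::uintptr_t TableStart = 0x400000, TableEnd = 0x401000;

// Stand-in for the failure function; the default policy reports nothing
// fatal and simply continues with the call.
static void cfiCheckFailed() { std::fprintf(stderr, "CFI check failed\n"); }

void callChecked(FnPtr FP) {
  std::uintptr_t Addr = reinterpret_cast<std::uintptr_t>(FP);
  if (Addr < TableStart || Addr >= TableEnd)
    cfiCheckFailed(); // ignore the error and continue
  FP();
}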
This pass incidentally moves the function JumpInstrTables::transformType from
private to public and makes it static (with a new argument that specifies the
table type to use); this is so that the CFI code can transform function types
at call sites to determine which jump-instruction table to use for the check at
that site.
Also, this removes support for jumptables in ARM, pending further performance
analysis and discussion.
Review: http://reviews.llvm.org/D4167
llvm-svn: 221708
This is a first step for generating SSE rcp instructions for reciprocal
calcs when fast-math allows it. This is very similar to the rsqrt optimization
enabled in D5658 ( http://reviews.llvm.org/rL220570 ).
For now, be conservative and only enable this for AMD btver2 where performance
improves significantly both in terms of latency and throughput.
We may never enable this codegen for Intel Core* chips because the divider circuits
are just too fast. On SandyBridge, divss can be as fast as 10 cycles versus the 21
cycle critical path for the rcp + mul + sub + mul + add estimate.
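For reference, one Newton-Raphson refinement of the hardware estimate
corresponds to this SSE intrinsics sketch (not the literal DAG the
backend builds):

#include <immintrin.h>

// x1 = x0 + x0*(1 - d*x0): the rcp + mul + sub + mul + add sequence
// described above, refining rcpps's ~12-bit estimate of 1/d.
__m128 fastRecip(__m128 d) {
  __m128 x0 = _mm_rcp_ps(d);                                   // rcp
  __m128 e = _mm_sub_ps(_mm_set1_ps(1.0f), _mm_mul_ps(d, x0)); // mul + sub
  return _mm_add_ps(x0, _mm_mul_ps(x0, e));                    // mul + add
}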
Follow-on patches may allow configuration of the number of Newton-Raphson refinement
steps, add AVX512 support, and enable the optimization for more chips.
More background here: http://llvm.org/bugs/show_bug.cgi?id=21385
Differential Revision: http://reviews.llvm.org/D6175
llvm-svn: 221706
My original support for the general dynamic and local dynamic TLS
models contained some fairly obtuse hacks to generate calls to
__tls_get_addr when lowering a TargetGlobalAddress. Rather than
generating real calls, special GET_TLS_ADDR nodes were used to wrap
the calls and only reveal them at assembly time. I attempted to
provide correct parameter and return values by chaining CopyToReg and
CopyFromReg nodes onto the GET_TLS_ADDR nodes, but this was also not
fully correct. Problems were seen with two back-to-back stores to TLS
variables, where the call sequences ended up overlapping with unhappy
results. Additionally, since these weren't real calls, the proper
register side effects of a call were not recorded, so clobbered values
were kept live across the calls.
The proper thing to do is to lower these into calls in the first
place. This is relatively straightforward; see the changes to
PPCTargetLowering::LowerGlobalTLSAddress() in PPCISelLowering.cpp.
The changes here are standard call lowering, except that we need to
track the fact that these calls will require a relocation. This is
done by adding a machine operand flag of MO_TLSLD or MO_TLSGD to the
TargetGlobalAddress operand that appears earlier in the sequence.
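A hedged sketch of that tagging step (heavily simplified; the real code
lives in PPCTargetLowering::LowerGlobalTLSAddress()):

#include "llvm/CodeGen/SelectionDAG.h"
using namespace llvm;

// Attach MO_TLSGD or MO_TLSLD to the TargetGlobalAddress so that call
// lowering later knows this address feeds a __tls_get_addr call that
// needs the matching relocation.
SDValue getTaggedTLSAddress(SelectionDAG &DAG, const GlobalValue *GV,
                            SDLoc dl, EVT PtrVT, unsigned char TLSFlag) {
  return DAG.getTargetGlobalAddress(GV, dl, PtrVT, /*Offset=*/0, TLSFlag);
}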
The calls to LowerCallTo() eventually find their way to
LowerCall_64SVR4() or LowerCall_32SVR4(), which call FinishCall(),
which calls PrepareCall(). In PrepareCall(), we detect the calls to
__tls_get_addr and immediately snag the TargetGlobalTLSAddress with
the annotated relocation information. This becomes an extra operand
on the call following the callee, which is expected for nodes of type
tlscall. We change the call opcode to CALL_TLS for this case. Back
in FinishCall(), we change it again to CALL_NOP_TLS for 64-bit only,
since we require a TOC-restore nop following the call for the 64-bit
ABIs.
During selection, patterns in PPCInstrInfo.td and PPCInstr64Bit.td
convert the CALL_TLS nodes into BL_TLS nodes, and convert the
CALL_NOP_TLS nodes into BL8_NOP_TLS nodes. This replaces the code
removed from PPCAsmPrinter.cpp, as the BL_TLS or BL8_NOP_TLS
nodes can now be emitted normally using their patterns and the
associated printTLSCall print method.
Finally, as a result of these changes, all references to get-tls-addr
in its various guises are no longer used, so they have been removed.
There are existing TLS tests to verify the changes haven't messed
anything up. I've added one new test that verifies that the problem
with the original code has been fixed.
llvm-svn: 221703
The ISel lowering for global TLS access in PIC mode was creating a pseudo
instruction that is later expanded to a call, but the code was not
setting the hasCalls flag in the MachineFrameInfo alongside the adjustsStack
flag. This caused some functions to be mistakenly recognized as leaf functions,
and this in turn affected the decision to eliminate the frame pointer.
With the fix, hasCalls is properly set and the leaf frame pointer is correctly
preserved.
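A minimal sketch of the fix's effect (using the era's API, where
getFrameInfo() returns a pointer):

#include "llvm/CodeGen/MachineFrameInfo.h"
#include "llvm/CodeGen/MachineFunction.h"
using namespace llvm;

// A pseudo that later expands to a call must record both facts, or the
// function looks like a leaf when frame-pointer elimination runs.
void markExpandsToCall(MachineFunction &MF) {
  MachineFrameInfo *MFI = MF.getFrameInfo();
  MFI->setAdjustsStack(true);
  MFI->setHasCalls(true);
}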
llvm-svn: 221695
LLVM replaces the SelectionDAG pattern (xor (set_cc cc x y) 1) with
(set_cc !cc x y), which is only correct when the xor has type i1.
Instead, we should check that the constant operand to the xor is all
ones.
llvm-svn: 221693
Summary:
This patch enables code generation for the MIPS II target. Pre-Mips32
targets don't have the MUL instruction, so we add the corresponding
pattern that uses the MULT/MFLO combination in order to retrieve the
product.
This is WIP as we don't support code generation for select nodes due to
the lack of conditional-move instructions.
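As a sketch, a plain 32-bit multiply now selects to the two-instruction
sequence (the assembly in the comment is illustrative):

// Pre-Mips32 has no single-instruction MUL, so this lowers to roughly:
//   mult  $a0, $a1   # full product into the HI:LO register pair
//   mflo  $v0        # the low 32 bits are the result
int multiply(int a, int b) { return a * b; }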
Reviewers: dsanders
Subscribers: llvm-commits
Differential Revision: http://reviews.llvm.org/D6150
llvm-svn: 221686
The canonical name when printing assembly is still $29. The reason is that
GAS does not accept "$hwr_ulr" at the moment.
This addresses the comments from r221307, which reverted the original
commit r221299.
llvm-svn: 221685
The original commit r221299 was reverted in r221307. I removed the name
"hwr_ulr" ($29) from the original commit because two tests were failing.
llvm-svn: 221681
Referencing one symbol from another in the same section does not
generally require a relocation. However, the MS linker has a feature
called /INCREMENTAL which enables incremental links. It achieves this
by creating thunks to the actual function and redirecting all
relocations to point to the thunk.
This breaks down with the old scheme if you have a function which
references, say, itself. On x86_64, we would use %rip relative
addressing to reference the start of the function from our current
position. This would lead to miscompiles because other references might
reference the thunk instead, breaking function pointer equality.
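For example, a function returning its own address exhibits exactly this
pattern (sketch):

// On x86_64 this same-section reference would be %rip-relative; if it
// is resolved without a relocation it bypasses the /INCREMENTAL thunk,
// and the address returned here can disagree with the one other code sees.
extern "C" void *selfAddress() {
  return reinterpret_cast<void *>(&selfAddress);
}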
This fixes PR21520.
llvm-svn: 221678
The cost model for signed division by a power of 2 was improved for
AArch64 in r218607, but that revision missed a test case for loop
vectorization. This revision adds it.
Differential Revision: http://reviews.llvm.org/D6181
llvm-svn: 221674
This fixes an issue with matching trunc -> assertsext -> zext on x86-64, which would not zero the high 32-bits. See PR20494 for details.
Recommitting - This time, with a hopefully working test.
Differential Revision: http://reviews.llvm.org/D6128
llvm-svn: 221672
This adds const to a few methods that already return const references or
creates a const version when they return non-const references.
llvm-svn: 221666
Custom lower UINT_TO_FP from v4i32 to v4f32, and from v8i32 to v8f32 when
AVX2 is available.
According to IACA, the new lowering has a throughput of 8 cycles instead of 13
with the previous one.
Although this lowering kicks in for some SPEC benchmarks, the performance
improvement was within the noise.
Correctness testing has been done for the whole range of uint32_t with the
following program:
uint4 v = (uint4) {0,1,2,3};
uint32_t i;

// Check correctness over entire range for uint4 -> float4 conversion
for( i = 0; i < 1U << (32-2); i++ )
{
  float4 t = test(v);
  float4 c = correct(v);
  if( 0xf != _mm_movemask_ps( t == c ))
  {
    printf( "Error @ %vx: %vf vs. %vf\n", v, c, t);
    return -1;
  }
  v += 4;
}
Where "correct" is the old lowering and "test" the new one.
The patch adds a test case for the two custom lowerings.
It also modifies the vector cost model, which is why cast.ll and uitofp.ll are
modified.
2009-02-26-MachineLICMBug.ll is also modified because we now hoist 7
instructions instead of 4 (3 more constant loads).
rdar://problem/18153096
llvm-svn: 221657