Although it seems like clang will never emit scattered relocations in
the debug information (at least I couldn't find a way), we have to
support them for the benefit of other compilers.
As clang doesn't generate them, the included testcase was produced
from hacked up assembly.
llvm-svn: 259339
Those commits created an artificial edge from a cleanup to a synthesized
catchswitch in order to get the MSVC personality routine to execute
cleanups which don't cleanupret and are not wrapped by a catchswitch.
This worked well enough, but it is not a complete solution in situations
where the cleanup infinitely loops.
However, the real deal-breaker with this approach comes from a
degenerate case where the cleanup is post-dominated by unreachable *and*
throws an exception. This ends poorly because the catchswitch will
inadvertently catch the exception.
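To make the degenerate shape concrete, here is a hedged source-level sketch
(the names are illustrative and not taken from any real test case): a cleanup
that can throw and that otherwise never completes, i.e. it is post-dominated
by unreachable.

    #include <cstdlib>

    void may_throw(); // hypothetical external function that may throw

    struct Cleanup {
      // noexcept(false) so the cleanup itself is allowed to throw.
      ~Cleanup() noexcept(false) {
        may_throw();  // the cleanup can raise an exception...
        std::abort(); // ...and otherwise never returns, so the cleanup is
                      // post-dominated by unreachable and never cleanuprets.
      }
    };

    void f() {
      Cleanup c;
      may_throw();    // unwinding from here must run ~Cleanup()
    }

With the artificial edge in place, an exception escaping ~Cleanup() would be
routed to the synthesized catchswitch and inadvertently caught there, which is
the failure mode described above.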
Because of this, we should go back to our previous behavior of not
executing certain cleanups (identical to the behavior of the Itanium ABI
implementations in clang, GCC and ICC).
N.B. I think this could be salvaged by making the catchpad rethrow the
exception and properly transforming throwing calls in the cleanup into
invokes.
llvm-svn: 259338
With poorly chosen custom parameters, the line table encoding logic would
sometimes end up generating a special opcode bigger than 255, which is wrong.
The set of default parameters that LLVM uses isn't subject to this bug.
When the line table parameters are carefully chosen, it's impossible to fall into the
corner case that this patch fixes. The standard, however, doesn't require that these
parameters be carefully chosen, and even if it did, we shouldn't generate a broken
encoding.
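For reference, the constraint being violated comes from the DWARF
special-opcode arithmetic. The sketch below is illustrative only; the function
and variable names are made up and do not correspond to LLVM's actual encoder.

    #include <cstdint>
    #include <optional>

    // A special opcode packs a line delta and an address advance into one byte:
    //   opcode = (LineDelta - LineBase) + (LineRange * AddrAdvance) + OpcodeBase
    // and is only usable when the result fits in a byte (<= 255).
    std::optional<uint8_t> trySpecialOpcode(int64_t LineDelta, int64_t AddrAdvance,
                                            int8_t LineBase, uint8_t LineRange,
                                            uint8_t OpcodeBase) {
      if (LineDelta < LineBase || LineDelta >= LineBase + LineRange)
        return std::nullopt; // line delta not representable by a special opcode
      int64_t Op = (LineDelta - LineBase) + (int64_t)LineRange * AddrAdvance + OpcodeBase;
      if (Op > 255)
        return std::nullopt; // would overflow the opcode byte; fall back to the
                             // standard advance_pc/advance_line opcodes instead
      return (uint8_t)Op;
    }

For example, with LLVM's default parameters (LineBase = -5, LineRange = 14,
OpcodeBase = 13), a line delta of 2 and an address advance of 20 gives
(2 + 5) + 14 * 20 + 13 = 300, so no special opcode may be emitted for that
pair. The bug fixed here is that, with poorly chosen custom parameters, the
encoding logic could still end up producing such an out-of-range opcode.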
Add a unittest for this specific encoding bug, and while at it, create some unit
tests for the encoding logic using different sets of parameters.
llvm-svn: 259334
llvm-dsymutil was misinterpreting the value of common symbols as their
address when it actually contains their size. This didn't impact
llvm-dsymutil's ability to link the debug information for common symbols
because these are always found by name and not by address. Things could,
however, go wrong when the size of a common object matched the object
file address of another symbol. Depending on the link order of the symbols,
the common object might incorrectly evict this other symbol from the
address-to-symbol mapping, and the evicted symbol would then be linked with a
wrong binary address.
Use the new ability to have symbols without an object file address to fix
this.
llvm-svn: 259318
This patch just changes the data structure that ties symbol names,
object file addresses and linked binary addresses so that it accepts
mappings with no object file address. Such symbol mappings are not fed into
the debug map yet, so this patch is NFC.
A subsequent patch will make use of this functionality for common
symbols.
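Conceptually, the new shape of the mapping is something like the following
(a hedged sketch; the type and field names are illustrative, not the actual
llvm-dsymutil data structures):

    #include <cstdint>
    #include <map>
    #include <optional>
    #include <string>

    struct SymbolMapping {
      // Empty for symbols that have no address in the object file,
      // e.g. common symbols, which are only laid out at link time.
      std::optional<uint64_t> ObjectAddress;
      uint64_t BinaryAddress; // address in the linked binary
      uint64_t Size;
    };

    // Symbols are always reachable by name; only entries that do have an
    // ObjectAddress would additionally be indexed by object file address.
    using SymbolMap = std::map<std::string, SymbolMapping>;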
llvm-svn: 259317
1. Mentioned that CUDA support works best with trunk.
2. Simplified the example by removing its dependency on the CUDA samples.
3. Explained the --cuda-gpu-arch flag.
llvm-svn: 259307
Previously, the code assumed all uses of an FI in loads and stores were as
addresses. This patch checks whether the use is the address or a value, and
handles the latter case as it does for non-memory instructions.
llvm-svn: 259306
The previous code was incorrect (can't getReg a frameindex). We could instead optimize it to reduce tree height, but I'm not sure that's worthwhile yet because we then try to eliminate the frameindex.
This patch also fixes frame index elimination for operations which may load or store: it used to assume the base was operand 2 and immediate offset operand 1. That's not true for stores, where they're 4 and 3.
llvm-svn: 259305
The AMDGPUPromoteAlloca pass was emitting the read.local.size
calls, which with HSA were incorrectly selected to read from
the offset Mesa uses off of the kernarg pointer.
Error on intrinsics which aren't supported by HSA, and start
emitting the correct IR to read the workgroup size
out of the dispatch pointer.
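For context, the workgroup size lives near the start of the HSA kernel
dispatch packet. A rough sketch of the relevant fields (paraphrased from the
HSA runtime headers; exact names and offsets should be checked against hsa.h):

    #include <cstdint>

    // Leading fields of hsa_kernel_dispatch_packet_t (sketch; see hsa.h).
    struct DispatchPacketPrefix {
      uint16_t header;           // offset 0
      uint16_t setup;            // offset 2
      uint16_t workgroup_size_x; // offset 4
      uint16_t workgroup_size_y; // offset 6
      uint16_t workgroup_size_z; // offset 8
      // ... grid sizes, kernarg address, etc. follow
    };

    // "Reading the workgroup size out of the dispatch pointer" then amounts to
    // loading one of these 16-bit fields at a small fixed offset from the
    // packet address, instead of calling the read.local.size intrinsics.
    static inline uint16_t workgroupSizeX(const void *DispatchPtr) {
      return reinterpret_cast<const DispatchPacketPrefix *>(DispatchPtr)
          ->workgroup_size_x;
    }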
Also initialize the pass so it can be tested with opt, and
start moving towards not depending on the subtarget as an
argument.
Start emitting errors for the intrinsics not handled with HSA.
llvm-svn: 259297
Only the dispatch.ptr intrinsic is supposed to be used now to get
the workgroup size, and the read.local.size intrinsics do not
work correctly.
llvm-svn: 259296
Refine the test for whether an instruction is in an expression tree so that
it detects when one tree ends and another begins. This lets us place a block
at that point, rather than continuing until we find the first instruction not
in a tree at all.
llvm-svn: 259294