Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100304
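For illustration only (the value names below are made up, not part of the original commit message), a call under the new signature might look roughly like this, with the trailing i1 false marking a non-volatile copy:
  call void @llvm.memcpy.p0i8.p0i8.i32(i8* %dst, i8* %src, i32 %len, i32 4, i1 false)
The p0i8 parts encode the pointer types (here both in the default address space 0), so variants for other address spaces get distinct mangled names.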
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
llvm-svn: 100191
Added support for address spaces and added an isVolatile field to memcpy, memmove, and memset,
e.g., llvm.memcpy.i32(i8*, i8*, i32, i32) -> llvm.memcpy.p0i8.p0i8.i32(i8*, i8*, i32, i32, i1)
An update of LangRef will occur in a subsequent check-in.
llvm-svn: 99928
This allows code gen and the exception table writer to cooperate to make sure
landing pads are associated with the correct invoke locations.
llvm-svn: 94726
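As a hedged sketch of the association being tracked here (function names and the personality cast are illustrative, not from the commit), an invoke and its landing pad in the IR of that era looked roughly like:
  invoke void @foo() to label %cont unwind label %lpad
lpad:
  %exn = call i8* @llvm.eh.exception()
  %sel = call i32 @llvm.eh.selector(i8* %exn, i8* bitcast (i32 (...)* @__gxx_personality_v0 to i8*), i8* null)
The exception table writer then needs each such landing pad paired with the call sites (invokes) that can unwind to it.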
This patch also cleans up code that expected a bitcast as the first argument, and updates testcases that call llvm.dbg.declare.
It also strips old llvm.dbg.declare intrinsics that did not pass metadata as the first argument.
llvm-svn: 93531
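A rough illustration of the change (variable names are made up; only the shape of the first argument matters): an old-style call such as
  call void @llvm.dbg.declare({}* %x.bc, metadata !1)
where %x.bc is a bitcast of the alloca, becomes
  call void @llvm.dbg.declare(metadata !{i32* %x}, metadata !1)
with the alloca wrapped in a metadata node instead of bitcast to {}*.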
The semantics of llvm.dbg.value are that, starting from the point where it is executed, the specified user source variable takes on the given value at the given offset.
An example:
call void @llvm.dbg.value(metadata !{ i32 7 }, i64 0, metadata !2)
Here the user source variable associated with metadata #2 gets the value "i32 7" at offset 0.
llvm-svn: 90788
so get rid of eh.selector.i64 and rename eh.selector.i32 to eh.selector.
Likewise for eh.typeid.for. This aligns us with gcc, which always uses a
32-bit value for the selector on all platforms. My understanding is that
the register allocator used to assert if the selector intrinsic size didn't
match the pointer size, and this was the reason for introducing the two
variants. However, my testing shows that this is no longer the case (I
fixed some bugs in selector lowering yesterday, and some more today in the
fastisel path; these might have caused the original problems).
llvm-svn: 84106
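A hedged before/after sketch (the exception value and trailing arguments are illustrative): a 64-bit target that previously emitted
  %sel = call i64 @llvm.eh.selector.i64(i8* %exn, i8* %personality, i8* null)
now uses the single 32-bit variant on every platform:
  %sel = call i32 @llvm.eh.selector(i8* %exn, i8* %personality, i8* null)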
and short. Well, it's kinda short. Definitely nasty and brutish.
The front-end generates the register/unregister calls into the SjLj runtime,
call-site indices and landing pad dispatch. The back end fills in the LSDA
with the call-site information provided by the front end. Catch blocks are
not yet implemented.
Built on Darwin and verified no llvm-core "make check" regressions.
llvm-svn: 78625
implemented in codegen, have no frontend to generate them, and are
better implemented with pattern matching (like the ppc backend does
to generate rlwimi/rlwinm etc).
PR4543
llvm-svn: 75430
llvm.eh.sjlj.* for better clarity as to their purpose and scope. Add
a description of llvm.eh.sjlj.setjmp to ExceptionHandling.html.
(llvm.eh.sjlj.longjmp documentation coming when that implementation is
added).
llvm-svn: 71758
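As a rough sketch of the renamed intrinsics (the declarations below reflect their general shape, not text from the commit):
  declare i32  @llvm.eh.sjlj.setjmp(i8* %setjmp_buf)
  declare void @llvm.eh.sjlj.longjmp(i8* %setjmp_buf)
The front-end emits these where GCC-compatible SjLj handling would use its builtin setjmp/longjmp, as the next entry notes.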
a supporting preliminary patch for GCC-compatible SjLJ exception handling. Note that these intrinsics are not designed to be invoked directly by the user, but
rather used by the front-end as target hooks for exception handling.
llvm-svn: 71610
While the patch is clearly correct in itself, it's become
apparent that other places are assuming debug intrinsics are
marked as touching memory... this needs more testing.
llvm-svn: 65992
taken advantage of anywhere. Change the definition
of IntrWriteArgMem to no longer imply nocapture, and
explicitly add nocapture attributes everywhere (well,
not quite everywhere, because some of these intrinsics
did capture their arguments!). Also, make clear that
the lack of other side effects does not exclude doing
volatile loads or stores: the atomic intrinsics do
these, yet they are all marked IntrWriteArgMem (this
change is safe because nothing exploited it).
llvm-svn: 64539
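For a concrete picture of what "explicitly add nocapture attributes" means at the IR level (an era-appropriate illustration, not text taken from the commit), the memory intrinsics end up declared roughly like:
  declare void @llvm.memcpy.i32(i8* nocapture, i8* nocapture, i32, i32) nounwind
while an intrinsic that really does capture one of its pointer arguments simply omits the attribute on that parameter.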