//===- StackProtector.cpp - Stack Protector Insertion ---------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This pass inserts stack protectors into functions which need them. A variable
// with a random value in it is stored onto the stack before the local variables
// are allocated. Upon exiting the block, the stored value is checked. If it's
// changed, then there was some sort of violation and the program aborts.
//
//===----------------------------------------------------------------------===//
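//
// A sketch of the resulting instrumentation (illustrative pseudo-IR only; the
// exact guard source and failure handling are target-dependent):
//
//   entry:
//     %StackGuardSlot = alloca i8*
//     %guard = call i8* @llvm.stackguard()
//     call void @llvm.stackprotector(i8* %guard, i8** %StackGuardSlot)
//     ...                                   ; original function body
//   ; On each return path the slot is reloaded and compared against the
//   ; guard; on mismatch control branches to a failure block (e.g. a call
//   ; to __stack_chk_fail).
//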
#include "llvm/CodeGen/StackProtector.h"
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/ADT/Statistic.h"
#include "llvm/Analysis/BranchProbabilityInfo.h"
#include "llvm/Analysis/EHPersonalities.h"
#include "llvm/Analysis/MemoryLocation.h"
#include "llvm/Analysis/OptimizationRemarkEmitter.h"
#include "llvm/CodeGen/Passes.h"
#include "llvm/CodeGen/TargetLowering.h"
#include "llvm/CodeGen/TargetPassConfig.h"
#include "llvm/CodeGen/TargetSubtargetInfo.h"
#include "llvm/IR/Attributes.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Constants.h"
#include "llvm/IR/DataLayout.h"
#include "llvm/IR/DebugInfo.h"
#include "llvm/IR/DebugLoc.h"
#include "llvm/IR/DerivedTypes.h"
#include "llvm/IR/Dominators.h"
#include "llvm/IR/Function.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/Instruction.h"
#include "llvm/IR/Instructions.h"
#include "llvm/IR/IntrinsicInst.h"
#include "llvm/IR/Intrinsics.h"
#include "llvm/IR/MDBuilder.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/Type.h"
#include "llvm/IR/User.h"
#include "llvm/InitializePasses.h"
#include "llvm/Pass.h"
#include "llvm/Support/Casting.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Target/TargetMachine.h"
#include "llvm/Target/TargetOptions.h"
#include <utility>

using namespace llvm;
#define DEBUG_TYPE "stack-protector"
STATISTIC(NumFunProtected, "Number of functions protected");
STATISTIC(NumAddrTaken, "Number of local variables that have their address"
                        " taken.");
static cl::opt<bool> EnableSelectionDAGSP("enable-selectiondag-sp",
                                          cl::init(true), cl::Hidden);
char StackProtector::ID = 0;
StackProtector::StackProtector() : FunctionPass(ID), SSPBufferSize(8) {
  initializeStackProtectorPass(*PassRegistry::getPassRegistry());
}
INITIALIZE_PASS_BEGIN(StackProtector, DEBUG_TYPE,
                      "Insert stack protectors", false, true)
INITIALIZE_PASS_DEPENDENCY(TargetPassConfig)
INITIALIZE_PASS_END(StackProtector, DEBUG_TYPE,
                    "Insert stack protectors", false, true)
FunctionPass *llvm::createStackProtectorPass() { return new StackProtector(); }
void StackProtector::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.addRequired<TargetPassConfig>();
  AU.addPreserved<DominatorTreeWrapperPass>();
}
bool StackProtector::runOnFunction(Function &Fn) {
  F = &Fn;
  M = F->getParent();
  DominatorTreeWrapperPass *DTWP =
      getAnalysisIfAvailable<DominatorTreeWrapperPass>();
  DT = DTWP ? &DTWP->getDomTree() : nullptr;
  TM = &getAnalysis<TargetPassConfig>().getTM<TargetMachine>();
  Trip = TM->getTargetTriple();
  TLI = TM->getSubtargetImpl(Fn)->getTargetLowering();
  HasPrologue = false;
  HasIRCheck = false;

  Attribute Attr = Fn.getFnAttribute("stack-protector-buffer-size");
  if (Attr.isStringAttribute() &&
      Attr.getValueAsString().getAsInteger(10, SSPBufferSize))
    return false; // Invalid integer string
  if (!RequiresStackProtector())
    return false;
  // TODO(etienneb): Functions with funclets are not correctly supported now.
  // Do nothing if this is funclet-based personality.
  if (Fn.hasPersonalityFn()) {
    EHPersonality Personality = classifyEHPersonality(Fn.getPersonalityFn());
    if (isFuncletEHPersonality(Personality))
      return false;
  }
  ++NumFunProtected;
  return InsertStackProtectors();
}
/// \param [out] IsLarge is set to true if a protectable array is found and
/// it is "large" ( >= ssp-buffer-size). In the case of a structure with
/// multiple arrays, this gets set if any of them is large.
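/// For example, with the default ssp-buffer-size of 8, a "char buf[4]" is a
/// protectable array but not large, while "char buf[64]" also sets IsLarge
/// (illustrative types; any array of at least SSPBufferSize bytes counts as
/// large).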
bool StackProtector::ContainsProtectableArray(Type *Ty, bool &IsLarge,
                                              bool Strong,
                                              bool InStruct) const {
  if (!Ty)
    return false;
  if (ArrayType *AT = dyn_cast<ArrayType>(Ty)) {
    if (!AT->getElementType()->isIntegerTy(8)) {
      // If we're on a non-Darwin platform or we're inside of a structure, don't
      // add stack protectors unless the array is a character array.
      // However, in strong mode any array, regardless of type and size,
      // triggers a protector.
      if (!Strong && (InStruct || !Trip.isOSDarwin()))
        return false;
    }

    // If an array has more than SSPBufferSize bytes of allocated space, then we
    // emit stack protectors.
    if (SSPBufferSize <= M->getDataLayout().getTypeAllocSize(AT)) {
      IsLarge = true;
      return true;
    }

    if (Strong)
      // Require a protector for all arrays in strong mode
      return true;
  }

  const StructType *ST = dyn_cast<StructType>(Ty);
  if (!ST)
    return false;

  bool NeedsProtector = false;
  for (StructType::element_iterator I = ST->element_begin(),
                                    E = ST->element_end();
       I != E; ++I)
    if (ContainsProtectableArray(*I, IsLarge, Strong, true)) {
      // If the element is a protectable array and is large (>= SSPBufferSize)
      // then we are done. If the protectable array is not large, then
      // keep looking in case a subsequent element is a large array.
      if (IsLarge)
        return true;
      NeedsProtector = true;
    }

  return NeedsProtector;
}
bool StackProtector::HasAddressTaken(const Instruction *AI,
                                     uint64_t AllocSize) {
  const DataLayout &DL = M->getDataLayout();
  for (const User *U : AI->users()) {
    const auto *I = cast<Instruction>(U);
    // If this instruction accesses memory make sure it doesn't access beyond
    // the bounds of the allocated object.
    Optional<MemoryLocation> MemLoc = MemoryLocation::getOrNone(I);
    if (MemLoc.hasValue() && MemLoc->Size.getValue() > AllocSize)
      return true;
    switch (I->getOpcode()) {
|
|
|
|
case Instruction::Store:
|
|
|
|
if (AI == cast<StoreInst>(I)->getValueOperand())
|
2019-09-30 17:01:35 +02:00
|
|
|
return true;
|
2019-09-30 17:08:38 +02:00
|
|
|
break;
|
2019-09-30 17:11:23 +02:00
|
|
|
case Instruction::AtomicCmpXchg:
|
|
|
|
// cmpxchg conceptually includes both a load and store from the same
|
|
|
|
// location. So, like store, the value being stored is what matters.
|
|
|
|
if (AI == cast<AtomicCmpXchgInst>(I)->getNewValOperand())
|
|
|
|
return true;
|
|
|
|
break;
|
2019-09-30 17:08:38 +02:00
|
|
|
case Instruction::PtrToInt:
|
|
|
|
if (AI == cast<PtrToIntInst>(I)->getOperand(0))
|
2019-09-30 17:01:35 +02:00
|
|
|
return true;
|
2019-09-30 17:08:38 +02:00
|
|
|
break;
|
|
|
|
case Instruction::Call: {
|
2019-09-30 17:11:23 +02:00
|
|
|
// Ignore intrinsics that do not become real instructions.
|
|
|
|
// TODO: Narrow this to intrinsics that have store-like effects.
|
2019-09-30 17:08:38 +02:00
|
|
|
const auto *CI = cast<CallInst>(I);
|
2019-09-30 17:01:35 +02:00
|
|
|
if (!isa<DbgInfoIntrinsic>(CI) && !CI->isLifetimeStartOrEnd())
|
|
|
|
return true;
|
2019-09-30 17:08:38 +02:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
case Instruction::Invoke:
|
2019-09-30 17:01:35 +02:00
|
|
|
return true;
|
2020-03-05 18:18:47 +01:00
|
|
|
case Instruction::GetElementPtr: {
|
|
|
|
// If the GEP offset is out-of-bounds, or is non-constant and so has to be
|
|
|
|
// assumed to be potentially out-of-bounds, then any memory access that
|
|
|
|
// would use it could also be out-of-bounds meaning stack protection is
|
|
|
|
// required.
|
|
|
|
const GetElementPtrInst *GEP = cast<GetElementPtrInst>(I);
|
|
|
|
unsigned TypeSize = DL.getIndexTypeSizeInBits(I->getType());
|
|
|
|
APInt Offset(TypeSize, 0);
|
|
|
|
APInt MaxOffset(TypeSize, AllocSize);
|
|
|
|
if (!GEP->accumulateConstantOffset(DL, Offset) || Offset.ugt(MaxOffset))
|
|
|
|
return true;
|
|
|
|
// Adjust AllocSize to be the space remaining after this offset.
|
|
|
|
if (HasAddressTaken(I, AllocSize - Offset.getLimitedValue()))
|
|
|
|
return true;
|
|
|
|
break;
|
|
|
|
}
|
2019-09-30 17:08:38 +02:00
|
|
|
case Instruction::BitCast:
|
|
|
|
case Instruction::Select:
|
2019-09-30 17:11:23 +02:00
|
|
|
case Instruction::AddrSpaceCast:
|
2020-03-05 18:18:47 +01:00
|
|
|
if (HasAddressTaken(I, AllocSize))
|
2019-09-30 17:01:35 +02:00
|
|
|
return true;
|
2019-09-30 17:08:38 +02:00
|
|
|
break;
|
|
|
|
case Instruction::PHI: {
|
2019-09-30 17:01:35 +02:00
|
|
|
// Keep track of what PHI nodes we have already visited to ensure
|
|
|
|
// they are only visited once.
|
2019-09-30 17:08:38 +02:00
|
|
|
const auto *PN = cast<PHINode>(I);
|
2019-09-30 17:01:35 +02:00
|
|
|
if (VisitedPHIs.insert(PN).second)
|
2020-03-05 18:18:47 +01:00
|
|
|
if (HasAddressTaken(PN, AllocSize))
|
2019-09-30 17:01:35 +02:00
|
|
|
return true;
|
2019-09-30 17:08:38 +02:00
|
|
|
break;
|
|
|
|
}
|
2019-09-30 17:11:23 +02:00
|
|
|
case Instruction::Load:
|
|
|
|
case Instruction::AtomicRMW:
|
|
|
|
case Instruction::Ret:
|
|
|
|
// These instructions take an address operand, but have load-like or
|
|
|
|
// other innocuous behavior that should not trigger a stack protector.
|
|
|
|
// atomicrmw conceptually has both load and store semantics, but the
|
|
|
|
// value being stored must be integer; so if a pointer is being stored,
|
|
|
|
// we'll catch it in the PtrToInt case above.
|
2019-09-30 17:08:38 +02:00
|
|
|
break;
|
2019-09-30 17:11:23 +02:00
|
|
|
default:
|
|
|
|
// Conservatively return true for any instruction that takes an address
|
|
|
|
// operand, but is not handled above.
|
|
|
|
return true;
|
2019-09-30 17:01:35 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
return false;
|
|
|
|
}
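
// Illustrative sketch (not part of the pass; names such as @use are
// hypothetical): how this analysis treats typical IR uses of an 8-byte
// allocation:
//
//   %buf = alloca [8 x i8]
//   %p = getelementptr inbounds [8 x i8], [8 x i8]* %buf, i32 0, i32 0
//   call void @use(i8* %p)   ; pointer escapes into a call -> returns true
//   %v = load i8, i8* %p     ; in-bounds, load-like use -> keeps scanning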

/// Search for the first call to the llvm.stackprotector intrinsic and return
/// it if present.
static const CallInst *findStackProtectorIntrinsic(Function &F) {
  for (const BasicBlock &BB : F)
    for (const Instruction &I : BB)
      if (const CallInst *CI = dyn_cast<CallInst>(&I))
        if (CI->getCalledFunction() ==
            Intrinsic::getDeclaration(F.getParent(), Intrinsic::stackprotector))
          return CI;
  return nullptr;
}

/// Check whether or not this function needs a stack protector based
/// upon the stack protector level.
///
/// We use two heuristics: a standard (ssp) and strong (sspstrong).
/// The standard heuristic will add a guard variable to functions that call
/// alloca with either a variable size or a size >= SSPBufferSize, to
/// functions with character buffers larger than SSPBufferSize, and to
/// functions with aggregates containing character buffers larger than
/// SSPBufferSize. The strong heuristic will add a guard variable to
/// functions that call alloca regardless of size, to functions with any
/// buffer regardless of type and size, to functions with aggregates that
/// contain any buffer regardless of type and size, and to functions that
/// contain stack-based variables that have had their address taken.
bool StackProtector::RequiresStackProtector() {
  bool Strong = false;
  bool NeedsProtector = false;
  HasPrologue = findStackProtectorIntrinsic(*F);

  if (F->hasFnAttribute(Attribute::SafeStack))
    return false;

  // We are constructing the OptimizationRemarkEmitter on the fly rather than
  // using the analysis pass to avoid building DominatorTree and LoopInfo which
  // are not available this late in the IR pipeline.
  OptimizationRemarkEmitter ORE(F);

  if (F->hasFnAttribute(Attribute::StackProtectReq)) {
    ORE.emit([&]() {
      return OptimizationRemark(DEBUG_TYPE, "StackProtectorRequested", F)
             << "Stack protection applied to function "
             << ore::NV("Function", F)
             << " due to a function attribute or command-line switch";
    });
    NeedsProtector = true;
    Strong = true; // Use the same heuristic as strong to determine SSPLayout
  } else if (F->hasFnAttribute(Attribute::StackProtectStrong))
    Strong = true;
  else if (HasPrologue)
    NeedsProtector = true;
  else if (!F->hasFnAttribute(Attribute::StackProtect))
    return false;

  for (const BasicBlock &BB : *F) {
    for (const Instruction &I : BB) {
      if (const AllocaInst *AI = dyn_cast<AllocaInst>(&I)) {
        if (AI->isArrayAllocation()) {
          auto RemarkBuilder = [&]() {
            return OptimizationRemark(DEBUG_TYPE, "StackProtectorAllocaOrArray",
                                      &I)
                   << "Stack protection applied to function "
                   << ore::NV("Function", F)
                   << " due to a call to alloca or use of a variable length "
                      "array";
          };
          if (const auto *CI = dyn_cast<ConstantInt>(AI->getArraySize())) {
            if (CI->getLimitedValue(SSPBufferSize) >= SSPBufferSize) {
              // A call to alloca with size >= SSPBufferSize requires
              // stack protectors.
              Layout.insert(
                  std::make_pair(AI, MachineFrameInfo::SSPLK_LargeArray));
              ORE.emit(RemarkBuilder);
              NeedsProtector = true;
            } else if (Strong) {
              // Require protectors for all alloca calls in strong mode.
              Layout.insert(
                  std::make_pair(AI, MachineFrameInfo::SSPLK_SmallArray));
              ORE.emit(RemarkBuilder);
              NeedsProtector = true;
            }
          } else {
            // A call to alloca with a variable size requires protectors.
            Layout.insert(
                std::make_pair(AI, MachineFrameInfo::SSPLK_LargeArray));
            ORE.emit(RemarkBuilder);
            NeedsProtector = true;
          }
          continue;
        }

        bool IsLarge = false;
        if (ContainsProtectableArray(AI->getAllocatedType(), IsLarge, Strong)) {
          Layout.insert(std::make_pair(AI,
                                       IsLarge
                                           ? MachineFrameInfo::SSPLK_LargeArray
                                           : MachineFrameInfo::SSPLK_SmallArray));
          ORE.emit([&]() {
            return OptimizationRemark(DEBUG_TYPE, "StackProtectorBuffer", &I)
                   << "Stack protection applied to function "
                   << ore::NV("Function", F)
                   << " due to a stack allocated buffer or struct containing a "
                      "buffer";
          });
          NeedsProtector = true;
          continue;
        }

        if (Strong && HasAddressTaken(AI, M->getDataLayout().getTypeAllocSize(
                                              AI->getAllocatedType()))) {
          ++NumAddrTaken;
          Layout.insert(std::make_pair(AI, MachineFrameInfo::SSPLK_AddrOf));
          ORE.emit([&]() {
            return OptimizationRemark(DEBUG_TYPE, "StackProtectorAddressTaken",
                                      &I)
                   << "Stack protection applied to function "
                   << ore::NV("Function", F)
                   << " due to the address of a local variable being taken";
          });
          NeedsProtector = true;
        }
        // Clear any PHIs that we visited, to make sure we examine all uses of
        // any subsequent allocas that we look at.
        VisitedPHIs.clear();
      }
    }
  }

  return NeedsProtector;
}
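
// Illustrative sketch (not part of the pass): with the default
// ssp-buffer-size of 8, the standard heuristic guards only the first alloca
// below, while the strong heuristic guards both:
//
//   %big   = alloca [16 x i8]  ; size >= SSPBufferSize -> guarded under ssp
//   %small = alloca [2 x i8]   ; guarded only under sspstrong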

/// Create a stack guard loading and populate whether SelectionDAG SSP is
/// supported.
static Value *getStackGuard(const TargetLoweringBase *TLI, Module *M,
                            IRBuilder<> &B,
                            bool *SupportsSelectionDAGSP = nullptr) {
  if (Value *Guard = TLI->getIRStackGuard(B))
    return B.CreateLoad(B.getInt8PtrTy(), Guard, true, "StackGuard");

  // Use SelectionDAG SSP handling, since there isn't an IR guard.
  //
  // This is somewhat awkward: we optionally report here whether SelectionDAG
  // SSP should be performed. The reason is that the answer is strictly
  // defined as !TLI->getIRStackGuard(B), where getIRStackGuard is itself
  // mutating, so there is no way to get this bit without mutating the IR;
  // it has to be computed at exactly this point.
  //
  // We could have defined a new function TLI::supportsSelectionDAGSP(), but
  // that would put more burden on the backends' overriding work, especially
  // when it conveys the same information getIRStackGuard() already gives.
  if (SupportsSelectionDAGSP)
    *SupportsSelectionDAGSP = true;
  TLI->insertSSPDeclarations(*M);
  return B.CreateCall(Intrinsic::getDeclaration(M, Intrinsic::stackguard));
}
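
// Illustrative results (not part of the pass): when the target provides an IR
// stack guard (e.g. a global such as @__stack_chk_guard on many ELF targets),
// the prologue loads it directly:
//
//   %StackGuard = load volatile i8*, i8** @__stack_chk_guard
//
// otherwise the target-independent intrinsic is emitted instead:
//
//   %StackGuard = call i8* @llvm.stackguard()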

/// Insert code into the entry block that stores the stack guard
/// variable onto the stack:
///
///   entry:
///     StackGuardSlot = alloca i8*
///     StackGuard = <stack guard>
///     call void @llvm.stackprotector(StackGuard, StackGuardSlot)
///
/// Returns true if the platform/triple supports the stackprotectorcreate
/// pseudo node.
static bool CreatePrologue(Function *F, Module *M, ReturnInst *RI,
                           const TargetLoweringBase *TLI, AllocaInst *&AI) {
  bool SupportsSelectionDAGSP = false;
  IRBuilder<> B(&F->getEntryBlock().front());
  PointerType *PtrTy = Type::getInt8PtrTy(RI->getContext());
  AI = B.CreateAlloca(PtrTy, nullptr, "StackGuardSlot");

  Value *GuardSlot = getStackGuard(TLI, M, B, &SupportsSelectionDAGSP);
  B.CreateCall(Intrinsic::getDeclaration(M, Intrinsic::stackprotector),
               {GuardSlot, AI});
  return SupportsSelectionDAGSP;
}

/// InsertStackProtectors - Insert code into the prologue and epilogue of the
/// function.
///
///  - The prologue code loads and stores the stack guard onto the stack.
///  - The epilogue checks the value stored in the prologue against the
///    original value. It calls __stack_chk_fail if they differ.
bool StackProtector::InsertStackProtectors() {
  // If the target wants to XOR the frame pointer into the guard value, it's
  // impossible to emit the check in IR, so the target *must* support stack
  // protection in SDAG.
  bool SupportsSelectionDAGSP =
      TLI->useStackGuardXorFP() ||
      (EnableSelectionDAGSP && !TM->Options.EnableFastISel &&
       !TM->Options.EnableGlobalISel);
  AllocaInst *AI = nullptr; // Place on stack that stores the stack guard.

  for (Function::iterator I = F->begin(), E = F->end(); I != E;) {
    BasicBlock *BB = &*I++;
    ReturnInst *RI = dyn_cast<ReturnInst>(BB->getTerminator());
    if (!RI)
      continue;

    // Generate prologue instrumentation if not already generated.
    if (!HasPrologue) {
      HasPrologue = true;
      SupportsSelectionDAGSP &= CreatePrologue(F, M, RI, TLI, AI);
    }
    // SelectionDAG based code generation. Nothing else needs to be done here.
    // The epilogue instrumentation is postponed to SelectionDAG.
    if (SupportsSelectionDAGSP)
      break;
    // Find the stack guard slot if the prologue was not created by this pass
    // itself via a previous call to CreatePrologue().
    if (!AI) {
      const CallInst *SPCall = findStackProtectorIntrinsic(*F);
      assert(SPCall && "Call to llvm.stackprotector is missing");
      AI = cast<AllocaInst>(SPCall->getArgOperand(1));
    }
    // Set HasIRCheck to true, so that SelectionDAG will not generate its own
    // version. SelectionDAG calls 'shouldEmitSDCheck' to check whether
    // instrumentation has already been generated.
    HasIRCheck = true;

    // Generate epilogue instrumentation. The epilogue instrumentation can be
    // function-based or inlined depending on which mechanism the target is
    // providing.
    if (Function *GuardCheck = TLI->getSSPStackGuardCheck(*M)) {
      // Generate the function-based epilogue instrumentation.
      // The target provides a guard check function, generate a call to it.
      IRBuilder<> B(RI);
      LoadInst *Guard = B.CreateLoad(B.getInt8PtrTy(), AI, true, "Guard");
      CallInst *Call = B.CreateCall(GuardCheck, {Guard});
      Call->setAttributes(GuardCheck->getAttributes());
      Call->setCallingConv(GuardCheck->getCallingConv());
[stack-protection] Add support for MSVC buffer security check
Summary:
This patch is adding support for the MSVC buffer security check implementation
The buffer security check is turned on with the '/GS' compiler switch.
* https://msdn.microsoft.com/en-us/library/8dbf701c.aspx
* To be added to clang here: http://reviews.llvm.org/D20347
Some overview of buffer security check feature and implementation:
* https://msdn.microsoft.com/en-us/library/aa290051(VS.71).aspx
* http://www.ksyash.com/2011/01/buffer-overflow-protection-3/
* http://blog.osom.info/2012/02/understanding-vs-c-compilers-buffer.html
For the following example:
```
int example(int offset, int index) {
char buffer[10];
memset(buffer, 0xCC, index);
return buffer[index];
}
```
The MSVC compiler is adding these instructions to perform stack integrity check:
```
push ebp
mov ebp,esp
sub esp,50h
[1] mov eax,dword ptr [__security_cookie (01068024h)]
[2] xor eax,ebp
[3] mov dword ptr [ebp-4],eax
push ebx
push esi
push edi
mov eax,dword ptr [index]
push eax
push 0CCh
lea ecx,[buffer]
push ecx
call _memset (010610B9h)
add esp,0Ch
mov eax,dword ptr [index]
movsx eax,byte ptr buffer[eax]
pop edi
pop esi
pop ebx
[4] mov ecx,dword ptr [ebp-4]
[5] xor ecx,ebp
[6] call @__security_check_cookie@4 (01061276h)
mov esp,ebp
pop ebp
ret
```
The instrumentation above is:
* [1] is loading the global security canary,
* [3] is storing the local computed ([2]) canary to the guard slot,
* [4] is loading the guard slot and ([5]) re-compute the global canary,
* [6] is validating the resulting canary with the '__security_check_cookie' and performs error handling.
Overview of the current stack-protection implementation:
* lib/CodeGen/StackProtector.cpp
* There is a default stack-protection implementation applied on intermediate representation.
* The target can overload 'getIRStackGuard' method if it has a standard location for the stack protector cookie.
* An intrinsic 'Intrinsic::stackprotector' is added to the prologue. It will be expanded by the instruction selection pass (DAG or Fast).
* Basic Blocks are added to every instrumented function to receive the code for handling stack guard validation and errors handling.
* Guard manipulation and comparison are added directly to the intermediate representation.
* lib/CodeGen/SelectionDAG/SelectionDAGISel.cpp
* lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp
* There is an implementation that adds instrumentation during instruction selection (for better handling of sibbling calls).
* see long comment above 'class StackProtectorDescriptor' declaration.
* The target needs to override 'getSDagStackGuard' to activate SDAG stack protection generation. (note: getIRStackGuard MUST be nullptr).
* 'getSDagStackGuard' returns the appropriate stack guard (security cookie)
* The code is generated by 'SelectionDAGBuilder.cpp' and 'SelectionDAGISel.cpp'.
* include/llvm/Target/TargetLowering.h
* Contains function to retrieve the default Guard 'Value'; should be overriden by each target to select which implementation is used and provide Guard 'Value'.
* lib/Target/X86/X86ISelLowering.cpp
* Contains the x86 specialisation; Guard 'Value' used by the SelectionDAG algorithm.
Function-based Instrumentation:
* MSVC doesn't inline the stack guard comparison in every function. Instead, a call to '__security_check_cookie' is added to the epilogue before every return instruction.
* To support function-based instrumentation, this patch is
* adding a function to get the function-based check (llvm 'Value', see include/llvm/Target/TargetLowering.h),
* If provided, the stack protection instrumentation won't be inlined and a call to that function will be added to the epilogue,
* modifying (SelectionDAGISel.cpp) to avoid producing basic blocks used for inline instrumentation,
* generating the function-based instrumentation during the ISEL pass (SelectionDAGBuilder.cpp),
* if FastISEL (not SelectionDAG), using the fallback, which relies on the same function-based implementation over the intermediate representation (StackProtector.cpp).
Modifications
* adding support for MSVC (lib/Target/X86/X86ISelLowering.cpp)
* adding support for function-based instrumentation (lib/CodeGen/SelectionDAG/SelectionDAGBuilder.cpp, .h)
Results
* IR generated instrumentation:
```
clang-cl /GS test.cc /Od /c -mllvm -print-isel-input
```
```
*** Final LLVM Code input to ISel ***
; Function Attrs: nounwind sspstrong
define i32 @"\01?example@@YAHHH@Z"(i32 %offset, i32 %index) #0 {
entry:
%StackGuardSlot = alloca i8* <<<-- Allocated guard slot
%0 = call i8* @llvm.stackguard() <<<-- Loading Stack Guard value
call void @llvm.stackprotector(i8* %0, i8** %StackGuardSlot) <<<-- Prologue intrinsic call (store to Guard slot)
%index.addr = alloca i32, align 4
%offset.addr = alloca i32, align 4
%buffer = alloca [10 x i8], align 1
store i32 %index, i32* %index.addr, align 4
store i32 %offset, i32* %offset.addr, align 4
%arraydecay = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 0
%1 = load i32, i32* %index.addr, align 4
call void @llvm.memset.p0i8.i32(i8* %arraydecay, i8 -52, i32 %1, i32 1, i1 false)
%2 = load i32, i32* %index.addr, align 4
%arrayidx = getelementptr inbounds [10 x i8], [10 x i8]* %buffer, i32 0, i32 %2
%3 = load i8, i8* %arrayidx, align 1
%conv = sext i8 %3 to i32
%4 = load volatile i8*, i8** %StackGuardSlot <<<-- Loading Guard slot
call void @__security_check_cookie(i8* %4) <<<-- Epilogue function-based check
ret i32 %conv
}
```
* SelectionDAG generated instrumentation:
```
clang-cl /GS test.cc /O1 /c /FA
```
```
"?example@@YAHHH@Z": # @"\01?example@@YAHHH@Z"
# BB#0: # %entry
pushl %esi
subl $16, %esp
movl ___security_cookie, %eax <<<-- Loading Stack Guard value
movl 28(%esp), %esi
movl %eax, 12(%esp) <<<-- Store to Guard slot
leal 2(%esp), %eax
pushl %esi
pushl $204
pushl %eax
calll _memset
addl $12, %esp
movsbl 2(%esp,%esi), %esi
movl 12(%esp), %ecx <<<-- Loading Guard slot
calll @__security_check_cookie@4 <<<-- Epilogue function-based check
movl %esi, %eax
addl $16, %esp
popl %esi
retl
```
Reviewers: kcc, pcc, eugenis, rnk
Subscribers: majnemer, llvm-commits, hans, thakis, rnk
Differential Revision: http://reviews.llvm.org/D20346
llvm-svn: 272053
2016-06-07 22:15:35 +02:00
    } else {
      // Generate the epilogue with inline instrumentation.
      //
      // If we do not support SelectionDAG based tail calls, generate IR level
      // tail calls.
      //
      // For each block with a return instruction, convert this:
      //
      //   return:
      //     ...
      //     ret ...
      //
      // into this:
      //
      //   return:
      //     ...
      //     %1 = <stack guard>
      //     %2 = load StackGuardSlot
      //     %3 = cmp i1 %1, %2
      //     br i1 %3, label %SP_return, label %CallStackCheckFailBlk
      //
      //   SP_return:
      //     ret ...
      //
      //   CallStackCheckFailBlk:
      //     call void @__stack_chk_fail()
      //     unreachable

      // Create the FailBB. We duplicate the BB every time since the MI tail
      // merge pass will merge together all of the various BB into one including
      // fail BB generated by the stack protector pseudo instruction.
      BasicBlock *FailBB = CreateFailBB();

      // Split the basic block before the return instruction.
      BasicBlock *NewBB = BB->splitBasicBlock(RI->getIterator(), "SP_return");

      // Update the dominator tree if we need to.
      if (DT && DT->isReachableFromEntry(BB)) {
        DT->addNewBlock(NewBB, BB);
        DT->addNewBlock(FailBB, BB);
      }

      // Remove default branch instruction to the new BB.
      BB->getTerminator()->eraseFromParent();

      // Move the newly created basic block to the point right after the old
      // basic block so that it's in the "fall through" position.
      NewBB->moveAfter(BB);

      // Generate the stack protector instructions in the old basic block.
      IRBuilder<> B(BB);
      Value *Guard = getStackGuard(TLI, M, B);
      LoadInst *LI2 = B.CreateLoad(B.getInt8PtrTy(), AI, true);
      Value *Cmp = B.CreateICmpEQ(Guard, LI2);
      auto SuccessProb =
          BranchProbabilityInfo::getBranchProbStackProtector(true);
      auto FailureProb =
          BranchProbabilityInfo::getBranchProbStackProtector(false);
      MDNode *Weights = MDBuilder(F->getContext())
                            .createBranchWeights(SuccessProb.getNumerator(),
                                                 FailureProb.getNumerator());
      B.CreateCondBr(Cmp, NewBB, FailBB, Weights);
    }
  }

  // Return if we didn't modify any basic blocks. i.e., there are no return
  // statements in the function.
  return HasPrologue;
}
|

/// CreateFailBB - Create a basic block to jump to when the stack protector
/// check fails.
BasicBlock *StackProtector::CreateFailBB() {
  LLVMContext &Context = F->getContext();
  BasicBlock *FailBB = BasicBlock::Create(Context, "CallStackCheckFailBlk", F);
  IRBuilder<> B(FailBB);
  B.SetCurrentDebugLocation(DebugLoc::get(0, 0, F->getSubprogram()));
  if (Trip.isOSOpenBSD()) {
    FunctionCallee StackChkFail = M->getOrInsertFunction(
        "__stack_smash_handler", Type::getVoidTy(Context),
        Type::getInt8PtrTy(Context));
    B.CreateCall(StackChkFail, B.CreateGlobalStringPtr(F->getName(), "SSH"));
  } else {
    FunctionCallee StackChkFail =
        M->getOrInsertFunction("__stack_chk_fail", Type::getVoidTy(Context));
    B.CreateCall(StackChkFail, {});
  }
  B.CreateUnreachable();
  return FailBB;
}
|

bool StackProtector::shouldEmitSDCheck(const BasicBlock &BB) const {
  return HasPrologue && !HasIRCheck && isa<ReturnInst>(BB.getTerminator());
}
|

void StackProtector::copyToMachineFrameInfo(MachineFrameInfo &MFI) const {
  if (Layout.empty())
    return;

  for (int I = 0, E = MFI.getObjectIndexEnd(); I != E; ++I) {
    if (MFI.isDeadObjectIndex(I))
      continue;

    const AllocaInst *AI = MFI.getObjectAllocation(I);
    if (!AI)
      continue;

    SSPLayoutMap::const_iterator LI = Layout.find(AI);
    if (LI == Layout.end())
      continue;

    MFI.setObjectSSPLayout(I, LI->second);
  }
}
|