mirror of https://github.com/RPCS3/llvm-mirror.git synced 2024-10-19 11:02:59 +02:00
Commit Graph

174119 Commits

Author SHA1 Message Date
Nico Weber
a5aba5c47a gn build: Merge r351320 (the 9.0.0 version bump)
llvm-svn: 352002
2019-01-24 01:00:52 +00:00
David Callahan
0d31c41ce9 Update entry count for cold calls
Summary:
Profile sample files include the number of times each entry or inlined
call site is sampled. This is translated into the entry count metadata
on functions.

When sample data is being read, if a call site that was inlined
in the sampled program is considered cold and not inlined, then
the entry count of the out-of-line function does not reflect
the current compilation.

In this patch, we note call sites where the function was not inlined
and as a last action of the sample profile loading, we update the
called function's entry count to reflect the calls from these
call sites which are not included in the profile file.

Reviewers: danielcdh, wmi, Kader, modocache

Reviewed By: wmi

Subscribers: davidxl, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D52845

llvm-svn: 352001
2019-01-24 00:55:23 +00:00
Douglas Yung
52ae0002e5 [llvm-symbolizer] Add support for -i and -inlines as aliases for -inlining
This change adds two options, -i and -inlines, as aliases for the -inlining option to llvm-symbolizer, to improve compatibility with the GNU addr2line utility, which accepts these options.

It also modifies existing tests that use -inlining to exercise these new aliases as well.

This fixes PR40073.
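
As an illustration, all three spellings should now behave identically (the binary and address here are hypothetical):

```
$ llvm-symbolizer --inlining --obj=a.out 0x401000
$ llvm-symbolizer -inlines --obj=a.out 0x401000
$ llvm-symbolizer -i --obj=a.out 0x401000
```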

Reviewed by: jhenderson, Quolyk, ruiu

Differential Revision: https://reviews.llvm.org/D57083

llvm-svn: 351999
2019-01-24 00:34:09 +00:00
Amara Emerson
a57b77e247 Revert "[mips] Handle MipsMCExpr sub-expression for the MEK_DTPREL tag"
This reverts commit r351987 as it broke some bots.

llvm-svn: 351998
2019-01-24 00:24:59 +00:00
Mircea Trofin
5d88f37725 [llvm] Clarify responsibility of some of the DILocation discriminator APIs
Summary:
Renamed setBaseDiscriminator to cloneWithBaseDiscriminator, to match
similar APIs. Also changed its behavior to copy over the other
discriminator components, instead of eliding them.

Renamed cloneWithDuplicationFactor to
cloneByMultiplyingDuplicationFactor, which more closely matches what
this API does.
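
As a minimal sketch (the helper function here is hypothetical; only the DILocation API names come from this change), a pass might use the renamed API like this:

```cpp
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

static void setBaseDiscriminatorOn(Instruction &I, unsigned BD) {
  if (const DILocation *Loc = I.getDebugLoc().get()) {
    // Unlike the old setBaseDiscriminator, this clones the location and
    // copies over the other discriminator components.
    if (auto NewLoc = Loc->cloneWithBaseDiscriminator(BD))
      I.setDebugLoc(*NewLoc);
  }
}
```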

Reviewers: dblaikie, wmi

Reviewed By: dblaikie

Subscribers: zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D56220

llvm-svn: 351996
2019-01-24 00:10:25 +00:00
Reid Kleckner
aec41a7a6c [ADT] Notify ilist traits about in-list transfers
Summary:
Previously, no client of ilist traits had needed to know about transfers
of nodes within the same list, so as an optimization, ilist doesn't call
transferNodesFromList in that case. However, now there are clients that
want to use ilist traits to cache instruction ordering information to
optimize dominance queries of instructions in the same basic block.
This change updates the existing ilist traits users to detect in-list
transfers and do nothing in that case.

After this change, we can start caching instruction ordering information
in LLVM IR data structures. There are two main ways to do that:
- by putting an order integer into the Instruction class
- by maintaining order integers in a hash table on BasicBlock

I plan to implement and measure both, but I wanted to commit this change
first to enable other out of tree ilist clients to implement this
optimization as well.
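
A rough sketch of the traits-side pattern (MyNode and the caching policy are hypothetical; the callback shape follows the ilist customization hooks):

```cpp
#include "llvm/ADT/ilist.h"
using namespace llvm;

struct MyNode : ilist_node<MyNode> {};

template <> struct ilist_traits<MyNode> : ilist_noalloc_traits<MyNode> {
  void transferNodesFromList(ilist_traits &SrcTraits,
                             simple_ilist<MyNode>::iterator First,
                             simple_ilist<MyNode>::iterator Last) {
    if (this == &SrcTraits)
      return; // In-list transfer: cached ordering stays valid.
    // Cross-list transfer: invalidate or rebuild cached info here.
  }
};
```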

Reviewers: lattner, hfinkel, chandlerc

Subscribers: hiraditya, dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D57120

llvm-svn: 351992
2019-01-23 22:59:52 +00:00
Hideki Saito
054ce2fa8a [LV][VPlan] Change to implement VPlan based predication for VPlan-native path

Context: Patch Series #2 for outer loop vectorization support in LV
using VPlan. (RFC:
http://lists.llvm.org/pipermail/llvm-dev/2017-December/119523.html).

Patch series #2 checks that inner loops are still trivially lock-step
among all vector elements. Non-loop branches are blindly assumed as
divergent.

Changes here implement a VPlan-based predication algorithm to compute
predicates for blocks that need predication. Predicates are computed
for the VPLoop region in reverse post order. A block's predicate is
computed as the OR of the masks of all incoming edges. The mask for an
incoming edge is computed as the AND of the predecessor block's
predicate and either the predecessor's Condition bit or NOT(Condition
bit), depending on whether the edge from the predecessor block to the
current block is a true or false edge.
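
In pseudo-C++ (all types and helpers below are illustrative, not the actual VPlan API), the computation looks roughly like this:

```cpp
// Walk the region in reverse post order so that every predecessor's
// predicate is available before its successors are visited.
for (VPBlock *Block : reversePostOrder(Region)) {
  Value *BlockPred = nullptr;
  for (VPBlock *Pred : Block->predecessors()) {
    // Edge mask = predecessor predicate AND (Cond or NOT Cond),
    // depending on whether this is the true or the false edge.
    Value *Cond = Pred->getConditionBit();
    if (isFalseEdge(Pred, Block))
      Cond = Builder.CreateNot(Cond);
    Value *EdgeMask = Builder.CreateAnd(Pred->getPredicate(), Cond);
    // Block predicate = OR of the masks of all incoming edges.
    BlockPred = BlockPred ? Builder.CreateOr(BlockPred, EdgeMask)
                          : EdgeMask;
  }
  Block->setPredicate(BlockPred);
}
```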

Reviewers: fhahn, rengolin, hsaito, dcaballe

Reviewed By: fhahn

Patch by Satish Guggilla, thanks!

Differential Revision: https://reviews.llvm.org/D53349

llvm-svn: 351990
2019-01-23 22:43:12 +00:00
Peter Collingbourne
061384674e hwasan: Read shadow address from ifunc if we don't need a frame record.
This saves a cbz+cold call in the interceptor ABI, as well as a realign
in both ABIs, trading off a dcache entry against some branch predictor
entries and some code size.

Unfortunately the functionality is hidden behind a flag because ifunc is
known to be broken on static binaries on Android.

Differential Revision: https://reviews.llvm.org/D57084

llvm-svn: 351989
2019-01-23 22:39:11 +00:00
Simon Atanasyan
01e1317138 [mips] Handle MipsMCExpr sub-expression for the MEK_DTPREL tag
This is a fix for a regression introduced by the rL348194 commit. In
that change, a new type (MEK_DTPREL) of MipsMCExpr expression was added,
but in some places the code treated this type of expression as
unexpected.

This change fixes the bug. The MEK_DTPREL type of expression is used
only for marking TLS DIEExpr and contains a regular sub-expression.
Where we need to handle the expression, we retrieve the sub-expression
and handle it in a common way.
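
A sketch of that common handling (the surrounding code is hypothetical; only the MipsMCExpr names come from this change):

```cpp
// When an expression carries the MEK_DTPREL tag, unwrap it and process
// the regular sub-expression as usual.
if (const auto *ME = dyn_cast<MipsMCExpr>(Expr))
  if (ME->getKind() == MipsMCExpr::MEK_DTPREL)
    Expr = ME->getSubExpr();
// ... handle Expr in the common way ...
```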

llvm-svn: 351987
2019-01-23 22:02:53 +00:00
Reid Kleckner
309d17ca33 Revert r351938 "[ARM] Alter the register allocation order for minsize on Thumb2"
This change caused fatal backend errors when compiling a file in libvpx
for Android.

llvm-svn: 351979
2019-01-23 21:10:48 +00:00
Alexey Bataev
cd962467b2 [DEBUGINFO, NVPTX] Enable support for the debug info on NVPTX target.
Enable full support for the debug info.

Differential revision: https://reviews.llvm.org/D46189

llvm-svn: 351974
2019-01-23 18:59:54 +00:00
Alexey Bataev
6b763c18bf Revert "[DEBUGINFO, NVPTX] Enable support for the debug info on NVPTX target."
This reverts commit r351972. Some pieces of the patch were not applied
correctly.

llvm-svn: 351973
2019-01-23 18:48:36 +00:00
Alexey Bataev
a2dcedc345 [DEBUGINFO, NVPTX] Enable support for the debug info on NVPTX target.
Enable full support for the debug info. Recommit to fix the emission of
an unneeded closing brace.

Differential revision: https://reviews.llvm.org/D46189

llvm-svn: 351972
2019-01-23 18:28:59 +00:00
Craig Topper
b5d57aaae1 [X86] Autogenerate complete checks. NFC
llvm-svn: 351970
2019-01-23 18:25:49 +00:00
James Henderson
4949c8df9b [llvm-symbolizer] Improve compatibility of --functions with GNU addr2line
This fixes https://bugs.llvm.org/show_bug.cgi?id=40072.

GNU addr2line's --functions switch is off by default, has a short alias
of -f, and does not take an argument. This patch changes llvm-symbolizer
to allow the second and third points (changing the default behaviour may
have a negative impact on users). If the option is given without a value,
it is now treated as "linkage".

This change does cause one previously valid command line to behave
differently. Before, --functions <value> was accepted, but now only
--functions=<value> is allowed (as well as plain --functions). The old
syntax results in the value being treated as a positional argument.

The previous testing for --functions=short has been pulled out into a
new test that also tests the other accepted values and option formats.
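
For illustration (binary and address are hypothetical), the accepted and rejected forms now look like this:

```
$ llvm-symbolizer --functions --obj=a.out 0x401000        # value defaults to "linkage"
$ llvm-symbolizer --functions=short --obj=a.out 0x401000  # explicit value
$ llvm-symbolizer --functions short --obj=a.out 0x401000  # "short" is now a positional argument
```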

Reviewed by: ruiu

Differential Revision: https://reviews.llvm.org/D57049

llvm-svn: 351968
2019-01-23 17:27:48 +00:00
Haojian Wu
88704a4bd3 Revert "[DEBUGINFO, NVPTX] Enable support for the debug info on NVPTX target."
This reverts commit r351846.

This patch may generate illegal assembly code, see

```
$ ./bin/clang -cc1 -triple nvptx64-nvidia-cuda -aux-triple x86_64-grtev4-linux-gnu -S -disable-free -disable-llvm-verifier -discard-value-names -main-file-name new.cc -mrelocation-model pic -pic-level 2 -mthread-model posix -fmerge-all-constants -mdisable-fp-elim -relaxed-aliasing -no-integrated-as -mpie-copy-relocations -munwind-tables -fcuda-is-device -target-feature +ptx60 -target-cpu sm_35 -dwarf-column-info -debug-info-kind=line-directives-only -dwarf-version=2 -debugger-tuning=gdb -o empty.s -x cuda empty.cc
$  cat empty.s
//
// Generated by LLVM NVPTX Back-End
//

.version 6.0
.target sm_35
.address_size 64

	}
```

llvm-svn: 351966
2019-01-23 16:39:57 +00:00
Andrea Di Biagio
5d3783c0d0 [MC][X86] Correctly model additional operand latency caused by transfer delays from the integer to the floating point unit.
This patch adds a new ReadAdvance definition named ReadInt2Fpu.
ReadInt2Fpu allows x86 scheduling models to accurately describe delays caused by
data transfers from the integer unit to the floating point unit.
ReadInt2Fpu currently defaults to a delay of zero cycles (i.e. no delay) for all
x86 models excluding BtVer2. That means this patch is a functional change
for the Jaguar cpu model only.

Tablegen definitions for instructions (V)PINSR* have been updated to account for
the new ReadInt2Fpu. That read is mapped to the GPR input operand.
On Jaguar, int-to-fpu transfers are modeled as a +6cy delay. Before this patch,
that extra delay was added to the opcode latency. In practice, the insert opcode
only executes for 1cy. Most of the latency is actually contributed by the
so-called operand latency. According to the AMD SOG for family 16h, (V)PINSR*
latency is defined by expression f+1, where f is defined as a forwarding delay
from the integer unit to the fpu.

When printing instruction latency from MCA (see InstructionInfoView.cpp) and LLC
(only when the flag -print-schedule is specified), we now need to account for any
extra forwarding delays. We do this by checking if scheduling classes declare
any negative ReadAdvance entries. Quoting a code comment in TargetSchedule.td:
"A negative advance effectively increases latency, which may be used for
cross-domain stalls". When computing the instruction latency for the purpose of
our scheduling tests, we now add any extra delay to the formula. This avoids
regressing existing codegen and mca schedule tests. It comes with the cost of an
extra (but very simple) hook in MCSchedModel.

Differential Revision: https://reviews.llvm.org/D57056

llvm-svn: 351965
2019-01-23 16:35:07 +00:00
James Henderson
d701db82cf [llvm-readelf] Don't suppress static symbol table with --dyn-symbols + --symbols
In r287786, a bug was introduced into llvm-readelf where it didn't print
the static symbol table if both --symbols and --dyn-symbols were
specified, even if there was no dynamic symbol table. This is obviously
incorrect.

This patch fixes this issue, by delegating the decision of which symbol
tables should be printed to the final dumper, rather than trying to
decide in the command-line option handling layer. The decision was made
to follow the approach taken in this patch because the LLVM style dumper
uses a different order from the original GNU style behaviour (and GNU
readelf) for ELF output. Other approaches resulted in behaviour changes
for other dumpers which felt wrong. In particular, I wanted to avoid
changing the order of the output for --symbols --dyn-symbols for LLVM
style, keep what is emitted by --symbols unchanged for all dumpers, and
avoid having different orders of .dynsym and .symtab dumping for GNU
"--symbols" and "--symbols --dyn-symbols".

Reviewed by: grimar, rupprecht

Differential Revision: https://reviews.llvm.org/D57016

llvm-svn: 351960
2019-01-23 16:15:39 +00:00
Simon Pilgrim
5b603a4027 Fix indentation. NFCI.
llvm-svn: 351958
2019-01-23 16:01:19 +00:00
Simon Pilgrim
53cb95b13e [IR] Match intrinsic parameter by scalar/vectorwidth
This patch replaces the existing LLVMVectorSameWidth matcher with LLVMScalarOrSameVectorWidth.

The matched args must be either scalars or vectors with the same number of elements as the reference type; in either case the scalar/element type can differ, as specified by LLVMScalarOrSameVectorWidth.

I've updated the _overflow intrinsics to demonstrate this - allowing them to return an i1 or <N x i1> overflow result, matching the scalar/vector width of the other (add/sub/mul) result type.

The masked load/store/gather/scatter intrinsics have also been updated to use this, although as we specify the reference type to be llvm_anyvector_ty we guarantee the mask will be <N x i1>, so there is no change in behaviour.

Differential Revision: https://reviews.llvm.org/D57090

llvm-svn: 351957
2019-01-23 16:00:22 +00:00
Krzysztof Parzyszek
e0e9bde045 [Hexagon] Remove incorrect bit negation
llvm-svn: 351956
2019-01-23 15:36:33 +00:00
Benjamin Kramer
ca8adadeee [AArch64] Fix out of bounds strlen
CFIInst is not zero-terminated. This is one of more annoying functional
differences between StringRef and ArrayRef.

Found by asan.

llvm-svn: 351955
2019-01-23 14:51:21 +00:00
Clement Courbet
6a2ddc7b79 Re-land rL322538 "Add a value_type to ArrayRef."
llvm-svn: 351954
2019-01-23 14:20:59 +00:00
Simon Pilgrim
20dbf82008 Move saturated arithmetic intrinsics to other integer intrinsics. NFCI.
They were in the floating point group.

llvm-svn: 351953
2019-01-23 13:49:10 +00:00
George Rimar
1f9e83d373 [llvm-objdump] - Move common code to a new printRelocation() helper. NFC.
This extracts the common code for printing relocations into a new helper function.

llvm-svn: 351951
2019-01-23 13:39:12 +00:00
Tim Renouf
388d17336e [AMDGPU] With XNACK, cannot clause a load with result coalesced with operand
Summary:
With XNACK, an smem load whose result is coalesced with an operand (thus
it overwrites its own operand) cannot appear in a clause, because some
other instruction might XNACK and restart the whole clause.

The clause breaker already realized that an smem that overwrites an
operand cannot appear in a clause, and broke the clause. The problem
that this commit fixes is that the SIFormMemoryClauses optimization
formed a bundle with early clobber, which caused the earlier code that
set up the coalesced operand to be removed as dead.

Differential Revision: https://reviews.llvm.org/D57008

Change-Id: I703c4d5b0bf7d6060222bec491f45c18bb3c0016
llvm-svn: 351950
2019-01-23 13:38:06 +00:00
Martin Storsjo
da693adf63 [llvm-objcopy] [COFF] Error out on use of unhandled options
Prefer erroring out to silently not doing what was requested.

Differential Revision: https://reviews.llvm.org/D57045

llvm-svn: 351948
2019-01-23 11:54:55 +00:00
Martin Storsjo
9d872b4a9a [llvm-objcopy] [COFF] Fix handling of aux symbols for big objects
The aux symbols were stored in an opaque std::vector<uint8_t>,
with contents interpreted according to the rest of the symbol.

All aux symbol types but one fit in 18 bytes (sizeof(coff_symbol16)),
and if written to a bigobj, two extra padding bytes are written (as
sizeof(coff_symbol32) is 20). In the storage agnostic intermediate
representation, store the aux symbols as a series of coff_symbol16
sized opaque blobs. (In practice, all such aux symbols only consist
of one aux symbol, so this is more flexible than what reality needs.)

The special case is the file aux symbols, which are written in
potentially more than one aux symbol slot, without any padding,
as one single long string. This can't be stored in the same opaque
vector of fixed sized aux symbol entries. The file aux symbols will
occupy a different number of aux symbol slots depending on the type
of output object file. As nothing in the intermediate process needs
to have accurate raw symbol indices, updating that is moved into the
writer class.

Differential Revision: https://reviews.llvm.org/D57009

llvm-svn: 351947
2019-01-23 11:54:51 +00:00
Martin Storsjo
ed23b6aaef [llvm-objcopy] [COFF] Remove testcase debugging lines. NFC.
These are no longer necessary as the testcase now seems to run fine
on the buildbots that previously failed on this case, after SVN r351934.

llvm-svn: 351946
2019-01-23 11:54:36 +00:00
Florian Hahn
311cd51eee [HotColdSplitting] Remove unused SSAUpdater.h include (NFC).
llvm-svn: 351945
2019-01-23 11:51:38 +00:00
George Rimar
bd84239c39 [llvm-objdump] - Move variable. NFC.
It was too far from the place where it is used.

llvm-svn: 351942
2019-01-23 10:52:38 +00:00
George Rimar
8cca3e4394 [llvm-objdump] - Split disassembleObject() into two methods. NFCI.
Currently, disassembleObject() is a ~550-line function.

This patch splits it into two: the first function does all the helper
object initialization and then calls the second, which does the rest of
the work. This is a straightforward split.

Differential revision: https://reviews.llvm.org/D57020

llvm-svn: 351940
2019-01-23 10:33:26 +00:00
Jonas Paulsson
69d9f3a0e2 [SystemZ] Fix test case for buildbot.
llvm-clang-x86_64-expensive-checks-win triggered this assert:

"llvm.dbg.value intrinsic requires a !dbg attachment"

Hopefully, adding reasonable !dbg operands solves this.

llvm-svn: 351939
2019-01-23 10:29:12 +00:00
David Green
6d5c9d0658 [ARM] Alter the register allocation order for minsize on Thumb2
Currently in Arm code, we allocate LR first, under the assumption that
it needs to be saved anyway. Unfortunately this has the disadvantage
that it will require any instructions using it to be the longer thumb2
instructions, not the shorter thumb1 ones.

This switches the order when we are optimising for minsize, returning to
the default order so that more lower registers can be used. It can end
up requiring more pushed registers, but on average produces smaller code.

Differential Revision: https://reviews.llvm.org/D56008

llvm-svn: 351938
2019-01-23 10:18:30 +00:00
Dmitry Venikov
fa6ede7c28 [llvm-symbolizer] Allow single letter command flags grouping.
Summary: Currently llvm-symbolizer doesn't allow combining flags. This patch allows such grouping behavior, just like addr2line. Motivation: https://bugs.llvm.org/show_bug.cgi?id=40304
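
For illustration (binary and address hypothetical, and assuming the -f and -i single-letter aliases), grouped flags now expand like in addr2line:

```
$ llvm-symbolizer -fi --obj=a.out 0x401000   # equivalent to -f -i
```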

Reviewers: jhenderson, ruiu

Reviewed By: jhenderson

Subscribers: rupprecht, llvm-commits

Differential Revision: https://reviews.llvm.org/D57046

llvm-svn: 351936
2019-01-23 09:49:37 +00:00
Sam Parker
4cf7644825 [ARM][CGP] Check trunc type before replacing
In the last stage of type promotion, we replace any zext that uses a
new trunc with the operand of the trunc. This was okay when we only
allowed one type to be optimised, but now the trunc may be needed to
produce a narrower type than the one we were optimising for, so we need
to check this before doing the replacement.

Differential Revision: https://reviews.llvm.org/D57041

llvm-svn: 351935
2019-01-23 09:18:44 +00:00
Martin Storsjo
f0946e1b01 [llvm-objcopy] [COFF] Clear the unwritten tail of coff_section::Header::Name
This should fix the add-gnu-debuglink test on all buildbots.

llvm-svn: 351934
2019-01-23 09:12:53 +00:00
Sam Parker
3eb6a1b6b8 [DAGCombine] Enable more pre-indexed stores
The current check in CombineToPreIndexedLoadStore is too
conservative, preventing a pre-indexed store when the base pointer
is a predecessor of the value being stored. Instead, we should check
the pointer operand of the store.

Differential Revision: https://reviews.llvm.org/D56719

llvm-svn: 351933
2019-01-23 09:11:49 +00:00
Kristof Beyls
f09a113351 [SLH][AArch64] Remove accidentally retained -debug-only line from test.
llvm-svn: 351932
2019-01-23 09:10:12 +00:00
Martin Storsjo
f11aaf2e15 Reapply: [llvm-objcopy] [COFF] Implement --add-gnu-debuglink
This was reverted since it broke a couple of buildbots. The reason
for the breakage is not yet known, but this time the test has had more
diagnostics added, to hopefully allow figuring out what goes wrong.

Differential Revision: https://reviews.llvm.org/D57007

llvm-svn: 351931
2019-01-23 08:25:28 +00:00
Kristof Beyls
1b3bd06827 [SLH] AArch64: correctly pick temporary register to mask SP
As part of speculation hardening, the stack pointer gets masked with the
taint register (X16) before a function call or before a function return.
Since there are no instructions that can directly mask writing to the
stack pointer, the stack pointer must first be transferred to another
register, where it can be masked, before that value is transferred back
to the stack pointer.
Before, that temporary register was always picked to be x17, since the
ABI allows clobbering x17 on any function call, resulting in the
following instruction pattern being inserted before function calls and
returns/tail calls:

mov x17, sp
and x17, x17, x16
mov sp, x17
However, x17 can be live in those locations, for example when the call
is an indirect call, using x17 as the target address (blr x17).

To fix this, this patch looks for an available register just before the
call or terminator instruction and uses that.

In the rare case when no register turns out to be available (this
situation is only encountered twice across the whole test-suite), just
insert a full speculation barrier at the start of the basic block where
this occurs.

Differential Revision: https://reviews.llvm.org/D56717

llvm-svn: 351930
2019-01-23 08:18:39 +00:00
Jonas Paulsson
4da2ed90b6 [SystemZ] Handle DBG_VALUE instructions in two places in backend.
Two backend optimizations failed to handle cases when compiled with -g, due
to failing to consider DBG_VALUE instructions. This was in
SystemZTargetLowering::emitSelect() and
SystemZElimCompare::getRegReferences().

This patch makes sure that DBG_VALUEs are recognized so that they do not
affect these optimizations.

Tests for branch-on-count, load-and-trap and consecutive selects.

Review: Ulrich Weigand
https://reviews.llvm.org/D57048

llvm-svn: 351928
2019-01-23 07:42:26 +00:00
Max Kazantsev
d86d523c5e [IRCE] Support narrow latch condition for wide range checks
This patch relaxes the restrictions on the types of the latch condition
and the range check. In the current implementation, they must match.
This patch allows handling wide range checks against a narrow latch
condition. The motivating example is the following:

  int N = ...
  for (long i = 0; (int) i < N; i++) {
    if (i >= length) deopt;
  }

In this patch, the option that enables this support is turned off by
default. We'll wait until it is switched to true.

Differential Revision: https://reviews.llvm.org/D56837
Reviewed By: reames

llvm-svn: 351926
2019-01-23 07:20:56 +00:00
Brendon Cahoon
59f22c6fd7 [Pipeliner] Add two pragmas to control software pipelining optimization
#pragma clang loop pipeline(disable)

    Disables the SWP optimization for the next loop.
    "disable" is the only possible value.

#pragma clang loop pipeline_initiation_interval(number)

    Sets the initiation interval of the SWP optimization for
    the next loop to the given number. The number must be a
    positive value greater than 0.

These pragmas can be used for debugging or for reducing compile time.
It is possible to disable SWP for specific loops to save compilation
time, or to find bugs by not doing SWP on certain loops. It is also
possible to set the initiation interval to a specific number to save
compilation time by not doing extra pipeliner passes, or to check the
created schedule for a specific initiation interval.
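
Illustrative usage (the loops themselves are hypothetical):

```cpp
void foo(int *a, int n) {
// Disable software pipelining for the next loop.
#pragma clang loop pipeline(disable)
  for (int i = 0; i < n; i++)
    a[i] += 1;

// Request an initiation interval of 10 for the next loop.
#pragma clang loop pipeline_initiation_interval(10)
  for (int i = 0; i < n; i++)
    a[i] *= 2;
}
```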

This is the LLVM part of the fix.

Clang part of fix: https://reviews.llvm.org/D55710

Patch by Alexey Lapshin!

Differential Revision: https://reviews.llvm.org/D56403

llvm-svn: 351923
2019-01-23 03:26:10 +00:00
Peter Collingbourne
2818e607ab hwasan: Move memory access checks into small outlined functions on aarch64.
Each hwasan check requires emitting a small piece of code like this:
https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html#memory-accesses

The problem with this is that these code blocks typically bloat code
size significantly.

An obvious solution is to outline these blocks of code. In fact, this
has already been implemented under the -hwasan-instrument-with-calls
flag. However, as currently implemented this has a number of problems:
- The functions use the same calling convention as regular C functions.
  This means that the backend must spill all temporary registers as
  required by the platform's C calling convention, even though the
  check only needs two registers on the hot path.
- The functions take the address to be checked in a fixed register,
  which increases register pressure.
Both of these factors can diminish the code size effect and increase
the performance hit of -hwasan-instrument-with-calls.

The solution that this patch implements is to involve the aarch64
backend in outlining the checks. An intrinsic and pseudo-instruction
are created to represent a hwasan check. The pseudo-instruction
is register allocated like any other instruction, and we allow the
register allocator to select almost any register for the address to
check. A particular combination of (register selection, type of check)
triggers the creation in the backend of a function to handle the check
for specifically that pair. The resulting functions are deduplicated by
the linker. The pseudo-instruction (really the function) is specified
to preserve all registers except for the registers that the AAPCS
specifies may be clobbered by a call.

To measure the code size and performance effect of this change, I
took a number of measurements using Chromium for Android on aarch64,
comparing a browser with inlined checks (the baseline) against a
browser with outlined checks.

Code size: Size of .text decreases from 243897420 to 171619972 bytes,
or a 30% decrease.

Performance: Using Chromium's blink_perf.layout microbenchmarks I
measured a median performance regression of 6.24%.

The fact that a perf/size tradeoff is evident here suggests that
we might want to make the new behaviour conditional on -Os/-Oz.
But for now I've enabled it unconditionally, my reasoning being that
hwasan users typically expect a relatively large perf hit, and ~6%
isn't really adding much. We may want to revisit this decision in
the future, though.

I also tried experimenting with varying the number of registers
selectable by the hwasan check pseudo-instruction (which would result
in fewer variants being created), on the hypothesis that creating
fewer variants of the function would expose another perf/size tradeoff
by reducing icache pressure from the check functions at the cost of
register pressure. Although I did observe a code size increase with
fewer registers, I did not observe a strong correlation between the
number of registers and the performance of the resulting browser on the
microbenchmarks, so I conclude that we might as well use ~all registers
to get the maximum code size improvement. My results are below:

Regs | .text size | Perf hit
-----+------------+---------
~all | 171619972  | 6.24%
  16 | 171765192  | 7.03%
   8 | 172917788  | 5.82%
   4 | 177054016  | 6.89%

Differential Revision: https://reviews.llvm.org/D56954

llvm-svn: 351920
2019-01-23 02:20:10 +00:00
Peter Collingbourne
aa6dcf0b5b gn build: Merge r351820.
llvm-svn: 351919
2019-01-23 02:19:56 +00:00
Nico Weber
08e216f48b gn build: Merge r351880
llvm-svn: 351918
2019-01-23 02:10:10 +00:00
Rui Ueyama
8b03b383c5 MemoryBlock: Do not automatically extend a given size to a multiple of page size.
Previously, MemoryBlock automatically extended a requested buffer size to a
multiple of the page size because (I believe) doing so was thought to be
harmless and, with that, you could get more memory (on average 2KiB on
4KiB-page systems) "for free".

That programming interface turned out to be error-prone. If you request N
bytes, you usually expect that a resulting object returns N for `size()`.
That's not the case for MemoryBlock.

It looks like there is only one place where we take advantage of
allocating more memory than the requested size. So, with this patch, I
simply removed the automatic size expansion feature from MemoryBlock
and do it on the caller side when needed. MemoryBlock now always
returns a buffer whose size is equal to the requested size.
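
A hypothetical caller-side sketch of the rounding that now happens where it is needed (helper name invented):

```cpp
#include <cstddef>

// Round a requested size up to a multiple of the page size.
static size_t roundUpToPageSize(size_t NumBytes, size_t PageSize) {
  return (NumBytes + PageSize - 1) / PageSize * PageSize;
}
```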

Differential Revision: https://reviews.llvm.org/D56941

llvm-svn: 351916
2019-01-23 02:03:26 +00:00
Jordan Rupprecht
2d5bdba006 [llvm-objcopy] Remove os-dependent message from test
llvm-svn: 351914
2019-01-23 01:42:02 +00:00
Josh Stone
6f8b9c5423 [CodeView] Allow empty types in member functions
Summary:
`CodeViewDebug::lowerTypeMemberFunction` used to default to a `Void`
return type if the function's type array was empty. After D54667, it
started blindly indexing the 0th item for the return type, which fails
in `getOperand` for empty arrays if assertions are enabled.

This patch restores the `Void` return type for empty type arrays, and
adds a test generated by Rust in line-only debuginfo mode.

Reviewers: zturner, rnk

Reviewed By: rnk

Subscribers: hiraditya, JDevlieghere, llvm-commits

Differential Revision: https://reviews.llvm.org/D57070

llvm-svn: 351910
2019-01-23 00:53:22 +00:00