
[llvm] NFC: fix trivial typos in documents

Reviewers: hans, Jim

Reviewed By: Jim

Subscribers: jvesely, nhaehnle, mgorny, arphaman, bmahjour, kerbowa, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D73017
Authored by Kazuaki Ishizaki on 2020-01-22 11:30:57 +08:00; committed by Jim Lin
parent d68dfedd72
commit 09203c3fb4
48 changed files with 84 additions and 84 deletions

@@ -22,8 +22,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -22,8 +22,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -22,8 +22,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -22,8 +22,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -24,8 +24,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -24,8 +24,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -24,8 +24,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -24,8 +24,8 @@ Notation
 Notation used in this document is explained :ref:`here<amdgpu_syn_instruction_notation>`.
-Overvew
-=======
+Overview
+========
 An overview of generic syntax and other features of AMDGPU instructions may be found :ref:`in this document<amdgpu_syn_instructions>`.

@@ -562,7 +562,7 @@ various reasons, it is not practical to emit the instructions inline.
 There's two typical examples of this.
-Some CPUs support multiple instruction sets which can be swiched back and forth
+Some CPUs support multiple instruction sets which can be switched back and forth
 on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which
 has a smaller instruction encoding than the usual MIPS32 ISA. ARM, similarly,
 has the Thumb ISA. In MIPS16 and earlier versions of Thumb, the atomic

@@ -12,7 +12,7 @@ Generating code for big endian ARM processors is for the most part straightforwa
 The aim of this document is to explain the problem with NEON loads and stores, and the solution that has been implemented in LLVM.
-In this document the term "vector" refers to what the ARM ABI calls a "short vector", which is a sequence of items that can fit in a NEON register. This sequence can be 64 or 128 bits in length, and can constitute 8, 16, 32 or 64 bit items. This document refers to A64 instructions throughout, but is almost applicable to the A32/ARMv7 instruction sets also. The ABI format for passing vectors in A32 is sligtly different to A64. Apart from that, the same concepts apply.
+In this document the term "vector" refers to what the ARM ABI calls a "short vector", which is a sequence of items that can fit in a NEON register. This sequence can be 64 or 128 bits in length, and can constitute 8, 16, 32 or 64 bit items. This document refers to A64 instructions throughout, but is almost applicable to the A32/ARMv7 instruction sets also. The ABI format for passing vectors in A32 is slightly different to A64. Apart from that, the same concepts apply.
 Example: C-level intrinsics -> assembly
 ---------------------------------------

@@ -82,7 +82,7 @@ by a cut edge should equal ``UINT64_MAX``. In other words, mass is conserved
 as it "falls" through the DAG.
 If a function's basic block graph is a DAG, then block masses are valid block
-frequencies. This works poorly in practise though, since downstream users rely
+frequencies. This works poorly in practice though, since downstream users rely
 on adding block frequencies together without hitting the maximum.
 Loop Scale

@@ -121,7 +121,7 @@ non-obvious ways. Here are some hints and tips:
 miscompilation. Programs should be temporarily modified to disable outputs
 that are likely to vary from run to run.
-* In the `crash debugger`_, ``bugpoint`` does not distiguish different crashes
+* In the `crash debugger`_, ``bugpoint`` does not distinguish different crashes
 during reduction. Thus, if new crash or miscompilation happens, ``bugpoint``
 will continue with the new crash instead. If you would like to stick to
 particular crash, you should write check scripts to validate the error

@@ -333,7 +333,7 @@ When defining a CMake command handling arguments is very useful. The examples
 in this section will all use the CMake ``function`` block, but this all applies
 to the ``macro`` block as well.
-CMake commands can have named arguments that are requried at every call site. In
+CMake commands can have named arguments that are required at every call site. In
 addition, all commands will implicitly accept a variable number of extra
 arguments (In C parlance, all commands are varargs functions). When a command is
 invoked with extra arguments (beyond the named ones) CMake will store the full

@@ -1272,7 +1272,7 @@ compatible with a given physical, this code can be used:
 Sometimes, mostly for debugging purposes, it is useful to change the number of
 physical registers available in the target architecture. This must be done
-statically, inside the ``TargetRegsterInfo.td`` file. Just ``grep`` for
+statically, inside the ``TargetRegisterInfo.td`` file. Just ``grep`` for
 ``RegisterClass``, the last parameter of which is a list of registers. Just
 commenting some out is one simple way to avoid them being used. A more polite
 way is to explicitly exclude some registers from the *allocation order*. See the
@@ -2418,7 +2418,7 @@ to spill registers r3-r10. This allows callees blind to the call signature,
 such as thunks and vararg functions, enough space to cache the argument
 registers. Therefore, the parameter area is minimally 32 bytes (64 bytes in 64
 bit mode.) Also note that since the parameter area is a fixed offset from the
-top of the frame, that a callee can access its spilt arguments using fixed
+top of the frame, that a callee can access its split arguments using fixed
 offsets from the stack pointer (or base pointer.)
 Combining the information about the linkage, parameter areas and alignment. A

@@ -647,7 +647,7 @@ Beware of non-deterministic sorting order of equal elements
 ``std::sort`` uses a non-stable sorting algorithm in which the order of equal
 elements is not guaranteed to be preserved. Thus using ``std::sort`` for a
-container having equal elements may result in non-determinstic behavior.
+container having equal elements may result in non-deterministic behavior.
 To uncover such instances of non-determinism, LLVM has introduced a new
 llvm::sort wrapper function. For an EXPENSIVE_CHECKS build this will randomly
 shuffle the container before sorting. Default to using ``llvm::sort`` instead
@@ -1206,7 +1206,7 @@ Don't evaluate ``end()`` every time through a loop
 In cases where range-based ``for`` loops can't be used and it is necessary
 to write an explicit iterator-based loop, pay close attention to whether
-``end()`` is re-evaluted on each loop iteration. One common mistake is to
+``end()`` is re-evaluated on each loop iteration. One common mistake is to
 write a loop in this style:
 .. code-block:: c++

@@ -176,7 +176,7 @@ SELECTION OPTIONS
 "shards", and run only one of them. Must be used with the
 ``--run-shard=N`` option, which selects the shard to run. The environment
 variable ``LIT_NUM_SHARDS`` can also be used in place of this
-option. These two options provide a coarse mechanism for paritioning large
+option. These two options provide a coarse mechanism for partitioning large
 testsuites, for parallel execution on separate machines (say in a large
 testing farm).

@@ -98,7 +98,7 @@ OPTIONS
 .. option:: -gen-dag-isel
-Generate a DAG (Directed Acycle Graph) instruction selector.
+Generate a DAG (Directed Acyclic Graph) instruction selector.
 .. option:: -gen-asm-matcher

@@ -512,7 +512,7 @@ LLVM to make it generate good GPU code. Among these changes are:
 * `Memory space inference
 <http://llvm.org/doxygen/NVPTXInferAddressSpaces_8cpp_source.html>`_ --
-In PTX, we can operate on pointers that are in a paricular "address space"
+In PTX, we can operate on pointers that are in a particular "address space"
 (global, shared, constant, or local), or we can operate on pointers in the
 "generic" address space, which can point to anything. Operations in a
 non-generic address space are faster, but pointers in CUDA are not explicitly
@@ -528,7 +528,7 @@ LLVM to make it generate good GPU code. Among these changes are:
 which fit in 32-bits at runtime. This optimization provides a fast path for
 this common case.
-* Aggressive loop unrooling and function inlining -- Loop unrolling and
+* Aggressive loop unrolling and function inlining -- Loop unrolling and
 function inlining need to be more aggressive for GPUs than for CPUs because
 control flow transfer in GPU is more expensive. More aggressive unrolling and
 inlining also promote other optimizations, such as constant propagation and

@@ -12,7 +12,7 @@ Introduction
 ============
 LLVM's code coverage mapping format is used to provide code coverage
-analysis using LLVM's and Clang's instrumenation based profiling
+analysis using LLVM's and Clang's instrumentation based profiling
 (Clang's ``-fprofile-instr-generate`` option).
 This document is aimed at those who use LLVM's code coverage mapping to provide

@@ -131,7 +131,7 @@ graph described in [1]_ in the following ways:
 1. The graph nodes in the paper represent three main program components, namely *assignment statements*, *for loop headers* and *while loop headers*. In this implementation, DDG nodes naturally represent LLVM IR instructions. An assignment statement in this implementation typically involves a node representing the ``store`` instruction along with a number of individual nodes computing the right-hand-side of the assignment that connect to the ``store`` node via a def-use edge. The loop header instructions are not represented as special nodes in this implementation because they have limited uses and can be easily identified, for example, through ``LoopAnalysis``.
 2. The paper describes five types of dependency edges between nodes namely *loop dependency*, *flow-*, *anti-*, *output-*, and *input-* dependencies. In this implementation *memory* edges represent the *flow-*, *anti-*, *output-*, and *input-* dependencies. However, *loop dependencies* are not made explicit, because they mainly represent association between a loop structure and the program elements inside the loop and this association is fairly obvious in LLVM IR itself.
-3. The paper describes two types of pi-blocks; *recurrences* whose bodies are SCCs and *IN* nodes whose bodies are not part of any SCC. In this impelmentation, pi-blocks are only created for *recurrences*. *IN* nodes remain as simple DDG nodes in the graph.
+3. The paper describes two types of pi-blocks; *recurrences* whose bodies are SCCs and *IN* nodes whose bodies are not part of any SCC. In this implementation, pi-blocks are only created for *recurrences*. *IN* nodes remain as simple DDG nodes in the graph.
 References

@@ -384,8 +384,8 @@ after they are committed, depending on the nature of the change). You are
 encouraged to review other peoples' patches as well, but you aren't required
 to do so.
-Current Contributors - Transfering from SVN
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+Current Contributors - Transferring from SVN
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 If you had commit access to SVN and would like to request commit access to
 GitHub, please email `llvm-admin <mailto:llvm-admin@lists.llvm.org>`_ with your
 SVN username and GitHub username.
@@ -744,7 +744,7 @@ OpenMP, etc), Polly, and all other subprojects. There are a few exceptions:
 is used by LLVM.
 * Some subprojects are impractical or uninteresting to relicense (e.g. llvm-gcc
 and dragonegg). These will be split off from the LLVM project (e.g. to
-separate Github projects), allowing interested people to continue their
+separate GitHub projects), allowing interested people to continue their
 development elsewhere.
 To relicense LLVM, we will be seeking approval from all of the copyright holders
@@ -875,7 +875,7 @@ holds though)::
 Q2: If at any time after my contribution, I am able to license other patent
 claims that would have been subject to Apache's Grant of Patent License if
-they were licenseable by me at the time of my contribution, do those other
+they were licensable by me at the time of my contribution, do those other
 claims become subject to the Grant of Patent License?
 A2: Yes.

@@ -213,7 +213,7 @@ ELF-Dependent
 ^^^^^^^^^^^^^^^^^^^^^^
 In order to support creating multiple sections with the same name and comdat,
-it is possible to add an unique number at the end of the ``.seciton`` directive.
+it is possible to add an unique number at the end of the ``.section`` directive.
 For example, the following code creates two sections named ``.text``.
 .. code-block:: gas

@@ -255,7 +255,7 @@ couple specific suggestions:
 #. For languages with numerous rarely executed guard conditions (e.g. null
 checks, type checks, range checks) consider adding an extra execution or
-two of LoopUnswith and LICM to your pass order. The standard pass order,
+two of LoopUnswitch and LICM to your pass order. The standard pass order,
 which is tuned for C and C++ applications, may not be sufficient to remove
 all dischargeable checks from loops.

@@ -106,7 +106,7 @@ llvm-opt-fuzzer
 A |LLVM IR fuzzer| aimed at finding bugs in optimization passes.
-It receives optimzation pipeline and runs it for each fuzzer input.
+It receives optimization pipeline and runs it for each fuzzer input.
 Interface of this fuzzer almost directly mirrors ``llvm-isel-fuzzer``. Both
 ``mtriple`` and ``passes`` arguments are required. Passes are specified in a

@@ -599,7 +599,7 @@ used by people developing LLVM.
 | | overridden with ``LLVM_DYLIB_COMPONENTS``. The |
 | | default contains most of LLVM and is defined in |
 | | ``tools/llvm-shlib/CMakelists.txt``. This option is|
-| | not avialable on Windows. |
+| | not available on Windows. |
 +-------------------------+----------------------------------------------------+
 | LLVM_OPTIMIZED_TABLEGEN | Builds a release tablegen that gets used during |
 | | the LLVM build. This can dramatically speed up |

@@ -633,7 +633,7 @@ G_INTRINSIC, G_INTRINSIC_W_SIDE_EFFECTS
 Call an intrinsic
 The _W_SIDE_EFFECTS version is considered to have unknown side-effects and
-as such cannot be reordered acrosss other side-effecting instructions.
+as such cannot be reordered across other side-effecting instructions.
 .. note::

@@ -55,7 +55,7 @@ Allocator Support
 GWP-ASan is not a replacement for a traditional allocator. Instead, it works by
 inserting stubs into a supporting allocator to redirect allocations to GWP-ASan
 when they're chosen to be sampled. These stubs are generally implemented in the
-implementaion of ``malloc()``, ``free()`` and ``realloc()``. The stubs are
+implementation of ``malloc()``, ``free()`` and ``realloc()``. The stubs are
 extremely small, which makes using GWP-ASan in most allocators fairly trivial.
 The stubs follow the same general pattern (example ``malloc()`` pseudocode
 below):

@@ -22,7 +22,7 @@ on the ARMv6 and ARMv7 architectures and may be inapplicable to older chips.
 Pandaboard, have become hard-float platforms. There are a number of
 choices when using CMake. Autoconf usage is deprecated as of 3.8.
-Building LLVM/Clang in ``Relese`` mode is preferred since it consumes
+Building LLVM/Clang in ``Release`` mode is preferred since it consumes
 a lot less memory. Otherwise, the building process will very likely
 fail due to insufficient memory. It's also a lot quicker to only build
 the relevant back-ends (ARM and AArch64), since it's very unlikely that
@@ -42,7 +42,7 @@ on the ARMv6 and ARMv7 architectures and may be inapplicable to older chips.
 Use Ninja instead of Make: "-G Ninja"
 Build with assertions on: "-DLLVM_ENABLE_ASSERTIONS=True"
 Force Python2: "-DPYTHON_EXECUTABLE=/usr/bin/python2"
-Local (non-sudo) install path: "-DCMAKE_INSTALL_PREFIX=$HOME/llvm/instal"
+Local (non-sudo) install path: "-DCMAKE_INSTALL_PREFIX=$HOME/llvm/install"
 CPU flags: "DCMAKE_C_FLAGS=-mcpu=cortex-a15" (same for CXX_FLAGS)
 After that, just typing ``make -jN`` or ``ninja`` will build everything.

@@ -43,7 +43,7 @@ compiler-rt must be placed in the runtimes directory.
 ``qemu-arm`` should be available as a package for your Linux distribution.
-The most complicated of the prequisites to satisfy is the arm-linux-gnueabihf
+The most complicated of the prerequisites to satisfy is the arm-linux-gnueabihf
 sysroot. In theory it is possible to use the Linux distributions multiarch
 support to fulfill the dependencies for building but unfortunately due to
 /usr/local/include being added some host includes are selected. The easiest way

@@ -4866,9 +4866,9 @@ refines this address to produce a concrete location for the source variable.
 A ``llvm.dbg.value`` intrinsic describes the direct value of a source variable.
 The first operand of the intrinsic may be a direct or indirect value. A
-DIExpresion attached to the intrinsic refines the first operand to produce a
+DIExpression attached to the intrinsic refines the first operand to produce a
 direct value. For example, if the first operand is an indirect value, it may be
-necessary to insert ``DW_OP_deref`` into the DIExpresion in order to produce a
+necessary to insert ``DW_OP_deref`` into the DIExpression in order to produce a
 valid debug intrinsic.
 .. note::
@@ -6349,7 +6349,7 @@ The list is encoded in the IR using named metadata with the name
 ``!llvm.dependent-libraries``. Each operand is expected to be a metadata node
 which should contain a single string operand.
-For example, the following metadata section contains two library specfiers::
+For example, the following metadata section contains two library specifiers::
 !0 = !{!"a library specifier"}
 !1 = !{!"another library specifier"}
@@ -15138,7 +15138,7 @@ This is an overloaded intrinsic. Several values of integer, floating point or po
 Overview:
 """""""""
-Reads a number of scalar values sequentially from memory location provided in '``ptr``' and spreads them in a vector. The '``mask``' holds a bit for each vector lane. The number of elements read from memory is equal to the number of '1' bits in the mask. The loaded elements are positioned in the destination vector according to the sequence of '1' and '0' bits in the mask. E.g., if the mask vector is '10010001', "explandload" reads 3 values from memory addresses ptr, ptr+1, ptr+2 and places them in lanes 0, 3 and 7 accordingly. The masked-off lanes are filled by elements from the corresponding lanes of the '``passthru``' operand.
+Reads a number of scalar values sequentially from memory location provided in '``ptr``' and spreads them in a vector. The '``mask``' holds a bit for each vector lane. The number of elements read from memory is equal to the number of '1' bits in the mask. The loaded elements are positioned in the destination vector according to the sequence of '1' and '0' bits in the mask. E.g., if the mask vector is '10010001', "expandload" reads 3 values from memory addresses ptr, ptr+1, ptr+2 and places them in lanes 0, 3 and 7 accordingly. The masked-off lanes are filled by elements from the corresponding lanes of the '``passthru``' operand.
 Arguments:

@@ -287,7 +287,7 @@ The most important command line options are:
 that trigger new code coverage will be merged into the first corpus
 directory. Defaults to 0. This flag can be used to minimize a corpus.
 ``-merge_control_file``
-Specify a control file used for the merge proccess.
+Specify a control file used for the merge process.
 If a merge process gets killed it tries to leave this file in a state
 suitable for resuming the merge. By default a temporary file will be used.
 ``-minimize_crash``
@@ -515,7 +515,7 @@ and extra run-time flag ``-use_value_profile=1`` the fuzzer will
 collect value profiles for the parameters of compare instructions
 and treat some new values as new coverage.
-The current imlpementation does roughly the following:
+The current implementation does roughly the following:
 * The compiler instruments all CMP instructions with a callback that receives both CMP arguments.
 * The callback computes `(caller_pc&4095) | (popcnt(Arg1 ^ Arg2) << 12)` and uses this value to set a bit in a bitset.

@@ -41,7 +41,7 @@ Instruction Annotations
 Contextual markups
 ------------------
-Annoated assembly display will supply contextual markup to help clients more
+Annotated assembly display will supply contextual markup to help clients more
 efficiently implement things like pretty printers. Most markup will be target
 independent, so clients can effectively provide good display without any target
 specific knowledge.

@@ -37,7 +37,7 @@ Implementation
 ==============
 See `HardwareAssistedAddressSanitizer`_ for a general overview of a
-tag-based approach to memory safety. MemTagSanitizer followes a
+tag-based approach to memory safety. MemTagSanitizer follows a
 similar implementation strategy, but with the tag storage (shadow)
 provided by the hardware.

@@ -124,7 +124,7 @@ module ``M`` loaded on a ThreadSafeContext ``Ctx``:
 // Call into JIT'd code.
 Entry();
-The builder clasess provide a number of configuration options that can be
+The builder classes provide a number of configuration options that can be
 specified before the JIT instance is constructed. For example:
 .. code-block:: c++
@@ -483,7 +483,7 @@ to be aware of:
 references are resolved, and symbol resolvers are no longer used. See the
 section `Design Overview`_ for more details.
-Unless multiple JITDylibs are needed to model linkage relationsips, ORCv1
+Unless multiple JITDylibs are needed to model linkage relationships, ORCv1
 clients should place all code in the main JITDylib (returned by
 ``ExecutionSession::getMainJITDylib()``). MCJIT clients should use LLJIT
 (see `LLJIT and LLLazyJIT`_).

@@ -2996,7 +2996,7 @@ proper operation in multithreaded mode.
 Note that, on Unix-like platforms, LLVM requires the presence of GCC's atomic
 intrinsics in order to support threaded operation. If you need a
-multhreading-capable LLVM on a platform without a suitably modern system
+multithreading-capable LLVM on a platform without a suitably modern system
 compiler, consider compiling LLVM and LLVM-GCC in single-threaded mode, and
 using the resultant compiler to build a copy of LLVM with multithreading
 support.
@@ -3307,7 +3307,7 @@ place the ``vptr`` in the first word of the instances.)
 .. _polymorphism:
-Designing Type Hiercharies and Polymorphic Interfaces
+Designing Type Hierarchies and Polymorphic Interfaces
 -----------------------------------------------------
 There are two different design patterns that tend to result in the use of
@@ -3351,7 +3351,7 @@ by Sean Parent in several of his talks and papers:
 describing this technique in more detail.
 #. `Sean Parent's Papers and Presentations
 <http://github.com/sean-parent/sean-parent.github.com/wiki/Papers-and-Presentations>`_
-- A Github project full of links to slides, video, and sometimes code.
+- A GitHub project full of links to slides, video, and sometimes code.
 When deciding between creating a type hierarchy (with either tagged or virtual
 dispatch) and using templates or concepts-based polymorphism, consider whether
@@ -3400,7 +3400,7 @@ The Core LLVM Class Hierarchy Reference
 header source: `Type.h <http://llvm.org/doxygen/Type_8h_source.html>`_
-doxygen info: `Type Clases <http://llvm.org/doxygen/classllvm_1_1Type.html>`_
+doxygen info: `Type Classes <http://llvm.org/doxygen/classllvm_1_1Type.html>`_
 The Core LLVM classes are the primary means of representing the program being
 inspected or transformed. The core LLVM classes are defined in header files in

@@ -202,14 +202,14 @@ Step #4 : Post Move
 14. Update links on the LLVM website pointing to viewvc/klaus/phab etc. to
 point to GitHub instead.
-Github Repository Description
+GitHub Repository Description
 =============================
 Monorepo
 ----------------
 The LLVM git repository hosted at https://github.com/llvm/llvm-project contains all
-sub-projects in a single source tree. It is often refered to as a monorepo and
+sub-projects in a single source tree. It is often referred to as a monorepo and
 mimics an export of the current SVN repository, with each sub-project having its
 own top-level directory. Not all sub-projects are used for building toolchains.
 For example, www/ and test-suite/ are not part of the monorepo.
@@ -281,7 +281,7 @@ Monorepo Drawbacks
 1GB for the monorepo), and the commit rate of LLVM may cause more frequent
 `git push` collisions when upstreaming. Affected contributors may be able to
 use the SVN bridge or the single-subproject Git mirrors. However, it's
-undecided if these projects will continue to be mantained.
+undecided if these projects will continue to be maintained.
 * Using the monolithic repository may add overhead for those *integrating* a
 standalone sub-project, even if they aren't contributing to it, due to the
 same disk space concern as the point above. The availability of the
@@ -356,7 +356,7 @@ Before you push, you'll need to fetch and rebase (`git pull --rebase`) as
 usual.
 Note that when you fetch you'll likely pull in changes to sub-projects you don't
-care about. If you are using spasre checkout, the files from other projects
+care about. If you are using sparse checkout, the files from other projects
 won't appear on your disk. The only effect is that your commit hash changes.
 You can check whether the changes in the last fetch are relevant to your commit
@@ -657,7 +657,7 @@ done for each branch. Ref paths will need to be updated to map the
 local branch to the corresponding upstream branch. If local branches
 have no corresponding upstream branch, then the creation of
 ``local/octopus/<local branch>`` need not use ``git-merge-base`` to
-pinpont its root commit; it may simply be branched from the
+pinpoint its root commit; it may simply be branched from the
 appropriate component branch (say, ``llvm/local_release_X``).
 Zipping local history
@@ -812,7 +812,7 @@ The tool handles nested submodules (e.g. llvm is a submodule in
 umbrella and clang is a submodule in llvm). The file
 ``submodule-map.txt`` is a list of pairs, one per line. The first
 pair item describes the path to a submodule in the umbrella
-repository. The second pair item secribes the path where trees for
+repository. The second pair item describes the path where trees for
 that submodule should be written in the zipped history.
 Let's say your umbrella repository is actually the llvm repository and
@@ -945,7 +945,7 @@ getting them into the monorepo. A recipe follows::
 --tag-prefix="myrepo-"
 )
-# Preserve release braches.
+# Preserve release branches.
 for ref in $(git -C my-monorepo for-each-ref --format="%(refname)" \
 refs/remotes/myrepo/release); do
 branch=${ref#refs/remotes/myrepo/}

@@ -1,5 +1,5 @@
 =====================
-Test-Suite Extentions
+Test-Suite Extensions
 =====================
 .. contents::
@@ -191,7 +191,7 @@ CORAL-2 Benchmarks
 ------------------
 https://asc.llnl.gov/coral-2-benchmarks/
-Many of its programs have already been integreated in
+Many of its programs have already been integrated in
 MultiSource/Benchmarks/DOE-ProxyApps-C and
 MultiSource/Benchmarks/DOE-ProxyApps-C++.

@@ -153,7 +153,7 @@ TargetRegisterInfo tri
 In some cases renaming acronyms to the full type name will result in overly
 verbose code. Unlike most classes, a variable's scope is limited and therefore
 some of its purpose can implied from that scope, meaning that fewer words are
-necessary to give it a clear name. For example, in an optization pass the reader
+necessary to give it a clear name. For example, in an optimization pass the reader
 can assume that a variable's purpose relates to optimization and therefore an
 ``OptimizationRemarkEmitter`` variable could be given the name ``remarkEmitter``
 or even ``remarker``.

@@ -204,7 +204,7 @@ and that will be the official binary.
 * Rename (or link) ``clang+llvm-REL-ARCH-ENV`` to the .install directory
-* Tar that into the same name with ``.tar.gz`` extensioan from outside the
+* Tar that into the same name with ``.tar.gz`` extension from outside the
 directory
 * Make it available for the release manager to download

@@ -102,7 +102,7 @@ appropriateness of our response, but we don't guarantee we'll act on it.
 After any incident, the advisory committee will make a report on the situation
 to the LLVM Foundation board. The board may choose to make a public statement
 about the incident. If that's the case, the identities of anyone involved will
-remain confidential unless instructed by those inviduals otherwise.
+remain confidential unless instructed by those individuals otherwise.
 Appealing
 =========
@@ -114,7 +114,7 @@ the case.
 In general, it is **not** appropriate to appeal a particular decision on
 a public mailing list. Doing so would involve disclosure of information which
-whould be confidential. Disclosing this kind of information publicly may be
+would be confidential. Disclosing this kind of information publicly may be
 considered a separate and (potentially) more serious violation of the Code of
 Conduct. This is not meant to limit discussion of the Code of Conduct, the
 advisory board itself, or the appropriateness of responses in general, but

@@ -1006,13 +1006,13 @@ Given a class declaration with copy constructor declared as deleted:
 foo(const foo&) = deleted;
 };
-A C++ frontend would generate follwing:
+A C++ frontend would generate following:
 .. code-block:: text
 !17 = !DISubprogram(name: "foo", scope: !11, file: !1, line: 5, type: !18, scopeLine: 5, flags: DIFlagPublic | DIFlagPrototyped, spFlags: DISPFlagDeleted)
-and this will produce an additional DWARF attibute as:
+and this will produce an additional DWARF attribute as:
 .. code-block:: text
@@ -2107,7 +2107,7 @@ The most straightforward way to use ``debugify`` is as follows::
 This will inject synthetic DI to ``sample.ll`` run the ``pass-to-test``
 and then check for missing DI.
-Some other ways to run debugify are avaliable:
+Some other ways to run debugify are available:
 .. code-block:: bash

@@ -142,7 +142,7 @@ values can be used in the class body.
 A given class can only be defined once. A ``class`` declaration is
 considered to define the class if any of the following is true:
-.. break ObjectBody into its consituents so that they are present here?
+.. break ObjectBody into its constituents so that they are present here?
 #. The :token:`TemplateArgList` is present.
 #. The :token:`Body` in the :token:`ObjectBody` is present and is not empty.

@@ -343,7 +343,7 @@ all of the aforementioned output loops.
 It is recommended to add ``llvm.loop.disable_nonforced`` to
 ``llvm.loop.distribute.followup_fallback``. This avoids that the
-fallback version (which is likely never executed) is further optimzed
+fallback version (which is likely never executed) is further optimized
 which would increase the code size.
 Versioning LICM

@@ -28,7 +28,7 @@ Each trace file corresponds to a sequence of events in a particular thread.
 The file has a header followed by a sequence of discriminated record types.
-The endianness of byte fields matches the endianess of the platform which
+The endianness of byte fields matches the endianness of the platform which
 produced the trace file.

@@ -730,7 +730,7 @@ The YAML syntax supports tags as a way to specify the type of a node before
 it is parsed. This allows dynamic types of nodes. But the YAML I/O model uses
 static typing, so there are limits to how you can use tags with the YAML I/O
 model. Recently, we added support to YAML I/O for checking/setting the optional
-tag on a map. Using this functionality it is even possbile to support different
+tag on a map. Using this functionality it is even possible to support different
 mappings, as long as they are convertible.
 To check a tag, inside your mapping() method you can use io.mapTag() to specify

@@ -174,7 +174,7 @@ The ConcurrentIRCompiler utility will use the JITTargetMachineBuilder to build
 llvm TargetMachines (which are not thread safe) as needed for compiles. After
 this, we initialize our supporting members: ``DL``, ``Mangler`` and ``Ctx`` with
 the input DataLayout, the ExecutionSession and DL member, and a new default
-constucted LLVMContext respectively. Now that our members have been initialized,
+constructed LLVMContext respectively. Now that our members have been initialized,
 so the one thing that remains to do is to tweak the configuration of the
 *JITDylib* that we will store our code in. We want to modify this dylib to
 contain not only the symbols that we add to it, but also the symbols from our
@@ -204,7 +204,7 @@ REPL process as well. We do this by attaching a
 Next we have a named constructor, ``Create``, which will build a KaleidoscopeJIT
 instance that is configured to generate code for our host process. It does this
-by first generating a JITTargetMachineBuilder instance using that clases's
+by first generating a JITTargetMachineBuilder instance using that classes'
 detectHost method and then using that instance to generate a datalayout for
 the target process. Each of these operations can fail, so each returns its
 result wrapped in an Expected value [3]_ that we must check for error before
@@ -320,4 +320,4 @@ Here is the code:
 +-----------------------------+-----------------------------------------------+
 .. [3] See the ErrorHandling section in the LLVM Programmer's Manual
-(http://llvm.org/docs/ProgrammersManual.html#error-handling)
+(http://llvm.org/docs/ProgrammersManual.html#error-handling)

@@ -170,7 +170,7 @@ can be implemented.
 TransformFunction Transform;
 };
-// From IRTransfomrLayer.cpp:
+// From IRTransformLayer.cpp:
 IRTransformLayer::IRTransformLayer(ExecutionSession &ES,
 IRLayer &BaseLayer,

@@ -59,7 +59,7 @@ parser, which will be used to report errors found during code generation
 let double_type = double_type context
 The static variables will be used during code generation.
-``Codgen.the_module`` is the LLVM construct that contains all of the
+``Codegen.the_module`` is the LLVM construct that contains all of the
 functions and global variables in a chunk of code. In many ways, it is
 the top-level structure that the LLVM IR uses to contain code.
@@ -78,7 +78,7 @@ function body.
 With these basics in place, we can start talking about how to generate
 code for each expression. Note that this assumes that the
-``Codgen.builder`` has been set up to generate code *into* something.
+``Codegen.builder`` has been set up to generate code *into* something.
 For now, we'll assume that this has already been done, and we'll just
 use it to emit code.