Commit Graph

194409 Commits

Author SHA1 Message Date
Sanjay Patel
1e300cdd8c [ValueTracking] add tests for smin/smax; NFC 2020-04-04 13:44:06 -04:00
Sanjay Patel
024f788a82 [InstCombine] add more tests for min/max folding; NFC 2020-04-04 13:44:06 -04:00
Florian Hahn
16f85870fd [LV] Simplify tryToWiden as recipes are not re-used (NFC).
After 49d00824bbbb, VPWidenRecipe only stores a single instruction.
tryToWiden can simply return the widen recipe, like other helpers in
VPRecipeBuilder.
2020-04-04 18:30:50 +01:00
Heejin Ahn
b34315963f [WebAssembly] Fix a sanitizer error in WasmEHPrepare
Summary:
D77423 started using a dominator tree in WasmEHPrepare, but we deleted
BBs in `prepareThrows` before we used the domtree in `prepareEHPads`,
and those CFG changes were not reflected in the domtree. This uses
`DomTreeUpdater` to make sure we update the domtree every time we delete
BBs from the CFG. This fixes ubsan/msan/expensive_check errors caught in
LLVM buildbots.
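
Roughly, the pattern looks like this (a sketch under assumptions, not the exact code in this change): funnel CFG deletions through a lazy `DomTreeUpdater` so the dominator tree is brought up to date before the next query.

```
#include "llvm/Analysis/DomTreeUpdater.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Dominators.h"

using namespace llvm;

// Assumes the IR edge Pred -> BB has already been disconnected (e.g. the
// branch in Pred was rewritten), leaving BB with no predecessors.
static void eraseUnreachableBlock(BasicBlock *Pred, BasicBlock *BB,
                                  DominatorTree &DT) {
  DomTreeUpdater DTU(DT, DomTreeUpdater::UpdateStrategy::Lazy);
  // Record the removed edge; the lazy updater applies it before the next
  // domtree query instead of recalculating eagerly.
  DTU.applyUpdates({{DominatorTree::Delete, Pred, BB}});
  // Deletion of the block itself is also deferred until updates are flushed.
  DTU.deleteBB(BB);
}
```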

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77465
2020-04-04 09:57:07 -07:00
Nikita Popov
59ebc3b1d3 [InstCombine] Don't limit uses in eraseInstFromFunction()
eraseInstFromFunction() adds the operands of the erased instruction back
to the worklist, as those might now be dead as well. However, this is
limited to instructions with fewer than 8 operands.

This check doesn't make a lot of sense to me. As the instruction
gets removed afterwards, I don't see a potential for anything
overly pathological happening here (as we can only add those
operands to the worklist once). The impact on CTMark is in
the noise. We also have the same code in instruction sinking
and don't limit the operand count there.
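
A minimal sketch of the general pattern (not InstCombine's actual worklist code): when erasing an instruction, queue all of its instruction operands for another look, with no cap on the operand count.

```
#include "llvm/ADT/SmallVector.h"
#include "llvm/IR/Instruction.h"
#include "llvm/Support/Casting.h"

using namespace llvm;

// Queue every instruction operand of I before erasing it; dropping I's use
// of an operand may leave that operand dead, so it is worth revisiting.
static void eraseAndRequeueOperands(Instruction *I,
                                    SmallVectorImpl<Instruction *> &Worklist) {
  for (Value *Op : I->operands())
    if (auto *OpI = dyn_cast<Instruction>(Op))
      Worklist.push_back(OpI);
  I->eraseFromParent();
}
```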

Differential Revision: https://reviews.llvm.org/D77325
2020-04-04 18:37:30 +02:00
Luofan Chen
981bfd3954 [Attributor] Deduce attributes for non-exact functions
This patch is based on D63312 and D63319. For now we create shallow wrappers for all functions that are IPO amendable.
See also [this github issue](https://github.com/llvm/llvm-project/issues/172).

Reviewed By: jdoerfert

Differential Revision: https://reviews.llvm.org/D76404
2020-04-04 11:34:58 -05:00
Nico Weber
e9d1f14be0 Disable relative paths in lit.site.cfg in presence of symlinks
See https://reviews.llvm.org/D77184#1961208
2020-04-04 12:35:40 -04:00
Heejin Ahn
9f0b3d7141 [WebAssembly] Fix wasm.lsda() optimization in WasmEHPrepare
Summary:
When we insert a call to the personality function wrapper
(`_Unwind_CallPersonality`) for a catch pad, we store some necessary
info in the `__wasm_lpad_context` struct and pass it. One piece of that
info is the LSDA address for the function. For this, we insert a call to
`wasm.lsda()`, which will be lowered down to the address of the LSDA, and
store it in a field in `__wasm_lpad_context`.

There are exceptions to this personality call insertion: catchpads for
`catch (...)` and cleanuppads (for destructors) don't need personality
function calls, because we don't need to figure out whether the current
exception should be caught or not. (They always should.)

There was a little optimization to `wasm.lsda()` call insertion. Because
the LSDA address is the same throughout a function, we don't need to
insert a store of the `wasm.lsda()` return value in every catchpad. For
example:
```
try {
  foo();
} catch (int) {
  // wasm.lsda() call and a store are inserted here, like, in
  // pseudocode,
  // %lsda = wasm.lsda();
  // store %lsda to a field in __wasm_lpad_context
  try {
    foo();
  } catch (int) {
    // We don't need to insert the wasm.lsda() and store again, because
    // to arrive here, we have already stored the LSDA address to
    // __wasm_lpad_context in the outer catch.
  }
}
```
So the previous algorithm checked whether the current catch had a parent
EH pad; if it did, we didn't insert a call to `wasm.lsda()` and its store.

But this was incorrect, because what if the outer catch is `catch (...)`
or a cleanuppad?
```
try {
  foo();
} catch (...) {
  // wasm.lsda() call and a store are NOT inserted here
  try {
    foo();
  } catch (int) {
    // We need wasm.lsda() here!
  }
}
```
In this case we need to insert `wasm.lsda()` in the inner catchpad,
because the outer catchpad does not have one.

To minimize the number of inserted `wasm.lsda()` calls and stores, we
need a way to figure out whether we have encountered `wasm.lsda()` call
in any of EH pads that dominates the current EH pad. To figure that
out, we now visit EH pads in BFS order in the dominator tree so that we
visit parent BBs before visiting their child BBs in the domtree.

We keep a set named `ExecutedLSDA`, which basically means "Do we have
`wasm.lsda()` either in the current EH pad or any of its parent EH
pads in the dominator tree?". This is to prevent scanning the domtree up
to the root in the worst case every time we examine an EH pad: each EH
pad only needs to examine its immediate parent EH pad.

- If any of its parent EH pads in the domtree has `wasm.lsda()`, we don't
  need `wasm.lsda()` in the current EH pad. We also insert the current EH
  pad into the `ExecutedLSDA` set.
- If none of its parent EH pads has `wasm.lsda()`:
  - If the current EH pad is a `catch (...)` or a cleanuppad, we are done.
  - If the current EH pad is neither a `catch (...)` nor a cleanuppad,
    add `wasm.lsda()` and the store to the current EH pad, and add the
    current EH pad to the `ExecutedLSDA` set (see the sketch after this
    list).
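
A minimal structural sketch of this bookkeeping, with hypothetical helpers standing in for the WasmEHPrepare-specific parts (this is not the actual pass code):

```
#include "llvm/ADT/SmallPtrSet.h"
#include "llvm/IR/BasicBlock.h"
#include "llvm/IR/Dominators.h"
#include <queue>

using namespace llvm;

// Hypothetical helpers, assumed for illustration only.
BasicBlock *findNearestDominatingEHPad(DominatorTree &DT, BasicBlock *BB);
bool isCatchAllOrCleanupPad(BasicBlock *BB);
void insertLSDAStore(BasicBlock *BB); // wasm.lsda() call + store

void assignLSDAInfo(DominatorTree &DT) {
  SmallPtrSet<BasicBlock *, 8> ExecutedLSDA;
  std::queue<DomTreeNode *> WL;
  WL.push(DT.getRootNode());
  while (!WL.empty()) { // BFS over the domtree: parents before children
    DomTreeNode *Node = WL.front();
    WL.pop();
    for (DomTreeNode *Child : *Node)
      WL.push(Child);
    BasicBlock *BB = Node->getBlock();
    if (!BB->isEHPad())
      continue;
    // If a dominating EH pad already recorded the LSDA, so have we.
    BasicBlock *ParentPad = findNearestDominatingEHPad(DT, BB);
    if (ParentPad && ExecutedLSDA.count(ParentPad)) {
      ExecutedLSDA.insert(BB);
      continue;
    }
    // catch (...) and cleanup pads never call the personality wrapper.
    if (isCatchAllOrCleanupPad(BB))
      continue;
    insertLSDAStore(BB);
    ExecutedLSDA.insert(BB);
  }
}
```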

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, hiraditya, sunfish, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77423
2020-04-04 07:02:50 -07:00
Simon Pilgrim
e534f63a3b [CostModel][X86] Add shuffle cost tests for sub-128bit vectors 2020-04-04 13:08:25 +01:00
Simon Pilgrim
47a6dd97f5 [CostModel][X86] Add insert/extract cost tests for sub-128bit vXi8/vXi16 vectors 2020-04-04 13:08:25 +01:00
Simon Pilgrim
66f51ec795 [X86][SSE] lowerV8I16Shuffle - lower compaction shuffles using PACKUSDW(PBLENDW,PBLENDW) on SSE41+
Similar to the lowerV16I8Shuffle implementation, for binary compaction v8i16 shuffles we can avoid the PUNPCKLDQ(PSHUFB,PSHUFB) pattern on SSE41+ targets by using PACKUSDW and PBLENDW. Before SSE41 we would need to use PACKSSDW but that requires sign extension that seems to destroy any gains, even on targets without PSHUFB.

This is a bigger gain on AMD than Intel targets but should never be a regression, and avoiding the shuffle mask load(s) is always useful.

Noticed in codegen while dealing with PR31443.
2020-04-04 13:08:25 +01:00
Nikita Popov
8d10f18654 [IRBuilder] Move some code into the cpp file; NFC
Since D73835 we no longer need to define the whole IRBuilder
implementation in the header. This patch moves some of the larger
methods out of line, into the C++ file.

Differential Revision: https://reviews.llvm.org/D77332
2020-04-04 12:52:56 +02:00
Nikita Popov
677df00dcf [VNCoercion] Use IRBuilderBase; NFC
And remove include from header.
2020-04-04 12:44:50 +02:00
vgxbj
0f8b127b0f [Object] object::ELFObjectFile::dynamic_symbol_begin(): skip symbol index 0
Summary:
Note: This revision is very similar to D62296.

In D75756, we need `getDynamicSymbolIterators()` to skip the first NULL symbol in `.dynsym`, and I believe it is worth splitting this out into a separate patch to gather your opinions.

I have checked that the current code base will not be affected by this change.

```
dynamic_symbol_begin()
|- dynamic_symbol_end(): Ok
`- getDynamicSymbolIterators()
   |- addDynamicElfSymbols(): llvm/tools/llvm-objdump/llvm-objdump.cpp, Line 934
   |                          Ok, NULL symbol will be omitted by Line 945-947
   |                          StringRef Name = unwrapOrError(Symbol.getName(), Obj->getName());
   |                          if (Name.empty()) continue;
   |- dumpSymbolNameFromObject(): llvm/tools/llvm-nm/llvm-nm.cpp, Line 1192
   |                          There's no test for dumping dynamic debugging symbol. This patch helps improve llvm-nm behavior. (we should add test for this later)
   `- computeSymbolSizes(): llvm/lib/Object/SymbolSize.cpp, Line 52
      |- OProfileJITEventListener::notifyObjectLoaded(): llvm/lib/ExecutionEngine/OProfileJIT/OProfileJITEventListener.cpp, Line 92
      |                                                  Ok, NULL symbol will be omitted by Line 94-95
      |                                                  if (!Sym.getType() || *Sym.getType() != SF_Function) continue;
      |- IntelJITEventListener::notifyObjectLoaded(): llvm/lib/ExecutionEngine/IntelJITEvents/IntelJITEventListener.cpp, Line 98
      |                                               Ok, NULL symbol will be omitted by Line 124-126 (same as previous one)
      |- PerfJITEventListener::notifyObjectLoaded(): llvm/lib/ExecutionEngine/PerfJITEvents/PerfJITEventListener.cpp, Line 244
      |                                              Ok, NULL symbol will be omitted by Line 254-256, (same as previous one)
      |- SymbolizableObjectFile::create(): llvm/lib/DebugInfo/Symbolize/SymbolizableObjectFile.cpp, Line 73
      |                                    Ok, NULL symbol will be omitted by Line 75
      |                                    res->addSymbol()
      |                                    In addSymbol(), Line 167-168
      |                                    if (!Sec || (Obj && Obj->section_end() == *Sec)) return std::error_code();
      |- dumpCXXData(): llvm/tools/llvm-cxxdump/llvm-cxxdump.cpp, Line 189
      |                 Ok, NULL symbol will be omitted by Line 199-202
      |                 object::section_iterator SecI = *SecIOrErr;
      |                 // Skip external symbols.
      |                 if (SecI == Obj->section_end())
      |                   continue;
      `- printLineInfoForInput(): llvm/tools/llvm-rtdyld/llvm-rtdyld.cpp, Line 418
                                  Ok, NULL symbol will be omitted by Line 430-477
                                  if (Type == object::SymbolRef::ST_Function) {
                                    ...
                                  }
```
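
For illustration (an assumed example, not code from this patch), a caller iterating the dynamic symbol table now starts at the first real entry rather than the mandatory null symbol at index 0:

```
#include "llvm/Object/ELFObjectFile.h"
#include "llvm/Support/Error.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;
using namespace llvm::object;

// With this change, the range below no longer yields the null symbol, so
// callers do not need their own "skip empty name" special case.
static void dumpDynamicSymbolNames(const ELFObjectFileBase &Obj) {
  for (const ELFSymbolRef &Sym : Obj.getDynamicSymbolIterators()) {
    if (Expected<StringRef> Name = Sym.getName())
      outs() << *Name << "\n";
    else
      consumeError(Name.takeError());
  }
}
```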

Reviewers: grimar, jhenderson, MaskRay

Reviewed By: jhenderson, MaskRay

Subscribers: rupprecht, arphaman, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D76081
2020-04-04 18:45:52 +08:00
Nikita Popov
ed72276673 [Reassociate] Use IRBuilderBase; NFC
And remove now unnecessary IRBuilder.h include in header.
2020-04-04 12:34:16 +02:00
Nikita Popov
b87bc671ab [IVDescriptors] Remove IRBuilder.h include; NFC
IVDescriptors.h itself does not reference IRBuilder at all.
Move the include into transformation passes that do.
2020-04-04 12:07:57 +02:00
Nikita Popov
34ec684e36 [IVDescriptors] Remove unnecessary DemandedBits.h include; NFC
Forward declare DemandedBits in IVDescriptors, and move include
into the cpp file. Also drop the include from LoopUtils, which
does not need it at all.
2020-04-04 12:07:57 +02:00
Matt Arsenault
e41d374974 AMDGPU: Fix a few more tests with old denormal subtarget features 2020-04-03 23:42:13 -04:00
Mehdi Amini
27adb21285 Add mention of advantages of arc in the Phabricator doc.
Differential Revision: https://reviews.llvm.org/D76952
2020-04-04 03:22:29 +00:00
Nemanja Ivanovic
5c1da92dff [NFC][PowerPC] Pre-commit a test case for D77448
Pre-committing the new test case so the review shows only the diffs.
2020-04-03 20:43:04 -05:00
Eli Friedman
7ed9262dbb [llvm-stress][opaque pointers] Remove use of deprecated constructor
(See also D76269.)
2020-04-03 18:00:33 -07:00
LLVM GN Syncbot
544e601c6b [gn build] Port 1d42c0db9a2 2020-04-04 00:07:07 +00:00
Craig Topper
e39e44f2a9 Revert "[X86] Add a Pass that builds a Condensed CFG for Load Value Injection (LVI) Gadgets"
This reverts commit c74dd640fd740c6928f66a39c7c15a014af3f66f.

Reverting to address coding standard issues raised in post-commit
review.
2020-04-03 16:56:08 -07:00
Craig Topper
d67dcc4798 Revert "[X86] Add Support for Load Hardening to Mitigate Load Value Injection (LVI)"
This reverts commit 62c42e29ba43c9d79cd5bd2084b641fbff6a96d5

Reverting to address coding standard issues raised in post-commit
review.
2020-04-03 16:55:53 -07:00
Sanjay Patel
2e2fc47f38 [InstCombine] add tests for freelyNegateValue with 'not'; NFC 2020-04-03 17:28:29 -04:00
Nico Weber
4b48a417a9 Fix standalone clang builds after fb80b6b2d58.
When clang is built against a prebuilt LLVM, LLVM_SOURCE_DIR is
empty, which, due to a cmake quirk, caused list lengths to get out
of sync. Add a workaround.
2020-04-03 17:15:09 -04:00
Nick Desaulniers
2c9dae12e1 [test] preformat test with update_llc_test_checks.py NFC
Summary:
Prior to landing D76961, preprocess via:
    $ llvm/utils/update_llc_test_checks.py \
      llvm/test/CodeGen/X86/callbr-asm-outputs.ll

Reviewers: void, MaskRay

Reviewed By: void, MaskRay

Subscribers: MaskRay, llvm-commits, srhines

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77356
2020-04-03 14:07:21 -07:00
Scott Constable
40fb959a78 [X86] Add Support for Load Hardening to Mitigate Load Value Injection (LVI)
After finding all such gadgets in a given function, the pass minimally inserts
LFENCE instructions in such a manner that the following property is satisfied:
for all SOURCE+SINK pairs, all paths in the CFG from SOURCE to SINK contain at
least one LFENCE instruction. The algorithm that implements this minimal
insertion is influenced by an academic paper that minimally inserts memory
fences for high-performance concurrent programs:

http://www.cs.ucr.edu/~lesani/companion/oopsla15/OOPSLA15.pdf

The algorithm implemented in this pass is as follows:

1. Build a condensed CFG (i.e., a GadgetGraph) consisting only of the following components:
  - SOURCE instructions (also includes function arguments)
  - SINK instructions
  - Basic block entry points
  - Basic block terminators
  - LFENCE instructions
2. Analyze the GadgetGraph to determine which SOURCE+SINK pairs (i.e., gadgets) are already mitigated by existing LFENCEs. If all gadgets have been mitigated, go to step 6.
3. Use a heuristic or plugin to approximate minimal LFENCE insertion.
4. Insert one LFENCE along each CFG edge that was cut in step 3.
5. Go to step 2.
6. If any LFENCEs were inserted, return true from runOnFunction() to tell LLVM that the function was modified.

By default, the heuristic used in Step 3 is a greedy heuristic that avoids
inserting LFENCEs into loops unless absolutely necessary. There is also a
CLI option to load a plugin that can provide even better optimization,
inserting fewer fences, while still mitigating all of the LVI gadgets.
The plugin can be found here: https://github.com/intel/lvi-llvm-optimization-plugin,
and a description of the pass's behavior with the plugin can be found here:
https://software.intel.com/security-software-guidance/insights/optimized-mitigation-approach-load-value-injection.
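
A skeletal sketch of the fixed-point structure of steps 1-6 above, with hypothetical stand-ins for the pass's internal pieces (these are not its real interfaces):

```
#include "llvm/CodeGen/MachineFunction.h"
#include <vector>

// Hypothetical stand-ins, for illustration only.
struct GadgetGraph {};
using EdgeSet = std::vector<unsigned>;
GadgetGraph buildGadgetGraph(llvm::MachineFunction &MF);
bool hasUnmitigatedGadgets(const GadgetGraph &G);
EdgeSet pickEdgesToCut(const GadgetGraph &G); // greedy heuristic or plugin
void insertLFENCEs(llvm::MachineFunction &MF, const EdgeSet &CutEdges);

// Steps 1-6 from the description above, as a fixed-point loop.
bool runLVILoadHardening(llvm::MachineFunction &MF) {
  GadgetGraph G = buildGadgetGraph(MF);   // step 1
  bool Modified = false;
  while (hasUnmitigatedGadgets(G)) {      // step 2
    insertLFENCEs(MF, pickEdgesToCut(G)); // steps 3-4
    G = buildGadgetGraph(MF);             // step 5: re-check with new fences
    Modified = true;
  }
  return Modified;                        // step 6
}
```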

Differential Revision: https://reviews.llvm.org/D75937
2020-04-03 13:45:50 -07:00
LLVM GN Syncbot
8b6e307b13 [gn build] Port c74dd640fd7 2020-04-03 20:07:19 +00:00
Julian Lettner
4ee9cd0894 [lit] Cleanly exit on user keyboard interrupt
Graceful lit shutdown on user keyboard interrupt [Ctrl+C] was a
longstanding goal of mine.  After a few refactorings this revision
finally enables it.  We use the following strategy to deal with
KeyboardInterrupt:
https://noswap.com/blog/python-multiprocessing-keyboardinterrupt

Printing of a helpful summary for interrupted runs (just as the one for
completed runs) will be tackled in future revisions.

Reviewed By: serge-sans-paille, rnk

Differential Revision: https://reviews.llvm.org/D77365
2020-04-03 13:03:44 -07:00
Scott Constable
4303527cf0 [X86] Add a Pass that builds a Condensed CFG for Load Value Injection (LVI) Gadgets
Adds a new data structure, ImmutableGraph, and uses RDF to find LVI gadgets and add them to a MachineGadgetGraph.

More specifically, a new X86 machine pass finds Load Value Injection (LVI) gadgets consisting of a load from memory (i.e., SOURCE), and any operation that may transmit the value loaded from memory over a covert channel, or use the value loaded from memory to determine a branch/call target (i.e., SINK).

Also adds a new target feature to X86: +lvi-load-hardening

The feature can be added via the clang CLI using -mlvi-hardening.

Differential Revision: https://reviews.llvm.org/D75936
2020-04-03 13:02:04 -07:00
LLVM GN Syncbot
b81cbc287f [gn build] Port f95a67d8b8a 2020-04-03 19:47:51 +00:00
Andrew Ng
e79cb065c8 Don't use relpaths in lit cfg if build/source dir are on different drives.
See discussion on https://reviews.llvm.org/D77184.
2020-04-03 15:43:50 -04:00
Paul Robinson
8c03abb401 Test had incorrect check for nonzero count 2020-04-03 12:37:13 -07:00
Lang Hames
fe14ec0a0c [ORC] Improve documentation of memory ownership in the new Orc C bindings. 2020-04-03 12:33:02 -07:00
Alina Sbirlea
24bd9b3b24 [GraphDiff] Extend GraphDiff to track a list of updates.
Summary:
This patch includes two extensions:
1. It extends GraphDiff to also keep the original list of updates
after legalization, not just the delete/insert vectors.
It also provides an API to pop the first update (the updates are stored
in reverse, such that the first update is at the end of the list).
2. It adds a bool to mark whether the given updates should be applied as
given, or applied in reverse. This moves the task of reversing the
updates (when the caller needs this) into GraphDiff itself, rather than
having the caller do it.

The two changes could be split into two patches, but they seemed
reasonably small to be reviewed together.

Reviewers: kuhar, dblaikie

Subscribers: hiraditya, george.burgess.iv, mgrang, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77167
2020-04-03 12:10:36 -07:00
Scott Constable
89f19db618 [X86] Add RET-hardening Support to mitigate Load Value Injection (LVI)
Adding a pass that replaces every ret instruction with the sequence:

pop <scratch-reg>
lfence
jmp *<scratch-reg>

where <scratch-reg> is some available scratch register, according to the
calling convention of the function being mitigated.

Differential Revision: https://reviews.llvm.org/D75935
2020-04-03 12:08:34 -07:00
Matt Arsenault
2c672c9e7d Support: Add specializations for reverseBits to use builtin 2020-04-03 14:52:54 -04:00
Matt Arsenault
d5df445655 CodeGen: Convert some TII hooks to use Register 2020-04-03 14:52:54 -04:00
Matt Arsenault
4eb760ce6c AMDGPU: Use Register in more places 2020-04-03 14:52:54 -04:00
Matt Arsenault
3fbbb59e29 AMDGPU: Remove redundant virtual 2020-04-03 14:52:53 -04:00
Stanislav Mekhanoshin
7179790282 [AMDGPU] Added label to test. NFC. 2020-04-03 11:36:32 -07:00
Christopher Tetreault
6fdad00e46 Clean up usages of asserting vector getters in Type
Summary:
Remove usages of asserting vector getters in Type in preparation for the
VectorType refactor. The existence of these functions complicates the
refactor while adding little value.
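
As a rough illustration of the direction of this cleanup (an assumed example, not lines from this patch): make the vector assumption explicit with a cast rather than relying on the asserting convenience getter on `Type`.

```
#include "llvm/IR/DerivedTypes.h"
#include "llvm/Support/Casting.h"

using namespace llvm;

// Before: the getter on Type asserts internally if Ty is not a vector.
unsigned numEltsViaAssertingGetter(Type *Ty) {
  return Ty->getVectorNumElements();
}

// After: the cast makes the vector assumption visible at the call site.
unsigned numEltsViaExplicitCast(Type *Ty) {
  return cast<VectorType>(Ty)->getNumElements();
}
```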

Reviewers: kparzysz, sdesmalen, efriedma

Reviewed By: kparzysz

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D77267
2020-04-03 11:26:51 -07:00
Stephen Neuendorffer
0820ad9f38 [CMAKE] Plumb include_directories() into tablegen()
Previously, the tablegen() cmake command, which defines custom
commands for running tablegen, included several hardcoded paths.  This
becomes unwieldy as there are more users for which these paths are
insufficient.  For most targets, cmake uses include_directories() and
the INCLUDE_DIRECTORIES directory property to specify include paths.
This change picks up the INCLUDE_DIRECTORIES property and adds it
to the include path used when running tablegen.  As a side effect, this
allows us to remove several hard coded paths to tablegen that are redundant
with specified include_directories().

I haven't removed the hardcoded path to CMAKE_CURRENT_SOURCE_DIR, which
seems generically useful.  There are several users in clang which apparently
don't have the current directory as an include_directories().  This could
be considered separately.

The new version of this patch uses list APPEND rather than list TRANSFORM,
in order to be compatible with cmake 3.4.3. If we update to cmake 3.12 then
we can use list TRANSFORM instead.

Differential Revision: https://reviews.llvm.org/D77156
2020-04-03 11:23:38 -07:00
Stanislav Mekhanoshin
ed36d749eb [AMDGPU] Propagate AGPR RC from PHI to its PHI operands
We can fix the register class of a PHI based on all of its AGPR uses.
That leaves behind all PHIs which were already processed
earlier. Propagate the RC back to the PHI operands of a PHI.

Differential Revision: https://reviews.llvm.org/D77344
2020-04-03 11:23:02 -07:00
Simon Pilgrim
658fa76c7a [YAMLParser] Scanner::setError - ensure we use the StringRef::iterator argument (PR45043)
As detailed on PR45043, static analysis was warning that the StringRef::iterator Position argument was being ignored and the function was hardwired to use the Current iterator.

This patch ensures we use the provided iterator and removes the (barely necessary) setError wrapper that always used Current.

Differential Revision: https://reviews.llvm.org/D76512
2020-04-03 18:55:38 +01:00
Sanjay Patel
24269f9eb6 [VectorCombine] try to form a better extractelement
Extracting to the same index that we are going to insert back into
allows forming select ("blend") shuffles and enables further transforms.

Admittedly, this is a quick-fix for a more general problem that I'm
hoping to solve by adding transforms for patterns that start with an
insertelement.

But this might resolve some regressions known to be caused by the
extract-extract transform (although I have not gotten more details on
those yet).

In the motivating case from PR34724:
https://bugs.llvm.org/show_bug.cgi?id=34724

The combination of subsequent instcombine and codegen transforms gets us this improvement:

  vmovshdup	%xmm0, %xmm2    ## xmm2 = xmm0[1,1,3,3]
  vhaddps	%xmm1, %xmm1, %xmm4
  vmovshdup	%xmm1, %xmm3    ## xmm3 = xmm1[1,1,3,3]
  vaddps	%xmm0, %xmm2, %xmm0
  vaddps	%xmm1, %xmm3, %xmm1
  vshufps	$200, %xmm4, %xmm0, %xmm0 ## xmm0 = xmm0[0,2],xmm4[0,3]
  vinsertps	$177, %xmm1, %xmm0, %xmm0 ## xmm0 = zero,xmm0[1,2],xmm1[2]

  -->

  vmovshdup	%xmm0, %xmm2    ## xmm2 = xmm0[1,1,3,3]
  vhaddps	%xmm1, %xmm1, %xmm1
  vaddps	%xmm0, %xmm2, %xmm0
  vshufps	$200, %xmm1, %xmm0, %xmm0 ## xmm0 = xmm0[0,2],xmm1[0,3]

Differential Revision: https://reviews.llvm.org/D76623
2020-04-03 13:55:13 -04:00
Sylvain Audi
4e03034619 [Support/Path] sys::path::replace_path_prefix fix and simplifications
Added unit tests for 2 scenarios that were failing.
Changed replace_path_prefix back to 3 parameters instead of 5, simplifying the implementation; the other 2 parameters were always used with their default values (a usage sketch follows the list below).

This commit is intended to be the first of 3:
1) simplify/fix replace_path_prefix.
2) use it in the context of -fdebug-prefix-map and -fmacro-prefix-map (see D76869).
3) Make Windows version of replace_path_prefix insensitive to both case and separators (slash vs backslash).
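
A small usage sketch of the simplified interface, with illustrative values (the prefix-map wiring itself lands in D76869):

```
#include "llvm/ADT/SmallString.h"
#include "llvm/Support/Path.h"

using namespace llvm;

// Rewrite a leading build-directory prefix, as -fdebug-prefix-map would.
void remapBuildPath() {
  SmallString<128> Path("/build/src/lib/Foo.cpp");
  sys::path::replace_path_prefix(Path, "/build/src", ".");
  // Path is now "./lib/Foo.cpp".
}
```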

Differential Revision: https://reviews.llvm.org/D77223
2020-04-03 13:50:23 -04:00
Stephen Neuendorffer
7f5ee4136e Revert "[CMAKE] Plumb include_directories() into tablegen()"
This reverts commit ae044c5b0caa095602b6ef4cca40d57efc26a8f6.

This breaks the buildbots, which use an older version of cmake.
2020-04-03 10:47:36 -07:00
Stephen Neuendorffer
d8ac72d584 [CMAKE] Plumb include_directories() into tablegen()
Previously, the tablegen() cmake command, which defines custom
commands for running tablegen, included several hardcoded paths.  This
becomes unwieldy as there are more users for which these paths are
insufficient.  For most targets, cmake uses include_directories() and
the INCLUDE_DIRECTORIES directory property to specify include paths.
This change picks up the INCLUDE_DIRECTORIES property and adds it
to the include path used when running tablegen.  As a side effect, this
allows us to remove several hard coded paths to tablegen that are redundant
with specified include_directories().

I haven't removed the hardcoded path to CMAKE_CURRENT_SOURCE_DIR, which
seems generically useful.  There are several users in clang which apparently
don't have the current directory as an include_directories().  This could
be considered separately.

Differential Revision: https://reviews.llvm.org/D77156
2020-04-03 10:38:25 -07:00