[TableGen][DecoderEmitter] Add option to emit type-specialized code #146593
Conversation
What's the motivation? |
Please see the PR description that I just added. I also have data to support this, will add soon (tabulating it ATM) |
Failure is in the unit test (TableGen/VarLenDecoder.td), I need to update it. |
Why can't the disassembler emitter figure out the bitwidths from the tablegen input? It already makes separate tables for each instruction size. |
Oh is it because RISC-V uses 3 bit widths, but only 2 types for DecodeInstruction? |
Can we store the type in the Instruction class in the .td files like the bitwidth instead of introducing a complex command line argument? |
Right, this is a POC which shows that the proposed optimization works. I am open to changing the interface here as well. The command-line option was simple enough that it didn't mess with the tablegen Instruction class etc., but that is an option, though it feels more intrusive. The command-line option is moderately complex and localized to the decoder emitter. |
Repeating the type in each per-instruction record might be redundant (and we would need more verification as well, to check that for a given size, all insts of that size have the C++ type specified and that it's consistent). One option is to add a new InstructionTypeAndSize class that records this information, and DecoderEmitter can use that if it's present, else fall back to templated code. Something like
and a particular backend can define a single record of type InstructionDecoderTypeAndSizes<> which the DecoderEmitter will use. This is essentially encoding the command line option as a record.
or more simply
|
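A hedged sketch of what the record-based interface proposed above might look like in TableGen. The class name, field names, and the RISC-V example record are all hypothetical, following the naming floated in the comment; they are not actual LLVM definitions:

```tablegen
// Hypothetical helper class -- illustrative only, not real LLVM TableGen.
// Maps a C++ storage type to the instruction bitwidths it should decode.
class InstructionDecoderTypeAndSizes<string type, list<int> sizes> {
  string CppType = type;    // C++ type holding the raw instruction bits
  list<int> Sizes = sizes;  // instruction bitwidths decoded with this type
}

// A backend would then define a single record that DecoderEmitter looks
// for, e.g. (hypothetical) for RISC-V's 16/32/48-bit instructions:
def RISCVDecoderTypes : InstructionDecoderTypeAndSizes<"uint64_t", [16, 32, 48]>;
```

This keeps the type-to-size mapping in one place per backend instead of repeating it on every instruction record, which is the duplication concern raised above.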
RISCV uses a common base class for each of the 3 instruction sizes. Other targets may be similar.
|
Right, but nonetheless, we will have the type specified per instruction instance and we will still need to validate, for example, that for all instructions of a particular size, the type string is the same. To me that seems like unnecessary duplication of this information, plus additional verification to make sure that it's consistent. Also, unlike the size in bytes, which is a core property of the instruction, the C++ type used to represent its bits in memory is not really a core property. Many backends seem to choose the same type (for example, uint64_t) for all their 16/32/48/64-bit insts. Adoption-wise as well, sticking it in the per-inst record seems more invasive (for example, in our backend and several other downstream backends, the core instruction records are auto-generated, so the adoption curve increases further). |
Requesting not a review per se, but opinions on the user interface for this optimization. The choices proposed are:
|
I'm probably going to change to uint64_t for RISC-V. The 48-bit instructions are only used by one vendor and are relatively recent additions. I think the duplication cost just wasn't considered when they were added. I agree adding to the Inst class might be too invasive. I still think it should be in .td files somehow. Needing to change a CMake file and replicating to GN and the other build systems when a new instruction width is added seems bad. |
Right, is the option #3 above palatable? We essentially encode it as a standalone record that the DecoderEmitter will look for. |
Maybe it should be stored in the |
…6. NFC. Insn is passed to decodeInstruction, which is a template function based on the type of Insn. By using uint64_t, we ensure only one version of decodeInstruction is created. This reduces the file size of RISCVDisassembler.cpp.o by ~25% in my local build. This should get even more size benefit than llvm#146593.
|
@topperc Please let me know if this new scheme looks ok. If yes, I'll migrate the rest of the targets (right now I have just changed AArch64 and RISCV) to use this, and add some unit tests for a final review. |
I have not tested. My speculation is, no binary size change but just minor compile time improvement by avoiding template specialization. I'll check and report back. |
Looks like templating adds a little bit to the code size. Building the RISCVDisassembler.cpp.o in a release config with/without this change results in the following:
So, 16 bytes less. Not significant though. |
Could just be a difference in the name mangling of the function name? Or are you checking the .text size? |
yeah, your guess was right. I dumped the sizes with
That is, text sizes are the same but mangled names are different and that likely leads to larger object file sizes. |
Note though that what you did for RISCV may not be applicable or desirable for all targets. For example, AMDGPU has 128-bit instructions, so I am assuming that if we just use a 128-bit type for all instructions, we may pay a penalty in terms of bit extraction costs (32- vs 64-bit may not be as bad). |
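The extraction-cost concern above can be illustrated with a sketch. The helpers below are stand-ins, not LLVM's actual `fieldFromInstruction`; the `UInt128` struct is a hypothetical two-word stand-in for a `DecoderUInt128`-like type. Extracting a field from a 32-bit value is a shift and a mask, while the 128-bit case needs extra branching when the field lives in, or straddles into, the high half:

```cpp
#include <cstdint>

// Extract Num bits starting at bit Start from a 32-bit instruction word.
// Assumes Num < 32.
uint32_t fieldFrom32(uint32_t Insn, unsigned Start, unsigned Num) {
  return (Insn >> Start) & ((1u << Num) - 1);
}

// Hypothetical two-word stand-in for a DecoderUInt128-like type.
struct UInt128 {
  uint64_t Lo, Hi;
};

// Same extraction from a 128-bit value: extra work for the high half and
// for fields that straddle the 64-bit boundary. Assumes Num <= 64.
uint64_t fieldFrom128(UInt128 Insn, unsigned Start, unsigned Num) {
  uint64_t Mask = (Num == 64) ? ~0ull : ((1ull << Num) - 1);
  if (Start >= 64)
    return (Insn.Hi >> (Start - 64)) & Mask;
  uint64_t V = Insn.Lo >> Start;
  if (Start + Num > 64) // field straddles the 64-bit boundary
    V |= Insn.Hi << (64 - Start);
  return V & Mask;
}
```

This is why per-width overloads can matter: 32-bit decoders get the cheap path instead of routing everything through the widest type.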
@topperc My question is still unanswered. WDYT of this new interface to opt in to this optimization? |
✅ With the latest revision this PR passed the C/C++ code formatter. |
Minor notes/suggestions
I did one other minor change. We used to generate, for each bitwidth, a
Now it will just have:
|
This change attempts to reduce the size of the disassembler code generated by DecoderEmitter.

Current state:

- DecoderEmitter generates `decodeInstruction`, which is the entry point into the generated code, and `decodeToMCInst`, which is invoked when a decode op is reached while traversing the decoder table. Both functions are templated on `InsnType`, which is the type of the raw instruction bits supplied to `decodeInstruction`.
- Targets can instantiate `decodeInstruction` with different types, leading to several template instantiations of this function in the final code. As an example, AMDGPU instantiates this function with the `DecoderUInt128` type for decoding 96/128-bit instructions, `uint64_t` for decoding 64-bit instructions, and `uint32_t` for decoding 32-bit instructions.
- Each instantiation of `decodeToMCInst` has code that handles all instruction sizes, yet the decoders emitted for different instruction sizes rarely have any intersection with each other. That means, in the AMDGPU case, the instantiation with `InsnType == DecoderUInt128` has decoder code for 32/64-bit instructions that is never exercised, and conversely, the instantiation with `InsnType == uint64_t` has decoder code for 128/96/32-bit instructions that is never exercised. This leads to unnecessary dead code in the generated disassembler binary.

With this change, the DecoderEmitter will stop generating a single templated `decodeInstruction` and will instead generate several overloaded versions of this function, as well as of the associated `decodeToMCInst` function. Instead of the templated `InsnType`, it will use an auto-inferred type, which can be a standard C++ integer type, `APInt`, or a `std::bitset`. As a result, decoders for 32-bit instructions will appear only in the 32-bit variant of `decodeToMCInst`, 64-bit decoders only in the 64-bit variant, and so on, which eliminates the code duplication present in the templated variant.

Additionally, the `DecodeIndex` will now be computed per instruction bitwidth instead of globally across all bitwidths as before. So the values will generally be smaller than before and hence consume fewer bytes in their ULEB128 encoding, resulting in a further reduction in the size of the decoder tables.

Since the non-templated decoder also needs some changes in the C++ code, this change adds an option `GenerateTemplatedDecoder` to `InstrInfo` that defaults to false; targets can set it to true to fall back to the templated code. The goal is to migrate all targets to the non-templated decoder and deprecate this option in the future.

This change adopts the feature for the AMDGPU backend. In a release build, this results in a net 35% reduction in the .text size of libLLVMAMDGPUDisassembler.so and a 5% reduction in the .rodata size. Actual numbers measured locally for a Linux x86_64 build using a clang-18.1.3 toolchain are:

For targets that do not use multiple instantiations of `decodeInstruction`, opting in to this feature may not result in a code/data size improvement, but may give potential compile-time improvements by avoiding the use of templated code.
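The before/after shape described above can be sketched minimally. These are stand-ins with placeholder bodies and return values, not the real generated code (the actual functions build an `MCInst`, take decoder tables and more parameters, and return a `DecodeStatus`):

```cpp
#include <cstdint>
#include <string>

// Before: one template, instantiated once per InsnType a target passes in.
// Every instantiation carries decoder code for every bitwidth, including
// widths that InsnType is never used to decode.
template <typename InsnType>
std::string decodeToMCInstTemplated(InsnType Insn) {
  (void)Insn; // placeholder body; real code switches on a decoder index
  return "handles 16/32/64-bit decoders for any InsnType";
}

// After: one overload per instruction bitwidth, each holding only the
// decoders relevant to that width, so no dead cross-width code is emitted.
std::string decodeToMCInst(uint32_t Insn) {
  (void)Insn;
  return "32-bit decoders only";
}

std::string decodeToMCInst(uint64_t Insn) {
  (void)Insn;
  return "64-bit decoders only";
}
```

Overload resolution on the raw-bits type now picks the right per-width body at the call site, where the templated version had to carry all widths in each instantiation.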