1 # External RFC ls012: Discuss priorities of Libre-SOC Scalar(Vector) ops
2
3 **Date: 2023apr10. v2 released: TODO**
4
5 * Funded by NLnet Grants under EU Horizon Grants 101069594 825310
6 * <https://git.openpower.foundation/isa/PowerISA/issues/121>
7 * <https://bugs.libre-soc.org/show_bug.cgi?id=1051>
8 * <https://bugs.libre-soc.org/show_bug.cgi?id=1052>
9 * <https://bugs.libre-soc.org/show_bug.cgi?id=1054>
10
11 The purpose of this RFC is:
12
13 * to give a full list of upcoming Scalar opcodes developed by Libre-SOC
14 (being cognisant that *all* of them are Vectoriseable)
15 * to give OPF Members and non-Members alike the opportunity to comment and get
16 involved early in RFC submission
* to formally agree a priority order on an iterative basis, with new
  versions of this RFC,
* to decide which ones should go in the EXT022 Sandbox, which in EXT0xx,
  which in EXT2xx, and which should not be proposed at all,
* to keep readers summarily informed of ongoing RFC submissions, with new
  versions of this RFC,
23 * for IBM (in their capacity as Allocator of Opcodes)
24 to get a clear advance picture of Opcode Allocation
25 *prior* to submission
26
27 As this is a Formal ISA RFC the evaluation shall ultimately define
28 (in advance of the actual submission of the instructions themselves)
29 which instructions will be submitted over the next 1-18 months.
30
31 *It is expected that readers visit and interact with the Libre-SOC
32 resources in order to do due-diligence on the prioritisation
33 evaluation. Otherwise the ISA WG is overwhelmed by "drip-fed" RFCs
34 that may turn out not to be useful, against a background of having
35 no guiding overview or pre-filtering, and everybody's precious time
36 is wasted. Also note that the Libre-SOC Team, being funded by NLnet
37 under Privacy and Enhanced Trust Grants, are **prohibited** from signing
38 Commercial-Confidentiality NDAs, as doing so is a direct conflict of
39 interest with their funding body's Charitable Foundation Status and
40 remit, and therefore the **entire** set of almost 150 new SFFS instructions
41 can only go via the External RFC Process. Also be advised and aware
42 that "Libre-SOC" != "RED Semiconductor Ltd". The two are completely **separate**
43 organisations*.
44
It is worth bearing in mind during evaluation that every "Defined Word" may
46 or may not be Vectoriseable, but that every "Defined Word" should have
47 merits on its own, not just when Vectorised. An example of a borderline
48 Vectoriseable Defined Word is `mv.swizzle` which only really becomes
49 high-priority for Audio/Video, Vector GPU and HPC Workloads, but has
50 less merit as a Scalar-only operation, yet when SVP64Single-Prefixed
51 can be part of an atomic Compare-and-Swap sequence.
52
53 Although one of the top world-class ISAs,
54 Power ISA Scalar (SFFS) has not been significantly advanced in 12
55 years: IBM's primary focus has understandably been on PackedSIMD VSX.
Unfortunately, with VSX comprising 914 instructions and being 128-bit wide,
it is far too much for any new team to consider (10+ years of development
effort) and far outside of Embedded or Tablet/Desktop/Laptop power budgets.
Thus bringing Power Scalar up-to-date to modern standards *and on its own
merits* is a reasonable goal, and the advantage of the reduced focus is
that SFFS remains RISC-paradigm, with lessons learned from other ISAs in
the intervening years. Good examples here include `bmask`.
63
64 SVP64 Prefixing - also known by the terms "Zero-Overhead-Loop-Prefixing"
65 as well as "True-Scalable-Vector Prefixing" - also literally brings new
dimensions to the Power ISA. Thus when adding new Scalar "Defined Words",
their value when Vector-Prefixed, *as well as* when SVP64Single-Prefixed,
unavoidably and simultaneously has to be taken into consideration.
69
70 **Target areas**
71
Whilst entirely general-purpose, there are some categories that these
instructions are targeting: Bit-manipulation, Big-integer, cryptography,
74 Audio/Visual, High-Performance Compute, GPU workloads and DSP.
75
76 **Instruction count guide and approximate priority order**
77
78 * 6 - SVP64 Management [[ls008]] [[ls009]] [[ls010]]
79 * 5 - CR weirds [[sv/cr_int_predication]]
80 * 4 - INT<->FP mv [[ls006]]
81 * 19 - GPR LD/ST-PostIncrement-Update (saves hugely in hot-loops) [[ls011]]
82 * ~12 - FPR LD/ST-PostIncrement-Update (ditto) [[ls011]]
83 * 26 - GPR LD/ST-Shifted (again saves hugely in hot-loops) [[ls004]]
84 * 11 - FPR LD/ST-Shifted (ditto) [[ls004]]
85 * 2 - Float-Load-Immediate (always saves one LD L1/2/3 D-Cache op) [[ls002]]
86 * 5 - Big-Integer Chained 3-in 2-out (64-bit Carry) [[sv/biginteger]]
87 * 6 - Bitmanip LUT2/3 operations. high cost high reward [[sv/bitmanip]]
88 * 1 - fclass (Scalar variant of xvtstdcsp) [[sv/fclass]]
89 * 5 - Audio-Video [[sv/av_opcodes]]
90 * 2 - Shift-and-Add (mitigates LD-ST-Shift; Cryptography e.g. twofish) [[ls004]]
91 * 2 - BMI group [[sv/vector_ops]]
92 * 2 - GPU swizzle [[sv/mv.swizzle]]
93 * 9 - FP DCT/FFT Butterfly (2/3-in 2-out)
* ~9 - Integer DCT/FFT Butterfly <https://bugs.libre-soc.org/show_bug.cgi?id=1028>
95 * 18 - Trigonometric (1-arg) [[openpower/transcendentals]]
96 * 15 - Transcendentals (1-arg) [[openpower/transcendentals]]
97 * 25 - Transcendentals (2-arg) [[openpower/transcendentals]]
98
Summary tables are created below, sorted by different categories.
Additional columns (and tables) may be requested as part of update
revisions to this RFC.
102
103 \newpage{}
104
105 # Target Area summaries
106
107 Please note that there are some instructions developed thanks to NLnet
108 funding that have not been included here for assessment. Examples
109 include `pcdec` and the Galois Field arithmetic operations. From a purely
110 practical perspective due to the quantity the lower-priority instructions
111 were simply left out. However they remain in the Libre-SOC resources.
112
113 Some of these SFFS instructions appear to be duplicates of VSX.
A frequent argument is that if instructions are already in VSX they
should not be added to SFFS, especially if they are nominally the same.
The point that this effectively damages the performance of an SFFS-only
implementation was raised earlier; however, there is a more subtle reason
why the instructions are needed.
119
120 Future versions of SVP64 and SVP64Single are expected to be developed
121 by future Power ISA Stakeholders on top of VSX. The decisions made
122 there about the meaning of Prefixed Vectorised VSX may be **completely**
123 different from those made for Prefixed SFFS instructions. At which
124 point the lack of SFFS equivalents would penalise SFFS implementors
125 in a much more severe way, effectively expecting them and SFFS programmers
126 to work with a non-orthogonal paradigm, to their detriment.
127 The solution is to give the SFFS Subset the space and respect that it deserves
128 and allow it to be stand-alone on its own merits.
129
130 ## SVP64 Management instructions
131
132 These without question have to go in EXT0xx. Future extended variants,
133 bringing even more powerful capabilities, can be followed up later with
134 EXT1xx prefixed variants, which is not possible if placed in EXT2xx.
135 *Only `svstep` is actually Vectoriseable*, all other Management
136 instructions are UnVectoriseable. PO1-Prefixed examples include adding
137 psvshape in order to support both Inner and Outer Product Matrix
138 Schedules, by providing the option to directly reverse the order of the
139 triple loops. Outer is used for standard Matrix Multiply (on top
140 of a standard MAC or FMAC instruction), but Inner is
141 required for Warshall Transitive Closure (on top of a cumulatively-applied
142 max instruction).
143
144 The Management Instructions themselves are all Scalar Operations, so
PO1-Prefixing is perfectly reasonable. The SVP64 Management instructions,
of which there are only six, are all 5 or 6 bit XO, meaning that the
opcode space they take up in EXT0xx is not alarmingly high given their
intrinsic strategic value.
149
150 ## Transcendentals
151
152 Found at [[openpower/transcendentals]] these subdivide into high
153 priority for accelerating general-purpose and High-Performance Compute,
154 specialist 3D GPU operations suited to 3D visualisation, and low-priority
155 less common instructions where IEEE754 full bit-accuracy is paramount.
156 In 3D GPU scenarios for example even 12-bit accuracy can be overkill,
157 but for HPC Scientific scenarios 12-bit would be disastrous.
158
There are a **lot** of operations here, and they also bring the Power
ISA up-to-date with IEEE754-2019. Fortunately the number of critical
161 instructions is quite low, but the caveat is that if those operations
162 are utilised to synthesise other IEEE754 operations (divide by `pi` for
163 example) full bit-level accuracy (a hard requirement for IEEE754) is lost.
164
165 Also worth noting that the Khronos Group defines minimum acceptable
bit-accuracy levels for 3D Graphics: these are **nowhere near** the full
accuracy demanded by IEEE754. The reason for the Khronos definitions is
a massive reduction, often four-fold, in power consumption and gate count,
because 3D Graphics simply has no need for full accuracy.
170
171 *For 3D GPU markets this definitely needs addressing*
172
173 ## Audio/Video
174
175 Found at [[sv/av_opcodes]] these do not require Saturated variants
176 because Saturation is added via [[sv/svp64]] (Vector Prefixing) and via
177 [[sv/svp64_single]] Scalar Prefixing. This is important to note for
178 Opcode Allocation because placing these operations in the UnVectoriseable
179 areas would irredeemably damage their value. Unlike PackedSIMD ISAs
180 the actual number of AV Opcodes is remarkably small once the usual
181 cascading-option-multipliers (SIMD width, bitwidth, saturation,
182 HI/LO) are abstracted out to RISC-paradigm Prefixing, leaving just
183 absolute-diff-accumulate, min-max, average-add etc. as "basic primitives".
184
185 ## Twin-Butterfly FFT/DCT/DFT for DSP/HPC/AI/AV
186
The number of uses in Computer Science for DCT, NTT, FFT and DFT is
astonishing. The Wikipedia page lists over a hundred separate and
distinct areas: Audio, Video, Radar, Baseband processing, AI, Reed-Solomon
Error Correction; the list goes on and on. ARM has special dedicated
191 Integer Twin-butterfly instructions. TI's MSP Series DSPs have had FFT
192 Inner loop support for over 30 years. Qualcomm's Hexagon VLIW Baseband
193 DSP can do full FFT triple loops in one VLIW group.
194
195 It should be pretty clear this is high priority.
196
197 With SVP64 [[sv/remap]] providing the Loop Schedules it falls to
198 the Scalar side of the ISA to add the prerequisite "Twin Butterfly"
199 operations, typically performing for example one multiply but in-place
200 subtracting that product from one operand and adding it to the other.
201 The *in-place* aspect is strategically extremely important for significant
202 reductions in Vectorised register usage, particularly for DCT.
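
As a concrete illustration of the in-place principle, below is a minimal
Python sketch of a butterfly step of the kind described above. It is an
illustrative model only: the function name, operand order and arithmetic
are assumptions for exposition, not the Draft RFC pseudocode.

```
def twin_butterfly(a, b, coeff):
    """One multiply, two results, written back over the two inputs:
    the in-place (overwrite) form that keeps Vectorised register
    usage low.  Illustrative sketch only, not the RFC pseudocode."""
    t = coeff * b
    return a + t, a - t     # both inputs are consumed and overwritten

# a radix-2 pass over vectors x[] and y[] then needs no temporaries:
#   x[i], y[i] = twin_butterfly(x[i], y[i], w[i])
```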
203
204 ## CR Weird group
205
206 Outlined in [[sv/cr_int_predication]] these instructions massively save
207 on CR-Field instruction count. Multi-bit to single-bit and vice-versa
208 normally requiring several CR-ops (crand, crxor) are done in one single
209 instruction. The reason for their addition is down to SVP64 overloading
210 CR Fields as Vector Predicate Masks. Reducing instruction count in
211 hot-loops is considered high priority.
212
213 An additional need is to do popcount on CR Field bit vectors but adding
214 such instructions to the *Condition Register* side was deemed to be far
215 too much. Therefore, priority was given instead to transferring several
216 CR Field bits into GPRs, whereupon the full set of Standard Scalar GPR
217 Logical Operations may be used. This strategy has the side-effect of
218 keeping the CRweird group down to only five instructions.
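
A rough Python model of that strategy is shown below. The helper name and
the choice of the EQ bit are illustrative assumptions, not the actual
crweird encodings from [[sv/cr_int_predication]].

```
def cr_bits_to_gpr(cr_fields, bit=2):
    """Gather one chosen bit (LT=0, GT=1, EQ=2, SO=3) from a list of
    4-bit CR Fields into a single GPR, after which the full suite of
    Standard Scalar GPR Logical Operations, including popcount,
    applies.  Illustrative model only."""
    gpr = 0
    for i, field in enumerate(cr_fields):
        gpr |= ((field >> (3 - bit)) & 1) << i
    return gpr

# popcount of a CR-Field bit-vector then becomes: bin(gpr).count("1")
```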
219
220 ## Big-integer Math
221
222 [[sv/biginteger]] has always been a high priority area for commercial
223 applications, privacy, Banking, as well as HPC Numerical Accuracy:
224 libgmp as well as cryptographic uses in Asymmetric Ciphers. poly1305
225 and ec25519 are finding their way into everyday use via OpenSSL.
226
227 A very early variant of the Power ISA had a 32-bit Carry-in Carry-out
228 SPR. Its removal from subsequent revisions is regrettable. An alternative
229 concept is to add six explicit 3-in 2-out operations that, on close
230 inspection, always turn out to be supersets of *existing Scalar
231 operations* that discard upper or lower DWords, or parts thereof.
232
233 *Thus it is critical to note that not one single one of these operations
234 expands the bitwidth of any existing Scalar pipelines*.
235
236 The `dsld` instruction for example merely places additional LSBs into the
237 64-bit shift (64-bit carry-in), and then places the (normally discarded)
238 MSBs into the second output register (64-bit carry-out). It does **not**
239 require a 128-bit shifter to replace the existing Scalar Power ISA
240 64-bit shifters.
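
The following Python model illustrates the principle. It is a sketch under
the assumption of a plain shift-left; the operand names and ordering are
not the dsld specification, which is given in [[sv/biginteger]].

```
def double_shift_left(ra, carry_in, sh):
    """64-bit shift-left whose vacated low bits come from carry_in and
    whose overflowing high bits are returned as carry_out: no 128-bit
    shifter is needed, only the existing 64-bit one plus routing."""
    mask = (1 << 64) - 1
    lo_fill = ((carry_in & mask) >> (64 - sh)) if sh else 0
    wide = ((ra & mask) << sh) | lo_fill       # at most 128 bits wide
    return wide & mask, wide >> 64             # (result, carry_out)

# chaining limbs: the carry_out of one element feeds the carry_in of
# the next, giving an arbitrary-length Vectorised big-integer shift
```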
241
242 The reduction in instruction count these operations bring, in critical
243 hot loops, is remarkably high, to the extent where a Scalar-to-Vector
244 operation of *arbitrary length* becomes just the one Vector-Prefixed
245 instruction.
246
Whilst these are 5-6 bit XO, their utility is considered of high strategic
value and as such they are strongly advocated to be in EXT04. The alternative
249 is to bring back a 64-bit Carry SPR but how it is retrospectively
250 applicable to pre-existing Scalar Power ISA multiply, divide, and shift
251 operations at this late stage of maturity of the Power ISA is an entire
252 area of research on its own deemed unlikely to be achievable.
253
254 ## fclass and GPR-FPR moves
255
256 [[sv/fclass]] - just one instruction. With SFFS being locked down to
257 exclude VSX, and there being no desire within the nascent OpenPOWER
258 ecosystem outside of IBM to implement the VSX PackedSIMD paradigm, it
259 becomes necessary to upgrade SFFS such that it is stand-alone capable. One
260 omission based on the assumption that VSX would always be present is an
261 equivalent to `xvtstdcsp`.
262
263 Similar arguments apply to the GPR-INT move operations, proposed in
264 [[ls006]], with the opportunity taken to add rounding modes present
265 in other ISAs that Power ISA VSX PackedSIMD does not have. Javascript
266 rounding, one of the worst offenders of Computer Science, requires a
267 phenomenal 35 instructions with *six branches* to emulate in Power
268 ISA! For desktop as well as Server HTML/JS back-end execution of
269 javascript this becomes an obvious priority, recognised already by ARM
270 as just one example.
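
For readers unfamiliar with the semantics involved, the sketch below shows
roughly what "Javascript rounding" (ECMAScript ToInt32) demands, and is what
the 35-instruction sequence mentioned above has to emulate. It is a hedged
Python model, not the proposed instruction's pseudocode (see [[ls006]]).

```
import math

def js_toint32(x: float) -> int:
    """ECMAScript ToInt32: NaN and Infinity map to 0, otherwise
    truncate toward zero, wrap modulo 2**32 and reinterpret as a
    signed 32-bit value."""
    if math.isnan(x) or math.isinf(x):
        return 0
    n = int(x) & 0xFFFFFFFF              # truncate toward zero, wrap
    return n - (1 << 32) if n & (1 << 31) else n
```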
271
272 ## Bitmanip LUT2/3
273
274 These LUT2/3 operations are high cost high reward. Outlined in
275 [[sv/bitmanip]], the simplest ones already exist in PackedSIMD VSX:
276 `xxeval`. The same reasoning applies as to fclass: SFFS needs to be
277 stand-alone on its own merits and should an implementor
278 choose not to implement any aspect of PackedSIMD VSX the performance
279 of their product should not be penalised for making that decision.
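
The principle of a bitwise ternary LUT, as found in `xxeval` and in the
proposed [[sv/bitmanip]] instructions, can be sketched in a few lines of
Python. This is an illustrative model: operand order and naming are
assumptions, not the Draft RFC encoding.

```
def lut3(imm8: int, a: int, b: int, c: int, width: int = 64) -> int:
    """For every bit position the three input bits form a 3-bit index
    selecting one bit of the 8-bit immediate: any of the 256 possible
    three-input Boolean functions in a single instruction."""
    r = 0
    for i in range(width):
        idx = (((a >> i) & 1) << 2) | (((b >> i) & 1) << 1) | ((c >> i) & 1)
        r |= ((imm8 >> idx) & 1) << i
    return r

# e.g. lut3(0b11101000, a, b, c) is a bitwise majority-vote (carry-save)
```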
280
281 With Predication being such a high priority in GPUs and HPC, CR Field
282 variants of Ternary and Binary LUT instructions were considered high
283 priority, and again just like in the CRweird group the opportunity was
284 taken to work on *all* bits of a CR Field rather than just one bit as
285 is done with the existing CR operations crand, cror etc.
286
The other high-strategic-value instruction is `grevlut` (and `grevluti`,
which can generate a remarkably large number of regular-patterned magic
constants). The grevlut instructions require of the order of 20,000 gates but
290 provide an astonishing plethora of innovative bit-permuting instructions
291 never seen in any other ISA.
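
For context, the classic "generalised reverse" (grev) permutation on which
the grevlut family is conceptually based can be modelled as below. This is
an illustrative sketch of grev only, not the grevlut pseudocode, which adds
a LUT stage (see [[sv/bitmanip]]).

```
def grev64(x: int, k: int) -> int:
    """Each set bit of k swaps adjacent blocks of that size, giving a
    large family of bit permutations from one instruction."""
    masks = [
        (0x5555555555555555, 1), (0x3333333333333333, 2),
        (0x0F0F0F0F0F0F0F0F, 4), (0x00FF00FF00FF00FF, 8),
        (0x0000FFFF0000FFFF, 16), (0x00000000FFFFFFFF, 32),
    ]
    for m, sh in masks:
        if k & sh:
            x = ((x & m) << sh) | ((x >> sh) & m)
    return x & ((1 << 64) - 1)

# grev64(x, 0b111000) is a 64-bit byte-swap; grev64(x, 63) reverses all bits
```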
292
The downside of all of these instructions is the extremely low XO bit
count: only 2-3 bit XO, due to the large immediates *and* the number of
operands required, which makes them costly in encoding space. The LUT3
instructions are already compacted down to "Overwrite" variants. (By
contrast the Float-Load-Immediate instructions have a much larger XO
because, despite having a 16-bit immediate, only one Register Operand
is needed).
299
300 Realistically these high-value instructions should be proposed in EXT2xx
301 where their XO cost does not overwhelm EXT0xx.
302
303
304 ## (f)mv.swizzle
305
306 [[sv/mv.swizzle]] is dicey. It is a 2-in 2-out operation whose value
307 as a Scalar instruction is limited *except* if combined with `cmpi` and
308 SVP64Single Predication, whereupon the end result is the RISC-synthesis
309 of Compare-and-Swap, in two instructions.
310
311 Where this instruction comes into its full value is when Vectorised.
3D GPU and HPC numerical workloads astonishingly contain between 10 and
15% swizzle operations: accessing YYZ or XY of an XYZW Quaternion, or
re-balancing ARGB pixel data. The usage is so high that 3D GPU ISAs make
315 Swizzle a first-class priority in their VLIW words. Even 64-bit Embedded
316 GPU ISAs have a staggering 24-bits dedicated to 2-operand Swizzle.
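
A hedged Python sketch of what a 4-lane swizzle does is shown below. The
selector-string form here is purely illustrative; the actual mv.swizzle
immediate encoding is defined in [[sv/mv.swizzle]].

```
def swizzle4(vec, spec):
    """Reorder and/or duplicate the X/Y/Z/W lanes of a 4-element
    vector according to the selector, e.g. "yyz" or "wzyx".
    Illustrative model only."""
    lane = {"x": 0, "y": 1, "z": 2, "w": 3}
    return [vec[lane[s]] for s in spec.lower()]

# e.g. swizzle4([a, r, g, b], "wzyx") gives [b, g, r, a]
```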
317
318 So as not to radicalise the Power ISA the Libre-SOC team decided to
319 introduce mv Swizzle operations, which can always be Macro-op fused
320 in exactly the same way that ARM SVE predicated-move extends 3-operand
321 "overwrite" opcodes to full independent 3-in 1-out.
322
323 ## BMI (bit-manipulation) group.
324
325 Whilst the [[sv/vector_ops]] instructions are only two in number, in
326 reality the `bmask` instruction has a Mode field allowing it to cover
**24** instructions, more than have been added by ARM, Intel or AMD to
any of their CPUs. Analysis of the BMI sets of these CPUs shows simple
329 patterns that can greatly simplify both Decode and implementation. These
330 are sufficiently commonly used, saving instruction count regularly,
331 that they justify going into EXT0xx.
332
333 The other instruction is `cprop` - Carry-Propagation - which takes
334 the P and Q from carry-propagation algorithms and generates carry
look-ahead. It greatly increases the efficiency of arbitrary-precision
integer arithmetic by combining what would otherwise be half a dozen
instructions into one. However it is still not as high a priority as
`bmask`, so is probably best placed in EXT2xx.
339
340 ## Float-Load-Immediate
341
Very easily justified. As explained in [[ls002]] these always save one
343 LD L1/2/3 D-Cache memory-lookup operation, by virtue of the Immediate
344 FP value being in the I-Cache side. It is such a high priority that
345 these instructions are easily justifiable adding into EXT0xx, despite
346 requiring a 16-bit immediate. By designing the second-half instruction
347 as a Read-Modify-Write it saves on XO bit-length (only 5 bits), and can be
348 macro-op fused with its first-half to store a full IEEE754 FP32 immediate
349 into a register.
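
To see why a fused pair of 16-bit-immediate instructions is sufficient, the
sketch below splits an IEEE754 FP32 constant into the two halves that the
first-half and second-half (read-modify-write) instructions would carry.
The helper is illustrative only, not the [[ls002]] encoding.

```
import struct

def fp32_halves(value: float) -> tuple[int, int]:
    """An FP32 constant is 32 bits, so two 16-bit immediates,
    macro-op fused, materialise it with no D-Cache access at all."""
    bits = struct.unpack("<I", struct.pack("<f", value))[0]
    return bits >> 16, bits & 0xFFFF     # (upper half, lower half)

# fp32_halves(1.5) == (0x3FC0, 0x0000)
```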
350
351 There is little point in putting these instructions into EXT2xx. Their
352 very benefit and inherent value *is* as 32-bit instructions, not 64-bit
353 ones. Likewise there is less value in taking up EXT1xx Encoding space
354 because EXT1xx only brings an additional 16 bits (approx) to the table,
355 and that is provided already by the second-half instruction.
356
357 Thus they qualify as both high priority and also EXT0xx candidates.
358
359 ## FPR/GPR LD/ST-PostIncrement-Update
360
These instructions, outlined in [[ls011]], save hugely in hot-loops.
Early ISAs such as the PDP-8 and PDP-11 (which inspired the iconic
Motorola 68000 and 88100, and Mitch Alsup's MyISA 66000), and even the
ultra-RISC CDC 6600, all had both pre- and post-increment Addressing
Modes.
366
The reason is very simple: it is a direct recognition of the common
practice in C of using both `*p++` and `*++p`, which itself stems from
recurring needs in Computer Science algorithms.
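
A minimal model of the post-increment form is given below. It is a sketch
of the concept only; the actual instruction forms and their Update
semantics are defined in [[ls011]].

```
def ld_postinc(mem, regs, rt, ra, disp):
    """Load using the *old* contents of RA as the effective address,
    then update RA by the displacement: the `*p++` idiom in a single
    instruction instead of a load plus a separate addi."""
    ea = regs[ra]
    regs[rt] = mem[ea]
    regs[ra] = ea + disp

# hot-loop effect: one instruction per element instead of two, which is
# exactly where the "saves hugely in hot-loops" claim comes from
```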
370
371 The problem for the Power ISA is - was - that the opcode space needed
372 to support both was far too great, and the decision was made to go with
373 pre-increment, on the basis that outside the loop a "pre-subtraction"
374 may be performed.
375
376 Whilst this is a "solution" it is less than ideal, and the opportunity
377 exists now with the EXT2xx Primary Opcodes to correct this and bring
378 Power ISA up a level.
379
380 ## Shift-and-add (and LD/ST Indexed-Shift)
381
382 Shift-and-Add are proposed in [[ls004]]. They mitigate the need to add
383 LD-ST-Shift instructions which are a high-priority aspect of both x86
384 and ARM. LD-ST-Shift is normally just the one instruction: Shift-and-add
385 brings that down to two, where Power ISA presently requires three.
386 Cryptography e.g. twofish also makes use of Integer double-and-add,
387 so the value of these instructions is not limited to Effective Address
388 computation. They will also have value in Audio DSP.
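
A minimal sketch of the address arithmetic involved follows; it is
illustrative only, and the precise operands are in [[ls004]].

```
def shadd(ra, rb, sm):
    """Shift-and-add: returns RA + (RB << sm).  For the related
    LD/ST-Indexed-Shifted forms the 2-bit field selects x2, x4, x8
    or x16; exact operand details are in [[ls004]]."""
    return ra + (rb << sm)

# indexing element i of an array of 8-byte doublewords:
#   addr = shadd(base, i, 3)    # one instruction instead of sldi + add
```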
389
390 Being a 10-bit XO it would be somewhat punitive to place these in EXT2xx
391 when their whole purpose and value is to reduce binary size in Address
392 offset computation, thus they are best placed in EXT0xx.
393
394 Also included because it is important to see the quantity of instructions:
395 LD/ST-Indexed-Shifted. Across Update variants, Byte-reverse variants,
396 Arithmetic and FP, the total is a slightly-eye-watering **37** instructions,
397 only ameliorated by the fact that they are all 9-bit XO. The upside as
398 far as adding them is concerned is that existing hardware will already
399 have amalgamated pipelines with very few actual back-end (Micro-Coded)
400 internal operations (likely just two: one load, one store).
401 Passing a 2-bit additional immediate field down to those pipelines really
402 is not hard.
403
404 *(Readers unfamiliar with Micro-coding should look at the Microwatt VHDL
405 source code)*
406
407 \newpage{}
408
409 # Vectorisation: SVP64 and SVP64Single
410
411 To be submitted as part of [[ls001]], [[ls008]], [[ls009]] and [[ls010]],
412 with SVP64Single to follow in a subsequent RFC, SVP64 is conceptually
identical to the decades-old x86 `REP` prefix and the Zilog Z80
414 `CPIR` and `LDIR` instructions. Parallelism is best achieved by exploiting
415 a Multi-Issue Out-of-Order Micro-architecture. It is extremely important
416 to bear in mind that at no time does SVP64 add even one single actual
417 Vector instruction. It is a *pure* RISC-paradigm Prefixing concept only.
418
419 This has some implications which need unpacking. Firstly: in the future,
the Prefixing may be applied to VSX. The only reason it was not included
in the initial proposal of SVP64 is that, due to the sheer number of VSX
instructions, the Due Diligence required is obviously five times higher
than the 3+ years of work done so far on the SFFS Subset.
424
425 Secondly: **any** Scalar instruction involving registers **automatically**
426 becomes a candidate for Vector-Prefixing. This in turn means that when
427 a new instruction is proposed, it becomes a hard requirement to consider
428 not only the implications of its inclusion as a Scalar-only instruction,
429 but how it will best be utilised as a Vectorised instruction **as well**.
430 Extreme examples of this are the Big-Integer 3-in 2-out instructions that
431 use one 64-bit register effectively as a Carry-in and Carry-out. The
432 instructions were designed in a *Scalar* context to be inline-efficient
433 in hardware (use of Operand-Forwarding to reduce the chain down to 2-in 1-out),
434 but in a *Vector* context it is extremely straightforward to Micro-code
435 an entire batch onto 128-bit SIMD pipelines, 256-bit SIMD pipelines, and
436 to perform a large internal Forward-Carry-Propagation on for example the
437 Vectorised-Multiply instruction.
438
439 Thirdly: as far as Opcode Allocation is concerned, SVP64 needs to be
440 considered as an independent stand-alone instruction (just like `REP`).
441 In other words, the Suffix **never** gets decoded as a completely different
442 instruction just because of the Prefix. The cost of doing so is simply
443 too high in hardware.
444
445 --------
446
447 # Guidance for evaluation
448
449 Deciding which instructions go into an ISA is extremely complex, costly,
450 and a huge responsibility. In public standards mistakes are irrevocable,
451 and in the case of an ISA the Opcode Allocation is a finite resource,
452 meaning that mistakes punish future instructions as well. This section
453 therefore provides some Evaluation Guidance on the decision process,
454 particularly for people new to ISA development, given that this RFC
is circulated widely and publicly. Constructive feedback from experienced
ISA Architects is welcomed to improve this section.
457
458 **Does anyone want it?**
459
460 Sounds like an obvious question but if there is no driving need (no
461 "Stakeholder") then why is the instruction being proposed? If it is
462 purely out of curiosity or part of a Research effort not intended for
463 production then it's probably best left in the EXT022 Sandbox.
464
465 **How many registers does it need?**
466
467 The basic RISC Paradigm is not only to make instruction encoding simple
468 (often "wasting" encoding space compared to highly-compacted ISAs such
469 as x86), but also to keep the number of registers used down to a minimum.
470
471 Counter-examples are FMAC which had to be added to IEEE754 because the
472 *internal* product requires more accuracy than can fit into a register
473 (it is well-known that FMUL followed by FADD performs an additional
474 rounding on the intermediate register which loses accuracy compared to
475 FMAC). Another would be a dot-product instruction, which again requires
476 an accumulator of at least double the width of the two vector inputs.
477 And in the AMDGPU ISA, there are Texture-mapping instructions taking up
478 to an astounding *twelve* input operands!
479
480 The downside of going too far however has to be a trade-off with the
481 next question. Both MIPS and RISC-V lack Condition Codes, which means
482 that emulating x86 Branch-Conditional requires *ten* MIPS instructions.
483
484 The downside of creating too complex instructions is that the Dependency
485 Hazard Management in high-performance multi-issue out-of-order
486 microarchitectures becomes infeasibly large, and even simple in-order
487 systems may have performance severely compromised by an overabundance
of stalls. Also worth remembering is that register file ports are
insanely costly: not only to design, they also use considerable power.
490
That said there do exist genuine reasons why more registers are better
than fewer: Compare-and-Swap has huge benefits but is costly to implement,
493 and DCT/FFT Twin-Butterfly instructions allow creation of in-place
494 in-register algorithms reducing the number of registers needed and
495 thus saving power due to making the *overall* algorithm more efficient,
496 as opposed to micro-focussing on a localised power increase.
497
498 **How many register files does it use?**
499
500 Complex instructions pulling in data from multiple register files can
501 create unnecessary issues surrounding Dependency Hazard Management in
502 Out-of-Order systems. As a general rule it is better to keep complex
503 instructions reading and writing to the same register file, relying
504 on much simpler (1-in 1-out) instructions to transfer data between
505 register files.
506
**Can other existing instructions (plural) do the same job?**
508
The general rule being: if two or more existing instructions can do the
same job, leave the new one out... *unless* the number of occurrences of
511 that instruction being missing is causing huge increases in binary
512 size. RISC-V has gone too far in this regard, as explained here:
513 <https://news.ycombinator.com/item?id=24459314>
514
Good examples are LD-ST-Indexed-shifted (multiply RB by 2, 4, 8 or 16)
516 which are high-priority instructions in x86 and ARM, but lacking in
517 Power ISA, MIPS, and RISC-V. With many critical hot-loops in Computer
518 Science having to perform shift and add as explicit instructions,
519 adding LD/ST-shifted should be considered high priority, except that
520 the sheer *number* of such instructions needing to be added takes us
into the next question.
522
523 **How costly is the encoding?**
524
525 This can either be a single instruction that is costly (several operands
526 or a few long ones) or it could be a group of simpler ones that purely
due to their number increases overall encoding cost. An example of an
extremely costly instruction would be one with its own Primary Opcode:
529 addi is a good candidate. However the sheer overwhelming number of
530 times that instruction is used easily makes a case for its inclusion.
531
532 Mentioned above was Load-Store-Indexed-Shifted, which only needs 2
533 bits to specify how much to shift: x2 x4 x8 or x16. And they are all
534 a 10-bit XO Field, so not that costly for any one given instruction.
535 Unfortunately there are *around 30* Load-Store-Indexed Instructions in the
536 Power ISA, which means an extra *five* bits taken up of precious XO space.
Then let us not forget the two bits needed for the Shift amount. Now the
entire group occupies the equivalent of a *three*-bit XO.
539
540 Is this a worthwhile tradeoff? Honestly it could well be. And that's
541 the decision process that the OpenPOWER ISA Working Group could use some
542 assistance on, to make the evaluation easier.
543
544 **How many gates does it need?**
545
`grevlut` comes in at an astonishing 20,000 gates, where for comparison
an FP64 Multiply typically takes between 12,000 and 15,000. Not counting
548 the cost in hardware terms is just asking for trouble.
549
550 **How long will it take to complete?**
551
552 In the case of divide or Transcendentals the algorithms needed are so
553 complex that simple implementations can often take an astounding 128
554 clock cycles to complete. Other instructions waiting for the results
555 will back up and eventually stall, where in-order systems pretty much
556 just stall straight away.
557
558 Less extreme examples include instructions that take only a few cycles
559 to complete, but if used in tight loops with Conditional Branches, an
560 Out-of-Order system with Speculative capability may need significantly
561 more Reservation Stations to hold in-flight data for instructions which
562 take longer than those which do not.
563
564 **Can one instruction do the job of many?**
565
566 Large numbers of disparate instructions adversely affects resource
567 utilisation in In-Order systems. However it is not always that simple:
568 every one of the Power ISA "add" and "subtract" instructions, as shown by
569 the Microwatt source code, may be micro-coded as one single instruction
570 where RA may optionally be inverted, output likewise, and Carry-In set to
571 1, 0 or XER.CA. From these options the *entire* suite of add/subtract
may be synthesised (subtract is performed by inverting RA and adding an
extra 1, which produces the 2s-complement of RA).
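
A sketch of that single micro-coded adder is shown below, with hypothetical
parameter names; the authoritative description is the Microwatt VHDL itself.

```
XER_CA = 0   # stand-in for the carry flag state

def addsub_core(ra, rb, invert_ra=False, carry_sel="0", invert_out=False):
    """One 64-bit adder covers the whole add/subtract family:
    optionally invert RA, choose carry-in from {0, 1, XER.CA},
    optionally invert the result.  subf = ~RA + RB + 1."""
    mask = (1 << 64) - 1
    a = (~ra if invert_ra else ra) & mask
    cin = {"0": 0, "1": 1, "CA": XER_CA}[carry_sel]
    total = a + (rb & mask) + cin
    result = total & mask
    if invert_out:
        result = ~result & mask
    return result, total >> 64          # (result, carry-out)

# add  RT,RA,RB  -> addsub_core(ra, rb)
# subf RT,RA,RB  -> addsub_core(ra, rb, invert_ra=True, carry_sel="1")
```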
574
575 `bmask` for example is to be proposed as a single instruction with
576 a 5-bit "Mode" operand, greatly simplifying some micro-architectural
577 implementations. Likewise the FP-INT conversion instructions are grouped
578 as a set of four, instead of over 30 separate instructions. Aside from
579 anything this strategy makes the ISA Working Group's evaluation task
580 easier, as well as reducing the work of writing a Compliance Test Suite.
581
582 **Summary**
583
There are many tradeoffs here and it is a huge list of considerations:
if any others are known about, please do submit feedback so they may be
included here. Then the evaluation process may take place: again,
constructive feedback as to which instructions are a priority is also
appreciated.
588 The above helps explain the columns in the tables that follow.
589
590 # Tables
591
The original tables are available publicly as a CSV file at
593 <https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/rfc/ls012/optable.csv;hb=HEAD>.
594 A python program auto-generates the tables in the following sections
595 by sorting into different useful priorities.
596
597 The key to headings and sections are as follows:
598
599 * **Area** - Target Area as described in above sections
* **XO Cost** - the number of bits required in the XO Field. Whilst not
  the full picture, it is a good indicator as to how costly in terms
  of Opcode Allocation a given instruction will be. A lower number is
  a higher cost for the Power ISA's precious remaining Opcode space.
  "PO" indicates that an entire Primary Opcode is required.
* **rfc** - the Libre-SOC External RFC resource,
606 <https://libre-soc.org/openpower/sv/rfc/> where advance notice of
607 upcoming RFCs in development may be found.
608 *Reading advance Draft RFCs and providing feedback strongly advised*,
609 it saves time and effort for the OPF ISA Workgroup.
* **SVP64** - Vectoriseable (SVP64-Prefixable) - this also implies that
  SVP64Single is permitted (in fact required).
612 * **page** - Libre-SOC wiki page at which further information can
613 be found. Again: **advance reading strongly advised due to the
614 sheer volume of information**.
615 * **PO1** - the instruction is capable of being PO1-Prefixed
  (given an EXT1xx Opcode Allocation). Bear in mind that this option
  is **mutually exclusive** with Vectorisation: the two are incompatible.
618 * **group** - the Primary Opcode Group recommended for this instruction.
  Options are EXT0xx (EXT000-EXT063), EXT1xx and EXT2xx. A third
  (UnVectoriseable) area, EXT3xx, was available in an early Draft RFC
  but has been made "RESERVED" instead. See [[sv/po9_encoding]].
* **regs** - a guide to register usage, indicating how costly Hazard
  Management will be in hardware:
625
626 ```
627 - 1R: reads one GPR/FPR/SPR/CR.
628 - 1W: writes one GPR/FPR/SPR/CR.
629 - 1r: reads one CR *Field* (not necessarily the entire CR)
630 - 1w: writes one CR *Field* (not necessarily the entire CR)
631 ```
632
633 [[!inline pages="openpower/sv/rfc/ls012/areas.mdwn" raw=yes ]]
634 [[!inline pages="openpower/sv/rfc/ls012/xo_cost.mdwn" raw=yes ]]
635
636 [[!tag opf_rfc]]