# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]
## Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given SPR.

See [[sv/compliancy_levels]] for full details.

## XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO: jacob add about OV
<https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorized
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry Out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

```
# assume VL=4, this results in 4 sequential ops (below)
sv.adde r0.v, r4.v, r8.v

# instructions that get executed in backend hardware:
adde r0, r4, r8 # takes carry-in, produces carry-out
adde r1, r5, r9 # takes carry from previous
...
adde r3, r7, r11 # likewise
```

It can clearly be seen that the carry chains from one
64-bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
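
The chained behaviour can be modelled in a few lines of Python. This is
an illustrative sketch only (the function name `sv_adde` is invented
here for demonstration purposes), not the normative pseudocode:

```
# sketch: a Vectorized adde behaves as one big-integer add.
# a, b are lists of VL 64-bit limbs; ca is the single carry-in bit.
def sv_adde(a, b, ca):
    result = []
    for a_el, b_el in zip(a, b):        # the hardware for-loop, Program Order
        s = a_el + b_el + ca
        result.append(s & (2**64 - 1))  # 64-bit limb of the wide sum
        ca = s >> 64                    # carry-out feeds the next element
    return result, ca                   # ca is the "257th bit" when VL=4

# 256-bit add: four limbs, least-significant limb first
limbs, carry = sv_adde([2**64 - 1] * 4, [1, 0, 0, 0], 0)
assert limbs == [0, 0, 0, 0] and carry == 1
```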

## EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix (MLSS, for
example). For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
by reading the markdown-formatted version of the Scalar pseudocode which
is machine-readable and found in [[openpower/isatables]]. The analysis
gives, by instruction, a "Register Profile". `add RT, RA, RB` for
example is given a designation `RM-2R-1W` because it requires two GPR
reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3
registers) and if less than or equal to three then that instruction
could be given an EXTRA3 designation. Four or more is given an EXTRA2
designation because there are only 9 bits available (a sketch of this
rule appears after this list).
* Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`extw` and LD/ST);
however it was found that some 2- or 3-operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
in Twin Predication, some compromises were made, here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
could be decided such that RA would be indexed 0 (EXTRA bits 0-2), RB
indexed 1 (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some
cases (LD/ST with update) RA-as-a-source is given a **different** EXTRA
index from RA-as-a-result (because it is possible to do, and perceived
to be useful). Rc=1 co-results (CR0, CR1) are always given the same
EXTRA index as their main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis were
output in CSV format for use in machine-readable form by sv_analysis.py
<https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
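
The second step amounts to a simple counting rule. A minimal sketch,
assuming a profile expressed as read/write counts (illustrative only:
the canonical logic lives in sv_analysis.py):

```
# 9 EXTRA bits allow at most three 3-bit (EXTRA3) register qualifiers;
# four or more qualified registers must use 2-bit (EXTRA2) qualifiers.
def extra_designation(reads, writes):
    return "EXTRA3" if reads + writes <= 3 else "EXTRA2"

assert extra_designation(reads=2, writes=1) == "EXTRA3"  # add RT, RA, RB
assert extra_designation(reads=3, writes=1) == "EXTRA2"  # e.g. madd-style
```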

This process was laborious but logical, and, crucially, once a decision
is made (and ratified) it cannot be reversed. Qualifying future Power ISA
Scalar instructions for SVP64 is **strongly** advised to utilise this
same process and the same sv_analysis.py program as a canonical method
of maintaining the relationships. Alterations to that same program
which change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would be similar to deciding that
`add` should be changed from X-Form to D-Form.

## Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| 3 | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | 3 | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz=0.
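
All three schedules can be reproduced with a small model of the stepping
rule. This is an illustrative sketch, not the normative SVSTATE
pseudocode: a side with zeroing enabled visits every element (writing
zero where masked), while a side without zeroing skips masked-out
elements:

```
# sketch: yields (srcstep, dststep) pairs for single predication.
def schedule(VL, mask, sz, dz):
    def steps(z):
        # z=1 (zeroing): visit all; z=0: skip elements whose mask bit is 0
        return [i for i in range(VL) if z or (mask >> i) & 1]
    return list(zip(steps(sz), steps(dz)))

# Example 2 above: sz=0, dz=1, mask=0b1101
assert schedule(4, 0b1101, sz=0, dz=1) == [(0, 0), (2, 1), (3, 2)]
```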

## Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
those that take RA as both a src and dest. These are not
1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction, but abstracted out from Load/Store and
applicable in general to any 2P instruction (a sketch follows below).
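
A minimal sketch of twin-predicated `mv` element stepping (illustrative
only, ignoring zeroing and elwidth): srcstep advances to the next set
bit of the source predicate, dststep to the next set bit of the
destination predicate, giving compress/expand behaviour for free:

```
# sketch: twin-predicated mv, no zeroing. src/dst predicate bits are
# consumed independently, pairing the n-th active source element with
# the n-th active destination element.
def twin_mv(vec, VL, srcmask, dstmask):
    result = dict(enumerate(vec))  # untouched elements keep their value
    srcsteps = [i for i in range(VL) if (srcmask >> i) & 1]
    dststeps = [i for i in range(VL) if (dstmask >> i) & 1]
    for s, d in zip(srcsteps, dststeps):
        result[d] = vec[s]
    return [result[i] for i in range(VL)]

# VCOMPRESS-like: gather elements 0, 2, 3 into positions 0, 1, 2
assert twin_mv([9, 8, 7, 6], 4, 0b1101, 0b0111) == [9, 7, 6, 6]
```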

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86 `REP`
instruction (if put on steroids).

## Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering. Two bits in the `SVSHAPE` [[sv/spr]] enable either "packing"
or "unpacking" on the subvectors vec2/3/4.

First, illustrating a "normal" SVP64 operation with `SUBVL!=1` (assuming
no elwidth overrides), note that the VL loop is outer and the SUBVL
loop inner:

```
def index():
    for i in range(VL):
        for j in range(SUBVL):
            yield i*SUBVL+j

for idx in index():
    operation_on(RA+idx)
```

For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:

```
# yield a SUBVL-outer (VL-inner) or VL-outer (SUBVL-inner) traversal
# of the element layout (element (i,j) is at offset i*SUBVL+j)
def index_p(outer):
    if outer:
        for j in range(SUBVL): # subvl is outer
            for i in range(VL): # vl is inner
                yield i*SUBVL+j
    else:
        for i in range(VL): # vl is outer
            for j in range(SUBVL): # subvl is inner
                yield i*SUBVL+j

# walk through both source and dest indices simultaneously
for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
    move_operation(RT+dst_idx, RA+src_idx)
```

"yield" from Python is used here for simplicity and clarity.
The two Finite State Machines for the generation of the source
and destination element offsets progress incrementally in
lock-step.

Example: VL=2, SUBVL=3, PACK_en=1 - elements grouped by
vec3 will be redistributed such that Sub-elements 0 are
packed together, Sub-elements 1 are packed together, as
are Sub-elements 2:

```
srcstep=0    srcstep=1
0  1  2      3  4  5

dststep=0    dststep=1    dststep=2
0  3         1  4         2  5
```
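
The mapping above can be checked directly with a standalone sketch of
the `index_p` generator (VL and SUBVL passed explicitly here, purely
for illustration):

```
def index_p(outer, VL=2, SUBVL=3):
    # element (i,j) of a vec{SUBVL} Vector lives at offset i*SUBVL+j
    if outer:
        for j in range(SUBVL):
            for i in range(VL):
                yield i*SUBVL + j
    else:
        for i in range(VL):
            for j in range(SUBVL):
                yield i*SUBVL + j

# PACK=1, UNPACK=0: destination offsets 0..5 receive sources 0,3,1,4,2,5
assert list(zip(index_p(True), index_p(False))) == \
       [(0, 0), (3, 1), (1, 2), (4, 3), (2, 4), (5, 5)]
```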

Setting of both `PACK` and `UNPACK` is neither prohibited nor `UNDEFINED`
because the reordering is fully deterministic, and additional REMAP
reordering may be applied. Combined with Matrix REMAP this would give
potentially up to 4 Dimensions of reordering.

Pack/Unpack has quirky interactions on [[sv/mv.swizzle]] because it can
set a different subvector length for the destination, and has a slightly
different pseudocode algorithm for Vertical-First Mode.

Ordering is as follows:

* SVSHAPE srcstep, dststep, ssubstep and dsubstep are advanced sequentially
depending on PACK/UNPACK.
* srcstep and dststep are pushed through REMAP to compute actual Element
offsets.
* Swizzle is independently applied to ssubstep and dsubstep.

Pack/Unpack is enabled (set up) through [[sv/svstep]].

## Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer.
A normal Vector ISA would have explicit Reduce opcodes with defined
characteristics per operation: in SX-Aurora there is even an additional
scalar argument containing the initial reduction value, and the default
is either 0 or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to utilise *existing* Scalar Power ISA v3.0B
operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to the fundamental
principle that SV is nothing more than a Sub-Program-Counter sitting
between Decode and Issue phases.

For Scalar Reduction, Microarchitectures *may* take opportunities to
parallelise the reduction but only if in doing so they preserve strict
Program Order at the Element Level. Opportunities where this is possible
include an `OR` operation or a MIN/MAX operation: it may be possible to
parallelise the reduction, but for Floating Point it is not permitted
due to different results being obtained if the reduction is not executed
in strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

### Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar *and*
the same register is used as a source register. Thus it is
up to the programmer to be aware of this, observe some conventions,
and thus end up achieving the desired outcome of scalar reduction.

It is also important to appreciate that there is no actual imposition or
restriction on how this mode is utilised: there will therefore be several
valuable uses (including Vector Iteration and "Reverse-Gear") and it is
up to the programmer to make best use of the (strictly deterministic)
capability provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
also the scalar destination (which may be informally termed by
convention the "accumulator")
* That the source register type is the same as the destination register
type identified as the "accumulator". Scalar reduction on `cmp`,
`setb` or `isel` makes no sense for example because of the mixture
between CRs and GPRs.

*Note that instructions issued in Scalar reduce mode, such as `setb`,
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance. Scalar reduce is strictly defined behaviour,
and the cost in hardware terms of prohibition of seemingly non-sensical
operations is too great. Therefore it is permitted and required to
be executed successfully. Implementors **MAY** choose to optimise
such instructions in instances where their use results in "extraneous
execution", i.e. where it is clear that the sequence of operations,
comprising multiple overwrites to a scalar destination **without**
cumulative, iterative, or reductive behaviour (no "accumulator"), may
discard all but the last element operation. Identification of such
is trivial to do for `setb` and `cmp`: the source register type is a
completely different register file from the destination. Likewise Scalar
reduction when the destination is a Vector is as if the Reduction Mode
was not requested. However it would clearly be unacceptable to perform
such optimisations on cache-inhibited LD/ST, so some considerable care
needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

```
# add RT, RA, RB but when RT==RA
for i in range(VL):
    iregs[RA] += iregs[RB+i] # RT==RA
```

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily **terminates** at the first scalar operation. Only by
marking the operation as "mapreduce" will it continue to issue multiple
sub-looped (element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit
must be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse. `sv.add/rg
r3.v, r4.v, r4.v` for example will start at the opposite end of the
Vector and push a cumulative series of overlapping add operations into
the Execution units of the underlying hardware.
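
As a sketch (reusing the `iregs` notation from the pseudocode above;
register numbers are purely illustrative), mapreduce keeps the element
loop running with a scalar destination, and RG simply reverses the
traversal:

```
# sv.add/mr r3, r10.v, r3: accumulate elements r10..r10+VL-1 into r3
for i in range(VL):
    iregs[3] = iregs[10 + i] + iregs[3]

# with /rg (reverse gear) the same schedule runs from VL-1 down to 0
for i in reversed(range(VL)):
    iregs[3] = iregs[10 + i] + iregs[3]
```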

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]],
bmset), as a way to construct a value quickly from multiple arbitrary
bit-ranges and bit-offsets. Using the same register as both the source
and destination, with Vectors of different offsets, masks and values to
be inserted, has multiple applications including Video, cryptography and
JIT compilation.

```
# assume VL=4:
# * Vector of shift-offsets contained in RC (r12.v)
# * Vector of masks contained in RB (r8.v)
# * Vector of values to be masked-in in RA (r4.v)
# * Scalar destination RT (r0) to receive all mask-offset values
sv.bmset/mr r0, r4.v, r8.v, r12.v
```

Due to the Deterministic Scheduling, Subtract and Divide are still
permitted to be executed in this mode, although from an algorithmic
perspective it is strongly discouraged. It would be better to use
addition followed by one final subtract, or in the case of divide, to get
better accuracy, to perform a multiply cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used
as both a Vector source and Scalar destination, to be utilised as the
"accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc. this
naturally fits well with the normal expected usage of these operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue parallel
optimisation of the scalar reduce operation: it's just that as far as
the user is concerned, all exceptions and interrupts **MUST** be precise.

## Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]), the other for arithmetic operations (actually,
CR-driven) [[sv/normal]] and CR operations [[sv/cr_ops]]. Note in
each case the assumption is that vector elements are required to appear
to be executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst (not to be confused with *Data-Dependent* LD/ST ffirst)
treats the first LD/ST in a vector (element 0) as an ordinary one.
Exceptions occur "as normal" on the first element. However for elements
1 and above, if an exception would occur, then VL is **truncated**
to the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
CR-creating operation produces a result (including cmp). Similar to
branch, an analysis of the CR is performed and if the test fails,
the vector operation terminates and discards all element operations
above the current one (and the current one if VLi is not set), and
VL is truncated to either the *previous* element or the current one,
depending on whether VLi (VL "inclusive") is set.
Thus the new VL comprises a contiguous vector of results, all of which
pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not
found in ARM SVE or RVV. At the same time it is also
"old" because it is a generalisation of the Z80 [Block
compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir) which is
based on CP (compare) as the ultimate "element" (suffix) operation
to which the repeat (prefix) is applied. It is extremely useful for
reducing instruction count; however it requires speculative execution
involving modifications of VL to get high performance implementations.
An additional mode (RC1=1) effectively turns what would otherwise be an
arithmetic operation into a type of `cmp`. The CR is stored (and the
CR.eq bit tested against the `inv` field). If the CR.eq bit is equal to
`inv` then the Vector is truncated and the loop ends. Note that when
RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always also
included in the count (the new length that VL will be set to). This may
be useful in combination with "inv" to truncate the Vector to *exclude*
elements that fail a test, or, in the case of implementations of strncpy,
to include the terminating zero.
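
A sketch of the truncation rule (illustrative only; `test_cr_bit` is a
stand-in for the BO-style single-bit test described below):

```
# CR-driven fail-first: returns the new VL.
def ffirst_vl(VL, crs, inv, VLi):
    for i in range(VL):
        if test_cr_bit(crs[i]) == inv:   # test failed at element i
            return i + 1 if VLi else i   # VLi includes the current element
    return VL                            # no failure: VL unchanged

# note: a failure on the very first element with VLi=0 gives VL=0;
# all subsequent vectorized operations then become nops (see below)
```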

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorized crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
to zero. This is the only means in the entirety of SV that VL may be set
to zero (with the exception of via the SVSTATE SPR). When VL is set to
zero due to the first element failing the CR bit-test, all subsequent
vectorized operations are effectively `nops` which is
*precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent fail-first on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be truncated
based explicitly on whether a test fails. This is because it is a precise
test on which algorithms will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First. REMAP
will need to be activated to invert the ordering of element traversal.*

### Data-dependent fail-first on CR operations (crand etc.)

Operations that actually produce or alter a CR Field as a result do not
also in turn have an Rc=1 mode. However it makes no sense to try to test
the 4 bits of a CR Field for being equal or not equal to zero. Moreover,
the result is already in the form that is desired: it is a CR Field.
Therefore, CR-based operations have their own SVP64 Mode, described in
[[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

## CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration the
exact same mapping used for INT and FP regfiles may be applied, just to
the upper bits, as explained below. Firstly and most importantly a new
notation `CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]` which
accesses one bit of the 32-bit Power ISA v3.0B Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

```
CR{n} = CR[32+n*4:35+n*4]
```

For SVP64 the relationship for the sequential numbering of elements is to
the CR **fields** within the CR Register, not to individual bits within
the CR register.
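
In LSB0 terms this is straightforward to express. A sketch (illustrative
Python, not normative pseudocode), where `cr` is the 32-bit Condition
Register value:

```
# CR{n} = CR[32+n*4:35+n*4] in MSB0 notation means field 0 is the
# most-significant nibble of the 32-bit CR, so in LSB0 terms:
def CR_field(cr, n):
    return (cr >> (28 - 4*n)) & 0xF
```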

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits *in*
that CR (LT/GT/EQ/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

```
CR_index = (BA>>2)      # top 3 bits
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which SV EXTRA2/3
applies, **not** the `CR_bit` portion (bits 3-4):

```
if extra3_mode:
    spec = EXTRA3
else:
    spec = EXTRA2<<1 | 0b0
if spec[0]:
    # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
    return ((BA >> 2)<<6) | # hi 3 bits shifted up
           (spec[1:2]<<4) | # to make room for these
           (BA & 0b11)      # CR_bit on the end
else:
    # scalar constructs "00 spec[1:2] BA[0:4]"
    return (spec[1:2] << 5) | BA
```

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified to as follows:

```
CR_index = (BA>>2) # top 3 bits
if spec[0]:
    # vector mode, 0-124 in increments of 4
    CR_index = (CR_index<<4) | (spec[1:2] << 2)
else:
    # scalar mode, 0-31 in increments of 1
    CR_index = (spec[1:2]<<3) | CR_index
# same as for v3.0/v3.1 from this point onwards
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

Note here that the decoding pattern to determine CR\_bit does not change.

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

### CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorized or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorized if the destination is Vectorized. Likewise if the destination
is scalar then so is the CR.

When vectorized, the CR inputs/outputs are sequentially read/written
to 4-bit CR Fields. Vectorized Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
means that CRs may be read or written in aligned batches of 32 bits
(8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
(except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorized. Unlike in SIMD VSX
which has a single CR (CR6) for a given SIMD result, SV Vectorized
OpenPOWER v3.0B scalar operations produce a **tuple** of element results:
the result of the operation as one part of that element *and a
corresponding CR element*. Greatly simplified pseudocode:

```
for i in range(VL):
    # calculate the vector result of an add
    iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
    # now calculate CR bits
    CRs{8+i}.eq = iregs[RT+i] == 0
    CRs{8+i}.gt = iregs[RT+i] > 0
    ... etc
```

If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorized
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations (see [[sv/cr_int_predication]]).

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(See [[discussion]]; some alternative schemes are described there.)

### Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND behaviour.
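
A sketch of that suggestion (illustrative only: which OE value selects
OR versus AND is an assumption here, and `subtests` stands in for the
per-sub-element Rc=1 test results):

```
# combine per-sub-element tests into the single CR allocated to the
# subvector: OE=0 selects OR, OE=1 selects AND (assumption)
def subvector_test(subtests, OE):
    combined = subtests[0]
    for t in subtests[1:]:
        combined = (combined & t) if OE else (combined | t)
    return combined
```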

#### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorized
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

## Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

## SV pseudocode illustration

### Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

```
function op_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    STATE.srcoffs = i # save context
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
       if (!int_vec[rd].isvec) break;
    if (rd.isvec)  { id += 1; }
    if (rs1.isvec) { irs1 += 1; }
    if (rs2.isvec) { irs2 += 1; }
    if (id == VL or irs1 == VL or irs2 == VL) {
      # end VL hardware loop
      STATE.srcoffs = 0; # reset
      return;
    }
```

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear
that each element of the Vector source should be added to the Scalar
source, each result placed into the Vector (or, if the destination is a
scalar, only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

## Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

```
svp64 [field=value]*
```

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "REX" prefix.

For the actual assembler:

```
sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
```

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

## Parallel-reduction algorithm

The principle of SVP64 is that SVP64 is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD):

```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
(ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note: if passed a predicate mask with only one bit set,
this algorithm takes no action, similar to when a predicate mask is
all zero.*

*Implementor's note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then limit
the Reduction Sequence for all implementors based solely and exclusively
on what one specific internal microarchitecture does. In SIMD ISAs
the internal SIMD Architectural design is exposed and imposed on the
programmer. Cray-style Vector ISAs on the other hand provide convenient,
compact and efficient encodings of abstract concepts.* **It is the
Implementor's responsibility to produce a design that complies with the
above algorithm, utilising internal Micro-coding and other techniques to
transparently insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

## Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:

```
#pragma pack
typedef union {
    uint8_t b[];
    uint16_t s[];
    uint32_t i[];
    uint64_t l[];
    uint8_t actual_bytes[8];
} el_reg_t;

el_reg_t int_regfile[128];
```

Accessing (get and set) of registers given a value, register (in
`el_reg_t` form), and that all arithmetic, numbering and pseudo-Memory
format is LE-endian and LSB0-numbered below:

```
el_reg_t get_polymorphed_reg(el_reg_t const& reg, bitwidth, offset):
    el_reg_t res; // result
    res.l = 0; // TODO: going to need sign-extending / zero-extending
    if !reg.isvec: // scalar access has no element offset
        offset = 0
    if bitwidth == 8:
        res.b = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(el_reg_t& reg, bitwidth, offset, val):
    if (!reg.isvec):
        # for safety mask out hi bits
        bytemask = (1 << bitwidth) - 1
        val &= bytemask
        # not a vector: first element only, overwrites high bits.
        # and with the *Architectural* definition being LE,
        # storing in the first DWORD works perfectly.
        int_regfile[reg].l[0] = val
    elif bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val
```

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element-width overrides:

```
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
```

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

## Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update. Some mathematical
algorithms are more efficient when there are two outputs rather than one,
providing feedback loops between elements (the most well-known being add
with carry). 64-bit multiply for example actually internally produces
a 128-bit result, which clearly cannot be stored in a single 64-bit
register. Some ISAs recommend "macro op fusion": the practice of setting
a convention whereby if two commonly used instructions (mullo, mulhi) use
the same ALU but one selects the low part of an identical operation and
the other selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only be applied
to a single instruction at a time, and SVP64 is based on the principle of
strict Program Order even at the element level. Thus it becomes necessary
to add explicit more complex single instructions with more operands than
would normally be seen in the average RISC ISA (3-in, 2-out, in some
cases). If it was not for Power ISA already having LD/ST with update as
well as Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes being only
32 bit, 5 operands is quite an ask. `lq` however sets a precedent: `RTp`
stands for "RT pair". In other words the result is stored in RT and RT+1.
For Scalar operations, following this precedent is perfectly reasonable.
In Scalar mode, `maddedu` therefore stores the two halves of the 128-bit
multiply into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to RT and
RT+1 the instruction is not useful when Vectorized because the output
will be overwritten on the next element. To solve this is easy: define
the destination registers as RT and RT+MAXVL respectively. This makes
it easy for compilers to statically allocate registers even when VL
changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination is
best illustrated in pseudocode:

```
# demo of maddedu
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
    if (RC.isvec) { irs3 += 1; }
```

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

```
      LSB0:  63:32      31:0
      MSB0:  0:31       32:63
r0    unchanged    unchanged
r1    RT1.lo       RT0.lo
r2    unchanged    RT2.lo
r3    RT0.hi       unchanged
r4    RT2.hi       RT1.hi
r5    unchanged    unchanged
```

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being 5 and
elwidth being 32, this is the 5th element offset (in 32-bit quantities)
counting from r1.

*Programmer's note: accessing registers that have been placed starting
on a non-contiguous boundary (half-way along a scalar register) can
be inconvenient: REMAP can provide an offset but it requires extra
instructions to set up. A simple solution is to ensure that MAXVL is
rounded up such that the Vector ends cleanly on a contiguous register
boundary. MAXVL=6 in the above example would achieve that.*
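
A sketch of that rounding rule (illustrative only; elwidth in bits,
registers being 64-bit):

```
# round MAXVL up so each destination half ends on a 64-bit boundary
def round_maxvl(maxvl, elwidth):
    elements_per_reg = 64 // elwidth
    return ((maxvl + elements_per_reg - 1)
            // elements_per_reg) * elements_per_reg

assert round_maxvl(5, 32) == 6   # the example above
```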

Additional DRAFT Scalar instructions in 3-in 2-out form with an implicit
2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]

[[!tag standards]]

------

\newpage{}