1 # Appendix
2
3 * <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
4 * <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
5 * <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
6 * <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
7 * <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
8 * ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>
9
This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page to its primary purpose of outlining
the instruction format.
13
14 Table of contents:
15
16 [[!toc]]
17
18 ## Partial Implementations
19
20 It is perfectly legal to implement subsets of SVP64 as long as illegal
21 instruction traps are always raised on unimplemented features,
22 so that soft-emulation is possible,
23 even for future revisions of SVP64. With SVP64 being partly controlled
24 through contextual SPRs, a little care has to be taken.
25
**All** SPRs
not implemented, including reserved ones for future use, must raise an
illegal instruction trap when read or written. This allows software the
opportunity to emulate the context created by the given SPR.
30
31 See [[sv/compliancy_levels]] for full details.
32
33 ## XER, SO and other global flags
34
35 Vector systems are expected to be high performance. This is achieved
36 through parallelism, which requires that elements in the vector be
37 independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
38 Read-Write Hazards on single-bit global resources, having a significant
39 detrimental effect.
40
41 Consequently in SV, XER.SO behaviour is disregarded (including
42 in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
43 breaking the Read-Modify-Write Hazard Chain that complicates
44 microarchitectural implementations.
45 This includes when `scalar identity behaviour` occurs. If precise
46 OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
47 instructions should be used without an SV Prefix.
48
49 TODO jacob add about OV https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf
50
51 Of note here is that XER.SO and OV may already be disregarded in the
52 Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
53 SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
54 but only for SVP64 Prefixed Operations.
55
56 XER.CA/CA32 on the other hand is expected and required to be implemented
57 according to standard Power ISA Scalar behaviour. Interestingly, due
58 to SVP64 being in effect a hardware for-loop around Scalar instructions
59 executing in precise Program Order, a little thought shows that a Vectorized
60 Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
61 and producing, at the end, a single bit Carry out. High performance
62 implementations may exploit this observation to deploy efficient
63 Parallel Carry Lookahead.
64
```
# assume VL=4, this results in 4 sequential ops (below)
sv.adde r0.v, r4.v, r8.v

# instructions that get executed in backend hardware:
adde r0, r4, r8 # takes carry-in, produces carry-out
adde r1, r5, r9 # takes carry from previous
...
adde r3, r7, r11 # likewise
```
75
It can clearly be seen that the carry chains from one
64-bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
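
For example, a sketch in the same style as above (register numbers
illustrative, VL assumed to have already been set to 8):

```
# assume VL=8: a single 512-bit add-with-carry
# r8-r15 and r16-r23 hold the two 8x64-bit operands
sv.adde r0.v, r8.v, r16.v # CA holds the 513th bit afterwards
```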
83
84 ## EXTRA Field Mapping
85
86 The purpose of the 9-bit EXTRA field mapping is to mark individual
87 registers (RT, RA, BFA) as either scalar or vector, and to extend
88 their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
89 Three of the 9 bits may also be used up for a 2nd Predicate (Twin
90 Predication) leaving a mere 6 bits for qualifying registers. As can
91 be seen there is significant pressure on these (and in fact all) SVP64 bits.
92
In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix (MLSS for
example). For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]
101
102 * Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
103 from reading the markdown formatted version of the Scalar pseudocode which
104 is machine-readable and found in [[openpower/isatables]]. The analysis
105 gives, by instruction, a "Register Profile". `add RT, RA, RB` for
106 example is given a designation `RM-2R-1W` because it requires two GPR
107 reads and one GPR write.
108 * Secondly, the total number of registers was added up (2R-1W is 3
109 registers) and if less than or equal to three then that instruction
110 could be given an EXTRA3 designation. Four or more is given an EXTRA2
111 designation because there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`exts*` and LD/ST);
  however it was found that some 2- or 3-operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W with EXTRA3, for
  example, RA is indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5)
  and RT indexed 2 (EXTRA bits 6-8). In some
  cases (LD/ST with update) RA-as-a-source is given a **different** EXTRA
  index from RA-as-a-result (because it is possible to do, and perceived
  to be useful). Rc=1 co-results (CR0, CR1) are always given the same
  EXTRA index as their main result (RT, FRT).
126 * Fifthly, in an automated process the results of the analysis were
127 outputted in CSV Format for use in machine-readable form by sv_analysis.py
128 <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
129
130 This process was laborious but logical, and, crucially, once a decision
131 is made (and ratified) cannot be reversed. Qualifying future Power ISA
132 Scalar instructions for SVP64 is **strongly** advised to utilise this
133 same process and the same sv_analysis.py program as a canonical method
134 of maintaining the relationships. Alterations to that same program
135 which change the Designation is **prohibited** once finalised (ratified
136 through the Power ISA WG Process). It would be similar to deciding that
137 `add` should be changed from X-Form
138 to D-Form.
139
140 ## Single Predication <a name="1p"> </a>
141
This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.
145
146 In SVSTATE, for Single-predication, implementors MUST increment both
147 srcstep and dststep, but depending on whether sz and/or dz are set,
148 srcstep and dststep can still potentially become different indices.
149 Only when sz=dz is srcstep guaranteed to equal dststep at all times.
150
Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.
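
The schedules in the examples below may be modelled with a short
illustrative sketch (not the authoritative specification pseudocode):
with zeroing enabled the masked-out element is still stepped through
(and zeroed), whereas with zeroing disabled it is skipped entirely.

```
def stepper(VL, mask, zeroing):
    for step in range(VL):
        if zeroing or (mask >> step) & 1:
            yield step

# Example 1 below: VL=4, mask=0b1101, sz=1, dz=0
# src steps: 0 1 2 3 (zeroing: nothing is skipped)
# dst steps: 0 2 3   (non-zeroing: mask[1] is skipped)
schedule = zip(stepper(4, 0b1101, True), stepper(4, 0b1101, False))
```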
153
154 Example 1:
155
* VL=4
* mask=0b1101
* sz=1, dz=0
159
160 The following schedule for srcstep and dststep will occur:
161
| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| 3 | end | loop has ended because dst reached VL-1 |
168
169 Example 2:
170
* VL=4
* mask=0b1101
* sz=0, dz=1
174
175 The following schedule for srcstep and dststep will occur:
176
| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | 3 | loop has ended because src reached VL-1 |
183
184 In both these examples it is crucial to note that despite there being
185 a single predicate mask, with sz and dz being different, srcstep and
186 dststep are being requested to react differently.
187
188 Example 3:
189
* VL=4
* mask=0b1101
* sz=0, dz=0
193
194 The following schedule for srcstep and dststep will occur:
195
| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |
202
Here, both srcstep and dststep remain in lockstep because sz=dz=0.
204
205 ## Twin Predication <a name="2p"> </a>
206
This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:
211
212 * VSPLAT (a single scalar distributed across a vector)
213 * VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
214 * VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
215 * VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
216 * VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
217
218 Those patterns (and more) may be applied to:
219
* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest; they are 2-src, 1-dest)
225 * LD and ST (treating AGEN as one source)
226 * FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
227 * Condition Register ops mfcr, mtcr and other similar
228
This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
231
232 Additional unusual capabilities of Twin Predication include a back-to-back
233 version of VCOMPRESS-VEXPAND which is effectively the ability to do
234 sequentially ordered multiple VINSERTs. The source predicate selects a
235 sequentially ordered subset of elements to be inserted; the destination
236 predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`,
but in a single instruction, abstracted out from Load/Store, and applicable
in general to any 2P instruction.
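
As an illustrative sketch (not the authoritative specification
pseudocode), a twin-predicated move with both zeroing bits clear
(sz=dz=0) steps srcstep and dststep independently, skipping masked-out
elements on each side; `regs` simply stands for the register file:

```
def twin_pred_mv(VL, srcmask, dstmask, regs, RT, RA):
    srcstep = dststep = 0
    while srcstep < VL and dststep < VL:
        # skip masked-out source elements (sz=0)
        while srcstep < VL and not (srcmask >> srcstep) & 1:
            srcstep += 1
        # skip masked-out destination elements (dz=0)
        while dststep < VL and not (dstmask >> dststep) & 1:
            dststep += 1
        if srcstep == VL or dststep == VL:
            break
        regs[RT + dststep] = regs[RA + srcstep]
        srcstep += 1
        dststep += 1
```

With an all-ones source mask and a sparse destination mask this behaves
as VEXPAND; the converse behaves as VCOMPRESS. VSPLAT and VEXTRACT arise
from scalar/vector marking rather than from the masks, and are not shown.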
243
244 This extreme power and flexibility comes down to the fact that SVP64
245 is not actually a Vector ISA: it is a loop-abstraction-concept that
246 is applied *in general* to Scalar operations, just like the x86 `REP`
247 instruction (if put on steroids).
248
249 ## Pack/Unpack
250
251 The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
252 reordering. Two bits in the `SVSHAPE` [[sv/spr]] enable either "packing"
253 or "unpacking" on the subvectors vec2/3/4.
254
First, illustrating a "normal" SVP64 operation with `SUBVL!=1` (assuming
no elwidth overrides); note that the VL loop is outer and the SUBVL
loop inner:
258
```
def index():
    for i in range(VL):
        for j in range(SUBVL):
            yield i*SUBVL+j

for idx in index():
    operation_on(RA+idx)
```
268
269 For pack/unpack (again, no elwidth overrides), note that now there is the
270 option to swap the SUBVL and VL loop orders.
271 In effect the Pack/Unpack performs a Transpose of the subvector elements.
272 Illustrated this time with a GPR mv operation:
273
```
# yield indices with SUBVL as either the outer or the inner loop
def index_p(outer):
    if outer:
        for j in range(SUBVL): # subvl is outer
            for i in range(VL): # vl is inner
                yield i*SUBVL+j
    else:
        for i in range(VL): # vl is outer
            for j in range(SUBVL): # subvl is inner
                yield i*SUBVL+j

# walk through both source and dest indices simultaneously
for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
    move_operation(RT+dst_idx, RA+src_idx)
```
290
291 "yield" from python is used here for simplicity and clarity.
292 The two Finite State Machines for the generation of the source
293 and destination element offsets progress incrementally in
294 lock-step.
295
296 Example VL=2, SUBVL=3, PACK_en=1 - elements grouped by
297 vec3 will be redistributed such that Sub-elements 0 are
298 packed together, Sub-elements 1 are packed together, as
299 are Sub-elements 2.
300
```
srcstep=0 srcstep=1
0 1 2     3 4 5

dststep=0 dststep=1 dststep=2
0 3       1 4       2 5
```
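
As a cross-check, the `index_p` generator above reproduces this schedule
with VL=2, SUBVL=3: the source is read in transposed order while the
destination is written sequentially, so destination positions 0-5
receive source elements 0,3,1,4,2,5.

```
VL, SUBVL = 2, 3
src = list(index_p(True))  # [0, 3, 1, 4, 2, 5]
dst = list(index_p(False)) # [0, 1, 2, 3, 4, 5]
```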
308
309 Setting of both `PACK` and `UNPACK` is neither prohibited nor `UNDEFINED`
310 because the reordering is fully deterministic, and additional REMAP
311 reordering may be applied. Combined with Matrix REMAP this would give
312 potentially up to 4 Dimensions of reordering.
313
314 Pack/Unpack has quirky interactions on [[sv/mv.swizzle]] because it can
315 set a different subvector length for destination, and has a slightly
316 different pseudocode algorithm for Vertical-First Mode.
317
318 Ordering is as follows:
319
320 * SVSHAPE srcstep, dststep, ssubstep and dsubstep are advanced sequentially
321 depending on PACK/UNPACK.
322 * srcstep and dststep are pushed through REMAP to compute actual Element offsets.
323 * Swizzle is independently applied to ssubstep and dsubstep
324
325 Pack/Unpack is enabled (set up) through [[sv/svstep]].
326
327 ## Reduce modes
328
329 Reduction in SVP64 is deterministic and somewhat of a misnomer.
330 A normal Vector ISA would have explicit Reduce opcodes with defined
331 characteristics per operation: in SX Aurora there is even an additional
332 scalar argument containing the initial reduction value, and the default
333 is either 0 or 1 depending on the specifics of the explicit opcode.
334 SVP64 fundamentally has to utilise *existing* Scalar Power ISA v3.0B
335 operations, which presents some unique challenges.
336
337 The solution turns out to be to simply define reduction as permitting
338 deterministic element-based schedules to be issued using the base Scalar
339 operations, and to rely on the underlying microarchitecture to resolve
340 Register Hazards at the element level. This goes back to the fundamental
341 principle that SV is nothing more than a Sub-Program-Counter sitting
342 between Decode and Issue phases.
343
For Scalar Reduction, Microarchitectures *may* take opportunities to
parallelise the reduction, but only if in doing so they preserve strict
Program Order at the Element Level. Opportunities where this is possible
include an `OR` operation or a MIN/MAX operation, where re-ordering of
the elements cannot alter the result. For Floating Point operations
parallelisation is not permitted, because different results would be
obtained if the reduction were not executed in strict Program-Sequential
Order.
351
352 In essence it becomes the programmer's responsibility to leverage the
353 pre-determined schedules to desired effect.
354
355 ### Scalar result reduction and iteration
356
Scalar Reduction per se does not exist; instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on Vector
Looping, which would otherwise terminate when the destination is marked
as a Scalar. Scalar Reduction by contrast *keeps issuing Vector Element
Operations* even though the destination register is marked as scalar *and*
the same register is used as a source register. Thus it is
up to the programmer to be aware of this, observe some conventions,
and thus end up achieving the desired outcome of scalar reduction.
365
366 It is also important to appreciate that there is no actual imposition or
367 restriction on how this mode is utilised: there will therefore be several
368 valuable uses (including Vector Iteration and "Reverse-Gear") and it is
369 up to the programmer to make best use of the (strictly deterministic)
370 capability provided.
371
In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by programmer convention, to be the
"accumulator". Scalar reduction is thus categorised by:
375
376 * One of the sources is a Vector
377 * the destination is a scalar
378 * optionally but most usefully when one source scalar register is
379 also the scalar destination (which may be informally termed by
380 convention the "accumulator")
381 * That the source register type is the same as the destination register
382 type identified as the "accumulator". Scalar reduction on `cmp`,
383 `setb` or `isel` makes no sense for example because of the mixture
384 between CRs and GPRs.
385
386 *Note that issuing instructions in Scalar reduce mode such as `setb`
387 are neither `UNDEFINED` nor prohibited, despite them not making much
388 sense at first glance. Scalar reduce is strictly defined behaviour,
389 and the cost in hardware terms of prohibition of seemingly non-sensical
390 operations is too great. Therefore it is permitted and required to
391 be executed successfully. Implementors **MAY** choose to optimise
392 such instructions in instances where their use results in "extraneous
393 execution", i.e. where it is clear that the sequence of operations,
394 comprising multiple overwrites to a scalar destination **without**
395 cumulative, iterative, or reductive behaviour (no "accumulator"), may
396 discard all but the last element operation. Identification of such
397 is trivial to do for `setb` and `cmp`: the source register type is a
398 completely different register file from the destination. Likewise Scalar
399 reduction when the destination is a Vector is as if the Reduction Mode
400 was not requested. However it would clearly be unacceptable to perform
401 such optimisations on cache-inhibited LD/ST, so some considerable care
402 needs to be taken.*
403
404 Typical applications include simple operations such as `ADD r3, r10.v,
405 r3` where, clearly, r3 is being used to accumulate the addition of all
406 elements of the vector starting at r10.
407
```
# add RT, RA, RB but with RT==RA
for i in range(VL):
    iregs[RA] += iregs[RB+i] # RT==RA
```
413
414 However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
415 SV ordinarily **terminates** at the first scalar operation. Only by
416 marking the operation as "mapreduce" will it continue to issue multiple
417 sub-looped (element) instructions in `Program Order`.
418
To perform the loop in reverse order, the `RG` (reverse gear) bit
must be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse. `sv.add/rg
r3.v, r4.v, r4.v` for example will start at the opposite end of the
Vector and push a cumulative series of overlapping add operations into
the Execution units of the underlying hardware.
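
A minimal sketch of the Reverse-Gear ordering, in the same illustrative
style as the earlier mapreduce loop:

```
# sv.add/mrr: as mapreduce, but iterating from VL-1 down to 0
for i in reversed(range(VL)):
    iregs[RA] += iregs[RB+i] # RT==RA
```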
428
429 Other examples include shift-mask operations where a Vector of inserts
430 into a single destination register is required (see [[sv/bitmanip]],
431 bmset), as a way to construct a value quickly from multiple arbitrary
432 bit-ranges and bit-offsets. Using the same register as both the source
433 and destination, with Vectors of different offsets masks and values to
434 be inserted has multiple applications including Video, cryptography and
435 JIT compilation.
436
```
# assume VL=4:
# * Vector of shift-offsets contained in RC (r12.v)
# * Vector of masks contained in RB (r8.v)
# * Vector of values to be masked-in in RA (r4.v)
# * Scalar destination RT (r0) to receive all mask-offset values
sv.bmset/mr r0, r4.v, r8.v, r12.v
```
445
446 Due to the Deterministic Scheduling, Subtract and Divide are still
447 permitted to be executed in this mode, although from an algorithmic
448 perspective it is strongly discouraged. It would be better to use
449 addition followed by one final subtract, or in the case of divide, to get
450 better accuracy, to perform a multiply cascade followed by a final divide.
451
452 Note that single-operand or three-operand scalar-dest reduce is perfectly
453 well permitted: the programmer may still declare one register, used
454 as both a Vector source and Scalar destination, to be utilised as the
455 "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc this
456 naturally fits well with the normal expected usage of these operations.
457
If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.
465
466 Note that hardware is perfectly permitted to perform multi-issue parallel
467 optimisation of the scalar reduce operation: it's just that as far as
468 the user is concerned, all exceptions and interrupts **MUST** be precise.
469
470 ## Fail-on-first <a name="fail-first"> </a>
471
Data-dependent fail-on-first has two distinct variants: one for LD/ST (see
[[sv/ldst]]), the other for arithmetic operations (actually, CR-driven)
[[sv/normal]] and CR operations [[sv/cr_ops]]. Note in each case the
assumption is that vector elements are required to appear to be executed
in sequential Program Order, element 0 being the first.
477
478 * LD/ST ffirst (not to be confused with *Data-Dependent* LD/ST ffirst)
479 treats the first LD/ST in a vector (element 0) as an ordinary one.
480 Exceptions occur "as normal" on the first element. However for elements
481 1 and above, if an exception would occur, then VL is **truncated**
482 to the previous element.
483 * Data-driven (CR-driven) fail-on-first activates when Rc=1 or other
484 CR-creating operation produces a result (including cmp). Similar to
485 branch, an analysis of the CR is performed and if the test fails,
486 the vector operation terminates and discards all element operations
487 above the current one (and the current one if VLi is not set), and
488 VL is truncated to either the *previous* element or the current one,
489 depending on whether VLi (VL "inclusive") is set.
490
491 Thus the new VL comprises a contiguous vector of results, all of which
492 pass the testing criteria (equal to zero, less than zero).
493
494 The CR-based data-driven fail-on-first is new and not
495 found in ARM SVE or RVV. At the same time it is also
496 "old" because it is a generalisation of the Z80 [Block
497 compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
498 instructions, especially
499 [CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir) which is
500 based on CP (compare) as the ultimate "element" (suffix) operation
to which the repeat (prefix) is applied. It is extremely useful for
reducing instruction count; however, it requires speculative execution
involving modifications of VL to get high-performance implementations.
504 An additional mode (RC1=1) effectively turns what would otherwise be an
505 arithmetic operation into a type of `cmp`. The CR is stored (and the
506 CR.eq bit tested against the `inv` field). If the CR.eq bit is equal to
507 `inv` then the Vector is truncated and the loop ends. Note that when
508 RC1=1 the result elements are never stored, only the CRs.
509
510 VLi is only available as an option when `Rc=0` (or for instructions
511 which do not have Rc). When set, the current element is always also
512 included in the count (the new length that VL will be set to). This may
513 be useful in combination with "inv" to truncate the Vector to *exclude*
514 elements that fail a test, or, in the case of implementations of strncpy,
515 to include the terminating zero.
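
A simplified model of the CR-based truncation follows; `element_op`,
`cr_test` and `store_element` are hypothetical helpers standing in for
the real per-element machinery:

```
for i in range(VL):
    result = element_op(i)        # hypothetical per-element operation
    fail = cr_test(result) == inv # selected CR bit tested against inv
    if fail:
        VL = i + 1 if VLi else i  # truncate; VLi includes this element
        if VLi:
            store_element(i, result)
        break
    store_element(i, result)
```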
516
517 In CR-based data-driven fail-on-first there is only the option to select
518 and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorized crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.
522
523 One extremely important aspect of ffirst is:
524
* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV by which VL may be
  set to zero (the sole other exception being writes to the SVSTATE SPR).
  When VL is set to zero due to the first element failing the CR bit-test,
  all subsequent vectorized operations are effectively `nops`, which is
  *precisely the desired and intended behaviour*.
533
534 Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
535 to a nonzero value for any implementation-specific reason. For example:
536 it is perfectly reasonable for implementations to alter VL when ffirst
537 LD or ST operations are initiated on a nonaligned boundary, such that
538 within a loop the subsequent iteration of that loop begins subsequent
539 ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
540 workloads or balance resources.
541
CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be truncated
based explicitly on whether a test fails. This is because it is a precise
test on which algorithms will rely.
546
547 *Note: there is no reverse-direction for Data-dependent Fail-First. REMAP
548 will need to be activated to invert the ordering of element traversal.*
549
550 ### Data-dependent fail-first on CR operations (crand etc)
551
Operations that actually produce or alter a CR Field as a result do not
also in turn have an Rc=1 mode. However it makes no sense to try to test
the 4 bits of a CR Field for being equal or not equal to zero. Moreover,
the result is already in the form that is desired: it is a CR Field.
Therefore, CR-based operations have their own SVP64 Mode, described in
[[sv/cr_ops]].
558
559 There are two primary different types of CR operations:
560
561 * Those which have a 3-bit operand field (referring to a CR Field)
562 * Those which have a 5-bit operand (referring to a bit within the
563 whole 32-bit CR)
564
565 More details can be found in [[sv/cr_ops]].
566
567 ## CR Operations
568
569 CRs are slightly more involved than INT or FP registers due to the
570 possibility for indexing individual bits (crops BA/BB/BT). Again however
571 the access pattern needs to be understandable in relation to v3.0B / v3.1B
572 numbering, with a clear linear relationship and mapping existing when
573 SV is applied.
574
575 ### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>
576
577 Numbering relationships for CR fields are already complex due to being
578 in BE format (*the relationship is not clearly explained in the v3.0B
579 or v3.1 specification*). However with some care and consideration the
580 exact same mapping used for INT and FP regfiles may be applied, just to
581 the upper bits, as explained below. Firstly and most importantly a new
582 notation `CR{field number}` is used to indicate access to a particular
583 Condition Register Field (as opposed to the notation `CR[bit]` which
584 accesses one bit of the 32 bit Power ISA v3.0B Condition Register).
585
586 `CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is defined, in v3.0B pseudocode, as:
587
```
CR{n} = CR[32+n*4:35+n*4]
```
591
592 For SVP64 the relationship for the sequential numbering of elements is to
593 the CR **fields** within the CR Register, not to individual bits within
594 the CR register.
595
596 The `CR{n}` notation is designed to give *linear sequential
597 numbering* in the Vector domain on a straight sequential Vector Loop.
598
In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits (BF, which refers to a whole
CR Field, is 3 bits). The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits *in*
that CR (LT/GT/EQ/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:
603
```
CR_index = (BA>>2)      # top 3 bits
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```
611
612 When it comes to applying SV, it is the *CR Field* number `CR_reg`
613 to which SV EXTRA2/3
614 applies, **not** the `CR_bit` portion (bits 3-4):
615
```
if extra3_mode:
    is_vec = EXTRA3[0]
    extra = EXTRA3[1:2]
else:
    is_vec = EXTRA2[0]
    if is_vec:
        extra = EXTRA2[1] << 1
    else:
        extra = EXTRA2[1]
if is_vec:
    # vector constructs "BA[0:2] extra 00 BA[3:4]"
    return ((BA >> 2) << 6) | # hi 3 bits shifted up
           (extra << 4)     | # to make room for these
           (BA & 0b11)        # CR_bit on the end
else:
    # scalar constructs "00 extra BA[0:4]"
    return (extra << 5) | BA
```
635
Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:
638
```
CR_index = (BA>>2)      # top 3 bits
if is_vec:
    # vector mode, 0-124 in increments of 4
    CR_index = (CR_index << 4) | (extra << 2)
else:
    # scalar mode, 0-31 in increments of 1
    CR_index = (extra << 3) | CR_index
# same as for v3.0/v3.1 from this point onwards
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```
653
654 Note here that the decoding pattern to determine CR\_bit does not change.
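
A worked example (illustrative, following the pseudocode above): take
`BA=0b00110`, i.e. CR field 1, bit 2, with EXTRA3=0b101 (vector,
extra=0b01):

```
BA = 0b00110
CR_index = BA >> 2         # 1
bit_index = BA & 0b11      # 2
is_vec, extra = True, 0b01 # from EXTRA3 = 0b101
# vector mode:
CR_index = (CR_index << 4) | (extra << 2) # 16 + 4 = 20
# the operation therefore accesses bit 2 of CR{20}
```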
655
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.
660
661 ### CR fields as inputs/outputs of vector operations
662
CRs (or, the arithmetic operations associated with them)
may be marked as Vectorized or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorized if the destination is Vectorized. Likewise if the destination
is scalar then so is the CR.
665
666 When vectorized, the CR inputs/outputs are sequentially read/written
667 to 4-bit CR fields. Vectorized Integer results, when Rc=1, will begin
668 writing to CR8 (TBD evaluate) and increase sequentially from there.
669 This is so that:
670
671 * implementations may rely on the Vector CRs being aligned to 8. This
672 means that CRs may be read or written in aligned batches of 32 bits
673 (8 CRs per batch), for high performance implementations.
674 * scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
675 overwritten by vector Rc=1 operations except for very large VL
676 * CR-based predication, from CR32, is also not interfered with
677 (except by large VL).
678
679 However when the SV result (destination) is marked as a scalar by the
680 EXTRA field the *standard* v3.0B behaviour applies: the accompanying
681 CR when Rc=1 is written to. This is CR0 for integer operations and CR1
682 for FP operations.
683
684 Note that yes, the CR Fields are genuinely Vectorized. Unlike in SIMD VSX which
685 has a single CR (CR6) for a given SIMD result, SV Vectorized OpenPOWER
686 v3.0B scalar operations produce a **tuple** of element results: the
687 result of the operation as one part of that element *and a corresponding
688 CR element*. Greatly simplified pseudocode:
689
```
for i in range(VL):
    # calculate the vector result of an add
    iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
    # now calculate CR bits
    CRs{8+i}.eq = iregs[RT+i] == 0
    CRs{8+i}.gt = iregs[RT+i] > 0
    ... etc
```
699
700 If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
701 then a followup instruction must be performed, setting "reduce" mode on
702 the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
703 more flexibility in analysing vectors than standard Vector ISAs. Normal
704 Vector ISAs are typically restricted to "were all results nonzero" and
705 "were some results nonzero". The application of mapreduce to Vectorized
706 cr operations allows far more sophisticated analysis, particularly in
707 conjunction with the new crweird operations see [[sv/cr_int_predication]].
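
A model of what such a follow-up mapreduce computes (illustrative
pseudocode, not actual assembler):

```
# cumulative "were all results zero" test, a la VSX CR6
all_eq = 1
for i in range(VL):
    all_eq &= CRs{8+i}.eq # crand-style accumulation
```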
708
Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.
713
714 Additionally,
715 SVP64 [[sv/branches]] may be used, even when the branch itself is to
716 the following instruction. The combined side-effects of CTR reduction
717 and VL truncation provide several benefits.
718
(see [[discussion]]; some alternative schemes are described there)
720
721 ### Rc=1 when SUBVL!=1
722
Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1
bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.
726
This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests, as sketched
below. Given that OE is ignored in SVP64, this field may (when available)
be used to select OR or AND behaviour.
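
A sketch of the idea, with `element_test` as a hypothetical per-element
Rc=1 test (the OE-based selection being a proposal rather than finalised
behaviour):

```
# one CR produced per subvector group of SUBVL element tests
for i in range(VL):
    tests = [element_test(i, j) for j in range(SUBVL)]
    # OE (otherwise ignored in SVP64) selects OR or AND combining
    CRs{8+i}.eq = any(tests) if OE_selects_OR else all(tests)
```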
732
733 #### Table of CR fields
734
CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).
737
738 CRs are not stored in SPRs: they are registers in their own right.
739 Therefore context-switching the full set of CRs involves a Vectorized
740 mfcr or mtcr, using VL=8 to do so. This is exactly as how
741 scalar OpenPOWER context-switches CRs: it is just that there are now
742 more of them.
743
744 The 64 SV CRs are arranged similarly to the way the 128 integer registers
745 are arranged. TODO a python program that auto-generates a CSV file
746 which can be included in a table, which is in a new page (so as not to
747 overwhelm this one). [[svp64/cr_names]]
748
749 ## Register Profiles
750
751 Instructions are broken down by Register Profiles as listed in the
752 following auto-generated page: [[opcode_regs_deduped]]. These tables,
753 despite being auto-generated, are part of the Specification.
754
755 ## SV pseudocode illustration
756
757 ### Single-predicated Instruction
758
Illustration of a normal-mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.
761
```
function op_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    STATE.srcoffs = i # save context
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+id] = ireg[rs1+irs1] + ireg[rs2+irs2];
       if (!rd.isvec) break;
    if (rd.isvec)  { id += 1; }
    if (rs1.isvec) { irs1 += 1; }
    if (rs2.isvec) { irs2 += 1; }
    if (id == VL or irs1 == VL or irs2 == VL) {
      # end VL hardware loop
      STATE.srcoffs = 0; # reset
      return;
    }
```
780
781 This has several modes:
782
783 * RT.v = RA.v RB.v
784 * RT.v = RA.v RB.s (and RA.s RB.v)
785 * RT.v = RA.s RB.s
786 * RT.s = RA.v RB.v
787 * RT.s = RA.v RB.s (and RA.s RB.v)
788 * RT.s = RA.s RB.s
789
All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).
795
796 The one that is not obvious is RT=vector but both RA/RB=scalar.
797 Here this acts as a "splat scalar result", copying the same result into
798 all nonpredicated result elements. If a fixed destination scalar was
799 intended, then an all-Scalar operation should be used.
800
801 See <https://bugs.libre-soc.org/show_bug.cgi?id=552>
802
803 ## Assembly Annotation
804
805 Assembly code annotation is required for SV to be able to successfully
806 mark instructions as "prefixed".
807
808 A reasonable (prototype) starting point:
809
```
svp64 [field=value]*
```
813
814 Fields:
815
816 * ew=8/16/32 - element width
817 * sew=8/16/32 - source element width
818 * vec=2/3/4 - SUBVL
819 * mode=mr/satu/sats/crpred
820 * pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
821
This is similar to the x86 "REX" prefix.
823
824 For actual assembler:
825
```
sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
```
829
830 Qualifiers:
831
832 * m={pred}: predicate mask mode
833 * sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
834 * vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
835 * ew={N}: ew=8/16/32 - sets elwidth override
836 * sw={N}: sw=8/16/32 - sets source elwidth override
837 * ff={xx}: see fail-first mode
838 * sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
844 * sz: predication with source-zeroing
845 * dz: predication with dest-zeroing
846
847 For modes:
848
849 * fail-first
850 - ff=lt/gt/le/ge/eq/ne/so/ns
851 - RC1 mode
852 * saturation:
853 - sats
854 - satu
855 * map-reduce:
856 - mr OR crm: "normal" map-reduce mode or CR-mode.
857 - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled
858
859 ## Parallel-reduction algorithm
860
The principle of SVP64 is that it is a fully-independent abstraction of
hardware-looping, sitting in between the issue and execute phases, that
has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.
866
867 Executable demo pseudocode, full version
868 [here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD)
869
```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```
873
874 This algorithm works by noting when data remains in-place rather than
875 being reduced, and referring to that alternative position on subsequent
876 layers of reduction. It is re-entrant. If however interrupted and
877 restored, some implementations may take longer to re-establish the
878 context.
879
880 Its application by default is that:
881
882 * RA, FRA or BFA is the first register as the first operand
883 (ci index offset in the above pseudocode)
884 * RB, FRB or BFB is the second (co index offset)
885 * RT (result) also uses ci **if RA==RT**
886
For more complex applications a REMAP Schedule must be used.
888
*Programmer's note: if passed a predicate mask with only one bit set,
this algorithm takes no action, similar to when a predicate mask is
all zero.*
892
893 *Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
894 implemented in hardware with MVs that ensure lane-crossing is minimised.
895 The mistake which would be catastrophic to SVP64 to make is to then limit
896 the Reduction Sequence for all implementors based solely and exclusively
897 on what one specific internal microarchitecture does. In SIMD ISAs
898 the internal SIMD Architectural design is exposed and imposed on the
899 programmer. Cray-style Vector ISAs on the other hand provide convenient,
900 compact and efficient encodings of abstract concepts.* **It is the
901 Implementor's responsibility to produce a design that complies with the
902 above algorithm, utilising internal Micro-coding and other techniques to
903 transparently insert micro-architectural lane-crossing Move operations
904 if necessary or desired, to give the level of efficiency or performance
905 required.**
906
## Element-width overrides <a name="elwidth"> </a>
908
Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:
912
```
#pragma pack
typedef union {
    uint8_t  b[];
    uint16_t s[];
    uint32_t i[];
    uint64_t l[];
    uint8_t  actual_bytes[8];
} el_reg_t;

el_reg_t int_regfile[128];
```
925
Accessing (get and set) of registers is defined below, given a value, a
register (in `el_reg_t` form), and an element offset; all arithmetic,
numbering and pseudo-Memory format is LE-endian and LSB0-numbered:
929
```
el_reg_t& get_polymorphed_reg(el_reg_t const& reg, bitwidth, offset):
    el_reg_t res; // result
    res.l = 0; // TODO: going to need sign-extending / zero-extending
    if !reg.isvec: // scalar access has no element offset
        offset = 0
    if bitwidth == 8:
        res.b = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(el_reg_t& reg, bitwidth, offset, val):
    if (!reg.isvec):
        # for safety mask out hi bits
        bitmask = (1 << bitwidth) - 1
        val &= bitmask
        # not a vector: first element only, overwrites high bits.
        # and with the *Architectural* definition being LE,
        # storing in the first DWORD works perfectly.
        int_regfile[reg].l[0] = val
    elif bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val
```
964
965 In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
966 to fp127) are reinterpreted to be "starting points" in a byte-addressable
967 memory. Vectors - which become just a virtual naming construct - effectively
968 overlap.
969
970 It is extremely important for implementors to note that the only circumstance
971 where upper portions of an underlying 64-bit register are zero'd out is
972 when the destination is a scalar. The ideal register file has byte-level
973 write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.
974
975 An example ADD operation with predication and element width overrides:
976
```
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
```
989
990 Thus it can be clearly seen that elements are packed by their
991 element width, and the packing starts from the source (or destination)
992 specified by the instruction.
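
For example (a sketch reusing `set_polymorphed_reg` above): with VL=5,
destination elwidth=8 and RT=r1 marked as a vector, the five 8-bit
results land in the low five bytes of r1, leaving bytes 5-7 of r1 (and
r2 onwards) untouched:

```
# RT marked as vector, destwid=8, starting at r1
for i in range(5):
    set_polymorphed_reg(RT, 8, i, results[i])
# int_regfile[1].b[0..4] now hold the five results
```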
993
994 ## Twin (implicit) result operations
995
996 Some operations in the Power ISA already target two 64-bit scalar
997 registers: `lq` for example, and LD with update. Some mathematical
998 algorithms are more efficient when there are two outputs rather than one,
999 providing feedback loops between elements (the most well-known being add
1000 with carry). 64-bit multiply for example actually internally produces
1001 a 128 bit result, which clearly cannot be stored in a single 64 bit
1002 register. Some ISAs recommend "macro op fusion": the practice of setting
1003 a convention whereby if two commonly used instructions (mullo, mulhi) use
1004 the same ALU but one selects the low part of an identical operation and
1005 the other selects the high part, then optimised micro-architectures may
1006 "fuse" those two instructions together, using Micro-coding techniques,
1007 internally.
1008
1009 The practice and convention of macro-op fusion however is not compatible
1010 with SVP64 Horizontal-First, because Horizontal Mode may only be applied
1011 to a single instruction at a time, and SVP64 is based on the principle of
1012 strict Program Order even at the element level. Thus it becomes necessary
1013 to add explicit more complex single instructions with more operands than
1014 would normally be seen in the average RISC ISA (3-in, 2-out, in some
1015 cases). If it was not for Power ISA already having LD/ST with update as
1016 well as Condition Codes and `lq` this would be hard to justify.
1017
1018 With limited space in the `EXTRA` Field, and Power ISA opcodes being only
1019 32 bit, 5 operands is quite an ask. `lq` however sets a precedent: `RTp`
1020 stands for "RT pair". In other words the result is stored in RT and RT+1.
1021 For Scalar operations, following this precedent is perfectly reasonable.
1022 In Scalar mode, `maddedu` therefore stores the two halves of the 128-bit
1023 multiply into RT and RT+1.
1024
1025 What, then, of `sv.maddedu`? If the destination is hard-coded to RT and
1026 RT+1 the instruction is not useful when Vectorized because the output
1027 will be overwritten on the next element. To solve this is easy: define
1028 the destination registers as RT and RT+MAXVL respectively. This makes
1029 it easy for compilers to statically allocate registers even when VL
1030 changes dynamically.
1031
Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination is
best illustrated in pseudocode:
1036
```
# demo of maddedu
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
    if (RC.isvec) { irs3 += 1; }
```
1055
1056 The significant part here is that the second half is stored
1057 starting not from RT+MAXVL at all: it is the *element* index
1058 that is offset by MAXVL, both halves actually starting from RT.
1059 If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
1060 RT0 to RT2 are stored:
1061
```
LSB0:  63:32      31:0
MSB0:  0:31       32:63
r0     unchanged  unchanged
r1     RT1.lo     RT0.lo
r2     unchanged  RT2.lo
r3     RT0.hi     unchanged
r4     RT2.hi     RT1.hi
r5     unchanged  unchanged
```
1072
Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being 5 and
elwidth being 32, this is the 5th element offset (in 32-bit quantities)
counting from r1.
1077
*Programmer's note: accessing registers that have been placed starting
on a non-contiguous boundary (half-way along a scalar register) can
be inconvenient: REMAP can provide an offset but it requires extra
instructions to set up. A simple solution is to ensure that MAXVL is
rounded up such that the Vector ends cleanly on a contiguous register
boundary. MAXVL=6 in the above example would achieve that.*
1084
1085 Additional DRAFT Scalar instructions in 3-in 2-out form with an implicit
1086 2nd destination:
1087
1088 * [[isa/svfixedarith]]
1089 * [[isa/svfparith]]
1090
1091 [[!tag standards]]
1092
1093 ------
1094
1095 \newpage{}
1096