# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page to its primary purpose of outlining
the instruction format.

Table of contents:

[[!toc]]

## Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given SPR.

See [[sv/compliancy_levels]] for full details.

## XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV
<https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorized
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

```
# assume VL=4, this results in 4 sequential ops (below)
sv.adde r0.v, r4.v, r8.v

# instructions that get executed in backend hardware:
adde r0, r4, r8 # takes carry-in, produces carry-out
adde r1, r5, r9 # takes carry from previous
...
adde r3, r7, r11 # likewise
```

It can clearly be seen that the carry chains from one
64 bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].

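A minimal Python model of this carry chain may help (the flat `regs`
list and the helper name are illustrative assumptions, not part of the
specification):

```
# illustrative model of sv.adde: 64-bit limbs, carry chained element to element
def sv_adde(regs, RT, RA, RB, VL, CA):
    mask = (1 << 64) - 1
    for i in range(VL):
        s = regs[RA + i] + regs[RB + i] + CA
        regs[RT + i] = s & mask
        CA = s >> 64            # carry propagates to the next element
    return CA                   # for VL=4, this is the 257th bit
```
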
## EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication) leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix. MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
from reading the markdown formatted version of the Scalar pseudocode which
is machine-readable and found in [[openpower/isatables]]. The analysis
gives, by instruction, a "Register Profile". `add RT, RA, RB` for
example is given a designation `RM-2R-1W` because it requires two GPR
reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3
registers) and if less than or equal to three then that instruction
could be given an EXTRA3 designation. Four or more receive an EXTRA2
designation because there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
Predication was suitable. As a general rule this was if there
was only a single operand and a single result (`extw` and LD/ST);
however it was found that some 2- or 3-operand instructions also
qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
in Twin Predication, some compromises were made here. LDST is
Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
was chosen where RA is indexed 0 (EXTRA bits 0-2), RB
indexed 1 (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8); see
the sketch after this list. In some
cases (LD/ST with update) RA-as-a-source is given a **different** EXTRA
index from RA-as-a-result (because it is possible to do, and perceived
to be useful). Rc=1 co-results (CR0, CR1) are always given the same
EXTRA index as their main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis were
output in CSV Format for use in machine-readable form by sv_analysis.py
<https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

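A sketch of how such an EXTRA3 packing may be decoded (illustrative
only: the helper name is hypothetical, and EXTRA is treated here as a
little-endian packed integer, whereas the specification numbers EXTRA
bits MSB0):

```
# illustrative EXTRA3 decode for an RM-2R-1W instruction such as add:
# index 0 selects RA's 3-bit spec, 1 selects RB's, 2 selects RT's
def extra3_spec(EXTRA, index):
    return (EXTRA >> (3 * index)) & 0b111
```
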
This process was laborious but logical, and, crucially, once a decision
is made (and ratified) it cannot be reversed. Those qualifying future
Power ISA Scalar instructions for SVP64 are **strongly** advised to
utilise this same process and the same sv_analysis.py program as a
canonical method of maintaining the relationships. Alterations to that
same program which change the Designation are **prohibited** once
finalised (ratified through the Power ISA WG Process). It would be
similar to deciding that `add` should be changed from X-Form to D-Form.

## Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs: every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This
indicates that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| 3 | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | 3 | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz=0.

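The schedules above can be reproduced with a small Python generator (a
sketch only: it models the stepping, not the zeroing of element values):

```
def stepper(VL, mask, sz, dz):
    # yields (srcstep, dststep) pairs; masked-out elements are
    # skipped only when the corresponding zeroing flag is clear
    src, dst = 0, 0
    while True:
        while src < VL and not sz and not (mask >> src) & 1:
            src += 1                  # sz=0: src skips masked-out elements
        while dst < VL and not dz and not (mask >> dst) & 1:
            dst += 1                  # dz=0: dst skips masked-out elements
        if src >= VL or dst >= VL:
            break                     # either side reaching VL ends the loop
        yield src, dst
        src, dst = src + 1, dst + 1

# Example 1: list(stepper(4, 0b1101, sz=1, dz=0)) == [(0, 0), (1, 2), (2, 3)]
```
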
## Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
those that take RA as both a src and dest. These are not
1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction, but abstracted out from Load/Store and applicable
in general to any 2P instruction.

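As a sketch (using a flat `regs` list purely for illustration), the
back-to-back VCOMPRESS-VEXPAND behaviour on a twin-predicated `mv`
looks like this:

```
def twin_pred_mv(regs, RT, RA, VL, srcmask, dstmask):
    # source predicate compresses, destination predicate expands:
    # sequentially-ordered multiple VINSERTs in a single pass
    src, dst = 0, 0
    while src < VL and dst < VL:
        while src < VL and not (srcmask >> src) & 1:
            src += 1                  # skip masked-out source elements
        while dst < VL and not (dstmask >> dst) & 1:
            dst += 1                  # skip masked-out dest elements
        if src < VL and dst < VL:
            regs[RT + dst] = regs[RA + src]
            src, dst = src + 1, dst + 1
```
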
This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86 `REP`
instruction (on steroids).

## Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering. Two bits in the `SVSHAPE` [[sv/spr]] enable either "packing"
or "unpacking" on the subvectors vec2/3/4.

First, illustrating a "normal" SVP64 operation with `SUBVL!=1` (assuming
no elwidth overrides), note that the VL loop is outer and the SUBVL
loop inner:

```
def index():
    for i in range(VL):
        for j in range(SUBVL):
            yield i*SUBVL+j

for idx in index():
    operation_on(RA+idx)
```

For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:

```
# yield either a SUBVL-outer (transposed) loop order
# or the default VL-outer, SUBVL-inner order
def index_p(outer):
    if outer:
        for j in range(SUBVL):     # subvl is outer
            for i in range(VL):    # vl is inner
                yield i*SUBVL+j
    else:
        for i in range(VL):        # vl is outer
            for j in range(SUBVL): # subvl is inner
                yield i*SUBVL+j

# walk through both source and dest indices simultaneously
for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
    move_operation(RT+dst_idx, RA+src_idx)
```

291 "yield" from python is used here for simplicity and clarity.
292 The two Finite State Machines for the generation of the source
293 and destination element offsets progress incrementally in
294 lock-step.
295
296 Example VL=2, SUBVL=3, PACK_en=1 - elements grouped by
297 vec3 will be redistributed such that Sub-elements 0 are
298 packed together, Sub-elements 1 are packed together, as
299 are Sub-elements 2.
300
301 ```
302 srcstep=0 srcstep=1
303 0 1 2 3 4 5
304
305 dststep=0 dststep=1 dststep=2
306 0 3 1 4 2 5
307 ```
308
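Feeding these values through the `index_p` generator above reproduces
the redistribution (the two lists are the linear and transposed offset
sequences):

```
VL, SUBVL = 2, 3
print(list(index_p(False)))  # [0, 1, 2, 3, 4, 5]  linear (VL outer)
print(list(index_p(True)))   # [0, 3, 1, 4, 2, 5]  transposed (SUBVL outer)
```
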
Setting of both `PACK` and `UNPACK` is neither prohibited nor `UNDEFINED`
because the reordering is fully deterministic, and additional REMAP
reordering may be applied. Combined with Matrix REMAP this would give
potentially up to 4 Dimensions of reordering.

Pack/Unpack has quirky interactions on [[sv/mv.swizzle]] because it can
set a different subvector length for destination, and has a slightly
different pseudocode algorithm for Vertical-First Mode.

Ordering is as follows:

* SVSHAPE srcstep, dststep, ssubstep and dsubstep are advanced sequentially
depending on PACK/UNPACK.
* srcstep and dststep are pushed through REMAP to compute actual Element offsets.
* Swizzle is independently applied to ssubstep and dsubstep.

Pack/Unpack is enabled (set up) through [[sv/svstep]].

## Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer.
A normal Vector ISA would have explicit Reduce opcodes with defined
characteristics per operation: in SX Aurora there is even an additional
scalar argument containing the initial reduction value, and the default
is either 0 or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to utilise *existing* Scalar Power ISA v3.0B
operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to the fundamental
principle that SV is nothing more than a Sub-Program-Counter sitting
between Decode and Issue phases.

For Scalar Reduction, Microarchitectures *may* take opportunities to
parallelise the reduction but only if in doing so they preserve strict
Program Order at the Element Level. Opportunities where this is possible
include an `OR` operation or a MIN/MAX operation: it may be possible to
parallelise the reduction, but for Floating Point it is not permitted
due to different results being obtained if the reduction is not executed
in strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

### Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar *and*
the same register is used as a source register. Thus it is
up to the programmer to be aware of this, observe some conventions,
and thus end up achieving the desired outcome of scalar reduction.

It is also important to appreciate that there is no actual imposition or
restriction on how this mode is utilised: there will therefore be several
valuable uses (including Vector Iteration and "Reverse-Gear") and it is
up to the programmer to make best use of the (strictly deterministic)
capability provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by programmer convention, to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* The destination is a scalar
* Optionally but most usefully when one source scalar register is
also the scalar destination (which may be informally termed by
convention the "accumulator")
* That the source register type is the same as the destination register
type identified as the "accumulator". Scalar reduction on `cmp`,
`setb` or `isel` makes no sense for example because of the mixture
between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance. Scalar reduce is strictly defined behaviour,
and the cost in hardware terms of prohibition of seemingly non-sensical
operations is too great. Therefore it is permitted and required to
be executed successfully. Implementors **MAY** choose to optimise
such instructions in instances where their use results in "extraneous
execution", i.e. where it is clear that the sequence of operations,
comprising multiple overwrites to a scalar destination **without**
cumulative, iterative, or reductive behaviour (no "accumulator"), may
discard all but the last element operation. Identification of such
is trivial to do for `setb` and `cmp`: the source register type is a
completely different register file from the destination. Likewise Scalar
reduction when the destination is a Vector is as if the Reduction Mode
was not requested. However it would clearly be unacceptable to perform
such optimisations on cache-inhibited LD/ST, so some considerable care
needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

```
# add RT, RA, RB but when RT==RA
for i in range(VL):
    iregs[RA] += iregs[RB+i] # RT==RA
```

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily **terminates** at the first scalar operation. Only by
marking the operation as "mapreduce" will it continue to issue multiple
sub-looped (element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit
must be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse. `sv.add/rg
r3.v, r4.v, r4.v` for example will start at the opposite end of the
Vector and push a cumulative series of overlapping add operations into
the Execution units of the underlying hardware.

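A sketch of the reverse-gear schedule, mirroring the forward accumulation
loop shown earlier (register numbers illustrative):

```
# reverse gear: accumulation runs from element VL-1 down to 0
for i in reversed(range(VL)):
    iregs[3] = iregs[10 + i] + iregs[3]  # r3 is the "accumulator"
```
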
Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]],
bmset), as a way to construct a value quickly from multiple arbitrary
bit-ranges and bit-offsets. Using the same register as both the source
and destination, with Vectors of different offsets, masks and values to
be inserted, has multiple applications including Video, cryptography and
JIT compilation.

```
# assume VL=4:
# * Vector of shift-offsets contained in RC (r12.v)
# * Vector of masks contained in RB (r8.v)
# * Vector of values to be masked-in in RA (r4.v)
# * Scalar destination RT (r0) to receive all mask-offset values
sv.bmset/mr r0, r4.v, r8.v, r12.v
```

Due to the Deterministic Scheduling, Subtract and Divide are still
permitted to be executed in this mode, although from an algorithmic
perspective it is strongly discouraged. It would be better to use
addition followed by one final subtract, or in the case of divide, to get
better accuracy, to perform a multiply cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used
as both a Vector source and Scalar destination, to be utilised as the
"accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc this
naturally fits well with the normal expected usage of these operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue parallel
optimisation of the scalar reduce operation: it's just that as far as
the user is concerned, all exceptions and interrupts **MUST** be precise.

## Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]), the other for arithmetic operations (actually,
CR-driven: [[sv/normal]]) and CR operations ([[sv/cr_ops]]). Note in
each case the assumption is that vector elements are required to appear
to be executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst (not to be confused with *Data-Dependent* LD/ST ffirst)
treats the first LD/ST in a vector (element 0) as an ordinary one.
Exceptions occur "as normal" on the first element. However for elements
1 and above, if an exception would occur, then VL is **truncated**
to the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
CR-creating operation produces a result (including cmp). Similar to
branch, an analysis of the CR is performed and if the test fails,
the vector operation terminates and discards all element operations
above the current one (and the current one if VLi is not set), and
VL is truncated to either the *previous* element or the current one,
depending on whether VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results, all of which
pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not
found in ARM SVE or RVV. At the same time it is also
"old" because it is a generalisation of the Z80 [Block
compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir) which is
based on CP (compare) as the ultimate "element" (suffix) operation
to which the repeat (prefix) is applied. It is extremely useful for
reducing instruction count, however it requires speculative execution
involving modifications of VL to get high performance implementations.
An additional mode (RC1=1) effectively turns what would otherwise be an
arithmetic operation into a type of `cmp`. The CR is stored (and the
CR.eq bit tested against the `inv` field). If the CR.eq bit is equal to
`inv` then the Vector is truncated and the loop ends. Note that when
RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always also
included in the count (the new length that VL will be set to). This may
be useful in combination with "inv" to truncate the Vector to *exclude*
elements that fail a test, or, in the case of implementations of strncpy,
to include the terminating zero.

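A schematic sketch of the CR-driven truncation (testing CR.eq against
`inv`; register names are placeholders, and the VLi variants are shown
together purely for illustration):

```
# fail-first: stop at the first element whose CR bit-test fails
for i in range(VL):
    result = iregs[RA + i] + iregs[RB + i]
    if (result == 0) == inv:    # CR.eq equals inv: the test has failed
        if VLi:                 # inclusive: the current element is kept
            iregs[RT + i] = result
        VL = i + VLi            # truncate VL to i, or i+1 if VLi is set
        break
    iregs[RT + i] = result      # test passed: store and continue
```
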
In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, Vectorized crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
to zero. This is the only means in the entirety of SV that VL may be set
to zero (with the exception of via the SVSTATE SPR). When VL is set to
zero due to the first element failing the CR bit-test, all subsequent
vectorized operations are effectively `nops` which is
*precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be truncated
based explicitly on whether a test fails. This is because it is a precise
test on which algorithms will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First. REMAP
will need to be activated to invert the ordering of element traversal.*

### Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result do not
also in turn have an Rc=1 mode. However it makes no sense to try to test
the 4 bits of a CR Field for being equal or not equal to zero. Moreover,
the result is already in the form that is desired: it is a CR field.
Therefore, CR-based operations have their own SVP64 Mode, described in
[[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

## CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration the
exact same mapping used for INT and FP regfiles may be applied, just to
the upper bits, as explained below. Firstly and most importantly a new
notation `CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]` which
accesses one bit of the 32 bit Power ISA v3.0B Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

```
CR{n} = CR[32+n*4:35+n*4]
```

For SVP64 the relationship for the sequential numbering of elements is to
the CR **fields** within the CR Register, not to individual bits within
the CR register.

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits (BF, which refers to a
whole CR Field, is only 3 bits). The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits *in*
that CR (EQ/LT/GT/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

```
CR_index = (BA>>2)      # top 3 bits
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which SV EXTRA2/3
applies, **not** the `CR_bit` portion (bits 3-4):

```
if extra3_mode:
    spec = EXTRA3
elif EXTRA2[0]: # vector mode
    spec = EXTRA2 << 1 # same as EXTRA3, shifted
else: # scalar mode
    spec = (EXTRA2[0] << 2) | EXTRA2[1]
if spec[0]:
    # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
    return ((BA >> 2)<<6) | # hi 3 bits shifted up
           (spec[1:2]<<4) | # to make room for these
           (BA & 0b11)      # CR_bit on the end
else:
    # scalar constructs "00 spec[1:2] BA[0:4]"
    return (spec[1:2] << 5) | BA
```

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

```
CR_index = (BA>>2)      # top 3 bits
if spec[0]:
    # vector mode, 0-124 increments of 4
    CR_index = (CR_index<<4) | (spec[1:2] << 2)
else:
    # scalar mode, 0-32 increments of 1
    CR_index = (spec[1:2]<<3) | CR_index
# same as for v3.0/v3.1 from this point onwards
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

Note here that the decoding pattern to determine CR\_bit does not change.

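A worked example, with values chosen purely for illustration (`spec` is
treated MSB0-first, as in the pseudocode above):

```
# BA = 0b01101: v3.0B CR Field 3, bit 1
BA    = 0b01101
spec  = 0b101              # spec[0]=1 (vector), spec[1:2]=0b01
CR_index  = BA >> 2        # = 3
CR_index  = (CR_index << 4) | (0b01 << 2)   # vector mode: = 52
bit_index = BA & 0b11      # = 1, unchanged from v3.0B decoding
# the Vector therefore starts at CR{52}, testing bit 1 of each CR Field
```
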
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

### CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorized or Scalar. When Rc=1 in arithmetic operations
that have no explicit EXTRA to cover the CR, the CR is Vectorized if the
destination is Vectorized. Likewise if the destination is scalar then so
is the CR.

When vectorized, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorized Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
means that CRs may be read or written in aligned batches of 32 bits
(8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
overwritten by vector Rc=1 operations except for very large VL.
* CR-based predication, from CR32, is also not interfered with
(except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorized. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorized OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

```
for i in range(VL):
    # calculate the vector result of an add
    iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
    # now calculate CR bits
    CRs{8+i}.eq = iregs[RT+i] == 0
    CRs{8+i}.gt = iregs[RT+i] > 0
    ... etc
```

If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorized
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations (see [[sv/cr_int_predication]]).

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there)

### Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND behaviour.

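As a sketch (the selector name `OE_OR`, the `iregs`/`CRs` containers and
the CR numbering are illustrative assumptions):

```
# per-subvector Rc=1 CR computation: OR or AND of the element tests
for i in range(VL):
    tests = [iregs[RT + i*SUBVL + j] == 0 for j in range(SUBVL)]
    # OE (otherwise ignored by SVP64) selects the combining operation
    CRs[8 + i].eq = any(tests) if OE_OR else all(tests)
```
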
#### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorized
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

## Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

## SV pseudocode illustration

### Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

```
function op_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    STATE.srcoffs = i # save context
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
       if (!rd.isvec) break;
    if (rd.isvec)  { id += 1; }
    if (rs1.isvec) { irs1 += 1; }
    if (rs2.isvec) { irs2 += 1; }
    if (id == VL or irs1 == VL or irs2 == VL) {
      # end VL hardware loop
      STATE.srcoffs = 0; # reset
      return;
    }
```

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

## Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

```
svp64 [field=value]*
```

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "rex" prefix.

For the actual assembler:

```
sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
```

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 is set, sub-vector mapreduce is enabled

## Parallel-reduction algorithm

The principle of SVP64 is that SVP64 is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD):

```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
(ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note: if passed a predicate mask with only one bit set,
this algorithm takes no action, similar to when a predicate mask is
all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then limit
the Reduction Sequence for all implementors based solely and exclusively
on what one specific internal microarchitecture does. In SIMD ISAs
the internal SIMD Architectural design is exposed and imposed on the
programmer. Cray-style Vector ISAs on the other hand provide convenient,
compact and efficient encodings of abstract concepts.* **It is the
Implementor's responsibility to produce a design that complies with the
above algorithm, utilising internal Micro-coding and other techniques to
transparently insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

## Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:

```
#pragma pack
typedef union {
    uint8_t  b[8];
    uint16_t s[4];
    uint32_t i[2];
    uint64_t l[1];
    uint8_t  actual_bytes[8];
} el_reg_t;

el_reg_t int_regfile[128];
```

Accessing (get and set) of registers given a value, register (in `el_reg_t`
form), and that all arithmetic, numbering and pseudo-Memory format is
LE-endian and LSB0-numbered below:

```
el_reg_t get_polymorphed_reg(reg, bitwidth, offset):
    el_reg_t res; # result
    res.l[0] = 0 # TODO: going to need sign-extending / zero-extending
    if !reg.isvec: # scalar access has no element offset
        offset = 0
    if bitwidth == 8:
        res.b[0] = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s[0] = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i[0] = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l[0] = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(reg, bitwidth, offset, val):
    if (!reg.isvec):
        # for safety mask out hi bits
        bitmask = (1 << bitwidth) - 1
        val &= bitmask
        # not a vector: first element only, overwrites high bits.
        # and with the *Architectural* definition being LE,
        # storing in the first DWORD works perfectly.
        int_regfile[reg].l[0] = val
    elif bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val
```

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

```
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
```

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

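A quick demonstration using `set_polymorphed_reg` from above (pseudocode;
assumes r1 is marked as a Vector): writing four 16-bit elements starting
at r1 packs them into r1's single underlying 64-bit register.

```
# four 16-bit element writes, starting at r1:
for i in range(4):
    set_polymorphed_reg(r1, 16, i, 0x1111 * (i + 1))
# little-endian result: int_regfile[1].l[0] == 0x4444333322221111
```
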
## Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update. Some mathematical
algorithms are more efficient when there are two outputs rather than one,
providing feedback loops between elements (the most well-known being add
with carry). 64-bit multiply for example actually internally produces
a 128 bit result, which clearly cannot be stored in a single 64 bit
register. Some ISAs recommend "macro op fusion": the practice of setting
a convention whereby if two commonly used instructions (mullo, mulhi) use
the same ALU but one selects the low part of an identical operation and
the other selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only be applied
to a single instruction at a time, and SVP64 is based on the principle of
strict Program Order even at the element level. Thus it becomes necessary
to add explicit, more complex single instructions with more operands than
would normally be seen in the average RISC ISA (3-in, 2-out, in some
cases). If it was not for Power ISA already having LD/ST with update as
well as Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes being only
32 bit, 5 operands is quite an ask. `lq` however sets a precedent: `RTp`
stands for "RT pair". In other words the result is stored in RT and RT+1.
For Scalar operations, following this precedent is perfectly reasonable.
In Scalar mode, `maddedu` therefore stores the two halves of the 128-bit
multiply into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to RT and
RT+1 the instruction is not useful when Vectorized because the output
will be overwritten on the next element. Solving this is easy: define
the destination registers as RT and RT+MAXVL respectively. This makes
it easy for compilers to statically allocate registers even when VL
changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination is
best illustrated in pseudocode:

```
# demo of maddedu
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
    if (RC.isvec) { irs3 += 1; }
```

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

```
      LSB0:  63:32      31:0
      MSB0:   0:31     32:63
r0    unchanged   unchanged
r1    RT1.lo      RT0.lo
r2    unchanged   RT2.lo
r3    RT0.hi      unchanged
r4    RT2.hi      RT1.hi
r5    unchanged   unchanged
```

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being 5 and
elwidth being 32, this is the 5th element offset (in 32 bit quantities)
counting from r1.

*Programmer's note: accessing registers that have been placed starting
on a non-contiguous boundary (half-way along a scalar register) can
be inconvenient: REMAP can provide an offset but it requires extra
instructions to set up. A simple solution is to ensure that MAXVL is
rounded up such that the Vector ends cleanly on a contiguous register
boundary. MAXVL=6 in the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form with an implicit
2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]

[[!tag standards]]

------

\newpage{}