# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

## Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given
SPR.

See [[sv/compliancy_levels]] for full details.

## XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single-bit Carry In
and producing, at the end, a single-bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8  # takes carry-in, produces carry-out
    adde r1, r5, r9  # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one
64-bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].

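The equivalence can be modelled with a short Python sketch (an
illustration, not the specification pseudocode): chaining 64-bit adds
through a single carry bit produces the same result as one wide add.

    # model: VL chained 64-bit adde operations == one VL*64-bit add
    def sv_adde(a, b, vl, carry=0):  # a, b: lists of 64-bit values
        mask = (1 << 64) - 1
        result = []
        for i in range(vl):          # strict element Program Order
            s = a[i] + b[i] + carry
            result.append(s & mask)  # 64-bit element result
            carry = s >> 64          # carry chains to the next element
        return result, carry         # final carry is the "257th bit"
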
## EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix, MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
  from reading the markdown formatted version of the Scalar pseudocode
  which is machine-readable and found in [[openpower/isatables]]. The
  analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
  for example is given a designation `RM-2R-1W` because it requires
  two GPR reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3 registers)
  and if less than or equal to three then that instruction could be given an
  EXTRA3 designation. Four or more is given an EXTRA2 designation because
  there are only 9 bits available (a sketch of this step follows the list).
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`extsw` and LD/ST);
  however it was found that some 2- or 3-operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made, here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W, for example, an EXTRA3
  indexing was chosen such that RA is indexed 0 (EXTRA bits 0-2), RB indexed 1
  (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST
  with update) RA-as-a-source is given a **different** EXTRA index from
  RA-as-a-result (because it is possible to do, and perceived to be useful).
  Rc=1 co-results (CR0, CR1) are always given the same EXTRA index as their
  main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis
  were output in CSV Format for use in machine-readable form
  by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

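A minimal sketch of the second step (a hypothetical helper for
illustration only; the actual logic lives in sv_analysis.py): the
EXTRA2/EXTRA3 decision reduces to counting the registers in the profile.

    # sketch: EXTRA designation from a profile string such as "RM-2R-1W"
    def extra_designation(profile):
        counts = profile.split("-")[1:]           # e.g. ["2R", "1W"]
        nregs = sum(int(c[:-1]) for c in counts)  # 2 + 1 = 3
        return "EXTRA3" if nregs <= 3 else "EXTRA2"
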
This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Anyone qualifying future Power ISA Scalar instructions for SVP64
is **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.

## Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element in
every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz:
with both zero, src and dst skip the same masked-out elements together.

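The skip rule in all three examples may be modelled with a short Python
sketch (illustrative only): an element is skipped when its predicate bit
is zero *and* zeroing is disabled on that side.

    def stepper(VL, mask, sz, dz):
        srcstep, dststep = 0, 0
        while srcstep < VL and dststep < VL:
            # skip masked-out elements only when zeroing is disabled
            while srcstep < VL and not sz and not (mask & (1 << srcstep)):
                srcstep += 1
            while dststep < VL and not dz and not (mask & (1 << dststep)):
                dststep += 1
            if srcstep == VL or dststep == VL:
                break
            yield srcstep, dststep
            srcstep += 1
            dststep += 1

    # Example 1: list(stepper(4, 0b1101, sz=1, dz=0))
    #            -> [(0, 0), (1, 2), (2, 3)]
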
## Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction, but abstracted out from Load/Store and applicable
in general to any 2P instruction.

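As an illustrative sketch (not the specification pseudocode), a
twin-predicated move pairs the n-th set bit of the source mask with the
n-th set bit of the destination mask:

    # sketch: twin-predicated sv.mv with no zeroing --
    # VCOMPRESS and VEXPAND combined in a single operation
    def twin_mv(VL, srcmask, dstmask, regs, RA, RT):
        s, d = 0, 0
        while s < VL and d < VL:
            while s < VL and not (srcmask & (1 << s)):
                s += 1  # skip masked-out source elements
            while d < VL and not (dstmask & (1 << d)):
                d += 1  # skip masked-out destination elements
            if s == VL or d == VL:
                break
            regs[RT + d] = regs[RA + s]
            s += 1
            d += 1
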
This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` prefix (on steroids).

## Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering.
Two bits in the `SVSHAPE` [[sv/spr]]
enable either "packing" or "unpacking"
on the subvectors vec2/3/4.

First, illustrating a
"normal" SVP64 operation with `SUBVL!=1` (assuming no elwidth overrides),
note that the VL loop is outer and the SUBVL loop inner:

    def index():
        for i in range(VL):
            for j in range(SUBVL):
                yield i*SUBVL+j

    for idx in index():
        operation_on(RA+idx)

For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:

    # yield element indices with SUBVL as the outer loop (pack order)
    # or with SUBVL as the inner loop (normal order)
    def index_p(outer):
        if outer:
            for j in range(SUBVL):     # subvl is outer
                for i in range(VL):    # vl is inner
                    yield i*SUBVL+j    # swapped loop order: transposed
        else:
            for i in range(VL):        # vl is outer
                for j in range(SUBVL): # subvl is inner
                    yield i*SUBVL+j

    # walk through both source and dest indices simultaneously
    for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
        move_operation(RT+dst_idx, RA+src_idx)

286 "yield" from python is used here for simplicity and clarity.
287 The two Finite State Machines for the generation of the source
288 and destination element offsets progress incrementally in
289 lock-step.
290
291 Example VL=2, SUBVL=3, PACK_en=1 - elements grouped by
292 vec3 will be redistributed such that Sub-elements 0 are
293 packed together, Sub-elements 1 are packed together, as
294 are Sub-elements 2.
295
296 srcstep=0 srcstep=1
297 0 1 2 3 4 5
298
299 dststep=0 dststep=1 dststep=2
300 0 3 1 4 2 5
301
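For reference, under these assumptions (VL=2, SUBVL=3) the `index_p`
generator above produces:

    # list(index_p(True))  -> [0, 3, 1, 4, 2, 5]  (SUBVL-outer: pack order)
    # list(index_p(False)) -> [0, 1, 2, 3, 4, 5]  (VL-outer: normal order)
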
Setting of both `PACK` and `UNPACK` is neither prohibited nor
`UNDEFINED` because the reordering is fully deterministic, and
additional REMAP reordering may be applied. Combined with
Matrix REMAP this would
give potentially up to 4 Dimensions of reordering.

Pack/Unpack has quirky interactions on
[[sv/mv.swizzle]] because it can set a different subvector length for
the destination, and has a slightly different pseudocode algorithm
for Vertical-First Mode.

Pack/Unpack is enabled (set up) through [[sv/svstep]].

## Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX-Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

For Scalar Reduction,
Microarchitectures *may* take opportunities to parallelise the reduction,
but only if in doing so they preserve strict Program Order at the Element
Level. Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation, where reordering does not alter the result.
For Floating Point it is not permitted, due to different results
being obtained if the reduction is not executed in strict Program-Sequential
Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

### Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination's.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested. However it would clearly
be unacceptable to perform such optimisations on cache-inhibited LD/ST,
so some considerable care needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but with RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`),
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

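A minimal sketch of why committing the accumulator per element gives
re-entrancy (assuming, as elsewhere in this appendix, that
SVSTATE.srcstep records the loop position saved on context-switch):

    # sketch: scalar mapreduce resumes cleanly after an interrupt
    for i in range(SVSTATE.srcstep, VL):
        iregs[RT] += iregs[RB + i]  # accumulator committed per element
        SVSTATE.srcstep = i + 1     # an interrupt here loses no work
    SVSTATE.srcstep = 0             # loop complete: reset
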
Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
[[sv/normal]] and CR operations [[sv/cr_ops]].
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when an Rc=1 or other
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. At the same time it is also "old" because it is a generalisation
of the Z80
[Block compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir)
which is based on CP (compare) as the ultimate "element" (suffix)
operation to which the repeat (prefix) is applied.
It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high-performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.

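A sketch of the CR-driven truncation described above (illustrative only;
`test_fails` stands in for the BO-style single-bit CR test against
`inv`, and `element_op`/`write_element` are placeholder helpers):

    # sketch: data-dependent fail-first with the VLi option
    for i in range(VL):
        result = element_op(i)            # element result and its CR
        if test_fails(result):
            if VLi:
                write_element(i, result)  # current element is included
                VL = i + 1
            else:
                VL = i                    # truncate to previous element
            break                         # later elements are discarded
        write_element(i, result)
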
In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a Vectorised crop
(crand, cror) may be used, and ffirst applied to that crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent fail-first on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First.
REMAP will need to be activated to invert the ordering of element
traversal.*

### Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

## pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which it makes sense to add an RC1 option.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

## CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. Firstly and most
importantly a new notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BT, BA and BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (LT/GT/EQ/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = (BA>>2)      # top 3 bits
    bit_index = (BA & 0b11) # low 2 bits
    CR_reg = CR{CR_index}   # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which SV EXTRA2/3
applies, **not** the `CR_bit` portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = (BA>>2)      # top 3 bits
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-31 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = (BA & 0b11) # low 2 bits
    CR_reg = CR{CR_index}   # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

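A hypothetical worked example of the above: take `BA=0b01001`
(CR2, bit 1) with `EXTRA3=0b101`, i.e. spec[0]=1 (vector) and
spec[1:2]=0b01:

    CR_index  = 0b010 = 2           # top 3 bits of BA
    bit_index = 0b01  = 1           # low 2 bits of BA
    # spec[0]=1, so vector mode:
    CR_index  = (2<<4) | (0b01<<2)  # = 36
    # the operand therefore refers to bit 1 of CR{36}
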
Note here that the decoding pattern to determine CR\_bit does not change.

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

### CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there.)

### Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1 bit of
predicate is allocated per subvector; likewise only one CR is allocated
per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select OR or
AND behaviour.

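A sketch of the idea, in the style of the pseudocode above (assuming,
purely for illustration, that the repurposed OE field has selected
OR-combining of the `eq` tests):

    # sketch: one CR field per subvector, Rc=1, OR-combining assumed
    for i in range(VL):
        eq = 0
        for j in range(SUBVL):
            result = results[i*SUBVL + j]
            eq |= (result == 0)  # OR of the per-sub-element tests
        CRs{8+i}.eq = eq         # a single CR field per whole subvector
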
#### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

## Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

## SV pseudocode illustration

### Single-predicated Instruction

Illustration of a normal-mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!rd.isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

## Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "rex" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

## Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
Abstraction of hardware-looping in between the issue and execute phases,
one that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD):

```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then
limit the Reduction Sequence for all implementors
based solely and exclusively on what one
specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and
imposed on the programmer. Cray-style Vector ISAs on the other hand
provide convenient,
compact and efficient encodings of abstract concepts.*
**It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

## Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:

    #pragma pack
    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

Accessing (get and set) of registers is defined below, given a value and
a register (in `el_reg_t` form); all arithmetic, numbering and
pseudo-Memory format is LE-endian and LSB0-numbered:

    def get_polymorphed_reg(reg, bitwidth, offset):
        res = el_reg_t() # result
        res.l = 0 # TODO: going to need sign-extending / zero-extending
        if not reg.isvec: # scalar access has no element offset
            offset = 0
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    def set_polymorphed_reg(reg, bitwidth, offset, val):
        if not reg.isvec:
            # for safety mask out hi bits
            bitmask = (1 << bitwidth) - 1
            val &= bitmask
            # not a vector: first element only, overwrites high bits.
            # and with the *Architectural* definition being LE,
            # storing in the first DWORD works perfectly.
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zeroed out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element-width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, id, result)
            if (!RT.isvec) break
        if (RT.isvec) { id += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

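The "starting points in byte-addressable memory" view can be modelled
directly (a toy model under the stated little-endian assumption; the
names here are illustrative, not part of the specification):

    # toy model: the GPR file as one flat byte-addressable LE memory
    regfile = bytearray(128 * 8)       # 128 x 64-bit registers

    def set_elem(reg, wid, off, val):  # wid in bits: 8/16/32/64
        nbytes = wid // 8
        addr = reg * 8 + off * nbytes  # vectors simply overlap
        regfile[addr:addr + nbytes] = val.to_bytes(nbytes, 'little')

    # r1 as a vector of four 16-bit elements: exactly fills r1
    for i, v in enumerate([0x1111, 0x2222, 0x3333, 0x4444]):
        set_elem(1, 16, i, v)
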
## Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128-bit result, which clearly
cannot be stored in a single 64-bit register. Some ISAs recommend
"macro-op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add explicit, more complex single instructions with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
was not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`maddedu` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of maddedu
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, id, result&destmask)
            set_polymorphed_reg(RT, destwid, id+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { id += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

    LSB0: 63:32     31:0
    MSB0: 0:31      32:63
    r0    unchanged unchanged
    r1    RT1.lo    RT0.lo
    r2    unchanged RT2.lo
    r3    RT0.hi    unchanged
    r4    RT2.hi    RT1.hi
    r5    unchanged unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32-bit quantities) counting from r1.

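The placement in the table can be computed mechanically (a sketch under
the same assumptions: VL=3, MAXVL=5, RT=1, destination elwidth 32):

    # where each result element lands (32-bit elements, LSB0)
    MAXVL, RT, wid = 5, 1, 32
    for i in range(3):  # VL=3
        for half, idx in (("lo", i), ("hi", i + MAXVL)):
            reg = RT + (idx * wid) // 64  # which 64-bit register
            bit = (idx * wid) % 64        # starting bit within it
            print(f"RT{i}.{half} -> r{reg} bits {bit+wid-1}:{bit}")
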
*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]

[[!tag standards]]

------

\newpage{}
