* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.
## Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

Any SPRs that are not implemented, including reserved ones for future
use, must raise an illegal instruction trap if read or written. This
allows software the opportunity to emulate the context created by the
given SPR.

See [[sv/compliancy_levels]] for full details.
## XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect on performance.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>
Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.
    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8  # takes carry-in, produces carry-out
    adde r1, r5, r9  # takes carry from previous
    adde r2, r6, r10 # likewise
    adde r3, r7, r11 # likewise
It can clearly be seen that the carry chains from one
64-bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
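The carry-chaining behaviour above can be sketched as a small Python
model (illustrative only, not the specification pseudocode): each
element-level `adde` consumes the carry from the previous limb and
produces one for the next.

```python
def sv_adde(a, b, ca, vl=4):
    """Model of sv.adde: vl sequential 64-bit adds chained through CA.

    a, b: lists of 64-bit limbs, least-significant limb first.
    ca:   single-bit carry-in.  Returns (result limbs, carry-out).
    """
    mask = (1 << 64) - 1
    out = []
    for i in range(vl):        # the hardware for-loop
        s = a[i] + b[i] + ca   # adde: takes carry-in...
        out.append(s & mask)   # ...stores the 64-bit limb...
        ca = s >> 64           # ...and produces carry-out
    return out, ca

# 256-bit add: all-ones plus carry-in 1 ripples through all four limbs,
# leaving CA holding the 257th bit
limbs, ca = sv_adde([(1 << 64) - 1] * 4, [0] * 4, 1)
```

A hardware implementation is of course free to compute the same result
with Parallel Carry Lookahead rather than this sequential ripple.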
## EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix, MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]
* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
  by reading the markdown-formatted version of the Scalar pseudocode,
  which is machine-readable and found in [[openpower/isatables]]. The
  analysis gives, per instruction, a "Register Profile". `add RT, RA, RB`
  for example is given a designation `RM-2R-1W` because it requires
  two GPR reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3 registers)
  and if less than or equal to three then that instruction could be given an
  EXTRA3 designation. Four or more is given an EXTRA2 designation because
  there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`extw` and LD/ST);
  however it was found that some 2- or 3-operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
  was chosen such that RA is indexed 0 (EXTRA bits 0-2), RB indexed 1
  (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some cases
  (LD/ST with update) RA-as-a-source is given a **different** EXTRA index
  from RA-as-a-result (because it is possible to do, and perceived to be
  useful). Rc=1 co-results (CR0, CR1) are always given the same EXTRA
  index as their main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis
  were outputted in CSV Format for use in machine-readable form
  by sv_analysis.py
  <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
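The counting rule in the second step can be sketched as follows (a
hypothetical helper, illustrating only the EXTRA2-versus-EXTRA3
decision described above):

```python
def extra_designation(profile: str) -> str:
    """Given a register profile such as "2R-1W", return the EXTRA kind.

    Three or fewer registers fit 3 EXTRA bits each into the 9-bit EXTRA
    field (EXTRA3); four or more must share 2 bits each (EXTRA2).
    """
    reads, writes = profile.split("-")          # e.g. "2R", "1W"
    total = int(reads[:-1]) + int(writes[:-1])  # strip the R/W suffix
    return "EXTRA3" if total <= 3 else "EXTRA2"

print(extra_designation("2R-1W"))  # add RT, RA, RB
```

The canonical decision-making remains sv_analysis.py; this merely
restates the counting criterion.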
This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Qualifying future Power ISA Scalar instructions for SVP64
is **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.
## Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This
indicates that *both* sz *and* dz are set to the same value.
The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |
The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.
The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz (in
this example, both are zero).
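The three schedules above can be reproduced with a small Python model
(a sketch of the stepping rules only, assuming bit i of `mask`
corresponds to element i):

```python
def twin_step_schedule(mask, VL, sz, dz):
    """Yield (srcstep, dststep) pairs for a single predicate mask.

    A masked-out element is skipped unless the corresponding zeroing
    flag (sz for source, dz for destination) is set, in which case the
    element is zeroed rather than skipped.
    """
    src = dst = 0
    while src < VL and dst < VL:
        # without zeroing, advance past masked-out elements
        while src < VL and not sz and not (mask >> src) & 1:
            src += 1
        while dst < VL and not dz and not (mask >> dst) & 1:
            dst += 1
        if src >= VL or dst >= VL:
            break    # loop has ended: one index reached VL
        yield src, dst
        src += 1
        dst += 1

# mask 0b1101 (element 1 masked out), VL=4, sz=1 dz=0: first table above
print(list(twin_step_schedule(0b1101, 4, sz=1, dz=0)))  # [(0, 0), (1, 2), (2, 3)]
```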
## Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to `llvm.masked.compressstore.*` followed by
`llvm.masked.expandload.*` with a single instruction, but abstracted
out from Load/Store and applicable in general to any 2P instruction.

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` prefix (but on steroids).
## Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering. Two bits in the `SVSHAPE` [[sv/spr]] enable either
"packing" or "unpacking" on the subvectors vec2/3/4.
First, illustrating a
"normal" SVP64 operation with `SUBVL!=1` (assuming no elwidth overrides),
note that the VL loop is outer and the SUBVL loop inner:

    for i in range(VL):
        for j in range(SUBVL):
            op(RT + i*SUBVL + j, RA + i*SUBVL + j)
For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:
    # yield an outer-SUBVL or inner-VL loop with SUBVL
    def index_p(outer):
        if outer:
            for j in range(SUBVL): # subvl is outer
                for i in range(VL): # vl is inner
                    yield j*VL + i
        else:
            for i in range(VL): # vl is outer
                for j in range(SUBVL): # subvl is inner
                    yield i*SUBVL + j

    # walk through both source and dest indices simultaneously
    for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
        move_operation(RT+dst_idx, RA+src_idx)

"yield" from python is used here for simplicity and clarity.
The two Finite State Machines for the generation of the source
and destination element offsets progress incrementally, independently
of each other.
Example: VL=2, SUBVL=3, PACK_en=1 - elements grouped by
vec3 will be redistributed such that Sub-elements 0 are
packed together, Sub-elements 1 are packed together, and
likewise Sub-elements 2.
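The redistribution for VL=2, SUBVL=3 can be sketched as follows (the
names `a0`..`b2` are purely illustrative: element (i, j) lives at
offset i*SUBVL+j):

```python
VL, SUBVL = 2, 3
src = ["a0", "a1", "a2", "b0", "b1", "b2"]   # two vec3 groups
dst = [None] * (VL * SUBVL)

# pack: gather sub-element lane j of every vec3 group together,
# i.e. a transpose of the subvector elements
k = 0
for j in range(SUBVL):        # subvl as the outer loop
    for i in range(VL):       # vl as the inner loop
        dst[k] = src[i * SUBVL + j]
        k += 1

print(dst)  # ['a0', 'b0', 'a1', 'b1', 'a2', 'b2']
```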
Setting of both `PACK` and `UNPACK` is neither prohibited nor
`UNDEFINED` because the reordering is fully deterministic, and
additional REMAP reordering may be applied. Combined with
Matrix REMAP this would give potentially up to 4 Dimensions of
reordering.

Pack/Unpack has quirky interactions with
[[sv/mv.swizzle]] because it can set a different subvector length for
the destination, and has a slightly different pseudocode algorithm
for Vertical-First Mode.

Pack/Unpack is enabled (set up) through [[sv/svstep]].
## Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents
some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.
For Scalar Reduction,
Microarchitectures *may* take opportunities to parallelise the reduction,
but only if in doing so they preserve strict Program Order at the Element
Level. Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation: it may be possible to parallelise the reduction,
but for Floating Point it is not permitted due to different results
being obtained if the reduction is not executed in strict
Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.
### Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.
It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration),
and it is up to the programmer to make best use of the
(strictly deterministic) capability provided.
In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.
*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibiting seemingly non-sensical operations is too
great. Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested. However it would clearly
be unacceptable to perform such optimisations on cache-inhibited LD/ST,
so some considerable care needs to be taken.*
Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA
However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`),
the Vector Loop **terminates** at the first scalar operation. Only by
marking the operation as "mapreduce" will it continue to issue multiple
sub-looped (element) instructions in `Program Order`.
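The difference can be sketched as follows (an illustrative model only:
it counts how many element operations are issued when the destination
is scalar, with and without `/mr`):

```python
def scalar_dest_ops(VL, mapreduce):
    """Count element operations issued when the destination is scalar."""
    issued = 0
    for _ in range(VL):
        issued += 1
        if not mapreduce:
            break  # ordinary mode: loop terminates at first scalar op
    return issued

print(scalar_dest_ops(4, mapreduce=False), scalar_dest_ops(4, mapreduce=True))
```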
To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse.
`sv.add/rg r3.v, r4.v, r4.v` for example will start at the opposite end
of the Vector and push a cumulative series of overlapping add operations
into the Execution units of the underlying hardware.
Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v
Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.
If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
appear to occur in strict Program Order.
## Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST,
the other for arithmetic operations (actually, CR-driven; see
[[sv/normal]]) and CR operations ([[sv/cr_ops]]). In each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.
* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to
  the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).
The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. At the same time it is also "old" because it is a
generalisation of the Z80
[Block compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir)
which is based on CP (compare) as the ultimate "element" (suffix)
operation to which the repeat (prefix) is applied.
It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.
VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of
implementations of `strncpy`, to include the terminating zero.
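The truncation rule can be modelled as follows (a sketch only: `test`
stands for the selected CR bit compared against `inv`, and `results`
for the per-element results in Program Order):

```python
def ffirst_new_vl(results, test, VLi):
    """Return the truncated VL after CR-based data-dependent fail-first.

    Walk elements in Program Order; on the first element whose test
    fails, truncate VL to that element inclusive (VLi=True) or to the
    previous one (VLi=False).  If no test fails, VL is unchanged.
    """
    for i, value in enumerate(results):
        if not test(value):
            return i + 1 if VLi else i
    return len(results)

# strncpy-style: stop at the terminating zero, including it (VLi=True)
print(ffirst_new_vl([104, 105, 0, 33], lambda v: v != 0, VLi=True))  # 3
```

Note that with VLi=False and a failing element 0, this model returns
VL=0, matching the CR-based behaviour described below.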
In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the
  first element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be
  set to zero (with the exception of via the SV.STATE SPR). When VL is
  set to zero due to the first element failing the CR bit-test, all
  subsequent vectorised operations are effectively `nops`, which is
  *precisely the desired and intended behaviour*.
Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.
CR-based data-dependent first on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails. This is because it
is a precise test on which algorithms will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First.
REMAP will need to be activated to invert the ordering of element
traversal.*
### Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole Condition Register)

More details can be found in [[sv/cr_ops]].
## Pred-result mode

Pred-result mode may not be applied on CR-based operations.
Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.
Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based
instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].
## CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B /
v3.1B numbering, with a clear linear relationship and mapping existing
when SVP64 is applied.
### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. Firstly and most
importantly a new notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`,
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{n} = CR[32+n*4:35+n*4]
For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (EQ/LT/GT/SO). The numbering was determined (after 4 months
of analysis and research) to be as follows:
    CR_index = (BA>>2)      # top 3 bits
    bit_index = (BA & 0b11) # low 2 bits
    CR_reg = CR{CR_index}   # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which the scalar/vector extension
applies, **not** the `CR_bit` portion (bits 3-4):
    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0 # EXTRA2 padded to 3 bits
    if spec[0]: # vector
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else: # scalar
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA
Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:
    CR_index = (BA>>2)      # top 3 bits
    if spec[0]: # vector
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else: # scalar
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = (BA & 0b11) # low 2 bits
    CR_reg = CR{CR_index}   # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0
Note here that the decoding pattern to determine CR\_bit does not change.

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.
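The field-index computation above can be written as runnable Python (a
sketch: `spec_sv` stands for spec[0], the scalar/vector bit, and
`spec_lo` for the remaining two spec bits — hypothetical names, not
from the specification):

```python
def sv_cr_field_index(BA, spec_sv, spec_lo):
    """Compute the extended CR field index from a 5-bit BA and EXTRA spec."""
    cr_index = BA >> 2                        # top 3 bits: v3.0B CR field
    if spec_sv:
        # vector mode: 0-124 in increments of 4
        return (cr_index << 4) | (spec_lo << 2)
    # scalar mode: increments of 1
    return (spec_lo << 3) | cr_index

# BA=0b11100 selects CR7; as a vector base with spec_lo=0b11 -> CR{124}
print(sv_cr_field_index(0b11100, spec_sv=1, spec_lo=0b11))  # 124
```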
### CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1, in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised; likewise if the
destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.
Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX,
which has a single CR (CR6) for a given SIMD result, SV Vectorised
OpenPOWER v3.0B scalar operations produce a **tuple** of element results:
the result of the operation as one part of that element *and a
corresponding CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations (see [[sv/cr_int_predication]]).

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

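A sketch of the follow-up "reduce" analysis, modelling crand/cror applied across the EQ bits of the Vector of CRs (the function names are invented for illustration):

```python
from functools import reduce

# "were all results zero" - AND-reduction of the EQ bits, a la crand
def cr_all_eq(eq_bits):
    return reduce(lambda a, b: a & b, eq_bits, 1)

# "were some results zero" - OR-reduction of the EQ bits, a la cror
def cr_any_eq(eq_bits):
    return reduce(lambda a, b: a | b, eq_bits, 0)

assert cr_all_eq([1, 1, 1]) == 1 and cr_all_eq([1, 0, 1]) == 0
assert cr_any_eq([0, 0, 1]) == 1 and cr_any_eq([0, 0, 0]) == 0
```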
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(See [[discussion]]; some alternative schemes are described there.)

### Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only one
bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND.

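A minimal sketch of this proposed behaviour, with the choice between OR and AND passed as a boolean (the OE-based selection is, per the text, only a suggested use of the otherwise-ignored field):

```python
# One CR test per subvector: combine the per-element tests with OR or
# AND. `use_and` stands in for the (suggested) OE-based selection.
def subvector_cr_test(element_tests, use_and):
    return all(element_tests) if use_and else any(element_tests)

# vec3 subvector where two of three elements pass the test:
assert subvector_cr_test([True, True, False], use_and=False) is True
assert subvector_cr_test([True, True, False], use_and=True) is False
```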
#### Table of CR fields

CRn is the notation used by the OpenPOWER spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
64 CRs instead of 8.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

## SV pseudocode illustration

### Single-predicated Instruction

Illustration of normal-mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          break # end
        }

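The loop above translates almost directly into runnable Python; vectorness is passed in explicitly here rather than read from SVSTATE context, purely for illustration:

```python
# Python model of op_add: predicated element loop with scalar/vector
# source and destination stepping (no elwidth overrides, no zeroing).
def op_add(ireg, rd, rs1, rs2, VL, predval, isvec):
    id_, irs1, irs2 = 0, 0, 0
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id_] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not isvec[rd]:
                break  # scalar destination: stop after first write
        if isvec[rd]:  id_  += 1
        if isvec[rs1]: irs1 += 1
        if isvec[rs2]: irs2 += 1

regs = [0] * 16
regs[4:8] = [1, 2, 3, 4]   # RA: 4-element vector at r4
regs[8] = 10               # RB: scalar at r8
op_add(regs, 0, 4, 8, VL=4, predval=0b1111,
       isvec={0: True, 4: True, 8: False})
assert regs[0:4] == [11, 12, 13, 14]  # each vector element plus scalar
```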
This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one source is a Vector and the other a Scalar, each element of the
Vector source is added to the Scalar source, with each result placed
into the Vector (or, if the destination is a scalar, only the first
non-predicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all non-predicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

## Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

This is similar to the x86 "REX" prefix.

For actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Where:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
* fail-first:
  - ff=lt/gt/le/ge/eq/ne/so/ns
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

## Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
abstraction of hardware looping, sitting in between the issue and
execute phases, that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD):

[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
partially-completed reduction state.

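A heavily simplified, non-normative sketch of the principle (the linked test_preduce.py is the authoritative executable version; predication and the in-place position-tracking for interrupted runs are omitted here):

```python
# Tree reduction with a doubling stride: partial results stay at their
# element positions and are combined over log2(VL) layers, with the
# left-to-right operand order preserved for non-commutative operators.
def preduce_sketch(vec, op):
    step = 1
    while step < len(vec):
        for i in range(0, len(vec), step * 2):
            other = i + step
            if other < len(vec):
                vec[i] = op(vec[i], vec[other])
        step *= 2
    return vec[0]

assert preduce_sketch([1, 2, 3, 4, 5], lambda a, b: a + b) == 15
# non-commutative op: operand order is preserved left-to-right
assert preduce_sketch(["a", "b", "c"], lambda a, b: a + b) == "abc"
```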
Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's Note: if passed a predicate mask with only one bit set,
this algorithm takes no action, similar to when a predicate mask is
all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake that would be catastrophic for SVP64 is to
limit the Reduction Sequence for all implementors
based solely and exclusively on what one
specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and imposed
on the programmer. Cray-style Vector ISAs on the other hand provide
convenient, compact and efficient encodings of abstract concepts.*

**It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
they require.**

## Element-width overrides <a name="elwidth"></a>

Element-width overrides are best illustrated with a packed structure
union in the c programming language. The following should be taken
literally, and assume always a little-endian layout:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
        uint8_t actual_bytes[8];
    } elreg_t;

    elreg_t int_regfile[128];

Accessing (get and set) of registers is defined below, given a register
(in `elreg_t` form), a bitwidth and an element offset; all arithmetic,
numbering and pseudo-Memory format is LE-endian and LSB0-numbered:

    elreg_t& get_polymorphed_reg(elreg_t const& reg, bitwidth, offset):
        elreg_t res; // result
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if !reg.isvec: // scalar access has no element offset
            offset = 0
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(elreg_t& reg, bitwidth, offset, val):
        # for safety mask out hi bits
        bitmask = (1 << bitwidth) - 1
        val &= bitmask
        if !reg.isvec:
            # not a vector: first element only, overwrites high bits.
            # and with the *Architectural* definition being LE,
            # storing in the first DWORD works perfectly.
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

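The union and the two accessors can be modelled in Python with the `struct` module; note this sketch does not model the scalar-destination zeroing of upper bits, and the function names are illustrative:

```python
import struct

FMT = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}  # little-endian widths
regfile = bytearray(128 * 8)  # 128 x 64-bit GPRs as one flat byte array

def set_reg(reg, bitwidth, offset, val):
    addr = reg * 8 + offset * (bitwidth // 8)
    struct.pack_into(FMT[bitwidth], regfile, addr, val)

def get_reg(reg, bitwidth, offset):
    addr = reg * 8 + offset * (bitwidth // 8)
    return struct.unpack_from(FMT[bitwidth], regfile, addr)[0]

# five 16-bit elements starting at r1 spill over into r2:
for i in range(5):
    set_reg(1, 16, i, 0x100 + i)
assert get_reg(1, 16, 4) == 0x104
assert get_reg(2, 16, 0) == 0x104  # same storage, reached via r2
```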
In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlay this byte-addressable memory.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zeroed out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

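The packing rule can be stated as plain arithmetic; this helper (invented for illustration) computes where element `i` of width `ew` bits lands, counting from register `RA`:

```python
# element i of an ew-bit vector starting at RA lands i*ew//8 bytes
# beyond RA's first byte in the flat LE regfile pictured above.
def element_location(RA, ew, i):
    byte = RA * 8 + i * (ew // 8)
    return byte // 8, byte % 8   # (register number, byte offset)

assert element_location(4, 32, 0) == (4, 0)
assert element_location(4, 32, 1) == (4, 4)  # second half of r4
assert element_location(4, 32, 2) == (5, 0)  # third element is in r5
```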
## Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128 bit result, which clearly
cannot be stored in a single 64 bit register. Some ISAs recommend
"macro op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
performing the underlying operation internally only once.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add explicit, more complex single instructions with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
were not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq`, this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`maddedu` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. The solution
is simple: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, ird      , result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird  += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

             0..31       32..63
    r0    unchanged    unchanged
    r1    RT0.lo       RT1.lo
    r2    RT2.lo       unchanged
    r3    unchanged    RT0.hi
    r4    RT1.hi       RT2.hi
    r5    unchanged    unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32 bit quantities) counting from r1.

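The same arithmetic reproduces the table: HI half `j` sits at element index `MAXVL + j` (in dest-elwidth quantities) counting from RT. The helper name is invented for illustration:

```python
# where does HI half j of sv.maddedu land, with a flat LE regfile?
def hi_half_location(RT, MAXVL, elwidth, j):
    byte = RT * 8 + (MAXVL + j) * (elwidth // 8)
    return byte // 8, byte % 8   # (register number, byte offset)

assert hi_half_location(1, 5, 32, 0) == (3, 4)  # RT0.hi: half-way into r3
assert hi_half_location(1, 5, 32, 1) == (4, 0)  # RT1.hi
assert hi_half_location(1, 5, 32, 2) == (4, 4)  # RT2.hi
```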
*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

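The note's rounding rule, expressed as arithmetic (helper name invented):

```python
# round MAXVL up to a whole number of elements-per-register so the
# HI-half Vector starts on a clean register boundary.
def rounded_maxvl(maxvl, elwidth):
    per_reg = 64 // elwidth                 # elements per 64-bit register
    return -(-maxvl // per_reg) * per_reg   # ceiling to a multiple

assert rounded_maxvl(5, 32) == 6   # the example above: round 5 up to 6
assert rounded_maxvl(6, 32) == 6
assert rounded_maxvl(5, 16) == 8   # 16-bit elements: 4 per register
```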
Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
