[[!tag standards]]

# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

# Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given SPR.

See [[sv/compliancy_levels]] for full details.

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV
<https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8 # takes carry-in, produces carry-out
    adde r1, r5, r9 # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one 64-bit add to the
next, the end result being that a 256-bit "Big Integer Add with Carry"
has been performed, and that CA contains the 257th bit. A one-instruction
512-bit Add-with-Carry may be performed by setting VL=8, and a
one-instruction 1024-bit Add-with-Carry by setting VL=16, and so on.
More on this in [[openpower/sv/biginteger]].
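
A minimal Python sketch (names illustrative, not normative) of what
that VL=4 `sv.adde` loop computes: a 256-bit add with a single-bit
carry in and carry out.

    # one adde element per iteration: the carry chains element to element
    def sv_adde(regs, RT, RA, RB, VL, ca):
        mask = (1 << 64) - 1
        for i in range(VL):
            s = regs[RA + i] + regs[RB + i] + ca
            regs[RT + i] = s & mask  # 64-bit result element
            ca = s >> 64             # single-bit carry into next element
        return ca                    # for VL=4 this is the 257th bit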

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix (MLSS for
example). For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
  from reading the markdown formatted version of the Scalar pseudocode
  which is machine-readable and found in [[openpower/isatables]]. The
  analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
  for example is given a designation `RM-2R-1W` because it requires
  two GPR reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3 registers)
  and if less than or equal to three then that instruction could be given an
  EXTRA3 designation. Four or more is given an EXTRA2 designation because
  there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`extw` and LD/ST);
  however it was found that some 2- or 3-operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made, here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W with EXTRA3, for
  example, it could be decided
  that RA would be indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5)
  and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update)
  RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
  (because it is possible to do, and perceived to be useful). Rc=1
  co-results (CR0, CR1) are always given the same EXTRA index as their
  main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis
  were outputted in CSV Format for use in machine-readable form
  by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
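
The designation rule in the second and third steps may be sketched in
Python (an illustrative reconstruction, not the actual sv_analysis.py
logic):

    # each qualified register needs 3 EXTRA bits (EXTRA3); Twin
    # Predication takes 3 of the 9 bits, and 4+ registers cannot
    # fit at 3 bits each, forcing the 2-bit EXTRA2 encoding
    def extra_designation(num_regs, twin_predication=False):
        available = 9 - (3 if twin_predication else 0)
        return "EXTRA3" if num_regs * 3 <= available else "EXTRA2"

    extra_designation(3)                         # 'EXTRA3', e.g. add
    extra_designation(4)                         # 'EXTRA2'
    extra_designation(3, twin_predication=True)  # 'EXTRA2', e.g. LD/ST-update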

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.

# Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of
one single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz.
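
The rule all three examples follow is that a masked-out element is
skipped only when the relevant zeroing flag (sz or dz) is clear. A
short Python sketch (illustrative only) reproduces the schedules:

    # yield the step sequence for one side (src or dst) of the loop
    def stepper(VL, mask, zero):
        for step in range(VL):
            if zero or (mask >> step) & 1:
                yield step

    # Example 1: sz=1 (src never skips), dz=0 (dst skips mask[1])
    for src, dst in zip(stepper(4, 0b1101, True),
                        stepper(4, 0b1101, False)):
        print(src, dst)  # 0 0 / 1 2 / 2 3, matching the table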

# Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
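
By way of illustration, a few hypothetical examples in the `.v`/`.s`
register notation used elsewhere on this page (`ori` with a zero
immediate serving as a Vectorised move):

    sv.extsb r8.v, r4.v    # Vectorised sign-extension
    sv.ori r8.v, r4.s, 0   # VSPLAT: scalar r4 copied across the vector
    sv.ori r8.s, r4.v, 0   # VEXTRACT-like: first unmasked element only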

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
in a single instruction, but abstracted out from Load/Store and applicable
in general to any 2P instruction.
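
A sketch of that twin-predicated schedule for a hypothetical
Vectorised move (predicate skipping only; zeroing and the exact VL
termination rules are omitted):

    # source elements selected by smask are moved, in order, to the
    # destination positions selected by dmask (compress, then expand)
    def twin_pred_mv(regs, RT, RA, VL, smask, dmask):
        src = dst = 0
        while src < VL and dst < VL:
            while src < VL and not (smask >> src) & 1:
                src += 1  # skip masked-out source elements
            while dst < VL and not (dmask >> dst) & 1:
                dst += 1  # skip masked-out destination slots
            if src < VL and dst < VL:
                regs[RT + dst] = regs[RA + src]
                src += 1
                dst += 1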

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` prefix (if put on steroids).

# Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering.
Two bits in the `SVSHAPE` [[sv/spr]]
enable either "packing" or "unpacking"
on the subvectors vec2/3/4.

First, illustrating a
"normal" SVP64 operation with `SUBVL!=1` (assuming no elwidth overrides),
note that the VL loop is outer and the SUBVL loop inner:

    def index():
        for i in range(VL):
            for j in range(SUBVL):
                yield i*SUBVL+j

    for idx in index():
        operation_on(RA+idx)

For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:

    # yield the element offsets for either loop order. the offset
    # formula is the same in both cases: only the loop nesting
    # (and hence the visiting order) differs.
    def index_p(outer):
        if outer:
            for j in range(SUBVL):  # subvl is outer
                for i in range(VL):  # vl is inner
                    yield i*SUBVL+j
        else:
            for i in range(VL):  # vl is outer
                for j in range(SUBVL):  # subvl is inner
                    yield i*SUBVL+j

    # walk through both source and dest indices simultaneously
    for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
        move_operation(RT+dst_idx, RA+src_idx)

Python's "yield" is used here for simplicity and clarity.
The two Finite State Machines for the generation of the source
and destination element offsets progress incrementally in
lock-step.

Example VL=2, SUBVL=3, PACK_en=1 - elements grouped by
vec3 will be redistributed such that Sub-elements 0 are
packed together, Sub-elements 1 are packed together, as
are Sub-elements 2.

    srcstep=0   srcstep=1
    0  1  2     3  4  5

    dststep=0   dststep=1   dststep=2
    0  3        1  4        2  5
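
Running the `index_p` sketch above with VL=2 and SUBVL=3 confirms the
redistribution shown:

    # list(index_p(True))  -> [0, 3, 1, 4, 2, 5]  (SUBVL-outer order)
    # list(index_p(False)) -> [0, 1, 2, 3, 4, 5]  (normal order)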

Setting of both `PACK` and `UNPACK` is neither prohibited nor
`UNDEFINED` because the reordering is fully deterministic, and
additional REMAP reordering may be applied. Combined with
Matrix REMAP this would
give potentially up to 4 Dimensions of reordering.

Pack/Unpack has quirky interactions on
[[sv/mv.swizzle]] because it can set a different subvector length for
the destination, and has a slightly different pseudocode algorithm
for Vertical-First Mode.

Pack/Unpack is enabled (set up) through [[sv/svstep]].

# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

For Scalar Reduction,
Microarchitectures *may* take opportunities to parallelise the reduction
but only if in doing so they preserve strict Program Order at the
Element Level. Opportunities where this is possible include an `OR`
operation or a MIN/MAX operation: the result is identical regardless
of execution order. For Floating Point however parallelisation is not
permitted, because different results would be obtained if the reduction
were not executed in strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions such as `setb` in Scalar reduce mode
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is
too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested. However it would clearly
be unacceptable to perform such optimisations on cache-inhibited LD/ST,
so some considerable care needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way
to apply Iterative or Cumulative Vector operations in reverse.
`sv.add/rg r3.v, r4.v, r4.v` for example will start at the opposite end
of the Vector and push a cumulative series of overlapping add operations
into the Execution units of the underlying hardware.
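
In Python terms, the forward (mr) and reverse-gear (mrr) schedules
differ only in the order in which the elements are issued, shown here
using the same `iregs` convention as the pseudocode above:

    # sv.add/mr r3, r10.v, r3 -- forward accumulation
    for i in range(VL):
        iregs[3] = iregs[3] + iregs[10 + i]

    # sv.add/mrr r3, r10.v, r3 -- same, but issued VL-1 downto 0
    for i in reversed(range(VL)):
        iregs[3] = iregs[3] + iregs[10 + i]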

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

# Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
[[sv/normal]] and CR operations [[sv/cr_ops]].
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. At the same time it is also "old" because it is a generalisation
of the Z80
[Block compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir)
which is based on CP (compare) as the ultimate "element" (suffix)
operation to which the repeat (prefix) is applied.
It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.
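
A sketch of the truncation rule (the helper names are illustrative,
not part of the specification):

    # one data-dependent fail-first step per element
    def ddffirst_vl(VL, VLi):
        for i in range(VL):
            result, crf = execute_element(i)   # hypothetical element op
            if cr_test_fails(crf):             # selected CR bit == inv
                if VLi:
                    store_element(i, result)   # VLi: current element kept
                    return i + 1
                return i                       # otherwise discard current
            store_element(i, result)
        return VL                              # no test failed: VL unchanged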

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent fail-first on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First.
REMAP will need to be activated to invert the ordering of element
traversal.*

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies to operations that have
an Rc=1 mode, or for which an RC1 option makes sense.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. Firstly and most
importantly a new notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

    CR{n} = CR[32+n*4:35+n*4]
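
In Python terms, treating the CR as a plain 32-bit integer and keeping
the ISA's MSB0 bit-numbering convention, a sketch of that field
extraction:

    # CR{n}: extract 4-bit field n (0..7); field 0 is most significant
    def CR_field(CR, n):
        return (CR >> (28 - n*4)) & 0xF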

For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits (BF, which refers directly
to a whole CR Field, is only 3). The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (EQ/LT/GT/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = (BA>>2)      # top 3 bits
    bit_index = (BA & 0b11) # low 2 bits
    CR_reg = CR{CR_index}   # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which SV EXTRA2/3
applies, **not** the `CR_bit` portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified to as follows:

    CR_index = (BA>>2)      # top 3 bits
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-31 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = (BA & 0b11) # low 2 bits
    CR_reg = CR{CR_index}   # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.
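
A worked example, assuming EXTRA3 spec=0b101 (vector, spec[1:2]=0b01)
and BA=0b01110 (CR Field 3, bit 2):

    # spec[0]=1 -> vector; BA>>2 = 0b011 = 3
    # CR_index = (3<<4) | (0b01<<2) = 52
    # the Vector for this operand therefore starts at CR Field 52;
    # bit_index = BA & 0b11 = 2, exactly as in scalar v3.0B decoding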

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD
VSX which has a single CR (CR6) for a given SIMD result, SV Vectorised
OpenPOWER v3.0B scalar operations produce a **tuple** of element results:
the result of the operation as one part of that element *and a
corresponding CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]. some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND behaviour.
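
A sketch of that combining rule, with `element_test` standing in for
whichever per-sub-element CR test applies:

    # one CR per subvector: OR- or AND-combine the sub-element tests
    # (selection hypothetically via the otherwise-ignored OE field)
    def subvector_cr_test(sub_elements, use_and):
        combine = all if use_and else any
        return combine(element_test(e) for e in sub_elements)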

### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        for (i = 0; i < VL; i++)
            STATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                if (!int_vec[rd].isvec) break;
            if (rd.isvec)  { id += 1; }
            if (rs1.isvec) { irs1 += 1; }
            if (rs2.isvec) { irs2 += 1; }
            if (id == VL or irs1 == VL or irs2 == VL) {
                # end VL hardware loop
                STATE.srcoffs = 0; # reset
                return;
            }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear
that each element of the Vector source should be added to the Scalar
source, each result placed into the Vector (or, if the destination is a
scalar, only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "REX" prefix.

For actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Parallel-reduction algorithm

The principle of SVP64 is that SVP64 is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD)

```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then
limit the Reduction Sequence for all implementors
based solely and exclusively on what one
specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and
imposed on the programmer. Cray-style Vector ISAs on the other hand
provide convenient, compact and efficient encodings of abstract
concepts.*
**It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

# Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:

    #pragma pack
    typedef union {
        uint8_t  b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

Accessing (get and set) of registers is defined below, given a register
(in `el_reg_t` form), a bitwidth and an element offset. All arithmetic,
numbering and pseudo-Memory format is LE-endian and LSB0-numbered:

    el_reg_t get_polymorphed_reg(el_reg_t const& reg, bitwidth, offset):
        el_reg_t res; // result
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if !reg.isvec: // scalar access has no element offset
            offset = 0
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(el_reg_t& reg, bitwidth, offset, val):
        if (!reg.isvec):
            # for safety mask out hi bits
            bitmask = (1 << bitwidth) - 1
            val &= bitmask
            # not a vector: first element only, overwrites high bits.
            # and with the *Architectural* definition being LE,
            # storing in the first DWORD works perfectly.
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.
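
For example, with an element width of 8, a vector starting at r3 packs
its elements into successive bytes, overlapping the storage normally
thought of as r4, r5 and so on:

    # int_regfile[3].b[0..7] -> elements 0-7 of the vector
    # int_regfile[4].b[0..7] -> elements 8-15 ("r4" storage, overlapped)
    # a scalar access to r4 meanwhile reads int_regfile[4].l[0]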

It is extremely important for implementors to note that the only
circumstance where upper portions of an underlying 64-bit register
are zeroed out is when the destination is a scalar. The ideal register
file has byte-level write-enable lines, just like most SRAMs, in order
to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128-bit result, which clearly
cannot be stored in a single 64-bit register. Some ISAs recommend
"macro op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add explicit, more complex single instructions with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
were not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bits, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`maddedu` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bear in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration. The starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of maddedu
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, ird, result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

    LSB0:  63:32     31:0
    MSB0:  0:31      32:63
    r0    unchanged unchanged
    r1    RT1.lo    RT0.lo
    r2    unchanged RT2.lo
    r3    RT0.hi    unchanged
    r4    RT2.hi    RT1.hi
    r5    unchanged unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32-bit quantities) counting from r1.

*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]