# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574>
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47>
* <https://bugs.libre-soc.org/show_bug.cgi?id=697>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page free to focus on its primary purpose:
outlining the instruction format.

Table of contents:

[[!toc]]

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER.SO and other global "accumulation" flags (CR.OV) cause
Read-Write Hazards on single-bit global resources, which has a
significant detrimental effect on performance.

Consequently in SV, XER.SO and CR.OV behaviour is disregarded (including
in `cmp` instructions). XER is simply neither read nor written.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

An interesting side-effect of this decision is that the OE flag is now
free for other uses when SV Prefixing is used.

Regarding XER.CA: this, too, does not fit: it was designed for a scalar
ISA. Instead, both carry-in and carry-out go into the CR.so bit of a given
Vector element. This provides a means to perform large parallel batches
of Vectorised carry-capable additions. crweird instructions can be used
to transfer the CRs in and out of an integer, where bit-manipulation
may be performed to analyse the carry bits (including carry lookahead
propagation) before continuing with further parallel additions.

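As a purely conceptual sketch (the exact crweird mnemonics and the
CR-to-integer transfer encoding are not given here), the following
Python illustrates the idea: per-element carry-outs held in CR.so bits
are gathered into a single integer for bit-manipulation (e.g.
carry-lookahead analysis) and then re-applied as carry-ins to a further
batch of additions:

    # conceptual sketch only: crs_so stands in for the Vector of CR.so bits;
    # a real sequence would use crweird-style instructions to move CR bits
    # into and out of an integer register.
    def gather_carries(crs_so, vl):
        mask = 0
        for i in range(vl):
            if crs_so[i]:            # carry-out of element i
                mask |= 1 << i
        return mask                  # now available for bit-manipulation

    def apply_carries(vec, carries, vl):
        # simplification: feed the analysed carry bits back in as carry-ins
        for i in range(vl):
            vec[i] += (carries >> i) & 1
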
# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

As mentioned above, OE=1 is not applicable in SV, freeing this bit for
alternative uses. Additionally, Vectorisation of the VSX SIMD system
likewise makes no sense whatsoever. SV *replaces* VSX and provides,
at the very minimum, predication (which VSX was designed without).
Thus all VSX Major Opcodes - all of them - are "unused" and must raise
illegal instruction exceptions in SV Prefix Mode.

Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to
have: not only is their functionality provided by SV, the SV alternatives
may be predicated as well, making them far better suited to use in
function calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their discussion
and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B.
Table 9: Primary Opcode Map (opcode bits 0:5)

        | 000    | 001    | 010   | 011   | 100    | 101    | 110   | 111
    000 |        |        | tdi   | twi   | EXT04  |        |       | mulli | 000
    001 | subfic |        | cmpli | cmpi  | addic  | addic. | addi  | addis | 001
    010 | bc/l/a | EXT17  | b/l/a | EXT19 | rlwimi | rlwinm |       | rlwnm | 010
    011 | ori    | oris   | xori  | xoris | andi.  | andis. | EXT30 | EXT31 | 011
    100 | lwz    | lwzu   | lbz   | lbzu  | stw    | stwu   | stb   | stbu  | 100
    101 | lhz    | lhzu   | lha   | lhau  | sth    | sthu   | lmw   | stmw  | 101
    110 | lfs    | lfsu   | lfd   | lfdu  | stfs   | stfsu  | stfd  | stfdu | 110
    111 | lq     | EXT57  | EXT58 | EXT59 | EXT60  | EXT61  | EXT62 | EXT63 | 111
        | 000    | 001    | 010   | 011   | 100    | 101    | 110   | 111

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that
make no sense in a Vectorisation Context have been removed. These removed
POs can, *in the SV Vector Context only*, be assigned to alternative
(Vectorised-only) instructions, including future extensions.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

        | 000    | 001    | 010   | 011   | 100    | 101    | 110   | 111
    000 |        |        |       |       |        |        |       | mulli | 000
    001 | subfic |        | cmpli | cmpi  | addic  | addic. | addi  | addis | 001
    010 |        |        |       | EXT19 | rlwimi | rlwinm |       | rlwnm | 010
    011 | ori    | oris   | xori  | xoris | andi.  | andis. | EXT30 | EXT31 | 011
    100 | lwz    | lwzu   | lbz   | lbzu  | stw    | stwu   | stb   | stbu  | 100
    101 | lhz    | lhzu   | lha   | lhau  | sth    | sthu   |       |       | 101
    110 | lfs    | lfsu   | lfd   | lfdu  | stfs   | stfsu  | stfd  | stfdu | 110
    111 |        |        | EXT58 | EXT59 |        | EXT61  |       | EXT63 | 111
        | 000    | 001    | 010   | 011   | 100    | 101    | 110   | 111

It is important to note that having an SVP64 opcode whose meaning differs
from its v3.0B Scalar counterpart is highly undesirable: the complexity
in the decoder is greatly increased.

# Single Predication

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep: unlike Twin-Predication the two must be equal at
all times.

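A minimal sketch of the Single-Predication hardware loop described above
(predicate zeroing and elwidth overrides omitted; `predicate_masked_out`
is the same illustrative helper used in the pred-result pseudocode later
in this appendix):

    # sketch: single predication; one mask governs both source and destination.
    # srcstep and dststep advance together and are always equal.
    for i in range(VL):
        SVSTATE.srcstep = i
        SVSTATE.dststep = i
        if predicate_masked_out(i):   # the same predicate bit for src and dest
            continue
        iregs[RT+i] = op(iregs[RA+i], iregs[RB+i])
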
# Twin Predication

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`.

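The following rough sketch shows the twin-predication stepping concept
for the vector-source, vector-destination case only (zeroing and elwidth
overrides omitted; `get_pred_val` follows the pseudocode convention used
later in this appendix). Each predicate independently skips its own
masked-out elements, which is what gives VCOMPRESS/VEXPAND behaviour:

    # rough sketch: twin predication on a 1-src, 1-dest operation (e.g. mv)
    ps = get_pred_val(FALSE, rs)    # source predicate
    pd = get_pred_val(FALSE, rd)    # destination predicate
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps & (1 << i)): i += 1   # next active source
        while j < VL and not (pd & (1 << j)): j += 1   # next active dest
        if i < VL and j < VL:
            iregs[rd + j] = iregs[rs + i]              # element-level mv
            i += 1
            j += 1
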
# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the reduction
but only if in doing so they preserve Program Order at the Element Level.
Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation: it may be possible to parallelise the reduction,
but for Floating Point it is not permitted due to different results
being obtained if the reduction is not executed in strict sequential
order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduce mode

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this and observe some
conventions.

It is also important to appreciate that there is no actual imposition
or restriction on how this mode is utilised: there will therefore be
several valuable uses (including Vector Iteration and "Reverse-Gear")
and it is up to the programmer to make best use of the (strictly
deterministic) capability provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be identified by the programmer as being the "accumulator".
Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that instructions issued in Scalar reduce mode such as `setb`
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is
too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements in the vector starting at r10.

    # add RT, RA,RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce", SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way
to apply Iterative or Cumulative Vector operations in reverse.
`sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution
units of the underlying hardware.

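As a purely illustrative sketch (not the formal element-schedule
definition), a scalar-destination mapreduce with RG set simply walks the
element loop backwards, so the "accumulator" picks up elements from the
far end of the Vector first:

    # illustrative sketch only: scalar-dest mapreduce with RG (reverse gear)
    for i in reversed(range(VL)):              # element VL-1 first, element 0 last
        iregs[RT] = op(iregs[RT], iregs[RA+i]) # RT acts as the "accumulator"
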
Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required, as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

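A minimal sketch of that precise-interrupt requirement, assuming (purely
for illustration) that the element step is held in SVSTATE srcstep as
for other SV loops:

    # sketch: scalar mapreduce with a precise interrupt in the middle.
    # the intermediate result is always visible in the scalar destination,
    # so after return-from-interrupt the loop resumes where it left off.
    for i in range(SVSTATE.srcstep, VL):
        iregs[RT] = op(iregs[RT], iregs[RA+i])  # intermediate result in RT
        SVSTATE.srcstep = i + 1                 # re-entry point if trapped here
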
Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture (see the Proposed Parallel-reduction
algorithm at the end of this appendix). Like Scalar reduction, the
"Scalar Base" (Power ISA v3.0B) operation is leveraged, unmodified, to
give the *appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as for all other SVP64 Rc=1 operations.

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
An illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.
Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST,
the other for arithmetic operations (actually, CR-driven). Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations at and
  above the current one, and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

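A highly simplified sketch of the CR-driven variant (predication, elwidth
overrides and the Rc=1/RC1 distinctions are omitted; `analyse`, `testbit`
and the `inv`/VLi handling are illustrative only):

    # sketch: CR-driven fail-on-first truncates VL at the first failed test
    for i in range(VL):
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result)          # eq/lt/gt bits, as for Rc=1
        if CRnew[testbit] == inv:        # the selected CR bit fails the test
            VL = i + 1 if VLi else i     # truncate: inclusive or exclusive
            break
        iregs[RT+i] = result             # elements before the failure are kept
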
The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to `exclude` elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SV.STATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

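The LD/ST variant may be sketched as follows (deliberately simplified:
`would_fault`, `take_exception` and the effective-address calculation
are illustrative only, and the arbitrary implementation-specific
truncation described above is not shown):

    # sketch: LD/ST fail-first. element 0 faults "as normal"; a fault on any
    # later element truncates VL instead of raising an exception.
    for i in range(VL):
        ea = iregs[RA] + imm + i*width   # illustrative AGEN only
        if would_fault(ea):
            if i == 0:
                take_exception()         # first element behaves as a scalar LD/ST
            VL = i                       # elements 0..i-1 remain valid
            break
        iregs[RT+i] = MEM(ea, width)
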
CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

This mode merges common CR testing with predication, saving on instruction
count. Below is the pseudocode excluding predicate zeroing and elwidth
overrides. Note that the pseudocode for [[sv/cr_ops]] is slightly different.

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i):
            continue
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if Rc=1 or RC1:
            crregs[offs+i] = CRnew
        # now test CR, similar to branch
        if RC1 or CRnew[BO[0:1]] != BO[2]:
            continue # test failed: cancel store
        # result optionally stored but CR always is
        iregs[RT+i] = result

The reason for allowing the CR element to be stored is so that
post-analysis of the CR Vector may be carried out. For example:
Saturation may have occurred (and been prevented from updating, by the
test) but it is desirable to know *which* elements fail saturation.

Note that RC1 Mode basically turns all operations into `cmp`. The
calculation is performed but it is only the CR that is written. The
element result is *always* discarded, never written (just like `cmp`).

Note that predication is still respected: predicate zeroing is slightly
different: elements that fail the CR test *or* are masked out are zero'd.

## pred-result mode on CR ops

CR operations (mtcr, crand, cror) may be Vectorised and
predicated, and may also have pred-result mode applied to them.
Vectorisation applies to 4-bit CR Fields which are treated as
elements, not the individual bits of the 32-bit CR.
CR ops and how to identify them are described in [[sv/cr_ops]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1B specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR. The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
applies, **not** the CR\_bit portion (bits 3:4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.

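As a worked example of the above (the BA and EXTRA values are chosen
purely for illustration): take BA = 0b01110 with a vector EXTRA spec of
0b101, i.e. spec[0] = 1 and spec[1:2] = 0b01.

    BA = 0b01110
    CR_index = 7 - (BA >> 2)                  # = 4 (CR4 in scalar v3.0B terms)
    # vector mode (spec[0] = 1), spec[1:2] = 0b01:
    CR_index = (CR_index << 4) | (0b01 << 2)  # = 68, i.e. CR{68}
    bit_index = 3 - (BA & 0b11)               # = 1, exactly as in v3.0B

The SV-prefixed operation therefore accesses bit 1 (BE numbering) of CR{68}.
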
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operations (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX
which has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]: some alternative schemes are described there)

596 ## Rc=1 when SUBVL!=1
597
598 sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1 bit of
599 predicate is allocated per subvector; likewise only one CR is allocated
600 per subvector.
601
602 This leaves a conundrum as to how to apply CR computation per subvector,
603 when normally Rc=1 is exclusively applied to scalar elements. A solution
604 is to perform a bitwise OR or AND of the subvector tests. Given that
605 OE is ignored in SVP64, this field may (when available) be used to select OR or
606 AND behavior.
607
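A sketch of the idea, for a vec2 (not normative: `analyse` and `combine`
are illustrative placeholders, and the exact encoding of the OR/AND
selection is not defined here):

    # sketch: one CR per subvector, produced by OR-ing or AND-ing the
    # individual sub-element tests (selection notionally via the OE field)
    for i in range(VL):
        test_x = analyse(results[i].x)        # eq/lt/gt for sub-element x
        test_y = analyse(results[i].y)        # eq/lt/gt for sub-element y
        CRs{offs+i} = combine(test_x, test_y) # bitwise OR or AND of the tests
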
### Table of CR fields

CR[i] is the notation used by the OpenPower spec to refer to CR field #i,
so FP instructions with Rc=1 write to CR[1] aka SVCR1_000.

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=64, elwidth=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of a normal-mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=reduce/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
* spred={reg spec}

Similar to the x86 "rex" prefix.

For actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns OR
  - pm=RC1 OR pm=~RC1
* fail-first:
  - ff=lt/gt/le/ge/eq/ne/so/ns OR
  - ff=RC1 OR ff=~RC1
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

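A few illustrative examples combining the qualifiers above (the
assembler syntax is a prototype, so these are indicative only, not
definitive):

    sv.add/mr r3, r10.v, r3          # scalar-dest map-reduce: r3 accumulates r10..
    sv.add/mr/vec2 r4, r4, r16       # sub-vector map-reduce on a vec2
    sv.add./ff=eq r8.v, r16.v, r24.v # Rc=1 with fail-first on the CR eq bit
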
# Proposed Parallel-reduction algorithm

```
# reference implementation of proposed SimpleV reduction semantics.
#
# reduction operation -- we still use this algorithm even
# if the reduction operation isn't associative or
# commutative.
# `pred` is a user-visible Vector Condition register
#
# all input arrays have length `vl`
def reduce(vl, vec, pred):
    step = 1
    while step < vl:
        step *= 2
        for i in range(0, vl, step):
            other = i + step // 2
            other_pred = other < vl and pred[other]
            if pred[i] and other_pred:
                vec[i] += vec[other]
            elif other_pred:
                vec[i] = vec[other]
            pred[i] |= other_pred

# alternative form which pre-computes a table of lookup indices
# in order to skip over non-predicated elements
def reduce(vl, vec, pred):
    vi = []  # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi.append(i)
    step = 2
    while step <= vl:
        halfstep = step // 2
        for i in range(0, vl, step):
            other = vi[i + halfstep]
            i = vi[i]
            other_pred = other < vl and pred[other]
            if pred[i] and other_pred:
                vec[i] += vec[other]
            pred[i] |= other_pred
        step *= 2
```