# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574>
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47>
* <https://bugs.libre-soc.org/show_bug.cgi?id=697>

This is the appendix to [[sv/svp64]], providing explanations of modes
and other details, leaving the main svp64 page to focus on outlining
the instruction format.

Table of contents:

[[!toc]]

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER.SO and other global "accumulation" flags (CR.OV) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO and CR.OV behaviour is disregarded (including
in `cmp` instructions). XER is simply neither read nor written.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

An interesting side-effect of this decision is that the OE flag is now
free for other uses when SV Prefixing is used.

Regarding XER.CA: this does not fit either, as it was designed for a scalar
ISA. Instead, both carry-in and carry-out go into the CR.so bit of a given
Vector element. This provides a means to perform large parallel batches
of Vectorised carry-capable additions. crweird instructions can be used
to transfer the CRs in and out of an integer, where bitmanipulation
may be performed to analyse the carry bits (including carry lookahead
propagation) before continuing with further parallel additions.

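An illustrative (non-normative) sketch of the per-element carry behaviour,
with predication and element-width overrides omitted (`offs` here simply
stands for wherever the Vector of CRs begins):

    # Vectorised add-with-carry where each element's carry-in and
    # carry-out uses that element's CR.so bit instead of XER.CA
    for i in range(VL):
        carry_in       = CRs{offs+i}.so
        total          = iregs[RA+i] + iregs[RB+i] + carry_in
        iregs[RT+i]    = total & ((1<<64)-1)  # 64-bit result
        CRs{offs+i}.so = total >> 64          # carry-out back into CR.so
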
# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

As mentioned above, OE=1 is not applicable in SV, freeing this bit for
alternative uses. Additionally, Vectorisation of the VSX SIMD system
makes no sense whatsoever. SV *replaces* VSX and provides,
at the very minimum, predication (which VSX was designed without).
Thus all VSX Major Opcodes - all of them - are "unused" and must raise
illegal instruction exceptions in SV Prefix Mode.

Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to
retain: not only are they provided by SV, the SV alternatives may
be predicated as well, making them far better suited to use in function
calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered extensions, and their discussion
and specification is out of scope for this document.

Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
v3.1B is *not* altered by svp64 in any way.

## Major opcode map (v3.0B)

This table is taken from v3.0B,
Table 9: Primary Opcode Map (opcode bits 0:5).

|     | 000    | 001   | 010   | 011   | 100    | 101    | 110   | 111   |     |
|-----|--------|-------|-------|-------|--------|--------|-------|-------|-----|
| 000 |        |       | tdi   | twi   | EXT04  |        |       | mulli | 000 |
| 001 | subfic |       | cmpli | cmpi  | addic  | addic. | addi  | addis | 001 |
| 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi | rlwinm |       | rlwnm | 010 |
| 011 | ori    | oris  | xori  | xoris | andi.  | andis. | EXT30 | EXT31 | 011 |
| 100 | lwz    | lwzu  | lbz   | lbzu  | stw    | stwu   | stb   | stbu  | 100 |
| 101 | lhz    | lhzu  | lha   | lhau  | sth    | sthu   | lmw   | stmw  | 101 |
| 110 | lfs    | lfsu  | lfd   | lfdu  | stfs   | stfsu  | stfd  | stfdu | 110 |
| 111 | lq     | EXT57 | EXT58 | EXT59 | EXT60  | EXT61  | EXT62 | EXT63 | 111 |
|     | 000    | 001   | 010   | 011   | 100    | 101    | 110   | 111   |     |

## Suitable for svp64-only

This is the same table containing v3.0B Primary Opcodes except those that
make no sense in a Vectorisation Context have been removed. These removed
POs can, *in the SV Vector Context only*, be assigned to alternative
(Vectorised-only) instructions, including future extensions.

Note, again, to emphasise: outside of svp64 these opcodes **do not**
change. When not prefixed with svp64 these opcodes **specifically**
retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.

|     | 000    | 001   | 010   | 011   | 100    | 101    | 110   | 111   |     |
|-----|--------|-------|-------|-------|--------|--------|-------|-------|-----|
| 000 |        |       |       |       |        |        |       | mulli | 000 |
| 001 | subfic |       | cmpli | cmpi  | addic  | addic. | addi  | addis | 001 |
| 010 |        |       |       | EXT19 | rlwimi | rlwinm |       | rlwnm | 010 |
| 011 | ori    | oris  | xori  | xoris | andi.  | andis. | EXT30 | EXT31 | 011 |
| 100 | lwz    | lwzu  | lbz   | lbzu  | stw    | stwu   | stb   | stbu  | 100 |
| 101 | lhz    | lhzu  | lha   | lhau  | sth    | sthu   |       |       | 101 |
| 110 | lfs    | lfsu  | lfd   | lfdu  | stfs   | stfsu  | stfd  | stfdu | 110 |
| 111 |        |       | EXT58 | EXT59 |        | EXT61  |       | EXT63 | 111 |
|     | 000    | 001   | 010   | 011   | 100    | 101    | 110   | 111   |     |

It is important to note that giving an opcode an SVP64 meaning
that is different from its v3.0B Scalar meaning is highly undesirable:
the complexity in the decoder is greatly increased.

# Single Predication

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep: unlike Twin-Predication the two must be equal at
all times.

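A minimal sketch of the resulting element loop (zeroing and element-width
overrides omitted; a fuller pseudocode illustration is given in the SV
pseudocode section later in this appendix):

    for i in range(VL):
        if predicate[i]:   # one mask bit governs both sources and dest
            iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # srcstep and dststep advance together: both equal i at all times
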
# Twin Predication

This is a novel concept that allows predication to be applied to a single
source and a single destination register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`

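An illustrative (non-normative) sketch of a twin-predicated element-mv,
with zeroing and element-width overrides omitted, showing how the source
and destination steps advance independently through their respective masks:

    # sv.mv RT.v, RA.v with separate source and destination predicates
    srcstep, dststep = 0, 0
    while srcstep < VL and dststep < VL:
        # skip non-predicated source and destination elements independently
        while srcstep < VL and not src_pred[srcstep]: srcstep += 1
        while dststep < VL and not dst_pred[dststep]: dststep += 1
        if srcstep < VL and dststep < VL:
            iregs[RT+dststep] = iregs[RA+srcstep]
            srcstep += 1
            dststep += 1
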
# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

Microarchitectures *may* take opportunities to parallelise the reduction,
but only if in doing so they preserve Program Order at the Element Level.
Parallelisation is possible for operations such as `OR` or MIN/MAX;
for Floating Point it is not permitted, due to different results
being obtained if the reduction is not executed in strict sequential
order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduce mode

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on Vector
Looping, which would otherwise terminate if the destination was marked
as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this and observe some
conventions.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be identified by the programmer as being the "accumulator".
Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce", SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must
be set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.

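A minimal sketch of the reverse-gear scalar-accumulator case (assuming
RT==RA acts as the "accumulator"; predication and element-width overrides
omitted):

    # sv.add/mr/rg RT, RA, RB.v with RT==RA: the same element schedule
    # as above, but issued in reverse order
    for i in reversed(range(VL)):
        iregs[RA] += iregs[RB+i]
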
Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required, as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions in Program Order). In this way, after return from
interrupt, the scalar mapreduce may continue where it left off. This
provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

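An illustrative (non-normative) sketch of why updating the scalar
destination preserves precise behaviour; `SVSTATE.srcstep` here simply
stands for the saved element position:

    # scalar mapreduce resuming after an interrupt: the accumulator
    # already holds the intermediate result, so execution simply
    # continues from the saved element position
    for i in range(SVSTATE.srcstep, VL):
        iregs[RA] += iregs[RB+i]   # intermediate result always written
        SVSTATE.srcstep = i + 1    # an interrupt taken here is precise
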
## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual", exactly as with all other SVP64 Rc=1 operations.

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.
Example for a vec3:

    for i in range(VL):
        result = iregs[RA+i].x
        result = op(result, iregs[RA+i].y)
        result = op(result, iregs[RA+i].z)
        iregs[RT+i] = result

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST,
the other for arithmetic operations (actually, CR-driven). Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations at and
  above the current one, and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to `exclude` elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.

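A highly simplified, non-normative sketch of the CR-driven variant
(predication and element-width overrides omitted; `testbit`, `inv` and
`compute_cr_field` are illustrative names, and the VLi term only applies
where VLi is available):

    for i in range(VL):
        result = iregs[RA+i] + iregs[RB+i]  # e.g. an sv.add with ffirst
        crf = compute_cr_field(result)      # eq/gt/lt/so bits
        if crf[testbit] == inv:             # the data-dependent test fails
            VL = i + 1 if VLi else i        # truncate VL (may reach zero)
            break
        iregs[RT+i] = result
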
In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV by which VL may be
  set to zero (with the exception of via the SV.STATE SPR). When VL is set
  to zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails the result is treated
*as if* the destination predicate bit had been zero.
Arithmetic and Logical Pred-result is covered in [[sv/normal]].

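A conceptual, non-normative sketch of the effect (zeroing and element-width
overrides omitted; `compute_cr_field` and `cr_test_passes` are illustrative
names):

    for i in range(VL):
        if not predicate[i]:
            continue                        # ordinary predication
        result = iregs[RA+i] + iregs[RB+i]
        crf = compute_cr_field(result)
        if cr_test_passes(crf, test, inv):
            iregs[RT+i] = result            # stored only if the test passes
        # on failure: behave as if the predicate bit had been zero
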
## pred-result mode on CR ops

CR operations (mtcr, crand, cror) may be Vectorised and
predicated, and pred-result mode may also be applied to them.
Vectorisation applies to 4-bit CR Fields which are treated as
elements, not the individual bits of the 32-bit CR.
CR ops and how to identify them are described in [[sv/cr_ops]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1B specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR. The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
applies, **not** the CR\_bit portion (bits 3:4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR Fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operations (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]. some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select OR or
AND behaviour.

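An illustrative sketch for a vec2 sub-vector with Rc=1 (only the eq bit is
shown; `offs` and `combine_or` are illustrative names, not part of the
specification):

    for i in range(VL):
        eq_x = (iregs[RT+i].x == 0)
        eq_y = (iregs[RT+i].y == 0)
        # OE (otherwise unused under SVP64) selects OR or AND combining
        CRs{offs+i}.eq = (eq_x | eq_y) if combine_or else (eq_x & eq_y)
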
### Table of CR fields

CR[i] is the notation used by the OpenPower spec to refer to CR field #i,
so FP instructions with Rc=1 write to CR[1] aka SVCR1_000.

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=64, elwidth=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

**NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED**: see
<https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).

TODO: generate table which will be here [[svp64/reg_profiles]]

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=reduce/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
* spred={reg spec}

This is similar to the x86 "REX" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

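Illustrative (hypothetical) examples combining the qualifiers above; these
are sketches of the prototype syntax, not output from a working assembler:

    sv.add/m=r3/ew=8 r8.v, r16.v, r24.v  # predicated on r3, 8-bit elements
    sv.extsb/m=r3/sm=~r10 r8.v, r16.v    # twin-predicated sign-extension
    sv.add/mr r3, r3, r10.v              # scalar mapreduce, r3 "accumulator"
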
For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns OR
  - pm=RC1 OR pm=~RC1
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns OR
  - ff=RC1 OR ff=~RC1
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Proposed Parallel-reduction algorithm

```
# reference implementation of proposed SimpleV reduction semantics.
#
# reduction operation -- we still use this algorithm even
# if the reduction operation isn't associative or
# commutative.
# `temp_pred` is a user-visible Vector Condition register
#
# all input arrays have length `vl`
def reduce(vl, vec, pred):
    step = 1
    while step < vl:
        step *= 2
        for i in range(0, vl, step):
            other = i + step // 2
            other_pred = other < vl and pred[other]
            if pred[i] and other_pred:
                vec[i] += vec[other]
            elif other_pred:
                vec[i] = vec[other]
            pred[i] |= other_pred

def reduce(vl, vec, pred):
    vi = []  # array of lookup indices to skip nonpredicated
    for i, pbit in enumerate(pred):
        if pbit:
            vi.append(i)
    step = 2
    while step <= vl:
        halfstep = step // 2
        for i in range(0, vl, step):
            other = vi[i + halfstep]
            i = vi[i]
            other_pred = other < vl and pred[other]
            if pred[i] and other_pred:
                vec[i] += vec[other]
            pred[i] |= other_pred
        step *= 2

```