1 # Appendix
2
3 * <https://bugs.libre-soc.org/show_bug.cgi?id=574>
4 * <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47>
5 * <https://bugs.libre-soc.org/show_bug.cgi?id=697>
6
This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.
10
11 Table of contents:
12
13 [[!toc]]
14
15 # XER, SO and other global flags
16
Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER.SO and other global "accumulation" flags (CR.OV) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect on performance.
22
23 Consequently in SV, XER.SO and CR.OV behaviour is disregarded (including
24 in `cmp` instructions). XER is simply neither read nor written.
25 This includes when `scalar identity behaviour` occurs. If precise
26 OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
27 instructions should be used without an SV Prefix.
28
29 An interesting side-effect of this decision is that the OE flag is now
30 free for other uses when SV Prefixing is used.
31
Regarding XER.CA: this does not fit either, as it was designed for a scalar
ISA. Instead, both carry-in and carry-out go into the CR.so bit of a given
34 Vector element. This provides a means to perform large parallel batches
35 of Vectorised carry-capable additions. crweird instructions can be used
36 to transfer the CRs in and out of an integer, where bitmanipulation
37 may be performed to analyse the carry bits (including carry lookahead
38 propagation) before continuing with further parallel additions.
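
A conceptual sketch (not a normative definition) of a Vectorised
carry-capable addition under this scheme, using the same pseudocode
conventions as the rest of this page; which CR Fields hold the carry
bits is purely illustrative here:

    # sketch only: per-element carry-in and carry-out via CR.so
    for i in range(VL):
        sum = iregs[RA+i] + iregs[RB+i] + CRs{i}.so  # carry-in from CR.so
        iregs[RT+i] = sum & 0xFFFF_FFFF_FFFF_FFFF    # 64-bit element result
        CRs{i}.so = sum >> 64                        # carry-out back to CR.so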
39
40 # v3.0B/v3.1 relevant instructions
41
42 SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
43 CPU ISA.
44
45 As mentioned above, OE=1 is not applicable in SV, freeing this bit for
46 alternative uses. Additionally, Vectorisation of the VSX SIMD system
47 likewise makes no sense whatsoever. SV *replaces* VSX and provides,
48 at the very minimum, predication (which VSX was designed without).
49 Thus all VSX Major Opcodes - all of them - are "unused" and must raise
50 illegal instruction exceptions in SV Prefix Mode.
51
Likewise, `lq` (Load Quad) and Load/Store Multiple make no sense to
have: not only are they provided by SV, the SV alternatives may also
be predicated, making them far better suited to use in function
calls and context-switching.
56
57 Additionally, some v3.0/1 instructions simply make no sense at all in a
58 Vector context: `rfid` falls into this category,
59 as well as `sc` and `scv`. Here there is simply no point
60 trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
61 should be called instead.
62
63 Fortuitously this leaves several Major Opcodes free for use by SV
64 to fit alternative future instructions. In a 3D context this means
65 Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
66 operations, and others critical to an efficient, effective 3D GPU and
67 VPU ISA. With such instructions being included as standard in other
68 commercially-successful GPU ISAs it is likewise critical that a 3D
69 GPU/VPU based on svp64 also have such instructions.
70
71 Note however that svp64 is stand-alone and is in no way
72 critically dependent on the existence or provision of 3D GPU or VPU
73 instructions. These should be considered extensions, and their discussion
74 and specification is out of scope for this document.
75
76 Note, again: this is *only* under svp64 prefixing. Standard v3.0B /
77 v3.1B is *not* altered by svp64 in any way.
78
79 ## Major opcode map (v3.0B)
80
81 This table is taken from v3.0B.
82 Table 9: Primary Opcode Map (opcode bits 0:5)
83
| | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
|-----|--------|-------|-------|-------|--------|--------|-------|-------|
| 000 | | | tdi | twi | EXT04 | | | mulli |
| 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis |
| 010 | bc/l/a | EXT17 | b/l/a | EXT19 | rlwimi | rlwinm | | rlwnm |
| 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 |
| 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu |
| 101 | lhz | lhzu | lha | lhau | sth | sthu | lmw | stmw |
| 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu |
| 111 | lq | EXT57 | EXT58 | EXT59 | EXT60 | EXT61 | EXT62 | EXT63 |
94
95 ## Suitable for svp64-only
96
97 This is the same table containing v3.0B Primary Opcodes except those that
98 make no sense in a Vectorisation Context have been removed. These removed
99 POs can, *in the SV Vector Context only*, be assigned to alternative
100 (Vectorised-only) instructions, including future extensions.
101
102 Note, again, to emphasise: outside of svp64 these opcodes **do not**
103 change. When not prefixed with svp64 these opcodes **specifically**
104 retain their v3.0B / v3.1B OpenPOWER Standard compliant meaning.
105
| | 000 | 001 | 010 | 011 | 100 | 101 | 110 | 111 |
|-----|--------|-------|-------|-------|--------|--------|-------|-------|
| 000 | | | | | | | | mulli |
| 001 | subfic | | cmpli | cmpi | addic | addic. | addi | addis |
| 010 | | | | EXT19 | rlwimi | rlwinm | | rlwnm |
| 011 | ori | oris | xori | xoris | andi. | andis. | EXT30 | EXT31 |
| 100 | lwz | lwzu | lbz | lbzu | stw | stwu | stb | stbu |
| 101 | lhz | lhzu | lha | lhau | sth | sthu | | |
| 110 | lfs | lfsu | lfd | lfdu | stfs | stfsu | stfd | stfdu |
| 111 | | | EXT58 | EXT59 | | EXT61 | | EXT63 |
116
It is important to note that having an SVP64 meaning that differs from
the v3.0B Scalar meaning of the same opcode is highly undesirable: it
greatly increases decoder complexity.
120
121 # Single Predication
122
This is a standard mode normally found in Vector ISAs. Every element in every source Vector and in the destination uses the same bit of one single predicate mask.
124
125 In SVSTATE, for Single-predication, implementors MUST increment both srcstep and dststep: unlike Twin-Predication the two must be equal at all times.
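
A minimal pseudocode sketch of Single Predication (zeroing and element-width
overrides omitted; register and predicate names follow the conventions used
elsewhere on this page):

    # sketch only: one predicate bit governs both source and destination
    for i in range(VL):
        if predicate[i]:
            iregs[RT+i] = op(iregs[RA+i], iregs[RB+i])
        # srcstep and dststep advance together: they are always equal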
126
127 # Twin Predication
128
This is a novel concept that allows predication to be applied to a single
source and a single destination register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:
133
134 * VSPLAT (a single scalar distributed across a vector)
135 * VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
136 * VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
137 * VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
138 * VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))
139
140 Those patterns (and more) may be applied to:
141
142 * mv (the usual way that V\* ISA operations are created)
143 * exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
147 * LD and ST (treating AGEN as one source)
148 * FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
149 * Condition Register ops mfcr, mtcr and other similar
150
This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.
153
154 Additional unusual capabilities of Twin Predication include a back-to-back
155 version of VCOMPRESS-VEXPAND which is effectively the ability to do
156 sequentially ordered multiple VINSERTs. The source predicate selects a
157 sequentially ordered subset of elements to be inserted; the destination
158 predicate specifies the sequentially ordered recipient locations.
159 This is equivalent to
160 `llvm.masked.compressstore.*`
161 followed by
162 `llvm.masked.expandload.*`
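
A minimal pseudocode sketch of Twin Predication on a 1-src 1-dest operation
(`sv.mv` style), with independent source and destination masks (zeroing and
element-width overrides omitted):

    # sketch only: srcstep and dststep advance independently,
    # each skipping over its own masked-out elements
    srcstep = 0
    dststep = 0
    while srcstep < VL and dststep < VL:
        while srcstep < VL and not srcpred[srcstep]:
            srcstep += 1
        while dststep < VL and not dstpred[dststep]:
            dststep += 1
        if srcstep < VL and dststep < VL:
            iregs[RT+dststep] = iregs[RA+srcstep]
        srcstep += 1
        dststep += 1

A single-bit destination predicate gives a VINSERT-style insert of the first
active source element; sparse masks on both sides give the sequentially
ordered VCOMPRESS-VEXPAND behaviour described above.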
163
164 # Reduce modes
165
166 Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
167 Vector ISA would have explicit Reduce opcodes with defined characteristics
168 per operation: in SX Aurora there is even an additional scalar argument
169 containing the initial reduction value, and the default is either 0
170 or 1 depending on the specifics of the explicit opcode.
171 SVP64 fundamentally has to
172 utilise *existing* Scalar Power ISA v3.0B operations, which presents some
173 unique challenges.
174
175 The solution turns out to be to simply define reduction as permitting
176 deterministic element-based schedules to be issued using the base Scalar
177 operations, and to rely on the underlying microarchitecture to resolve
178 Register Hazards at the element level. This goes back to
179 the fundamental principle that SV is nothing more than a Sub-Program-Counter
180 sitting between Decode and Issue phases.
181
Microarchitectures *may* take opportunities to parallelise the reduction,
but only if in doing so they preserve Program Order at the Element Level.
Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation. For Floating Point, however, parallelisation
is not permitted, because different results would be obtained if the
reduction were not executed in strict sequential order.
189
190 In essence it becomes the programmer's responsibility to leverage the
191 pre-determined schedules to desired effect.
192
193 ## Scalar result reduction and iteration
194
Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on Vector
Looping, which would otherwise terminate if the destination was marked
as a Scalar. Scalar Reduction by contrast *keeps issuing Vector Element
Operations* even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this and to observe some
conventions.
202
203 It is also important to appreciate that there is no
204 actual imposition or restriction on how this mode is utilised: there
205 will therefore be several valuable uses (including Vector Iteration
206 and "Reverse-Gear")
207 and it is up to the programmer to make best use of the
208 (strictly deterministic) capability
209 provided.
210
In this mode, which is suited to operations involving carry or overflow,
one register must be assigned by the programmer, by convention, to be the
"accumulator". Scalar reduction is thus categorised by:
214
215 * One of the sources is a Vector
216 * the destination is a scalar
217 * optionally but most usefully when one source scalar register is
218 also the scalar destination (which may be informally termed
219 the "accumulator")
220 * That the source register type is the same as the destination register
221 type identified as the "accumulator". Scalar reduction on `cmp`,
222 `setb` or `isel` makes no sense for example because of the mixture
223 between CRs and GPRs.
224
*Note that issuing instructions such as `setb` in Scalar reduce mode
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibiting seemingly non-sensical operations is too great.
Such instructions are therefore permitted and required to execute successfully.
Implementors **MAY** choose to optimise them in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such cases is trivial for `setb` and `cmp`: the source register type is
a completely different register file from the destination.*
239
Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.
243
244 # add RT, RA,RB but when RT==RA
245 for i in range(VL):
246 iregs[RA] += iregs[RB+i] # RT==RA
247
248 However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
249 SV ordinarily
250 **terminates** at the first scalar operation. Only by marking the
251 operation as "mapreduce" will it continue to issue multiple sub-looped
252 (element) instructions in `Program Order`.
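
Without the `mr` marker the same scalar-destination operation behaves as
follows (a sketch only, consistent with the pseudocode above):

    # add RT, RA, RB with RT==RA, *without* /mr: only element 0 is issued
    iregs[RA] += iregs[RB+0]
    # the loop terminates here because the destination is marked scalar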
253
254 To perform the loop in reverse order, the ```RG``` (reverse gear) bit must be set. This may be useful in situations where the results may be different
255 (floating-point) if executed in a different order. Given that there is
256 no actual prohibition on Reduce Mode being applied when the destination
257 is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
258 or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
259 for example will start at the opposite end of the Vector and push
260 a cumulative series of overlapping add operations into the Execution units of
261 the underlying hardware.
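
A sketch of the scalar-accumulator case in reverse (the assembler syntax
here is hypothetical; the pseudocode conventions are as above):

    # e.g. sv.add/mr/rg r3, r10.v, r3: accumulate starting from the last element
    for i in reversed(range(VL)):
        iregs[3] = iregs[3] + iregs[10+i]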
262
263 Other examples include shift-mask operations where a Vector of inserts
264 into a single destination register is required, as a way to construct
265 a value quickly from multiple arbitrary bit-ranges and bit-offsets.
266 Using the same register as both the source and destination, with Vectors
267 of different offsets masks and values to be inserted has multiple
268 applications including Video, cryptography and JIT compilation.
269
270 Subtract and Divide are still permitted to be executed in this mode,
271 although from an algorithmic perspective it is strongly discouraged.
272 It would be better to use addition followed by one final subtract,
273 or in the case of divide, to get better accuracy, to perform a multiply
274 cascade followed by a final divide.
275
276 Note that single-operand or three-operand scalar-dest reduce is perfectly
277 well permitted: the programmer may still declare one register, used as
278 both a Vector source and Scalar destination, to be utilised as
279 the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc
280 this naturally fits well with the normal expected usage of these
281 operations.
282
283 If an interrupt or exception occurs in the middle of the scalar mapreduce,
284 the scalar destination register **MUST** be updated with the current
285 (intermediate) result, because this is how ```Program Order``` is
286 preserved (Vector Loops are to be considered to be just another way of issuing instructions
287 in Program Order). In this way, after return from interrupt,
288 the scalar mapreduce may continue where it left off. This provides
289 "precise" exception behaviour.
290
291 Note that hardware is perfectly permitted to perform multi-issue
292 parallel optimisation of the scalar reduce operation: it's just that
293 as far as the user is concerned, all exceptions and interrupts **MUST**
294 be precise.
295
296 ## Vector result reduce mode
297
298 Vector Reduce Mode issues a deterministic tree-reduction schedule to the underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
299 (Power ISA v3.0B) operation is leveraged, unmodified, to give the
300 *appearance* and *effect* of Reduction.
301
302 Given that the tree-reduction schedule is deterministic,
303 Interrupts and exceptions
304 can therefore also be precise. The final result will be in the first
305 non-predicate-masked-out destination element, but due again to
306 the deterministic schedule programmers may find uses for the intermediate
307 results.
308
309 When Rc=1 a corresponding Vector of co-resultant CRs is also
310 created. No special action is taken: the result and its CR Field
311 are stored "as usual" exactly as all other SVP64 Rc=1 operations.
312
313 ## Sub-Vector Horizontal Reduction
314
Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16`:
318
319 for i in range(0, VL):
320 # RA==RT in the instruction. does not have to be
321 iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
322 iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)
323
324 Thus logically there is nothing special or unanticipated about
325 `SVM=0`: it is expected behaviour according to standard SVP64
326 Sub-Vector rules.
327
328 By contrast, when SVM is set and SUBVL!=1, a Horizontal
329 Subvector mode is enabled, which behaves very much more
330 like a traditional Vector Processor Reduction instruction.
331 Example for a vec3:
332
333 for i in range(VL):
334 result = iregs[RA+i].x
335 result = op(result, iregs[RA+i].y)
336 result = op(result, iregs[RA+i].z)
337 iregs[RT+i] = result
338
339 In this mode, when Rc=1 the Vector of CRs is as normal: each result
340 element creates a corresponding CR element (for the final, reduced, result).
341
342 # Fail-on-first
343
Data-dependent fail-on-first has two distinct variants: one for LD/ST,
the other for arithmetic operations (actually, CR-driven). Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.
348
349 * LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
350 ordinary one. Exceptions occur "as normal". However for elements 1
351 and above, if an exception would occur, then VL is **truncated** to the
352 previous element.
353 * Data-driven (CR-driven) fail-on-first activates when Rc=1 or other
354 CR-creating operation produces a result (including cmp). Similar to
355 branch, an analysis of the CR is performed and if the test fails, the
356 vector operation terminates and discards all element operations at and
357 above the current one, and VL is truncated to either
358 the *previous* element or the current one, depending on whether
359 VLi (VL "inclusive") is set.
360
361 Thus the new VL comprises a contiguous vector of results,
362 all of which pass the testing criteria (equal to zero, less than zero).
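
A conceptual sketch of CR-based data-driven fail-on-first with VLi=0
(the helpers `compute_cr` and `test_bit` are purely illustrative, as is
the CR Field numbering):

    # sketch only: truncate VL at the first element whose CR test fails
    for i in range(VL):
        result = op(iregs[RA+i], iregs[RB+i])
        crfield = compute_cr(result)        # as if Rc=1
        if test_bit(crfield, BO) == inv:    # the selected bit-test fails
            VL = i                          # truncate to the previous element
            break                           # this and later elements discarded
        iregs[RT+i] = result
        CRs{i} = crfield                    # CR element stored as usual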
363
364 The CR-based data-driven fail-on-first is new and not found in ARM
365 SVE or RVV. It is extremely useful for reducing instruction count,
366 however requires speculative execution involving modifications of VL
367 to get high performance implementations. An additional mode (RC1=1)
368 effectively turns what would otherwise be an arithmetic operation
369 into a type of `cmp`. The CR is stored (and the CR.eq bit tested
370 against the `inv` field).
371 If the CR.eq bit is equal to `inv` then the Vector is truncated and
372 the loop ends.
373 Note that when RC1=1 the result elements are never stored, only the CRs.
374
375 VLi is only available as an option when `Rc=0` (or for instructions
376 which do not have Rc). When set, the current element is always
377 also included in the count (the new length that VL will be set to).
378 This may be useful in combination with "inv" to truncate the Vector
379 to `exclude` elements that fail a test, or, in the case of implementations
380 of strncpy, to include the terminating zero.
381
382 In CR-based data-driven fail-on-first there is only the option to select
383 and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorised crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.
387
388 One extremely important aspect of ffirst is:
389
* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of writing the SVSTATE SPR). When VL is set
  to zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.
398
399 Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
400 to a nonzero value for any implementation-specific reason. For example:
401 it is perfectly reasonable for implementations to alter VL when ffirst
402 LD or ST operations are initiated on a nonaligned boundary, such that
403 within a loop the subsequent iteration of that loop begins subsequent
404 ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
405 workloads or balance resources.
406
CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.
412
413 ## Data-dependent fail-first on CR operations (crand etc)
414
Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR Field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].
422
423 There are two primary different types of CR operations:
424
425 * Those which have a 3-bit operand field (referring to a CR Field)
426 * Those which have a 5-bit operand (referring to a bit within the
427 whole 32-bit CR)
428
429 More details can be found in [[sv/cr_ops]].
430
431 # pred-result mode
432
Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails the operation is treated
*as if* the destination predicate bit was zero.
Arithmetic and Logical Pred-result is covered in [[sv/normal]].
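
A conceptual sketch of pred-result applied to a normal arithmetic operation
(the helpers `compute_cr` and `test_bit` are purely illustrative):

    # sketch only: the result is written back only if the CR test passes
    for i in range(VL):
        if predicate[i]:
            result = op(iregs[RA+i], iregs[RB+i])
            crfield = compute_cr(result)
            if test_bit(crfield, BO) != inv:  # CR test passes
                iregs[RT+i] = result          # otherwise treated as masked out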
438
439 ## pred-result mode on CR ops
440
CR operations (mtcr, crand, cror) may be Vectorised and predicated,
and may also have pred-result mode applied to them.
Vectorisation applies to 4-bit CR Fields, which are treated as
elements, not to the individual bits of the 32-bit CR.
CR ops and how to identify them are described in [[sv/cr_ops]].
446
447 # CR Operations
448
449 CRs are slightly more involved than INT or FP registers due to the
450 possibility for indexing individual bits (crops BA/BB/BT). Again however
451 the access pattern needs to be understandable in relation to v3.0B / v3.1B
452 numbering, with a clear linear relationship and mapping existing when
453 SV is applied.
454
455 ## CR EXTRA mapping table and algorithm
456
457 Numbering relationships for CR fields are already complex due to being
458 in BE format (*the relationship is not clearly explained in the v3.0B
459 or v3.1B specification*). However with some care and consideration
460 the exact same mapping used for INT and FP regfiles may be applied,
461 just to the upper bits, as explained below. The notation
462 `CR{field number}` is used to indicate access to a particular
463 Condition Register Field (as opposed to the notation `CR[bit]`
464 which accesses one bit of the 32 bit Power ISA v3.0B
Condition Register).
466
467 In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
468 select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
469 *in* that CR. The numbering was determined (after 4 months of
470 analysis and research) to be as follows:
471
472 CR_index = 7-(BA>>2) # top 3 bits but BE
473 bit_index = 3-(BA & 0b11) # low 2 bits but BE
474 CR_reg = CR{CR_index} # get the CR
475 # finally get the bit from the CR.
476 CR_bit = (CR_reg & (1<<bit_index)) != 0
477
478 When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
479 applies, **not** the CR\_bit portion (bits 3:4):
480
481 if extra3_mode:
482 spec = EXTRA3
483 else:
484 spec = EXTRA2<<1 | 0b0
485 if spec[0]:
486 # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
487 return ((BA >> 2)<<6) | # hi 3 bits shifted up
488 (spec[1:2]<<4) | # to make room for these
489 (BA & 0b11) # CR_bit on the end
490 else:
491 # scalar constructs "00 spec[1:2] BA[0:4]"
492 return (spec[1:2] << 5) | BA
493
Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:
496
497 CR_index = 7-(BA>>2) # top 3 bits but BE
498 if spec[0]:
499 # vector mode, 0-124 increments of 4
500 CR_index = (CR_index<<4) | (spec[1:2] << 2)
501 else:
502 # scalar mode, 0-32 increments of 1
503 CR_index = (spec[1:2]<<3) | CR_index
504 # same as for v3.0/v3.1 from this point onwards
505 bit_index = 3-(BA & 0b11) # low 2 bits but BE
506 CR_reg = CR{CR_index} # get the CR
507 # finally get the bit from the CR.
508 CR_bit = (CR_reg & (1<<bit_index)) != 0
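
For illustration, a worked example of the above (the operand and EXTRA3
values chosen here are arbitrary assumptions, not taken from the
specification):

    # assume BA = 0b00100 and EXTRA3 = 0b110: spec[0]=1 (vector), spec[1:2]=0b10
    CR_index = 7 - (0b00100 >> 2)             # = 6 (BE numbering)
    CR_index = (CR_index << 4) | (0b10 << 2)  # = 104: Vectorised CR field 104
    bit_index = 3 - (0b00100 & 0b11)          # = 3: bit 3 (BE) of CR{104}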
509
510 Note here that the decoding pattern to determine CR\_bit does not change.
511
512 Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
514 simplify internal design. If instructions are issued where CR Vectors
515 do not start on a 32-bit aligned boundary, performance may be affected.
516
517 ## CR fields as inputs/outputs of vector operations
518
519 CRs (or, the arithmetic operations associated with them)
520 may be marked as Vectorised or Scalar. When Rc=1 in arithmetic operations that have no explicit EXTRA to cover the CR, the CR is Vectorised if the destination is Vectorised. Likewise if the destination is scalar then so is the CR.
521
When Vectorised, the CR inputs/outputs are sequentially read/written
523 to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
524 writing to CR8 (TBD evaluate) and increase sequentially from there.
525 This is so that:
526
527 * implementations may rely on the Vector CRs being aligned to 8. This
528 means that CRs may be read or written in aligned batches of 32 bits
529 (8 CRs per batch), for high performance implementations.
530 * scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
531 overwritten by vector Rc=1 operations except for very large VL
532 * CR-based predication, from CR32, is also not interfered with
533 (except by large VL).
534
535 However when the SV result (destination) is marked as a scalar by the
536 EXTRA field the *standard* v3.0B behaviour applies: the accompanying
537 CR when Rc=1 is written to. This is CR0 for integer operations and CR1
538 for FP operations.
539
540 Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
541 has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
542 v3.0B scalar operations produce a **tuple** of element results: the
543 result of the operation as one part of that element *and a corresponding
544 CR element*. Greatly simplified pseudocode:
545
546 for i in range(VL):
547 # calculate the vector result of an add
548 iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
549 # now calculate CR bits
550 CRs{8+i}.eq = iregs[RT+i] == 0
551 CRs{8+i}.gt = iregs[RT+i] > 0
552 ... etc
553
If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a follow-up instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].
562
563 Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
565 have the computation of the cumulative analysis CR as a bottleneck and
566 hindrance, regardless of the length of VL.
567
568 Additionally,
569 SVP64 [[sv/branches]] may be used, even when the branch itself is to
570 the following instruction. The combined side-effects of CTR reduction
571 and VL truncation provide several benefits.
572
(see [[discussion]]; some alternative schemes are described there)
574
575 ## Rc=1 when SUBVL!=1
576
Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only 1 bit of
578 predicate is allocated per subvector; likewise only one CR is allocated
579 per subvector.
580
581 This leaves a conundrum as to how to apply CR computation per subvector,
582 when normally Rc=1 is exclusively applied to scalar elements. A solution
583 is to perform a bitwise OR or AND of the subvector tests. Given that
584 OE is ignored in SVP64, this field may (when available) be used to select OR or
AND behaviour.
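
A conceptual sketch for a vec2, assuming (purely as an illustration) that
OR-combining has been selected and that `compute_cr` is a hypothetical
per-sub-element Rc=1 test:

    # sketch only: one CR element per subvector, OR of the sub-element tests
    for i in range(VL):
        crx = compute_cr(iregs[RT+i].x)
        cry = compute_cr(iregs[RT+i].y)
        CRs{i} = crx | cry    # bitwise OR of the two 4-bit CR test results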
586
587 ### Table of CR fields
588
CR[i] is the notation used by the OpenPOWER spec to refer to CR field #i,
so FP instructions with Rc=1 write to CR[1] aka SVCR1_000.
591
592 CRs are not stored in SPRs: they are registers in their own right.
593 Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=64, elwidth=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
596 more of them.
597
598 The 64 SV CRs are arranged similarly to the way the 128 integer registers
599 are arranged. TODO a python program that auto-generates a CSV file
600 which can be included in a table, which is in a new page (so as not to
601 overwhelm this one). [[svp64/cr_names]]
602
603 # Register Profiles
604
605 **NOTE THIS TABLE SHOULD NO LONGER BE HAND EDITED** see
606 <https://bugs.libre-soc.org/show_bug.cgi?id=548> for details.
607
608 Instructions are broken down by Register Profiles as listed in the
609 following auto-generated page: [[opcode_regs_deduped]]. "Non-SV"
610 indicates that the operations with this Register Profile cannot be
Vectorised (mtspr, bc, dcbz, twi).
612
613 TODO generate table which will be here [[svp64/reg_profiles]]
614
# SV pseudocode illustration
616
617 ## Single-predicated Instruction
618
Illustration of a normal-mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.
621
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd].isvec) break;
        if (rd.isvec)  { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
          # end VL hardware loop
          STATE.srcoffs = 0; # reset
          return;
        }
634
635 This has several modes:
636
* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s
639
All of these may be predicated. Vector-Vector is straightforward.
When one source is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
with each result placed into the Vector (or, if the destination is a scalar,
only the first non-predicated result).
645
646 The one that is not obvious is RT=vector but both RA/RB=scalar.
647 Here this acts as a "splat scalar result", copying the same result into
648 all nonpredicated result elements. If a fixed destination scalar was
649 intended, then an all-Scalar operation should be used.
650
651 See <https://bugs.libre-soc.org/show_bug.cgi?id=552>
652
653 # Assembly Annotation
654
655 Assembly code annotation is required for SV to be able to successfully
656 mark instructions as "prefixed".
657
658 A reasonable (prototype) starting point:
659
660 svp64 [field=value]*
661
662 Fields:
663
664 * ew=8/16/32 - element width
665 * sew=8/16/32 - source element width
666 * vec=2/3/4 - SUBVL
667 * mode=reduce/satu/sats/crpred
668 * pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne
669 * spred={reg spec}
670
This is similar to the x86 "REX" prefix.
672
673 For actual assembler:
674
675 sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
676
677 Qualifiers:
678
679 * m={pred}: predicate mask mode
680 * sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
681 * vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
682 * ew={N}: ew=8/16/32 - sets elwidth override
683 * sw={N}: sw=8/16/32 - sets source elwidth override
684 * ff={xx}: see fail-first mode
685 * pr={xx}: see predicate-result mode
686 * sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
691 * sz: predication with source-zeroing
692 * dz: predication with dest-zeroing
693
694 For modes:
695
696 * pred-result:
697 - pm=lt/gt/le/ge/eq/ne/so/ns OR
698 - pm=RC1 OR pm=~RC1
699 * fail-first
700 - ff=lt/gt/le/ge/eq/ne/so/ns OR
701 - ff=RC1 OR ff=~RC1
702 * saturation:
703 - sats
704 - satu
705 * map-reduce:
706 - mr OR crm: "normal" map-reduce mode or CR-mode.
707 - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled
708
709 # Proposed Parallel-reduction algorithm
710
711 ```
# Reference implementation of proposed SimpleV reduction semantics.
#
# Note that this algorithm is still used even if the reduction
# operation isn't associative or commutative.
#
# `pred` is a user-visible Vector Condition register (predicate mask).
#
# All input arrays have length `vl`.

def reduce(vl, vec, pred):
    step = 1
    while step < vl:
        step *= 2
        for i in range(0, vl, step):
            other = i + step // 2
            other_pred = other < vl and pred[other]
            if pred[i] and other_pred:
                vec[i] += vec[other]
            elif other_pred:
                vec[i] = vec[other]
            pred[i] |= other_pred

# Variant using a table of lookup indices (vi) in order to
# skip over non-predicated elements.

def reduce(vl, vec, pred):
    vi = []  # array of lookup indices to skip non-predicated elements
    for i, pbit in enumerate(pred):
        if pbit:
            vi.append(i)
    step = 2
    while step <= vl:
        halfstep = step // 2
        for i in range(0, vl, step):
            other = vi[i + halfstep]
            i = vi[i]
            other_pred = other < vl and pred[other]
            if pred[i] and other_pred:
                vec[i] += vec[other]
            pred[i] |= other_pred
        step *= 2
751
752 ```