[[!tag standards]]

# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

# Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features,
so that soft-emulation is possible,
even for future revisions of SVP64. With SVP64 being partly controlled
through contextual SPRs, a little care has to be taken.

**All** SPRs
not implemented, including reserved ones for future use, must raise an
illegal instruction trap if read or written. This allows software the
opportunity to emulate the context created by the given SPR.

See [[sv/compliancy_levels]] for full details.

# XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

    # assume VL=4, this results in 4 sequential ops (below)
    sv.adde r0.v, r4.v, r8.v

    # instructions that get executed in backend hardware:
    adde r0, r4, r8 # takes carry-in, produces carry-out
    adde r1, r5, r9 # takes carry from previous
    ...
    adde r3, r7, r11 # likewise

It can clearly be seen that the carry chains from one
64 bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
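
A minimal Python sketch of the carry chain above (an illustrative model
only, not the Specification pseudocode; `iregs` is a plain list of
64-bit register values, and the register numbers follow the `sv.adde`
example):

    # sv.adde r0.v, r4.v, r8.v with VL=4: a 256-bit add
    mask = (1 << 64) - 1
    ca = 0                          # XER.CA on entry (Carry-In)
    for i in range(4):              # the hardware for-loop
        s = iregs[4+i] + iregs[8+i] + ca
        iregs[0+i] = s & mask       # 64-bit element result
        ca = s >> 64                # carry chains to the next element
    # ca is XER.CA on exit: the 257th bit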

# v3.0B/v3.1 relevant instructions

SV is primarily designed for use as an efficient hybrid 3D GPU / VPU /
CPU ISA.

Vectorisation of the VSX Packed SIMD system makes no sense whatsoever,
the sole exceptions potentially being any operations with 128-bit
operands such as `vrlq` (Rotate Quad Word) and `xsaddqp` (Scalar
Quad-precision Add).
SV effectively *replaces* the majority of VSX, requiring far fewer
instructions, and provides, at the very minimum, predication
(which VSX was designed without).

Likewise, Load/Store Multiple make no sense to have: not only are they
provided by SV, the SV alternatives may
be predicated as well, making them far better suited to use in function
calls and context-switching.

Additionally, some v3.0/1 instructions simply make no sense at all in a
Vector context: `rfid` falls into this category,
as well as `sc` and `scv`. Here there is simply no point
trying to Vectorise them: the standard OpenPOWER v3.0/1 instructions
should be called instead.

Fortuitously this leaves several Major Opcodes free for use by SV
to fit alternative future instructions. In a 3D context this means
Vector Product, Vector Normalise, [[sv/mv.swizzle]], Texture LD/ST
operations, and others critical to an efficient, effective 3D GPU and
VPU ISA. With such instructions being included as standard in other
commercially-successful GPU ISAs it is likewise critical that a 3D
GPU/VPU based on svp64 also have such instructions.

Note however that svp64 is stand-alone and is in no way
critically dependent on the existence or provision of 3D GPU or VPU
instructions. These should be considered entirely separate
extensions, and their discussion
and specification is out of scope for this document.

## Major opcode map (v3.0B)

This table is taken from v3.0B,
Table 9: Primary Opcode Map (opcode bits 0:5).

```
    | 000    | 001    | 010   | 011   | 100    | 101    | 110   | 111
000 |        |        | tdi   | twi   | EXT04  |        |       | mulli | 000
001 | subfic |        | cmpli | cmpi  | addic  | addic. | addi  | addis | 001
010 | bc/l/a | EXT17  | b/l/a | EXT19 | rlwimi | rlwinm |       | rlwnm | 010
011 | ori    | oris   | xori  | xoris | andi.  | andis. | EXT30 | EXT31 | 011
100 | lwz    | lwzu   | lbz   | lbzu  | stw    | stwu   | stb   | stbu  | 100
101 | lhz    | lhzu   | lha   | lhau  | sth    | sthu   | lmw   | stmw  | 101
110 | lfs    | lfsu   | lfd   | lfdu  | stfs   | stfsu  | stfd  | stfdu | 110
111 | lq     | EXT57  | EXT58 | EXT59 | EXT60  | EXT61  | EXT62 | EXT63 | 111
    | 000    | 001    | 010   | 011   | 100    | 101    | 110   | 111
```

It is important to note that having a v3.0B Scalar opcode
that is different from an SVP64 one is highly undesirable: the complexity
in the decoder is greatly increased, through breaking of the RISC paradigm.

# EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix. MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
  by reading the markdown-formatted version of the Scalar pseudocode,
  which is machine-readable and found in [[openpower/isatables]]. The
  analysis gives, by instruction, a "Register Profile". `add RT, RA, RB`
  for example is given a designation `RM-2R-1W` because it requires
  two GPR reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3 registers)
  and if less than or equal to three then that instruction could be given an
  EXTRA3 designation. Four or more is given an EXTRA2 designation because
  there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`extw` and LD/ST);
  however it was found that some 2- or 3-operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made, here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
  could have been decided
  that RA would be indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA bits 3-5)
  and RT indexed 2 (EXTRA bits 6-8). In some cases (LD/ST with update)
  RA-as-a-source is given a **different** EXTRA index from RA-as-a-result
  (because it is possible to do, and perceived to be useful). Rc=1
  co-results (CR0, CR1) are always given the same EXTRA index as their
  main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis
  were output in CSV Format for use in machine-readable form
  by sv_analysis.py <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a
decision is made (and ratified) it cannot be reversed.
Those qualifying future Power ISA Scalar instructions for SVP64
are **strongly** advised to utilise this same process and the same
sv_analysis.py program as a canonical method of maintaining the
relationships. Alterations to that same program which
change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would
be similar to deciding that `add` should be changed from X-Form
to D-Form.

# Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and
dststep can still potentially become different indices. Only when sz=dz
is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.
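
The skip rule can be sketched as follows (an illustrative model only;
zeroing writes and element-width handling are omitted). Zeroing (sz or
dz set) means "do not skip the masked-out element: place zero instead":

    # advance srcstep/dststep to the next permitted element
    while srcstep < VL and sz == 0 and mask[srcstep] == 0:
        srcstep += 1
    while dststep < VL and dz == 0 and mask[dststep] == 0:
        dststep += 1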

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |


Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are requested to react differently.


Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz (both zero).

# Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction.

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86
`REP` instruction (if put on steroids).
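
A minimal sketch of the Twin-Predication schedule (an illustrative model
only; zeroing and element-width overrides are omitted), using `sv.mv` as
the 1-src 1-dest operation:

    # separate masks for source (smask) and destination (dmask)
    srcstep = dststep = 0
    while srcstep < VL and dststep < VL:
        # skip up to the next enabled source and destination elements
        while srcstep < VL and not smask[srcstep]: srcstep += 1
        while dststep < VL and not dmask[dststep]: dststep += 1
        if srcstep == VL or dststep == VL:
            break
        iregs[RT + dststep] = iregs[RA + srcstep]  # one element mv
        srcstep += 1
        dststep += 1

With `smask` sparse and `dmask` dense this performs VCOMPRESS; with the
masks swapped, it performs VEXPAND.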

# EXTRA Pack/Unpack Modes

The pack/unpack concept of VSX `vpack` is abstracted out as a Sub-Vector
reordering Schedule, named `RM-2P-1S1D-PU`.
The usual RM-2P-1S1D is reduced from EXTRA3 to EXTRA2, making
room for 2 extra bits that enable either "packing" or "unpacking"
on the subvectors vec2/3/4.

Illustrating a
"normal" SVP64 operation with `SUBVL!=1` (assuming no elwidth overrides):

    def index():
        for i in range(VL):
            for j in range(SUBVL):
                yield i*SUBVL+j

    for idx in index():
        operation_on(RA+idx)

For pack/unpack (again, no elwidth overrides):

    # yield the element indices in either SUBVL-outer or VL-outer order
    def index_p(outer):
        if outer:
            for j in range(SUBVL):      # SUBVL is outer
                for i in range(VL):     # VL is inner
                    yield i*SUBVL+j
        else:
            for i in range(VL):         # VL is outer
                for j in range(SUBVL):  # SUBVL is inner
                    yield i*SUBVL+j

    # walk through both source and dest indices simultaneously
    for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
        move_operation(RT+dst_idx, RA+src_idx)

"yield" from python is used here for simplicity and clarity.
The two Finite State Machines for the generation of the source
and destination element offsets progress incrementally in
lock-step.

Example: VL=2, SUBVL=3, PACK_en=1 - elements grouped by
vec3 will be redistributed such that Sub-elements 0 are
packed together, Sub-elements 1 are packed together, as
are Sub-elements 2.

    srcstep=0  srcstep=1
    0  1  2    3  4  5

    dststep=0  dststep=1  dststep=2
    0  3       1  4       2  5
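
A quick runnable check of the schedule above (illustrative only):

    VL, SUBVL = 2, 3
    normal = [i*SUBVL+j for i in range(VL) for j in range(SUBVL)]
    packed = [i*SUBVL+j for j in range(SUBVL) for i in range(VL)]
    print(normal)  # [0, 1, 2, 3, 4, 5]
    print(packed)  # [0, 3, 1, 4, 2, 5]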

Setting of both `PACK_en` and `UNPACK_en` is neither prohibited nor
`UNDEFINED` because the reordering is fully deterministic, and
additional REMAP reordering may be applied. For Matrix this would
give potentially up to 4 Dimensions of reordering.

Pack/Unpack applies to mv operations, mv.swizzle,
and some other single-source
single-destination operations such as Indexed LD/ST and extsw.
[[sv/mv.swizzle]] has a slightly different pseudocode algorithm
for Vertical-First Mode.

# Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer. A normal
Vector ISA would have explicit Reduce opcodes with defined characteristics
per operation: in SX Aurora there is even an additional scalar argument
containing the initial reduction value, and the default is either 0
or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to
utilise *existing* Scalar Power ISA v3.0B operations, which presents some
unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to
the fundamental principle that SV is nothing more than a Sub-Program-Counter
sitting between Decode and Issue phases.

For Scalar Reduction,
Microarchitectures *may* take opportunities to parallelise the reduction
but only if in doing so they preserve strict Program Order at the Element
Level. Opportunities where this is possible include an `OR` operation
or a MIN/MAX operation: it may be possible to parallelise the reduction,
but for Floating Point it is not permitted due to different results
being obtained if the reduction is not executed in strict Program-Sequential
Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

## Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar.
Thus it is up to the programmer to be aware of this, observe some
conventions, and thus end up achieving the desired outcome of scalar
reduction.

It is also important to appreciate that there is no
actual imposition or restriction on how this mode is utilised: there
will therefore be several valuable uses (including Vector Iteration
and "Reverse-Gear")
and it is up to the programmer to make best use of the
(strictly deterministic) capability
provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention by the programmer, to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed
  the "accumulator")
* the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
is neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance.
Scalar reduce is strictly defined behaviour, and the cost in
hardware terms of prohibition of seemingly non-sensical operations is
too great.
Therefore it is permitted and required to be executed successfully.
Implementors **MAY** choose to optimise such instructions in instances
where their use results in "extraneous execution", i.e. where it is clear
that the sequence of operations, comprising multiple overwrites to
a scalar destination **without** cumulative, iterative, or reductive
behaviour (no "accumulator"), may discard all but the last element
operation. Identification
of such is trivial to do for `setb` and `cmp`: the source register type is
a completely different register file from the destination.
Likewise Scalar reduction when the destination is a Vector
is as if the Reduction Mode was not requested. However it would clearly
be unacceptable to perform such optimisations on cache-inhibited LD/ST,
so some considerable care needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

    # add RT, RA, RB but when RT==RA
    for i in range(VL):
        iregs[RA] += iregs[RB+i] # RT==RA

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily
**terminates** at the first scalar operation. Only by marking the
operation as "mapreduce" will it continue to issue multiple sub-looped
(element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit must be
set. This may be useful in situations where the results may be different
(floating-point) if executed in a different order. Given that there is
no actual prohibition on Reduce Mode being applied when the destination
is a Vector, the "Reverse Gear" bit turns out to be a way to apply Iterative
or Cumulative Vector operations in reverse. `sv.add/rg r3.v, r4.v, r4.v`
for example will start at the opposite end of the Vector and push
a cumulative series of overlapping add operations into the Execution units of
the underlying hardware.
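
The effect of RG on the mapreduce schedule can be sketched as follows
(a simplified model: predication and elwidth overrides are omitted):

    # sv.add/mr (forward) vs sv.add/mr/rg (reverse gear)
    order = range(VL) if not RG else reversed(range(VL))
    for i in order:
        iregs[RT] = iregs[RA] + iregs[RB+i]  # RT==RA: the "accumulator"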

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]], bmset),
as a way to construct
a value quickly from multiple arbitrary bit-ranges and bit-offsets.
Using the same register as both the source and destination, with Vectors
of different offsets, masks and values to be inserted, has multiple
applications including Video, cryptography and JIT compilation.

    # assume VL=4:
    # * Vector of shift-offsets contained in RC (r12.v)
    # * Vector of masks contained in RB (r8.v)
    # * Vector of values to be masked-in in RA (r4.v)
    # * Scalar destination RT (r0) to receive all mask-offset values
    sv.bmset/mr r0, r4.v, r8.v, r12.v

Due to the Deterministic Scheduling,
Subtract and Divide are still permitted to be executed in this mode,
although from an algorithmic perspective it is strongly discouraged.
It would be better to use addition followed by one final subtract,
or in the case of divide, to get better accuracy, to perform a multiply
cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used as
both a Vector source and Scalar destination, to be utilised as
the "accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc.
this naturally fits well with the normal expected usage of these
operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way of
issuing instructions
in Program Order). In this way, after return from interrupt,
the scalar mapreduce may continue where it left off. This provides
"precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue
parallel optimisation of the scalar reduce operation: it's just that
as far as the user is concerned, all exceptions and interrupts **MUST**
be precise.

## Vector result reduce mode

Vector Reduce Mode issues a deterministic tree-reduction schedule to the
underlying micro-architecture. Like Scalar reduction, the "Scalar Base"
(Power ISA v3.0B) operation is leveraged, unmodified, to give the
*appearance* and *effect* of Reduction.

In Horizontal-First Mode, Vector-result reduction **requires**
the destination to be a Vector, which will be used to store
intermediary results.

Given that the tree-reduction schedule is deterministic,
Interrupts and exceptions
can therefore also be precise. The final result will be in the first
non-predicate-masked-out destination element, but due again to
the deterministic schedule programmers may find uses for the intermediate
results.

When Rc=1 a corresponding Vector of co-resultant CRs is also
created. No special action is taken: the result and its CR Field
are stored "as usual" exactly as all other SVP64 Rc=1 operations.

Note that the Schedule only makes sense on top of certain instructions:
X-Form with a Register Profile of `RT,RA,RB` is fine. Like Scalar
Reduction, nothing is prohibited:
the results of execution on an unsuitable instruction may simply
not make sense. Unlike Scalar Reduction, many 3-input instructions
(madd, fmadd) in particular do not make sense, but `ternlogi`, if used
with care, would.

**Parallel-Reduction with Predication**

To avoid breaking the strict RISC paradigm, keeping the Issue-Schedule
completely separate from the actual element-level (scalar) operations,
Move operations are **not** included in the Schedule. This means that
the Schedule leaves the final (scalar) result in the first non-masked
element of the Vector used. With the predicate mask being dynamic
(but deterministic) this result could be anywhere.

If that result is needed to be moved to a (single) scalar register
then a follow-up `sv.mv/sm=predicate rt, ra.v` instruction will be
needed to get it, where the predicate is the exact same predicate used
in the prior Parallel-Reduction instruction. For *some* implementations
this may be a slow operation. It may be better to perform a pre-copy
of the values, compressing them (VREDUCE-style) into a contiguous block,
which will guarantee that the result goes into the very first element
of the destination vector.

**Usage conditions**

The simplest usage is to perform an overwrite, specifying all three
register operands the same.

    setvl VL=6
    sv.add/vr 8.v, 8.v, 8.v

The Reduction Schedule will issue the Parallel Tree Reduction spanning
registers 8 through 13, by adjusting the offsets to RT, RA and RB as
necessary (see "Parallel Reduction algorithm" in a later section).

A non-overwrite is possible as well but just as with the overwrite
version, only those destination elements necessary for storing
intermediary computations will be written to: the remaining elements
will **not** be overwritten and will **not** be zero'd.

    setvl VL=4
    sv.add/vr 0.v, 8.v, 8.v

## Sub-Vector Horizontal Reduction

Note that when SVM is clear and SUBVL!=1 the sub-elements are
*independent*, i.e. they are mapreduced per *sub-element* as a result.
Illustration with a vec2, assuming RA==RT, e.g. `sv.add/mr/vec2 r4, r4, r16.v`:

    for i in range(0, VL):
        # RA==RT in the instruction. does not have to be
        iregs[RT].x = op(iregs[RT].x, iregs[RB+i].x)
        iregs[RT].y = op(iregs[RT].y, iregs[RB+i].y)

Thus logically there is nothing special or unanticipated about
`SVM=0`: it is expected behaviour according to standard SVP64
Sub-Vector rules.

By contrast, when SVM is set and SUBVL!=1, a Horizontal
Subvector mode is enabled, which behaves very much more
like a traditional Vector Processor Reduction instruction.

Example for a vec2:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)

Example for a vec3:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)
        iregs[RT+i] = op(iregs[RT+i],   iregs[RB+i].z)

Example for a vec4:

    for i in range(VL):
        iregs[RT+i] = op(iregs[RA+i].x, iregs[RB+i].y)
        iregs[RT+i] = op(iregs[RT+i],   iregs[RB+i].z)
        iregs[RT+i] = op(iregs[RT+i],   iregs[RB+i].w)

In this mode, when Rc=1 the Vector of CRs is as normal: each result
element creates a corresponding CR element (for the final, reduced, result).

Note:

1. that the destination (RT) is inherently used as an "Accumulator"
   register, and consequently the Sub-Vector Loop is interruptible.
   If RT is a Scalar then as usual the main VL Loop terminates at the
   first predicated element (or the first element if unpredicated).
2. that the Sub-Vector designation applies to RA and RB *but not RT*.
3. that the number of operations executed is one less than the Sub-vector
   length.

# Fail-on-first

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]),
the other for arithmetic operations (actually, CR-driven)
([[sv/normal]]) and CR operations ([[sv/cr_ops]]).
Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst treats the first LD/ST in a vector (element 0) as an
  ordinary one. Exceptions occur "as normal". However for elements 1
  and above, if an exception would occur, then VL is **truncated** to the
  previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails, the
  vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set),
  and VL is truncated to either
  the *previous* element or the current one, depending on whether
  VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results,
all of which pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not found in ARM
SVE or RVV. It is extremely useful for reducing instruction count,
however it requires speculative execution involving modifications of VL
to get high performance implementations. An additional mode (RC1=1)
effectively turns what would otherwise be an arithmetic operation
into a type of `cmp`. The CR is stored (and the CR.eq bit tested
against the `inv` field).
If the CR.eq bit is equal to `inv` then the Vector is truncated and
the loop ends.
Note that when RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always
also included in the count (the new length that VL will be set to).
This may be useful in combination with "inv" to truncate the Vector
to *exclude* elements that fail a test, or, in the case of implementations
of strncpy, to include the terminating zero.
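
A sketch of the CR-driven truncation rule (an illustrative model only;
`test_cr_bit` stands in for the BO-style single-bit CR test, and result
storage is shown for the RC1=0 case):

    # data-dependent ffirst: VL is truncated at the first failing test
    new_VL = VL
    for i in range(VL):
        result = op(iregs[RA+i], iregs[RB+i])
        crbit = test_cr_bit(result)      # e.g. CR.eq of this element
        if crbit == inv:                 # test failed
            if VLi:
                iregs[RT+i] = result     # current element is included
                new_VL = i + 1
            else:
                new_VL = i               # current element is discarded
            break
        iregs[RT+i] = result
    VL = new_VL                          # may become zero (see below)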

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SV.STATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be
truncated based explicitly on whether a test fails.
This is because it is a precise test on which algorithms
will rely.

## Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result
do not also in turn have an Rc=1 mode. However it makes no
sense to try to test the 4 bits of a CR Field for being equal
or not equal to zero. Moreover, the result is already in the
form that is desired: it is a CR field. Therefore,
CR-based operations have their own SVP64 Mode, described
in [[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

# pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies to operations that have
an Rc=1 mode, or for which it makes sense to add an RC1 option.

Predicate-result merges common CR testing with predication, saving on
instruction count. In essence, a Condition Register Field test
is performed, and if it fails it is considered to have been
*as if* the destination predicate bit was zero. Given that
there are no CR-based operations that produce Rc=1 co-results,
there can be no pred-result mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

# CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

## CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration
the exact same mapping used for INT and FP regfiles may be applied,
just to the upper bits, as explained below. The notation
`CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]`
which accesses one bit of the 32-bit Power ISA v3.0B
Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is defined,
in v3.0B pseudocode, as:

    CR{7-n} = CR[32+n*4:35+n*4]

For SVP64 the relationship for the sequential
numbering of elements is to the CR **fields** within
the CR Register, not to individual bits within the CR register.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits (BF is 3 bits, referring
directly to a CR Field). The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits
*in* that CR (LT/GT/EQ/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

When it comes to applying SV, it is the CR\_reg number to which SV EXTRA2/3
applies, **not** the CR\_bit portion (bits 3-4):

    if extra3_mode:
        spec = EXTRA3
    else:
        spec = EXTRA2<<1 | 0b0
    if spec[0]:
        # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
        return ((BA >> 2)<<6) | # hi 3 bits shifted up
               (spec[1:2]<<4) | # to make room for these
               (BA & 0b11)      # CR_bit on the end
    else:
        # scalar constructs "00 spec[1:2] BA[0:4]"
        return (spec[1:2] << 5) | BA

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

    CR_index = 7-(BA>>2)      # top 3 bits but BE
    if spec[0]:
        # vector mode, 0-124 increments of 4
        CR_index = (CR_index<<4) | (spec[1:2] << 2)
    else:
        # scalar mode, 0-32 increments of 1
        CR_index = (spec[1:2]<<3) | CR_index
    # same as for v3.0/v3.1 from this point onwards
    bit_index = 3-(BA & 0b11) # low 2 bits but BE
    CR_reg = CR{CR_index}     # get the CR
    # finally get the bit from the CR.
    CR_bit = (CR_reg & (1<<bit_index)) != 0

Note here that the decoding pattern to determine CR\_bit does not change.
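
A worked example may help (hypothetical values, for illustration only):
with `BA=0b00101` the top three bits are 0b001, so `CR_index = 7-1 = 6`;
with a vector `spec=0b110` (`spec[0]=1`, `spec[1:2]=0b10`) this becomes
`CR_index = (6<<4) | (0b10<<2) = 104`, i.e. `CR{104}`, stepping in the
"increments of 4" noted in the comment above, while
`bit_index = 3-(BA & 0b11) = 2` is unchanged from scalar v3.0B decoding.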

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

## CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic operations
that have no explicit EXTRA to cover the CR, the CR is Vectorised if the
destination is Vectorised. Likewise if the destination is scalar then so
is the CR.

When vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX,
which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

    for i in range(VL):
        # calculate the vector result of an add
        iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
        # now calculate CR bits
        CRs{8+i}.eq = iregs[RT+i] == 0
        CRs{8+i}.gt = iregs[RT+i] > 0
        ... etc

If a "cumulated" CR-based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]; some alternative schemes are described there)

## Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of
predicate is allocated per subvector; likewise only one CR is allocated
per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or
AND behavior.
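
A sketch of the idea, for a vec2 with Rc=1 (illustrative only: the
mapping of OE to OR-vs-AND here is an assumption, not finalised):

    for i in range(VL):
        eq_x = iregs[RT+i].x == 0
        eq_y = iregs[RT+i].y == 0
        # one CR per subvector: combine the sub-element tests
        CRs{8+i}.eq = (eq_x | eq_y) if OE else (eq_x & eq_y)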

### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

# Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

# SV pseudocode illustration

## Single-predicated Instruction

Illustration of a normal-mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        for (i = 0; i < VL; i++)
            STATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                if (!rd.isvec) break;
            if (rd.isvec)  { id += 1; }
            if (rs1.isvec) { irs1 += 1; }
            if (rs2.isvec) { irs2 += 1; }
            if (id == VL or irs1 == VL or irs2 == VL)
            {
                # end VL hardware loop
                STATE.srcoffs = 0; # reset
                return;
            }

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

# Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

    svp64 [field=value]*

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "rex" prefix.

For the actual assembler:

    sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

# Parallel-reduction algorithm

The principle of SVP64 is that SVP64 is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD):

```
[[!inline raw="yes" pages="openpower/sv/preduce.py" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note:
if passed a predicate mask with only one bit set, this algorithm
takes no action, similar to when a predicate mask is all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then
limit the Reduction Sequence for all implementors
based solely and exclusively on what one
specific internal microarchitecture does.
In SIMD ISAs the internal SIMD Architectural design is exposed and imposed
on the programmer. Cray-style Vector ISAs on the other hand provide
convenient,
compact and efficient encodings of abstract concepts.*
**It is the Implementor's responsibility to produce a design
that complies with the above algorithm,
utilising internal Micro-coding and other techniques to transparently
insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

# Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the c programming language. The following should be taken
literally, and assumes always a little-endian layout:

    typedef union {
        uint8_t  b[];
        uint16_t s[];
        uint32_t i[];
        uint64_t l[];
        uint8_t  actual_bytes[8];
    } el_reg_t;

    el_reg_t int_regfile[128];

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):
            # not a vector: first element only, overwrites high bits
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            result = src1 + src2 # actual add here
            set_polymorphed_reg(RT, destwid, ird, result)
            if (!RT.isvec) break
        if (RT.isvec) { ird  += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

# Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update.
Some mathematical algorithms are more
efficient when there are two outputs rather than one, providing
feedback loops between elements (the most well-known being add with
carry). 64-bit multiply
for example actually internally produces a 128-bit result, which clearly
cannot be stored in a single 64-bit register. Some ISAs recommend
"macro op fusion": the practice of setting a convention whereby if
two commonly used instructions (mullo, mulhi) use the same ALU but
one selects the low part of an identical operation and the other
selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only
be applied to a single instruction at a time, and SVP64 is based on
the principle of strict Program Order even at the element
level. Thus it becomes
necessary to add explicit, more complex single instructions with
more operands than would normally be seen in the average RISC ISA
(3-in, 2-out, in some cases). If it
were not for Power ISA already having LD/ST with update as well as
Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes
being only 32 bit, 5 operands is quite an ask. `lq` however sets
a precedent: `RTp` stands for "RT pair". In other words the result
is stored in RT and RT+1. For Scalar operations, following this
precedent is perfectly reasonable. In Scalar mode,
`madded` therefore stores the two halves of the 128-bit multiply
into RT and RT+1.

What, then, of `sv.madded`? If the destination is hard-coded to
RT and RT+1 the instruction is not useful when Vectorised because
the output will be overwritten on the next element. To solve this
is easy: define the destination registers as RT and RT+MAXVL
respectively. This makes it easy for compilers to statically allocate
registers even when VL changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination
is best illustrated in pseudocode:

    # demo of madded
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication
            src1 = get_polymorphed_reg(RA, srcwid, irs1)
            src2 = get_polymorphed_reg(RB, srcwid, irs2)
            src3 = get_polymorphed_reg(RC, srcwid, irs3)
            result = src1*src2 + src3
            destmask = (1<<destwid)-1
            # store two halves of result, both start from RT.
            set_polymorphed_reg(RT, destwid, ird,       result&destmask)
            set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
            if (!RT.isvec) break
        if (RT.isvec) { ird  += 1; }
        if (RA.isvec) { irs1 += 1; }
        if (RB.isvec) { irs2 += 1; }
        if (RC.isvec) { irs3 += 1; }

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

          0..31      32..63
    r0    unchanged  unchanged
    r1    RT0.lo     RT1.lo
    r2    RT2.lo     unchanged
    r3    unchanged  RT0.hi
    r4    RT1.hi     RT2.hi
    r5    unchanged  unchanged

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being
5 and elwidth being 32, this is the 5th element
offset (in 32 bit quantities) counting from r1.
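
The landing spot for any element can be sketched as follows (a
hypothetical helper, for illustration only):

    # which register, and which bit offset within it, does element
    # "idx" (counting from RT) land in, at a given element width?
    def locate(RT, idx, elwidth=32):
        per_reg = 64 // elwidth
        return RT + idx // per_reg, (idx % per_reg) * elwidth

    # lo half of element i: locate(1, i)
    # hi half of element i: locate(1, i + MAXVL)
    locate(1, 0 + 5)  # (3, 32): RT0.hi is r3 bits 32..63, as tabulated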

*Programmer's note: accessing registers that have been placed
starting on a non-contiguous boundary (half-way along a scalar
register) can be inconvenient: REMAP can provide an offset but
it requires extra instructions to set up. A simple solution
is to ensure that MAXVL is rounded up such that the Vector
ends cleanly on a contiguous register boundary. MAXVL=6 in
the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form
with an implicit 2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]