# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]
## Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so that
soft-emulation is possible, even for future revisions of SVP64. With
SVP64 being partly controlled through contextual SPRs, a little care
has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given SPR.

See [[sv/compliancy_levels]] for full details.

## XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV <https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorized
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

```
# assume VL=4, this results in 4 sequential ops (below)
sv.adde r0.v, r4.v, r8.v

# instructions that get executed in backend hardware:
adde r0, r4, r8 # takes carry-in, produces carry-out
adde r1, r5, r9 # takes carry from previous
...
adde r3, r7, r11 # likewise
```

It can clearly be seen that the carry chains from one
64 bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
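
The chaining can be modelled in a few lines of Python (an illustrative
sketch only: register values are held as plain integers, and the names
`regs`, `MASK64` and `VL` are invented for the demonstration):

```
# model of sv.adde r0.v, r4.v, r8.v with VL=4: four chained 64-bit addes
VL = 4
MASK64 = (1 << 64) - 1
regs = [0] * 32
regs[4:8] = [MASK64] * 4        # RA vector: all-ones
regs[8:12] = [0, 0, 0, 1]       # RB vector

ca = 1                          # incoming XER.CA
for i in range(VL):
    total = regs[4 + i] + regs[8 + i] + ca
    regs[0 + i] = total & MASK64
    ca = total >> 64            # carry chains into the next element

# regs[0:4] now hold a 256-bit sum; ca is the 257th bit
```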

## EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication) leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix. MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
  from reading the markdown formatted version of the Scalar pseudocode which
  is machine-readable and found in [[openpower/isatables]]. The analysis
  gives, by instruction, a "Register Profile". `add RT, RA, RB` for
  example is given a designation `RM-2R-1W` because it requires two GPR
  reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3
  registers) and if less than or equal to three then that instruction
  could be given an EXTRA3 designation. Four or more is given an EXTRA2
  designation because there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`extsw` and LD/ST);
  however it was found that some 2 or 3 operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made, here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W an EXTRA3 indexing
  could be such that RA is indexed 0 (EXTRA bits 0-2), RB
  indexed 1 (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some
  cases (LD/ST with update) RA-as-a-source is given a **different** EXTRA
  index from RA-as-a-result (because it is possible to do, and perceived
  to be useful). Rc=1 co-results (CR0, CR1) are always given the same
  EXTRA index as their main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis were
  output in CSV Format for use in machine-readable form by sv_analysis.py
  <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>

This process was laborious but logical, and, crucially, once a decision
is made (and ratified) it cannot be reversed. Qualifying future Power ISA
Scalar instructions for SVP64 is **strongly** advised to utilise this
same process and the same sv_analysis.py program as a canonical method
of maintaining the relationships. Alterations to that same program
which change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would be similar to deciding that
`add` should be changed from X-Form to D-Form.

## Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| 3 | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | 3 | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz=0.
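
The skip rule in these schedules can be captured in a few lines of
Python: a step advances past masked-out elements when its zeroing bit
is 0, and visits every element when its zeroing bit is 1 (an
illustrative sketch; the helper name `stepper` is invented):

```
def stepper(mask, VL, z):
    # one side (src or dst) of a single-predicated schedule:
    # z=1 (zeroing) visits every element, z=0 skips masked-out ones
    for i in range(VL):
        if z or (mask >> i) & 1:
            yield i

# Example 2 above: VL=4, mask=0b1101, sz=0, dz=1
print(list(stepper(0b1101, 4, z=0)))  # srcstep: [0, 2, 3]
print(list(stepper(0b1101, 4, z=1)))  # dststep: [0, 1, 2, 3]
```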

## Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
but in a single instruction, abstracted out from Load/Store and applicable
in general to any 2P instruction.
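
This compress-then-expand behaviour reduces to two independent step
sequences, one per predicate. A minimal Python model of a
twin-predicated `sv.mv` (a sketch assuming sz=dz=0, with invented
helper names):

```
def twin_pred_mv(regs, RT, RA, VL, srcmask, dstmask):
    # srcmask compresses a subset of RA's elements; dstmask
    # expands them into RT's elements, in sequential order
    srcstep, dststep = 0, 0
    while srcstep < VL and dststep < VL:
        while srcstep < VL and not (srcmask >> srcstep) & 1:
            srcstep += 1    # sz=0: skip masked-out source elements
        while dststep < VL and not (dstmask >> dststep) & 1:
            dststep += 1    # dz=0: skip masked-out dest elements
        if srcstep < VL and dststep < VL:
            regs[RT + dststep] = regs[RA + srcstep]
            srcstep += 1
            dststep += 1

regs = list(range(100, 116))
twin_pred_mv(regs, RT=8, RA=0, VL=4, srcmask=0b0110, dstmask=0b1001)
# source elements 1 and 2 are compressed, then expanded into
# destination elements 0 and 3: regs[8]=101, regs[11]=102
```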

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86 `REP`
instruction (if put on steroids).

## Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering. Two bits in the `SVSHAPE` [[sv/spr]] enable either "packing"
or "unpacking" on the subvectors vec2/3/4.

First, illustrating a "normal" SVP64 operation with `SUBVL!=1` (assuming
no elwidth overrides), note that the VL loop is outer and the SUBVL
loop inner:

```
def index():
    for i in range(VL):
        for j in range(SUBVL):
            yield i*SUBVL+j

for idx in index():
    operation_on(RA+idx)
```

For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:

```
# yield either a SUBVL-outer (VL-inner) loop for pack/unpack,
# or the default VL-outer (SUBVL-inner) loop
def index_p(outer):
    if outer:
        for j in range(SUBVL):   # subvl is outer
            for i in range(VL):  # vl is inner
                yield i*SUBVL+j
    else:
        for i in range(VL):          # vl is outer
            for j in range(SUBVL):   # subvl is inner
                yield i*SUBVL+j

# walk through both source and dest indices simultaneously
for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
    move_operation(RT+dst_idx, RA+src_idx)
```

"yield" from python is used here for simplicity and clarity.
The two Finite State Machines for the generation of the source
and destination element offsets progress incrementally in
lock-step.

Example: VL=2, SUBVL=3, PACK_en=1 - elements grouped by
vec3 will be redistributed such that Sub-elements 0 are
packed together, Sub-elements 1 are packed together, as
are Sub-elements 2.

```
srcstep=0 srcstep=1
0 1 2     3 4 5

dststep=0 dststep=1 dststep=2
0 3       1 4       2 5
```

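For reference, running the `index_p` generator above with VL=2 and
SUBVL=3 reproduces both sequences in this example (restated here so
the snippet runs standalone):

```
VL, SUBVL = 2, 3

def index_p(outer):
    if outer:
        for j in range(SUBVL):       # subvl is outer
            for i in range(VL):      # vl is inner
                yield i*SUBVL+j
    else:
        for i in range(VL):          # vl is outer
            for j in range(SUBVL):   # subvl is inner
                yield i*SUBVL+j

print(list(index_p(False)))  # [0, 1, 2, 3, 4, 5]
print(list(index_p(True)))   # [0, 3, 1, 4, 2, 5]
```
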
Setting of both `PACK` and `UNPACK` is neither prohibited nor `UNDEFINED`
because the reordering is fully deterministic, and additional REMAP
reordering may be applied. Combined with Matrix REMAP this would give
potentially up to 4 Dimensions of reordering.

Pack/Unpack has quirky interactions on [[sv/mv.swizzle]] because it can
set a different subvector length for destination, and has a slightly
different pseudocode algorithm for Vertical-First Mode.

Ordering is as follows:

* SVSHAPE srcstep, dststep, ssubstep and dsubstep are advanced sequentially
  depending on PACK/UNPACK.
* srcstep and dststep are pushed through REMAP to compute actual Element
  offsets.
* Swizzle is independently applied to ssubstep and dsubstep.

Pack/Unpack is enabled (set up) through [[sv/svstep]].

## Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer.
A normal Vector ISA would have explicit Reduce opcodes with defined
characteristics per operation: in SX Aurora there is even an additional
scalar argument containing the initial reduction value, and the default
is either 0 or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to utilise *existing* Scalar Power ISA v3.0B
operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to the fundamental
principle that SV is nothing more than a Sub-Program-Counter sitting
between Decode and Issue phases.

For Scalar Reduction, Microarchitectures *may* take opportunities to
parallelise the reduction but only if in doing so they preserve strict
Program Order at the Element Level. Opportunities where this is possible
include an `OR` operation or a MIN/MAX operation: it may be possible to
parallelise the reduction, but for Floating Point it is not permitted
due to different results being obtained if the reduction is not executed
in strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

### Scalar result reduction and iteration

Scalar Reduction per se does not exist; instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar *and*
the same register is used as a source register. Thus it is
up to the programmer to be aware of this, observe some conventions,
and thus end up achieving the desired outcome of scalar reduction.

It is also important to appreciate that there is no actual imposition or
restriction on how this mode is utilised: there will therefore be several
valuable uses (including Vector Iteration and "Reverse-Gear") and it is
up to the programmer to make best use of the (strictly deterministic)
capability provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed by
  convention the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that issuing instructions in Scalar reduce mode such as `setb`
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance. Scalar reduce is strictly defined behaviour,
and the cost in hardware terms of prohibition of seemingly non-sensical
operations is too great. Therefore it is permitted and required to
be executed successfully. Implementors **MAY** choose to optimise
such instructions in instances where their use results in "extraneous
execution", i.e. where it is clear that the sequence of operations,
comprising multiple overwrites to a scalar destination **without**
cumulative, iterative, or reductive behaviour (no "accumulator"), may
discard all but the last element operation. Identification of such
is trivial to do for `setb` and `cmp`: the source register type is a
completely different register file from the destination. Likewise Scalar
reduction when the destination is a Vector is as if the Reduction Mode
was not requested. However it would clearly be unacceptable to perform
such optimisations on cache-inhibited LD/ST, so some considerable care
needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

```
# add RT, RA, RB but with RT==RA
for i in range(VL):
    iregs[RA] += iregs[RB+i] # RT==RA
```

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily **terminates** at the first scalar operation. Only by
marking the operation as "mapreduce" will it continue to issue multiple
sub-looped (element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit
must be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse. `sv.add/rg
r3.v, r4.v, r4.v` for example will start at the opposite end of the
Vector and push a cumulative series of overlapping add operations into
the Execution units of the underlying hardware.
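
The difference in element schedule may be sketched in a couple of lines
of Python (illustrative only; the schedule order is the only thing being
modelled here):

```
# "reverse gear": the element schedule runs VL-1 downto 0
# instead of 0 up to VL-1
def element_schedule(VL, reverse_gear):
    return reversed(range(VL)) if reverse_gear else range(VL)

for i in element_schedule(4, reverse_gear=True):
    print("element", i)   # 3, 2, 1, 0
```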

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]],
bmset), as a way to construct a value quickly from multiple arbitrary
bit-ranges and bit-offsets. Using the same register as both the source
and destination, with Vectors of different offsets, masks and values to
be inserted, has multiple applications including Video, cryptography and
JIT compilation.

```
# assume VL=4:
# * Vector of shift-offsets contained in RC (r12.v)
# * Vector of masks contained in RB (r8.v)
# * Vector of values to be masked-in in RA (r4.v)
# * Scalar destination RT (r0) to receive all mask-offset values
sv.bmset/mr r0, r4.v, r8.v, r12.v
```

Due to the Deterministic Scheduling, Subtract and Divide are still
permitted to be executed in this mode, although from an algorithmic
perspective it is strongly discouraged. It would be better to use
addition followed by one final subtract, or in the case of divide, to get
better accuracy, to perform a multiply cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used
as both a Vector source and Scalar destination, to be utilised as the
"accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc this
naturally fits well with the normal expected usage of these operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue parallel
optimisation of the scalar reduce operation: it's just that as far as
the user is concerned, all exceptions and interrupts **MUST** be precise.

## Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]), the other for arithmetic operations (actually,
CR-driven) [[sv/normal]] and CR operations [[sv/cr_ops]]. Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst (not to be confused with *Data-Dependent* LD/ST ffirst)
  treats the first LD/ST in a vector (element 0) as an ordinary one.
  Exceptions occur "as normal" on the first element. However for elements
  1 and above, if an exception would occur, then VL is **truncated**
  to the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails,
  the vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set), and
  VL is truncated to either the *previous* element or the current one,
  depending on whether VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results, all of which
pass the testing criteria (equal to zero, less than zero). Demonstrated
approximately in pseudocode:

```
for i in range(VL):
    GPR[RT+i], CR[i] = operation(GPR[RA+i]... )
    if test(CR[i]) == failure:
        VL = i+VLi
        break
```

The CR-based data-driven fail-on-first is new and not
found in ARM SVE or RVV. At the same time it is also
"old" because it is a generalisation of the Z80 [Block
compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir) which is
based on CP (compare) as the ultimate "element" (suffix) operation
to which the repeat (prefix) is applied. It is extremely useful for
reducing instruction count, however it requires speculative execution
involving modifications of VL to get high performance implementations.
An additional mode (RC1=1) effectively turns what would otherwise be an
arithmetic operation into a type of `cmp`. The CR is stored (and the
CR.eq bit tested against the `inv` field). If the CR.eq bit is equal to
`inv` then the Vector is truncated and the loop ends. Note that when
RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always also
included in the count (the new length that VL will be set to). This may
be useful in combination with "inv" to truncate the Vector to *exclude*
elements that fail a test, or, in the case of implementations of strncpy,
to include the terminating zero.
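
As an illustration of VLi, here is a Python model of the truncation rule
for a strncpy-style copy, where the terminating zero should be included
in the copied elements (a sketch only: the `inv` sense is simplified to
a plain equal-to-zero test, and the helper name is invented):

```
def ddffirst_copy(src, VL, VLi):
    # data-dependent ffirst model: copy elements until one compares
    # equal to zero; VLi=1 includes the failing element in new VL
    dest = []
    new_VL = VL
    for i in range(VL):
        dest.append(src[i])         # the element operation (a copy)
        if src[i] == 0:             # CR test fails on the terminator
            new_VL = i + VLi        # VLi=1: keep the zero element
            break
    return dest[:new_VL], new_VL

data = [104, 105, 0, 120]                 # "hi\0x"
print(ddffirst_copy(data, VL=4, VLi=1))   # ([104, 105, 0], 3)
print(ddffirst_copy(data, VL=4, VLi=0))   # ([104, 105], 2)
```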

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, vectorized crops
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorized operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be truncated
based explicitly on whether a test fails. This is because it is a precise
test on which algorithms will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First. REMAP
will need to be activated to invert the ordering of element traversal.*


### Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result do not
also in turn have an Rc=1 mode. However it makes no sense to try to test
the 4 bits of a CR Field for being equal or not equal to zero. Moreover,
the result is already in the form that is desired: it is a CR field.
Therefore, CR-based operations have their own SVP64 Mode, described in
[[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

## CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration the
exact same mapping used for INT and FP regfiles may be applied, just to
the upper bits, as explained below. Firstly and most importantly a new
notation `CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]` which
accesses one bit of the 32 bit Power ISA v3.0B Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

```
CR{n} = CR[32+n*4:35+n*4]
```

For SVP64 the relationship for the sequential numbering of elements is to
the CR **fields** within the CR Register, not to individual bits within
the CR register.

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BF/BT/BA/BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits *in*
that CR (EQ/LT/GT/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

```
CR_index = (BA>>2)      # top 3 bits
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which SV EXTRA2/3 applies, **not** the `CR_bit` portion (bits 3-4):

```
if extra3_mode:
    spec = EXTRA3
elif EXTRA2[0]:  # vector mode
    spec = EXTRA2 << 1  # same as EXTRA3, shifted
else:            # scalar mode
    spec = (EXTRA2[0] << 2) | EXTRA2[1]
if spec[0]:
    # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
    return ((BA >> 2)<<6) | # hi 3 bits shifted up
           (spec[1:2]<<4) | # to make room for these
           (BA & 0b11)      # CR_bit on the end
else:
    # scalar constructs "00 spec[1:2] BA[0:4]"
    return (spec[1:2] << 5) | BA
```

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

```
CR_index = (BA>>2)      # top 3 bits
if spec[0]:
    # vector mode, 0-124 increments of 4
    CR_index = (CR_index<<4) | (spec[1:2] << 2)
else:
    # scalar mode, 0-32 increments of 1
    CR_index = (spec[1:2]<<3) | CR_index
# same as for v3.0/v3.1 from this point onwards
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index}   # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

Note here that the decoding pattern to determine CR\_bit does not change.

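A worked example may help (operand values invented for illustration,
walking the vector branch of the algorithm above):

```
BA   = 0b01001   # CR field 2, bit 1
spec = 0b110     # spec[0]=1: vector mode

CR_index  = BA >> 2              # 0b010 = 2
bit_index = BA & 0b11            # 0b01  = 1
spec_12   = spec & 0b11          # spec[1:2] = 0b10
CR_index  = (CR_index << 4) | (spec_12 << 2)
print(CR_index, bit_index)       # 40 1: CR field 40, bit 1 of that field
```
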
Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

### CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorized or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorized if the destination is Vectorized. Likewise if the
destination is scalar then so is the CR.

When vectorized, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorized Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorized. Unlike in SIMD VSX
which has a single CR (CR6) for a given SIMD result, SV Vectorized OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

```
for i in range(VL):
    # calculate the vector result of an add
    iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
    # now calculate CR bits
    CRs{8+i}.eq = iregs[RT+i] == 0
    CRs{8+i}.gt = iregs[RT+i] > 0
    ... etc
```

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorized
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]. some alternative schemes are described there)

### Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests, as sketched
below. Given that OE is ignored in SVP64, this field may (when available)
be used to select OR or AND behaviour.
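
A Python sketch of the idea (illustrative only; the element-level Rc=1
test is modelled here as a plain equal-to-zero boolean):

```
def subvector_cr(element_results, use_or):
    # combine the per-element Rc=1 tests of one subvector into the
    # single CR allocated to that subvector: OR or AND of the tests
    tests = [r == 0 for r in element_results]  # e.g. the CR "eq" bit
    return any(tests) if use_or else all(tests)

vec3 = [0, 5, 0]                          # one subvector of results
print(subvector_cr(vec3, use_or=True))    # True: some element was zero
print(subvector_cr(vec3, use_or=False))   # False: not all were zero
```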

#### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorized
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

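A hypothetical sketch of such a context switch (the `setvl` operand
order and the exact element semantics of Vectorized `sv.mfcr`/`sv.mtcr`
here are illustrative assumptions, not confirmed syntax):

```
setvl r0, 0, 8, 0, 1, 1   # VL=MAXVL=8 (operand order illustrative)
sv.mfcr r16.v             # r16-r23: eight aligned 32-bit batches of CRs
...
sv.mtcr r16.v             # restore all 64 CR fields on context restore
```
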
The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

## Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

## SV pseudocode illustration

### Single-predicated Instruction

Illustration of a normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

```
function op_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    STATE.srcoffs = i # save context
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
       if (!int_vec[rd].isvec) break;
    if (rd.isvec)  { id += 1; }
    if (rs1.isvec) { irs1 += 1; }
    if (rs2.isvec) { irs2 += 1; }
    if (id == VL or irs1 == VL or irs2 == VL) {
      # end VL hardware loop
      STATE.srcoffs = 0; # reset
      return;
    }
```

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

## Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

```
svp64 [field=value]*
```

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

similar to the x86 "rex" prefix.

For actual assembler:

```
sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
```

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

## Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD)

```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note: if passed a predicate mask with only one bit set,
this algorithm takes no action, similar to when a predicate mask is
all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then limit
the Reduction Sequence for all implementors based solely and exclusively
on what one specific internal microarchitecture does. In SIMD ISAs
the internal SIMD Architectural design is exposed and imposed on the
programmer. Cray-style Vector ISAs on the other hand provide convenient,
compact and efficient encodings of abstract concepts.* **It is the
Implementor's responsibility to produce a design that complies with the
above algorithm, utilising internal Micro-coding and other techniques to
transparently insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

## Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assumes a little-endian layout throughout:

```
#pragma pack
typedef union {
    uint8_t  b[];
    uint16_t s[];
    uint32_t i[];
    uint64_t l[];
    uint8_t  actual_bytes[8];
} el_reg_t;

el_reg_t int_regfile[128];
```

Accessing (get and set) of registers, given a value, a register (in
`el_reg_t` form), a bitwidth and an element offset, where all arithmetic,
numbering and pseudo-Memory format is LE-endian and LSB0-numbered:

```
get_polymorphed_reg(reg, bitwidth, offset):
    el_reg_t res; // result
    res.l = 0; // TODO: going to need sign-extending / zero-extending
    if !reg.isvec: // scalar access has no element offset
        offset = 0
    if bitwidth == 8:
        res.b = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(reg, bitwidth, offset, val):
    if (!reg.isvec):
        # for safety mask out hi bits
        bitmask = (1 << bitwidth) - 1
        val &= bitmask
        # not a vector: first element only, overwrites high bits.
        # and with the *Architectural* definition being LE,
        # storing in the first DWORD works perfectly.
        int_regfile[reg].l[0] = val
    elif bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val
```

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

```
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
```

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.
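
For example, in a Python model of the byte-addressable register file
(helper semantics invented to match the pseudocode above), an 8-bit
elwidth Vector starting at r1 packs its elements into successive bytes
of r1, r2 and onward:

```
# model the GPR file as 128 x 8 bytes, LE, byte-addressable
regfile = bytearray(128 * 8)

def set_elem(reg, bitwidth, offset, val):
    nbytes = bitwidth // 8
    addr = reg * 8 + offset * nbytes
    regfile[addr:addr+nbytes] = val.to_bytes(nbytes, 'little')

# e.g. a vector op writing to r1.v with ew=8, VL=5:
# five 1-byte elements packed from the start of r1
for i in range(5):
    set_elem(1, 8, i, 0x10 + i)

print(regfile[8:16].hex())  # r1's bytes: 1011121314000000
```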

## Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update. Some mathematical
algorithms are more efficient when there are two outputs rather than one,
providing feedback loops between elements (the most well-known being add
with carry). 64-bit multiply for example actually internally produces
a 128 bit result, which clearly cannot be stored in a single 64 bit
register. Some ISAs recommend "macro op fusion": the practice of setting
a convention whereby if two commonly used instructions (mullo, mulhi) use
the same ALU but one selects the low part of an identical operation and
the other selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only be applied
to a single instruction at a time, and SVP64 is based on the principle of
strict Program Order even at the element level. Thus it becomes necessary
to add explicit more complex single instructions with more operands than
would normally be seen in the average RISC ISA (3-in, 2-out, in some
cases). If it was not for Power ISA already having LD/ST with update as
well as Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes being only
32 bit, 5 operands is quite an ask. `lq` however sets a precedent: `RTp`
stands for "RT pair". In other words the result is stored in RT and RT+1.
For Scalar operations, following this precedent is perfectly reasonable.
In Scalar mode, `maddedu` therefore stores the two halves of the 128-bit
multiply into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to RT and
RT+1 the instruction is not useful when Vectorized because the output
will be overwritten on the next element. To solve this is easy: define
the destination registers as RT and RT+MAXVL respectively. This makes
it easy for compilers to statically allocate registers even when VL
changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken into
consideration, the starting point for the implicit destination is
best illustrated in pseudocode:

```
# demo of maddedu
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
    if (RC.isvec) { irs3 += 1; }
```

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

```
      LSB0:  63:32      31:0
      MSB0:  0:31       32:63
r0    unchanged    unchanged
r1    RT1.lo       RT0.lo
r2    unchanged    RT2.lo
r3    RT0.hi       unchanged
r4    RT2.hi       RT1.hi
r5    unchanged    unchanged
```

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being 5 and
elwidth being 32, this is the 5th element offset (in 32 bit quantities)
counting from r1.

*Programmer's note: accessing registers that have been placed starting
on a non-contiguous boundary (half-way along a scalar register) can
be inconvenient: REMAP can provide an offset but it requires extra
instructions to set up. A simple solution is to ensure that MAXVL is
rounded up such that the Vector ends cleanly on a contiguous register
boundary. MAXVL=6 in the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form with an implicit
2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]

[[!tag standards]]

------

\newpage{}
1102