# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

## Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features, so
that soft-emulation is possible, even for future revisions of SVP64.
With SVP64 being partly controlled through contextual SPRs, a little
care has to be taken.

**All** SPRs not implemented, including reserved ones for future use,
must raise an illegal instruction trap if read or written. This allows
software the opportunity to emulate the context created by the given
SPR.

See [[sv/compliancy_levels]] for full details.

## XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO jacob add about OV
<https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

```
# assume VL=4, this results in 4 sequential ops (below)
sv.adde r0.v, r4.v, r8.v

# instructions that get executed in backend hardware:
adde r0, r4, r8 # takes carry-in, produces carry-out
adde r1, r5, r9 # takes carry from previous
...
adde r3, r7, r11 # likewise
```

It can clearly be seen that the carry chains from one
64 bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
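
The chaining may be modelled in a few lines of python (a demonstration
sketch, not part of the specification): the per-element 64-bit carries
reproduce one wide unsigned addition.

```
# model a chain of VL carry-chained 64-bit "adde" element ops
def sv_adde(a, b, ca, VL=4):
    out = []
    for i in range(VL):              # elements in Program Order
        s = a[i] + b[i] + ca
        out.append(s & (2**64 - 1))  # 64-bit result element
        ca = s >> 64                 # carry chains to element i+1
    return out, ca

# two 256-bit numbers as 4x64-bit limbs, least-significant first
a = [2**64 - 1] * 4
b = [1, 0, 0, 0]
out, ca = sv_adde(a, b, 0)
assert out == [0, 0, 0, 0] and ca == 1   # the 257th bit lands in CA
```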

## EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication) leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix. MLSS for
example. For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
  from reading the markdown formatted version of the Scalar pseudocode which
  is machine-readable and found in [[openpower/isatables]]. The analysis
  gives, by instruction, a "Register Profile". `add RT, RA, RB` for
  example is given a designation `RM-2R-1W` because it requires two GPR
  reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3
  registers) and if less than or equal to three then that instruction
  could be given an EXTRA3 designation. Four or more is given an EXTRA2
  designation because there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`exts*` and LD/ST);
  however it was found that some 2 or 3 operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W, an EXTRA3 indexing
  could be decided such that RA would be indexed 0 (EXTRA bits 0-2), RB
  indexed 1 (EXTRA bits 3-5) and RT indexed 2 (EXTRA bits 6-8). In some
  cases (LD/ST with update) RA-as-a-source is given a **different** EXTRA
  index from RA-as-a-result (because it is possible to do, and perceived
  to be useful). Rc=1 co-results (CR0, CR1) are always given the same
  EXTRA index as their main result (RT, FRT). A sketch of the resulting
  decode is shown after this list.
* Fifthly, in an automated process the results of the analysis were
  outputted in CSV Format for use in machine-readable form by sv_analysis.py
  <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
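
For illustration, a minimal sketch of the resulting EXTRA3 decode for
GPRs follows (mirroring the CR decode shown later in this appendix;
the function name is illustrative): spec[0] selects scalar/vector and
spec[1:2] extends the 5-bit register number to 7 bits.

```
# decode one GPR field plus its 3-bit EXTRA3 spec
def decode_extra3_gpr(RA, spec):
    if spec & 0b100:                   # vector
        return (RA << 2) | (spec & 0b11), True
    else:                              # scalar
        return ((spec & 0b11) << 5) | RA, False

print(decode_extra3_gpr(5, 0b100))     # (20, True): r20, vector
print(decode_extra3_gpr(5, 0b010))     # (69, False): r69, scalar
```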

This process was laborious but logical, and, crucially, once a decision
is made (and ratified) cannot be reversed. Qualifying future Power ISA
Scalar instructions for SVP64 is **strongly** advised to utilise this
same process and the same sv_analysis.py program as a canonical method
of maintaining the relationships. Alterations to that same program
which change the Designation are **prohibited** once finalised (ratified
through the Power ISA WG Process). It would be similar to deciding that
`add` should be changed from X-Form to D-Form.

## Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz=0.
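
These schedules may be reproduced with a short python sketch (a
simplified model: predication only, with no elwidth overrides or
other modes):

```
def next_step(step, mask, zeroing, VL):
    # skip masked-out elements, but only when zeroing is disabled
    # (with zeroing enabled the masked-out element is used, as zero)
    while step < VL and not zeroing and not (mask & (1 << step)):
        step += 1
    return step

def schedule(VL, mask, sz, dz):
    srcstep, dststep = 0, 0
    while True:
        srcstep = next_step(srcstep, mask, sz, VL)
        dststep = next_step(dststep, mask, dz, VL)
        if srcstep == VL or dststep == VL:
            break           # either step reaching VL ends the loop
        yield srcstep, dststep
        srcstep, dststep = srcstep + 1, dststep + 1

print(list(schedule(4, 0b1101, sz=1, dz=0)))  # [(0, 0), (1, 2), (2, 3)]
print(list(schedule(4, 0b1101, sz=0, dz=1)))  # [(0, 0), (2, 1), (3, 2)]
print(list(schedule(4, 0b1101, sz=0, dz=0)))  # [(0, 0), (2, 2), (3, 3)]
```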

## Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
but in a single instruction, abstracted out from Load/Store and applicable
in general to any 2P instruction.
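
A minimal python sketch of the twin-predication element schedule
(zeroing omitted, and all registers assumed marked as Vectors)
illustrates how the two masks cooperate:

```
def twin_schedule(VL, srcmask, dstmask):
    # each step independently skips its own masked-out elements
    srcstep, dststep = 0, 0
    while True:
        while srcstep < VL and not (srcmask & (1 << srcstep)):
            srcstep += 1
        while dststep < VL and not (dstmask & (1 << dststep)):
            dststep += 1
        if srcstep == VL or dststep == VL:
            break
        yield srcstep, dststep    # element copied src -> dst
        srcstep, dststep = srcstep + 1, dststep + 1

# VCOMPRESS: elements selected by srcmask land as a contiguous
# run at the start of the destination (dstmask all-ones)
print(list(twin_schedule(8, 0b10110010, 0b11111111)))
# [(1, 0), (4, 1), (5, 2), (7, 3)]
```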

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86 `REP`
instruction (on steroids).

## Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering. Two bits in the `SVSHAPE` [[sv/spr]] enable either "packing"
or "unpacking" on the subvectors vec2/3/4.

First, illustrating a "normal" SVP64 operation with `SUBVL!=1` (assuming
no elwidth overrides), note that the VL loop is outer and the SUBVL
loop inner:

```
def index():
    for i in range(VL):
        for j in range(SUBVL):
            yield i*SUBVL+j

for idx in index():
    operation_on(RA+idx)
```

For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:

```
# yield either a SUBVL-outer, VL-inner loop (outer=True)
# or the default VL-outer, SUBVL-inner loop (outer=False)
def index_p(outer):
    if outer:
        for j in range(SUBVL): # subvl is outer
            for i in range(VL): # vl is inner
                yield i+VL*j
    else:
        for i in range(VL): # vl is outer
            for j in range(SUBVL): # subvl is inner
                yield i*SUBVL+j

# walk through both source and dest indices simultaneously
for src_idx, dst_idx in zip(index_p(PACK), index_p(UNPACK)):
    move_operation(RT+dst_idx, RA+src_idx)
```

"yield" from python is used here for simplicity and clarity.
The two Finite State Machines for the generation of the source
and destination element offsets progress incrementally in
lock-step.

Example: VL=2, SUBVL=3, PACK_en=1 - elements grouped by
vec3 will be redistributed such that Sub-elements 0 are
packed together, Sub-elements 1 are packed together, as
are Sub-elements 2.

```
srcstep=0    srcstep=1
0 1 2        3 4 5

dststep=0    dststep=1    dststep=2
0 3          1 4          2 5
```

Setting of both `PACK` and `UNPACK` is neither prohibited nor `UNDEFINED`
because the reordering is fully deterministic, and additional REMAP
reordering may be applied. Combined with Matrix REMAP this would give
potentially up to 4 Dimensions of reordering.

Pack/Unpack has quirky interactions on [[sv/mv.swizzle]] because it can
set a different subvector length for destination, and has a slightly
different pseudocode algorithm for Vertical-First Mode.

Pack/Unpack is enabled (set up) through [[sv/svstep]].

## Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer.
A normal Vector ISA would have explicit Reduce opcodes with defined
characteristics per operation: in SX Aurora there is even an additional
scalar argument containing the initial reduction value, and the default
is either 0 or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to utilise *existing* Scalar Power ISA v3.0B
operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to the fundamental
principle that SV is nothing more than a Sub-Program-Counter sitting
between Decode and Issue phases.

For Scalar Reduction, Microarchitectures *may* take opportunities to
parallelise the reduction but only if in doing so they preserve strict
Program Order at the Element Level. Opportunities where this is possible
include an `OR` operation or a MIN/MAX operation, which are associative
and may be freely reordered. For Floating Point however reordering is
not permitted, because different results would be obtained if the
reduction is not executed in strict Program-Sequential Order.
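
The Floating Point restriction is easily demonstrated: IEEE754 addition
is not associative, so reordering a reduction changes the result. A
quick python check (python floats are IEEE754 double-precision):

```
a = [1e16, 1.0, 1.0]
print((a[0] + a[1]) + a[2])  # 1e16: each 1.0 is individually lost
print(a[0] + (a[1] + a[2]))  # 1.0000000000000002e16
```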

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.

### Scalar result reduction and iteration

Scalar Reduction per se does not exist: instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar *and*
the same register is used as a source register. Thus it is
up to the programmer to be aware of this, observe some conventions,
and thus end up achieving the desired outcome of scalar reduction.

It is also important to appreciate that there is no actual imposition or
restriction on how this mode is utilised: there will therefore be several
valuable uses (including Vector Iteration and "Reverse-Gear") and it is
up to the programmer to make best use of the (strictly deterministic)
capability provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be
the "accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* The destination is a scalar
* Optionally but most usefully, one source scalar register is
  also the scalar destination (which may be informally termed by
  convention the "accumulator")
* The source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that instructions issued in Scalar reduce mode, such as `setb`,
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance. Scalar reduce is strictly defined behaviour,
and the cost in hardware terms of prohibition of seemingly non-sensical
operations is too great. Therefore it is permitted and required to
be executed successfully. Implementors **MAY** choose to optimise
such instructions in instances where their use results in "extraneous
execution", i.e. where it is clear that the sequence of operations,
comprising multiple overwrites to a scalar destination **without**
cumulative, iterative, or reductive behaviour (no "accumulator"), may
discard all but the last element operation. Identification of such
is trivial to do for `setb` and `cmp`: the source register type is a
completely different register file from the destination. Likewise Scalar
reduction when the destination is a Vector is as if the Reduction Mode
was not requested. However it would clearly be unacceptable to perform
such optimisations on cache-inhibited LD/ST, so some considerable care
needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

```
# add RT, RA, RB but when RT==RA
for i in range(VL):
    iregs[RA] += iregs[RB+i] # RT==RA
```

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily **terminates** at the first scalar operation. Only by
marking the operation as "mapreduce" will it continue to issue multiple
sub-looped (element) instructions in `Program Order`.

To perform the loop in reverse order, the `RG` (reverse gear) bit
must be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse. `sv.add/rg
r3.v, r4.v, r4.v` for example will start at the opposite end of the
Vector and push a cumulative series of overlapping add operations into
the Execution units of the underlying hardware.
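
For example with VL=4 the following element operations would be issued,
in this order (a sketch: the register numbers follow from the overlap
of `r3.v` with `r4.v`):

```
# sv.add/rg r3.v, r4.v, r4.v: element i is "add r3+i, r4+i, r4+i",
# issued in reverse (element 3 first)
add r6, r7, r7 # element 3
add r5, r6, r6 # element 2: reads the r6 written above
add r4, r5, r5 # element 1: likewise reads the new r5
add r3, r4, r4 # element 0 (last issued)
```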

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]],
bmset), as a way to construct a value quickly from multiple arbitrary
bit-ranges and bit-offsets. Using the same register as both the source
and destination, with Vectors of different offsets, masks and values to
be inserted, has multiple applications including Video, cryptography and
JIT compilation.

```
# assume VL=4:
# * Vector of shift-offsets contained in RC (r12.v)
# * Vector of masks contained in RB (r8.v)
# * Vector of values to be masked-in in RA (r4.v)
# * Scalar destination RT (r0) to receive all mask-offset values
sv.bmset/mr r0, r4.v, r8.v, r12.v
```

Due to the Deterministic Scheduling, Subtract and Divide are still
permitted to be executed in this mode, although from an algorithmic
perspective it is strongly discouraged. It would be better to use
addition followed by one final subtract, or in the case of divide, to get
better accuracy, to perform a multiply cascade followed by a final divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used
as both a Vector source and Scalar destination, to be utilised as the
"accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc. this
naturally fits well with the normal expected usage of these operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how `Program Order` is
preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue parallel
optimisation of the scalar reduce operation: it's just that as far as
the user is concerned, all exceptions and interrupts **MUST** be precise.

## Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]), the other for arithmetic operations (actually,
CR-driven) [[sv/normal]] and CR operations [[sv/cr_ops]]. Note in
each case the assumption is that vector elements are required to appear
to be executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst (not to be confused with *Data-Dependent* LD/ST ffirst)
  treats the first LD/ST in a vector (element 0) as an ordinary one.
  Exceptions occur "as normal". However for elements 1 and above, if an
  exception would occur, then VL is **truncated** to the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails,
  the vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set), and
  VL is truncated to either the *previous* element or the current one,
  depending on whether VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results, all of which
pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not
found in ARM SVE or RVV. At the same time it is also
"old" because it is a generalisation of the Z80 [Block
compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir) which is
based on CP (compare) as the ultimate "element" (suffix) operation
to which the repeat (prefix) is applied. It is extremely useful for
reducing instruction count, however it requires speculative execution
involving modifications of VL to get high performance implementations.
An additional mode (RC1=1) effectively turns what would otherwise be an
arithmetic operation into a type of `cmp`. The CR is stored (and the
CR.eq bit tested against the `inv` field). If the CR.eq bit is equal to
`inv` then the Vector is truncated and the loop ends. Note that when
RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always also
included in the count (the new length that VL will be set to). This may
be useful in combination with "inv" to truncate the Vector to *exclude*
elements that fail a test, or, in the case of implementations of strncpy,
to include the terminating zero.
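
A greatly-simplified sketch of the CR-driven variant (illustrative
only: predication, elwidth overrides and the precise CR-field selection
are all omitted, and the helper names are invented):

```
def element_op(a, b): return a + b      # stand-in for the suffix op
def calc_cr_eq(result): return result == 0

def ddffirst(VL, RA, RB, RT, inv, VLi, RC1, iregs, CRs):
    for i in range(VL):
        result = element_op(iregs[RA + i], iregs[RB + i])
        CRs[i] = calc_cr_eq(result)      # as if Rc=1
        if CRs[i] == inv:                # bit-test failed
            if not RC1 and VLi:
                iregs[RT + i] = result   # current element kept
            return i + 1 if VLi else i   # truncated VL (may be 0)
        if not RC1:                      # RC1=1 stores only CRs
            iregs[RT + i] = result
    return VL                            # no truncation
```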

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a vectorised crop
(crand, cror) may be used, and ffirst applied to the crop instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the
  first element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be truncated
based explicitly on whether a test fails. This is because it is a precise
test on which algorithms will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First. REMAP
will need to be activated to invert the ordering of element traversal.*

### Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result do not
also in turn have an Rc=1 mode. However it makes no sense to try to test
the 4 bits of a CR Field for being equal or not equal to zero. Moreover,
the result is already in the form that is desired: it is a CR field.
Therefore, CR-based operations have their own SVP64 Mode, described in
[[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

## pred-result mode

Pred-result mode may not be applied on CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have an
Rc=1 mode or for which an RC1 option makes sense.

Predicate-result merges common CR testing with predication, saving
on instruction count. In essence, a Condition Register Field test is
performed, and if it fails it is considered to have been *as if* the
destination predicate bit was zero. Given that there are no CR-based
operations that produce Rc=1 co-results, there can be no pred-result
mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

## CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility for indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration the
exact same mapping used for INT and FP regfiles may be applied, just to
the upper bits, as explained below. Firstly and most importantly a new
notation `CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]` which
accesses one bit of the 32 bit Power ISA v3.0B Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

```
CR{n} = CR[32+n*4:35+n*4]
```

For SVP64 the relationship for the sequential numbering of elements is to
the CR **fields** within the CR Register, not to individual bits within
the CR register.

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BT/BA/BB are all 5 bits (BF is 3 bits, selecting
a CR Field directly). The top 3 bits (0:2) select one of the 8 CRs;
the bottom 2 bits (3:4) select one of 4 bits *in* that CR
(LT/GT/EQ/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

```
CR_index = (BA>>2) # top 3 bits
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index} # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which SV EXTRA2/3
applies, **not** the `CR_bit` portion (bits 3-4):

```
if extra3_mode:
    spec = EXTRA3
else:
    spec = EXTRA2<<1 | 0b0
if spec[0]:
    # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
    return ((BA >> 2)<<6) | # hi 3 bits shifted up
           (spec[1:2]<<4) | # to make room for these
           (BA & 0b11)      # CR_bit on the end
else:
    # scalar constructs "00 spec[1:2] BA[0:4]"
    return (spec[1:2] << 5) | BA
```

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

```
CR_index = (BA>>2) # top 3 bits
if spec[0]:
    # vector mode, 0-124 increments of 4
    CR_index = (CR_index<<4) | (spec[1:2] << 2)
else:
    # scalar mode, 0-32 increments of 1
    CR_index = (spec[1:2]<<3) | CR_index
# same as for v3.0/v3.1 from this point onwards
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index} # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

Note here that the decoding pattern to determine CR\_bit does not change.

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

### CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When vectorized, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operation (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

```
for i in range(VL):
    # calculate the vector result of an add
    iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
    # now calculate CR bits
    CRs{8+i}.eq = iregs[RT+i] == 0
    CRs{8+i}.gt = iregs[RT+i] > 0
    ... etc
```

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]: some alternative schemes are described there.)

### Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND behaviour, as sketched below.
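
One possible interpretation in pseudocode (an illustrative sketch only:
the choice of OE=1 selecting AND rather than OR is an assumption, not a
ratified decision):

```
# one CR field per subvector, combining the per-element tests.
# OE (otherwise ignored by SVP64) selects AND (all) vs OR (any).
for i in range(VL):
    tests = [results[i*SUBVL + j] == 0 for j in range(SUBVL)]
    CRs[offs + i].eq = all(tests) if OE else any(tests)
```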

#### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

## Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

## SV pseudocode illustration

### Single-predicated Instruction

Illustration of the normal mode add operation: zeroing not included,
elwidth overrides not included. If there is no predicate, it is set to
all 1s.

```
function op_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get_pred_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    STATE.srcoffs = i # save context
    if (predval & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
       if (!rd.isvec) break;
    if (rd.isvec)  { id += 1; }
    if (rs1.isvec) { irs1 += 1; }
    if (rs2.isvec) { irs2 += 1; }
    if (id == VL or irs1 == VL or irs2 == VL) {
      # end VL hardware loop
      STATE.srcoffs = 0; # reset
      return;
    }
```

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear
that each element of the Vector source should be added to the Scalar
source, each result placed into the Vector (or, if the destination is a
scalar, only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

## Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

```
svp64 [field=value]*
```

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

Similar to the x86 "REX" prefix.

For actual assembler:

```
sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
```

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 set, sub-vector mapreduce is enabled

## Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
Abstraction of hardware-looping in between issue and execute phases
that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD)

```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note: if passed a predicate mask with only one bit set,
this algorithm takes no action, similar to when a predicate mask is
all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then limit
the Reduction Sequence for all implementors based solely and exclusively
on what one specific internal microarchitecture does. In SIMD ISAs
the internal SIMD Architectural design is exposed and imposed on the
programmer. Cray-style Vector ISAs on the other hand provide convenient,
compact and efficient encodings of abstract concepts.* **It is the
Implementor's responsibility to produce a design that complies with the
above algorithm, utilising internal Micro-coding and other techniques to
transparently insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

## Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the c programming language. The following should be taken
literally, and assume always a little-endian layout:

```
#pragma pack
typedef union {
    uint8_t  b[];
    uint16_t s[];
    uint32_t i[];
    uint64_t l[];
    uint8_t  actual_bytes[8];
} el_reg_t;

el_reg_t int_regfile[128];
```

Get and set of registers is defined below, given a register (number),
element bitwidth and element offset; all arithmetic, numbering and
pseudo-Memory format is LE-endian and LSB0-numbered:

```
el_reg_t get_polymorphed_reg(reg, bitwidth, offset):
    el_reg_t res; // result
    res.l = 0; // TODO: going to need sign-extending / zero-extending
    if !reg.isvec: // scalar access has no element offset
        offset = 0
    if bitwidth == 8:
        res.b = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(reg, bitwidth, offset, val):
    if (!reg.isvec):
        # for safety mask out hi bits
        bitmask = (1 << bitwidth) - 1
        val &= bitmask
        # not a vector: first element only, overwrites high bits.
        # and with the *Architectural* definition being LE,
        # storing in the first DWORD works perfectly.
        int_regfile[reg].l[0] = val
    elif bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val
```

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.
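
For example (an illustrative consequence of the union typedef above,
with ew=8): two Vectors whose starting points differ by one register
access the very same underlying bytes, one register's worth of
elements apart.

```
# element 9 of a byte-vector starting at r1 is the same storage
# as element 1 of a byte-vector starting at r2
int_regfile[1].b[9] == int_regfile[2].b[1]
```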

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

```
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
```

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

## Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update. Some mathematical
algorithms are more efficient when there are two outputs rather than one,
providing feedback loops between elements (the most well-known being add
with carry). 64-bit multiply for example actually internally produces
a 128 bit result, which clearly cannot be stored in a single 64 bit
register. Some ISAs recommend "macro op fusion": the practice of setting
a convention whereby if two commonly used instructions (mullo, mulhi) use
the same ALU but one selects the low part of an identical operation and
the other selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only be applied
to a single instruction at a time, and SVP64 is based on the principle of
strict Program Order even at the element level. Thus it becomes necessary
to add explicit, more complex single instructions with more operands than
would normally be seen in the average RISC ISA (3-in, 2-out, in some
cases). If it was not for Power ISA already having LD/ST with update as
well as Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes being only
32 bit, 5 operands is quite an ask. `lq` however sets a precedent: `RTp`
stands for "RT pair". In other words the result is stored in RT and RT+1.
For Scalar operations, following this precedent is perfectly reasonable.
In Scalar mode, `maddedu` therefore stores the two halves of the 128-bit
multiply into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to RT and
RT+1 the instruction is not useful when Vectorised because the output
will be overwritten on the next element. To solve this is easy: define
the destination registers as RT and RT+MAXVL respectively. This makes
it easy for compilers to statically allocate registers even when VL
changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken into
consideration, the starting point for the implicit destination is
best illustrated in pseudocode:

```
# demo of maddedu
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
    if (RC.isvec) { irs3 += 1; }
```

The significant part here is that the second half is stored
starting not from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

```
      LSB0:  63:32      31:0
      MSB0:   0:31     32:63
r0  unchanged  unchanged
r1  RT1.lo     RT0.lo
r2  unchanged  RT2.lo
r3  RT0.hi     unchanged
r4  RT2.hi     RT1.hi
r5  unchanged  unchanged
```

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being 5 and
elwidth being 32, this is the 5th element offset (in 32 bit quantities)
counting from r1.
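
That offset arithmetic may be sanity-checked with a small python sketch
(the helper is illustrative, not part of the specification):

```
# compute (register, word-index) of element idx of a 32-bit-elwidth
# Vector starting at GPR RT, viewing the regfile as byte-addressable
def elem_location(RT, idx, ew_bytes=4):
    byte = RT * 8 + idx * ew_bytes
    return byte // 8, (byte % 8) // ew_bytes  # (reg, word in reg)

MAXVL, RT = 5, 1
for i in range(3): # VL=3
    print("lo", elem_location(RT, i), "hi", elem_location(RT, i + MAXVL))
# lo (1, 0) hi (3, 1)
# lo (1, 1) hi (4, 0)
# lo (2, 0) hi (4, 1)
```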

*Programmer's note: accessing registers that have been placed starting
on a non-contiguous boundary (half-way along a scalar register) can
be inconvenient: REMAP can provide an offset but it requires extra
instructions to set up. A simple solution is to ensure that MAXVL is
rounded up such that the Vector ends cleanly on a contiguous register
boundary. MAXVL=6 in the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form with an implicit
2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]

[[!tag standards]]

------

\newpage{}
