# Appendix

* <https://bugs.libre-soc.org/show_bug.cgi?id=574> Saturation
* <https://bugs.libre-soc.org/show_bug.cgi?id=558#c47> Parallel Prefix
* <https://bugs.libre-soc.org/show_bug.cgi?id=697> Reduce Modes
* <https://bugs.libre-soc.org/show_bug.cgi?id=864> parallel prefix simulator
* <https://bugs.libre-soc.org/show_bug.cgi?id=809> OV sv.addex discussion
* ARM SVE Fault-first <https://alastairreid.github.io/papers/sve-ieee-micro-2017.pdf>

This is the appendix to [[sv/svp64]], providing explanations of modes
etc., leaving the main svp64 page's primary purpose as outlining the
instruction format.

Table of contents:

[[!toc]]

## Partial Implementations

It is perfectly legal to implement subsets of SVP64 as long as illegal
instruction traps are always raised on unimplemented features,
so that soft-emulation is possible,
even for future revisions of SVP64. With SVP64 being partly controlled
through contextual SPRs, a little care has to be taken.

**All** SPRs
not implemented, including reserved ones for future use, must raise an
illegal instruction trap if read or written. This allows software the
opportunity to emulate the context created by the given SPR.

See [[sv/compliancy_levels]] for full details.

## XER, SO and other global flags

Vector systems are expected to be high performance. This is achieved
through parallelism, which requires that elements in the vector be
independent. XER SO/OV and other global "accumulation" flags (CR.SO) cause
Read-Write Hazards on single-bit global resources, having a significant
detrimental effect.

Consequently in SV, XER.SO behaviour is disregarded (including
in `cmp` instructions). XER.SO is not read, but XER.OV may be written,
breaking the Read-Modify-Write Hazard Chain that complicates
microarchitectural implementations.
This includes when `scalar identity behaviour` occurs. If precise
OpenPOWER v3.0/1 scalar behaviour is desired then OpenPOWER v3.0/1
instructions should be used without an SV Prefix.

TODO: jacob to add about OV
<https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/ia-large-integer-arithmetic-paper.pdf>

Of note here is that XER.SO and OV may already be disregarded in the
Power ISA v3.0/1 SFFS (Scalar Fixed and Floating) Compliancy Subset.
SVP64 simply makes it mandatory to disregard XER.SO even for other Subsets,
but only for SVP64 Prefixed Operations.

XER.CA/CA32 on the other hand is expected and required to be implemented
according to standard Power ISA Scalar behaviour. Interestingly, due
to SVP64 being in effect a hardware for-loop around Scalar instructions
executing in precise Program Order, a little thought shows that a Vectorised
Carry-In-Out add is in effect a Big Integer Add, taking a single bit Carry In
and producing, at the end, a single bit Carry out. High performance
implementations may exploit this observation to deploy efficient
Parallel Carry Lookahead.

```
# assume VL=4, this results in 4 sequential ops (below)
sv.adde r0.v, r4.v, r8.v

# instructions that get executed in backend hardware:
adde r0, r4, r8 # takes carry-in, produces carry-out
adde r1, r5, r9 # takes carry from previous
...
adde r3, r7, r11 # likewise
```

It can clearly be seen that the carry chains from one
64 bit add to the next, the end result being that a
256-bit "Big Integer Add with Carry" has been performed, and that
CA contains the 257th bit. A one-instruction 512-bit Add-with-Carry
may be performed by setting VL=8, and a one-instruction
1024-bit Add-with-Carry by setting VL=16, and so on. More on
this in [[openpower/sv/biginteger]].
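
A minimal Python sketch (hypothetical names: a plain `gpr` array standing
in for the register file, 64-bit elements) makes the carry chain explicit:

```
def sv_adde(gpr, RT, RA, RB, VL, ca):
    # hardware for-loop in strict Program Order: CA ripples from
    # element to element, exactly like a Big Integer ripple-carry add
    for i in range(VL):
        s = gpr[RA + i] + gpr[RB + i] + ca
        gpr[RT + i] = s & ((1 << 64) - 1)
        ca = s >> 64
    return ca  # single Carry-out bit: the "257th bit" when VL=4
```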

## EXTRA Field Mapping

The purpose of the 9-bit EXTRA field mapping is to mark individual
registers (RT, RA, BFA) as either scalar or vector, and to extend
their numbering from 0..31 in Power ISA v3.0 to 0..127 in SVP64.
Three of the 9 bits may also be used up for a 2nd Predicate (Twin
Predication), leaving a mere 6 bits for qualifying registers. As can
be seen there is significant pressure on these (and in fact all) SVP64 bits.

In Power ISA v3.1 prefixing there are bits which describe and classify
the prefix in a fashion that is independent of the suffix (MLSS for
example). For SVP64 there is insufficient space to make the SVP64 Prefix
"self-describing", and consequently every single Scalar instruction
had to be individually analysed, by rote, to craft an EXTRA Field Mapping.
This process was semi-automated and is described in this section.
The final results, which are part of the SVP64 Specification, are here:
[[openpower/opcode_regs_deduped]]

* Firstly, every instruction's mnemonic (`add RT, RA, RB`) was analysed
  from reading the markdown formatted version of the Scalar pseudocode which
  is machine-readable and found in [[openpower/isatables]]. The analysis
  gives, by instruction, a "Register Profile". `add RT, RA, RB` for
  example is given a designation `RM-2R-1W` because it requires two GPR
  reads and one GPR write.
* Secondly, the total number of registers was added up (2R-1W is 3
  registers) and if less than or equal to three then that instruction
  could be given an EXTRA3 designation. Four or more is given an EXTRA2
  designation because there are only 9 bits available.
* Thirdly, the instruction was analysed to see if Twin or Single
  Predication was suitable. As a general rule this was if there
  was only a single operand and a single result (`extsw` and LD/ST);
  however it was found that some 2 or 3 operand instructions also
  qualify. Given that 3 of the 9 bits of EXTRA had to be sacrificed for use
  in Twin Predication, some compromises were made here. LDST is
  Twin but also has 3 operands in some operations, so only EXTRA2 can be used.
* Fourthly, a packing format was decided: for 2R-1W with an EXTRA3
  designation, RA is indexed 0 (EXTRA bits 0-2), RB indexed 1 (EXTRA
  bits 3-5) and RT indexed 2 (EXTRA bits 6-8); a decode sketch follows
  this list. In some
  cases (LD/ST with update) RA-as-a-source is given a **different** EXTRA
  index from RA-as-a-result (because it is possible to do, and perceived
  to be useful). Rc=1 co-results (CR0, CR1) are always given the same
  EXTRA index as their main result (RT, FRT).
* Fifthly, in an automated process the results of the analysis were
  output in CSV Format for use in machine-readable form by sv_analysis.py
  <https://git.libre-soc.org/?p=openpower-isa.git;a=blob;f=src/openpower/sv/sv_analysis.py;hb=HEAD>
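
As an illustration of the resulting encoding, here is a sketch of EXTRA3
decoding for a GPR operand. It follows the same pattern as the CR decode
shown later in this appendix (`spec` is the 3-bit EXTRA3 field, treated
MSB0 so that spec[0] is the vector/scalar flag); consult the ratified CSV
tables for the actual per-instruction assignments:

```
def gpr_extra3_decode(RA, spec): # RA: 5-bit field, spec: 3-bit EXTRA3
    if spec & 0b100: # spec[0] set: Vector, extend at the LSB end
        return (RA << 2) | (spec & 0b11) # full 0-127 range
    else: # Scalar, extend at the MSB end
        return ((spec & 0b11) << 5) | RA # r0-r127 in blocks of 32
```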

This process was laborious but logical, and, crucially, once a decision
is made (and ratified) it cannot be reversed. Those qualifying future
Power ISA Scalar instructions for SVP64 are **strongly** advised to
utilise this same process and the same sv_analysis.py program as a
canonical method of maintaining the relationships. Alterations to that
same program which change the Designation are **prohibited** once
finalised (ratified through the Power ISA WG Process). It would be
similar to deciding that `add` should be changed from X-Form to D-Form.

## Single Predication <a name="1p"> </a>

This is a standard mode normally found in Vector ISAs. Every element
in every source Vector and in the destination uses the same bit of one
single predicate mask.

In SVSTATE, for Single-predication, implementors MUST increment both
srcstep and dststep, but depending on whether sz and/or dz are set,
srcstep and dststep can still potentially become different indices.
Only when sz=dz is srcstep guaranteed to equal dststep at all times.

Note that in some Mode Formats there is only one flag (zz). This indicates
that *both* sz *and* dz are set to the same value.

Example 1:

* VL=4
* mask=0b1101
* sz=1, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 1 | 2 | sz=1 but dz=0: dst skips mask[1], src does not |
| 2 | 3 | mask[src=2] and mask[dst=3] are 1 |
| end | end | loop has ended because dst reached VL-1 |

Example 2:

* VL=4
* mask=0b1101
* sz=0, dz=1

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 1 | sz=0 but dz=1: src skips mask[1], dst does not |
| 3 | 2 | mask[src=3] and mask[dst=2] are 1 |
| end | end | loop has ended because src reached VL-1 |

In both these examples it is crucial to note that despite there being
a single predicate mask, with sz and dz being different, srcstep and
dststep are being requested to react differently.

Example 3:

* VL=4
* mask=0b1101
* sz=0, dz=0

The following schedule for srcstep and dststep will occur:

| srcstep | dststep | comment |
| ---- | ----- | -------- |
| 0 | 0 | both mask[src=0] and mask[dst=0] are 1 |
| 2 | 2 | sz=0 and dz=0: both src and dst skip mask[1] |
| 3 | 3 | mask[src=3] and mask[dst=3] are 1 |
| end | end | loop has ended because src and dst reached VL-1 |

Here, both srcstep and dststep remain in lockstep because sz=dz=0.
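
The schedules above can be reproduced with a small generator (a sketch
only, not the canonical pseudocode: it models the stepping, not the
zeroing writes themselves; `mask` is an LSB0 bit-vector):

```
def single_pred_steps(VL, mask, sz, dz):
    # zeroing (sz/dz=1) means a masked-out index is *not* skipped:
    # a zero is substituted instead, so the step still lands on it
    src = dst = 0
    while True:
        while src < VL and not sz and not (mask >> src) & 1:
            src += 1 # sz=0: skip masked-out source elements
        while dst < VL and not dz and not (mask >> dst) & 1:
            dst += 1 # dz=0: skip masked-out dest elements
        if src >= VL or dst >= VL:
            break # loop ends when either side reaches VL
        yield src, dst
        src, dst = src + 1, dst + 1

# Example 1: list(single_pred_steps(4, 0b1101, sz=1, dz=0))
#            gives [(0, 0), (1, 2), (2, 3)]
```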

## Twin Predication <a name="2p"> </a>

This is a novel concept that allows predication to be applied to a single
source and a single dest register. The following types of traditional
Vector operations may be encoded with it, *without requiring explicit
opcodes to do so*:

* VSPLAT (a single scalar distributed across a vector)
* VEXTRACT (like LLVM IR [`extractelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#extractelement-instruction))
* VINSERT (like LLVM IR [`insertelement`](https://releases.llvm.org/11.0.0/docs/LangRef.html#insertelement-instruction))
* VCOMPRESS (like LLVM IR [`llvm.masked.compressstore.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-compressstore-intrinsics))
* VEXPAND (like LLVM IR [`llvm.masked.expandload.*`](https://releases.llvm.org/11.0.0/docs/LangRef.html#llvm-masked-expandload-intrinsics))

Those patterns (and more) may be applied to:

* mv (the usual way that V\* ISA operations are created)
* exts\* sign-extension
* rlwinm and other RS-RA shift operations (**note**: excluding
  those that take RA as both a src and dest. These are not
  1-src 1-dest, they are 2-src, 1-dest)
* LD and ST (treating AGEN as one source)
* FP fclass, fsgn, fneg, fabs, fcvt, frecip, fsqrt etc.
* Condition Register ops mfcr, mtcr and other similar

This is a huge list that creates extremely powerful combinations,
particularly given that one of the predicate options is `(1<<r3)`.

Additional unusual capabilities of Twin Predication include a back-to-back
version of VCOMPRESS-VEXPAND which is effectively the ability to do
sequentially ordered multiple VINSERTs. The source predicate selects a
sequentially ordered subset of elements to be inserted; the destination
predicate specifies the sequentially ordered recipient locations.
This is equivalent to
`llvm.masked.compressstore.*`
followed by
`llvm.masked.expandload.*`
with a single instruction, but abstracted out from Load/Store and applicable
in general to any 2P instruction.
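
A sketch of that back-to-back effect on a Twin-Predicated move
(hypothetical helper; no zeroing; a plain `gpr` array standing in for
the register file):

```
def twin_pred_mv(gpr, RT, RA, VL, srcmask, dstmask):
    src = dst = 0
    while src < VL and dst < VL:
        while src < VL and not (srcmask >> src) & 1:
            src += 1 # compress: pick the enabled source elements
        while dst < VL and not (dstmask >> dst) & 1:
            dst += 1 # expand: pick the enabled dest locations
        if src >= VL or dst >= VL:
            break
        gpr[RT + dst] = gpr[RA + src] # sequentially-ordered insert
        src, dst = src + 1, dst + 1
```

With srcmask all-ones this behaves as VEXPAND, with dstmask all-ones as
VCOMPRESS, and with both sparse it is the compressstore-then-expandload
combination described above.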

This extreme power and flexibility comes down to the fact that SVP64
is not actually a Vector ISA: it is a loop-abstraction-concept that
is applied *in general* to Scalar operations, just like the x86 `REP`
prefix (on steroids).

## Pack/Unpack

The pack/unpack concept of VSX `vpack` is abstracted out as Sub-Vector
reordering. Two bits in the `SVSHAPE` [[sv/spr]] enable either "packing"
or "unpacking" on the subvectors vec2/3/4.

First, illustrating a "normal" SVP64 operation with `SUBVL!=1` (assuming
no elwidth overrides), note that the VL loop is outer and the SUBVL
loop inner:

```
def index():
    for i in range(VL):
        for j in range(SUBVL):
            yield i*SUBVL+j

for idx in index():
    operation_on(RA+idx)
```

For pack/unpack (again, no elwidth overrides), note that now there is the
option to swap the SUBVL and VL loop orders.
In effect the Pack/Unpack performs a Transpose of the subvector elements.
Illustrated this time with a GPR mv operation:

```
# yield element offsets with either SUBVL as the outer loop
# (the transposed, pack/unpack ordering) or VL as the outer loop
def index_p(outer):
    if outer:
        for j in range(SUBVL): # subvl is outer
            for i in range(VL): # vl is inner
                yield i*SUBVL+j # transposed traversal order
    else:
        for i in range(VL): # vl is outer
            for j in range(SUBVL): # subvl is inner
                yield i*SUBVL+j # sequential traversal order

# walk through both source and dest indices simultaneously:
# UNPACK transposes the source traversal, PACK the destination's
for src_idx, dst_idx in zip(index_p(UNPACK), index_p(PACK)):
    move_operation(RT+dst_idx, RA+src_idx)
```

Python's `yield` is used here for simplicity and clarity.
The two Finite State Machines for the generation of the source
and destination element offsets progress incrementally in
lock-step.

Example: VL=2, SUBVL=3, PACK_en=1. Elements grouped by
vec3 will be redistributed such that Sub-elements 0 are
packed together, Sub-elements 1 are packed together, as
are Sub-elements 2.

```
      srcstep=0   srcstep=1
      0 1 2       3 4 5

      dststep=0   dststep=1   dststep=2
      0 3         1 4         2 5
```

Setting of both `PACK` and `UNPACK` is neither prohibited nor `UNDEFINED`
because the reordering is fully deterministic, and additional REMAP
reordering may be applied. Combined with Matrix REMAP this would give
potentially up to 4 Dimensions of reordering.

Pack/Unpack has quirky interactions on [[sv/mv.swizzle]] because it can
set a different subvector length for the destination, and has a slightly
different pseudocode algorithm for Vertical-First Mode.

Ordering is as follows:

* SVSHAPE srcstep, dststep, ssubstep and dsubstep are advanced sequentially
  depending on PACK/UNPACK.
* srcstep and dststep are pushed through REMAP to compute actual Element
  offsets.
* Swizzle is independently applied to ssubstep and dsubstep.

Pack/Unpack is enabled (set up) through [[sv/svstep]].

## Reduce modes

Reduction in SVP64 is deterministic and somewhat of a misnomer.
A normal Vector ISA would have explicit Reduce opcodes with defined
characteristics per operation: in SX Aurora there is even an additional
scalar argument containing the initial reduction value, and the default
is either 0 or 1 depending on the specifics of the explicit opcode.
SVP64 fundamentally has to utilise *existing* Scalar Power ISA v3.0B
operations, which presents some unique challenges.

The solution turns out to be to simply define reduction as permitting
deterministic element-based schedules to be issued using the base Scalar
operations, and to rely on the underlying microarchitecture to resolve
Register Hazards at the element level. This goes back to the fundamental
principle that SV is nothing more than a Sub-Program-Counter sitting
between Decode and Issue phases.

For Scalar Reduction, Microarchitectures *may* take opportunities to
parallelise the reduction but only if in doing so they preserve strict
Program Order at the Element Level. Opportunities where this is possible
include an `OR` operation or a MIN/MAX operation: it may be possible to
parallelise the reduction, but for Floating Point it is not permitted
due to different results being obtained if the reduction is not executed
in strict Program-Sequential Order.

In essence it becomes the programmer's responsibility to leverage the
pre-determined schedules to desired effect.
### Scalar result reduction and iteration

Scalar Reduction per se does not exist; instead it is implemented in SVP64
as a simple and natural relaxation of the usual restriction on the Vector
Looping which would terminate if the destination was marked as a Scalar.
Scalar Reduction by contrast *keeps issuing Vector Element Operations*
even though the destination register is marked as scalar *and*
the same register is used as a source register. Thus it is
up to the programmer to be aware of this, observe some conventions,
and thus end up achieving the desired outcome of scalar reduction.

It is also important to appreciate that there is no actual imposition or
restriction on how this mode is utilised: there will therefore be several
valuable uses (including Vector Iteration and "Reverse-Gear") and it is
up to the programmer to make best use of the (strictly deterministic)
capability provided.

In this mode, which is suited to operations involving carry or overflow,
one register must be assigned, by convention, by the programmer to be the
"accumulator". Scalar reduction is thus categorised by:

* One of the sources is a Vector
* the destination is a scalar
* optionally but most usefully when one source scalar register is
  also the scalar destination (which may be informally termed by
  convention the "accumulator")
* That the source register type is the same as the destination register
  type identified as the "accumulator". Scalar reduction on `cmp`,
  `setb` or `isel` makes no sense for example because of the mixture
  between CRs and GPRs.

*Note that instructions issued in Scalar reduce mode such as `setb`
are neither `UNDEFINED` nor prohibited, despite them not making much
sense at first glance. Scalar reduce is strictly defined behaviour,
and the cost in hardware terms of prohibition of seemingly non-sensical
operations is too great. Therefore it is permitted and required to
be executed successfully. Implementors **MAY** choose to optimise
such instructions in instances where their use results in "extraneous
execution", i.e. where it is clear that the sequence of operations,
comprising multiple overwrites to a scalar destination **without**
cumulative, iterative, or reductive behaviour (no "accumulator"), may
discard all but the last element operation. Identification of such
is trivial to do for `setb` and `cmp`: the source register type is a
completely different register file from the destination. Likewise Scalar
reduction when the destination is a Vector is as if the Reduction Mode
was not requested. However it would clearly be unacceptable to perform
such optimisations on cache-inhibited LD/ST, so some considerable care
needs to be taken.*

Typical applications include simple operations such as `ADD r3, r10.v,
r3` where, clearly, r3 is being used to accumulate the addition of all
elements of the vector starting at r10.

```
# add RT, RA, RB but with RT==RA
for i in range(VL):
    iregs[RA] += iregs[RB+i] # RT==RA
```

However, *unless* the operation is marked as "mapreduce" (`sv.add/mr`)
SV ordinarily **terminates** at the first scalar operation. Only by
marking the operation as "mapreduce" will it continue to issue multiple
sub-looped (element) instructions in `Program Order`.

To perform the loop in reverse order, the ```RG``` (reverse gear) bit
must be set. This may be useful in situations where the results may be
different (floating-point) if executed in a different order. Given that
there is no actual prohibition on Reduce Mode being applied when the
destination is a Vector, the "Reverse Gear" bit turns out to be a way to
apply Iterative or Cumulative Vector operations in reverse. `sv.add/rg
r3.v, r4.v, r4.v` for example will start at the opposite end of the
Vector and push a cumulative series of overlapping add operations into
the Execution units of the underlying hardware.
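
A sketch of the resulting schedule (hypothetical `gpr` array; RT doubles
as accumulator and scalar destination, as in `sv.add/mr` with the
reverse-gear variant selecting the traversal direction):

```
def sv_add_mapreduce(gpr, RT, RB, VL, reverse_gear=False):
    # mapreduce mode: element ops keep issuing even though RT is scalar
    steps = reversed(range(VL)) if reverse_gear else range(VL)
    for i in steps:
        gpr[RT] = gpr[RT] + gpr[RB + i] # strict Program Order
```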

Other examples include shift-mask operations where a Vector of inserts
into a single destination register is required (see [[sv/bitmanip]],
bmset), as a way to construct a value quickly from multiple arbitrary
bit-ranges and bit-offsets. Using the same register as both the source
and destination, with Vectors of different offsets, masks and values to
be inserted, has multiple applications including Video, cryptography and
JIT compilation.

```
# assume VL=4:
# * Vector of shift-offsets contained in RC (r12.v)
# * Vector of masks contained in RB (r8.v)
# * Vector of values to be masked-in in RA (r4.v)
# * Scalar destination RT (r0) to receive all mask-offset values
sv.bmset/mr r0, r4.v, r8.v, r12.v
```

Due to the Deterministic Scheduling, Subtract and Divide are still
permitted to be executed in this mode, although from an algorithmic
perspective it is strongly discouraged. It would be better to use
addition followed by one final subtract, or, in the case of divide, to
get better accuracy, to perform a multiply cascade followed by a final
divide.

Note that single-operand or three-operand scalar-dest reduce is perfectly
well permitted: the programmer may still declare one register, used
as both a Vector source and Scalar destination, to be utilised as the
"accumulator". In the case of `sv.fmadds` and `sv.maddhw` etc. this
naturally fits well with the normal expected usage of these operations.

If an interrupt or exception occurs in the middle of the scalar mapreduce,
the scalar destination register **MUST** be updated with the current
(intermediate) result, because this is how ```Program Order``` is
preserved (Vector Loops are to be considered to be just another way
of issuing instructions in Program Order). In this way, after return
from interrupt, the scalar mapreduce may continue where it left off.
This provides "precise" exception behaviour.

Note that hardware is perfectly permitted to perform multi-issue parallel
optimisation of the scalar reduce operation: it's just that as far as
the user is concerned, all exceptions and interrupts **MUST** be precise.

## Fail-on-first <a name="fail-first"> </a>

Data-dependent fail-on-first has two distinct variants: one for LD/ST
(see [[sv/ldst]]), the other for arithmetic operations (actually,
CR-driven) [[sv/normal]] and CR operations [[sv/cr_ops]]. Note in each
case the assumption is that vector elements are required to appear to be
executed in sequential Program Order, element 0 being the first.

* LD/ST ffirst (not to be confused with *Data-Dependent* LD/ST ffirst)
  treats the first LD/ST in a vector (element 0) as an ordinary one.
  Exceptions occur "as normal" on the first element. However for elements
  1 and above, if an exception would occur, then VL is **truncated**
  to the previous element.
* Data-driven (CR-driven) fail-on-first activates when Rc=1 or another
  CR-creating operation produces a result (including cmp). Similar to
  branch, an analysis of the CR is performed and if the test fails,
  the vector operation terminates and discards all element operations
  above the current one (and the current one if VLi is not set), and
  VL is truncated to either the *previous* element or the current one,
  depending on whether VLi (VL "inclusive") is set.

Thus the new VL comprises a contiguous vector of results, all of which
pass the testing criteria (equal to zero, less than zero).

The CR-based data-driven fail-on-first is new and not
found in ARM SVE or RVV. At the same time it is also
"old" because it is a generalisation of the Z80 [Block
compare](https://rvbelzen.tripod.com/z80prgtemp/z80prg04.htm)
instructions, especially
[CPIR](http://z80-heaven.wikidot.com/instructions-set:cpir) which is
based on CP (compare) as the ultimate "element" (suffix) operation
to which the repeat (prefix) is applied. It is extremely useful for
reducing instruction count; however it requires speculative execution
involving modifications of VL to get high performance implementations.
An additional mode (RC1=1) effectively turns what would otherwise be an
arithmetic operation into a type of `cmp`. The CR is stored (and the
CR.eq bit tested against the `inv` field). If the CR.eq bit is equal to
`inv` then the Vector is truncated and the loop ends. Note that when
RC1=1 the result elements are never stored, only the CRs.

VLi is only available as an option when `Rc=0` (or for instructions
which do not have Rc). When set, the current element is always also
included in the count (the new length that VL will be set to). This may
be useful in combination with "inv" to truncate the Vector to *exclude*
elements that fail a test, or, in the case of implementations of strncpy,
to include the terminating zero.
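
A sketch of the truncation rule on a copy operation (testing only CR.eq,
i.e. "result is zero"; helper name and register-file representation are
hypothetical):

```
def ddffirst_copy(src, dest, VL, inv, VLi):
    for i in range(VL):
        result = src[i]
        if (result == 0) == inv: # CR.eq equals inv: test failed
            if VLi: # inclusive: the current element is kept
                dest[i] = result
                return i + 1 # new VL includes this element
            return i # exclusive: new VL may be zero
        dest[i] = result
    return VL # no truncation occurred
```

For the strncpy pattern, `inv=1` with `VLi=1` truncates at the first zero
byte while still copying it, giving the terminating zero described above.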

In CR-based data-driven fail-on-first there is only the option to select
and test one bit of each CR (just as with branch BO). For more complex
tests this may be insufficient. If that is the case, a Vectorised CR op
(crand, cror) may be used, and ffirst applied to that CR op instead of to
the arithmetic vector.

One extremely important aspect of ffirst is:

* LDST ffirst may never set VL equal to zero. This is because on the first
  element an exception must be raised "as normal".
* CR-based data-dependent ffirst on the other hand **can** set VL equal
  to zero. This is the only means in the entirety of SV that VL may be set
  to zero (with the exception of via the SVSTATE SPR). When VL is set to
  zero due to the first element failing the CR bit-test, all subsequent
  vectorised operations are effectively `nops` which is
  *precisely the desired and intended behaviour*.

Another aspect is that for ffirst LD/STs, VL may be truncated arbitrarily
to a nonzero value for any implementation-specific reason. For example:
it is perfectly reasonable for implementations to alter VL when ffirst
LD or ST operations are initiated on a nonaligned boundary, such that
within a loop the subsequent iteration of that loop begins subsequent
ffirst LD/ST operations on an aligned boundary. Likewise, to reduce
workloads or balance resources.

CR-based data-dependent ffirst on the other hand MUST NOT truncate VL
arbitrarily to a length decided by the hardware: VL MUST only be truncated
based explicitly on whether a test fails. This is because it is a precise
test on which algorithms will rely.

*Note: there is no reverse-direction for Data-dependent Fail-First. REMAP
will need to be activated to invert the ordering of element traversal.*

### Data-dependent fail-first on CR operations (crand etc)

Operations that actually produce or alter a CR Field as a result do not
also in turn have an Rc=1 mode. However it makes no sense to try to test
the 4 bits of a CR Field for being equal or not equal to zero. Moreover,
the result is already in the form that is desired: it is a CR field.
Therefore, CR-based operations have their own SVP64 Mode, described in
[[sv/cr_ops]].

There are two primary different types of CR operations:

* Those which have a 3-bit operand field (referring to a CR Field)
* Those which have a 5-bit operand (referring to a bit within the
  whole 32-bit CR)

More details can be found in [[sv/cr_ops]].

## Pred-result mode

Pred-result mode may not be applied to CR-based operations.

Although CR operations (mtcr, crand, cror) may be Vectorised and
predicated, pred-result mode applies only to operations that have an
Rc=1 mode, or for which an RC1 option makes sense.

Predicate-result merges common CR testing with predication, saving
on instruction count. In essence, a Condition Register Field test is
performed, and if it fails it is considered to have been *as if* the
destination predicate bit was zero. Given that there are no CR-based
operations that produce Rc=1 co-results, there can be no pred-result
mode for mtcr and other CR-based instructions.

Arithmetic and Logical Pred-result, which does have Rc=1 or for which
RC1 Mode makes sense, is covered in [[sv/normal]].

## CR Operations

CRs are slightly more involved than INT or FP registers due to the
possibility of indexing individual bits (crops BA/BB/BT). Again however
the access pattern needs to be understandable in relation to v3.0B / v3.1B
numbering, with a clear linear relationship and mapping existing when
SV is applied.

### CR EXTRA mapping table and algorithm <a name="cr_extra"></a>

Numbering relationships for CR fields are already complex due to being
in BE format (*the relationship is not clearly explained in the v3.0B
or v3.1 specification*). However with some care and consideration the
exact same mapping used for INT and FP regfiles may be applied, just to
the upper bits, as explained below. Firstly and most importantly, a new
notation `CR{field number}` is used to indicate access to a particular
Condition Register Field (as opposed to the notation `CR[bit]` which
accesses one bit of the 32-bit Power ISA v3.0B Condition Register).

`CR{n}` refers to `CR0` when `n=0` and consequently, for CR0-7, is
defined, in v3.0B pseudocode, as:

```
CR{n} = CR[32+n*4:35+n*4]
```

For SVP64 the relationship for the sequential numbering of elements is to
the CR **fields** within the CR Register, not to individual bits within
the CR register.

The `CR{n}` notation is designed to give *linear sequential
numbering* in the Vector domain on a straight sequential Vector Loop.

In OpenPOWER v3.0/1, BT, BA and BB are all 5 bits. The top 3 bits (0:2)
select one of the 8 CRs; the bottom 2 bits (3:4) select one of 4 bits *in*
that CR (LT/GT/EQ/SO). The numbering was determined (after 4 months of
analysis and research) to be as follows:

```
CR_index = (BA>>2) # top 3 bits
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index} # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

When it comes to applying SV, it is the *CR Field* number `CR_reg`
to which SV EXTRA2/3
applies, **not** the `CR_bit` portion (bits 3-4):

```
if extra3_mode:
    spec = EXTRA3
else:
    spec = EXTRA2<<1 | 0b0
if spec[0]:
    # vector constructs "BA[0:2] spec[1:2] 00 BA[3:4]"
    return (((BA >> 2)<<6) | # hi 3 bits shifted up
            (spec[1:2]<<4) | # to make room for these
            (BA & 0b11))     # CR_bit on the end
else:
    # scalar constructs "00 spec[1:2] BA[0:4]"
    return (spec[1:2] << 5) | BA
```

Thus, for example, to access a given bit for a CR in SV mode, the v3.0B
algorithm to determine CR\_reg is modified as follows:

```
CR_index = (BA>>2) # top 3 bits
if spec[0]:
    # vector mode, 0-124 increments of 4
    CR_index = (CR_index<<4) | (spec[1:2] << 2)
else:
    # scalar mode, 0-31 increments of 1
    CR_index = (spec[1:2]<<3) | CR_index
# same as for v3.0/v3.1 from this point onwards
bit_index = (BA & 0b11) # low 2 bits
CR_reg = CR{CR_index} # get the CR
# finally get the bit from the CR.
CR_bit = (CR_reg & (1<<bit_index)) != 0
```

Note here that the decoding pattern to determine CR\_bit does not change.
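
For a concrete check of the above (a sketch: `spec` is taken here as a
3-bit value whose most-significant bit is spec[0], matching the MSB0
notation used in the pseudocode):

```
def sv_cr_index(BA, spec):
    CR_index = BA >> 2 # top 3 bits of BA
    if spec & 0b100: # spec[0]: vector
        return (CR_index << 4) | ((spec & 0b11) << 2)
    return ((spec & 0b11) << 3) | CR_index # scalar

# BA=0b10100 names CR5; vector spec=0b100 redirects it to CR80
assert sv_cr_index(0b10100, 0b100) == 80
assert sv_cr_index(0b10100, 0b000) == 5 # scalar: unchanged
```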

Note: high-performance implementations may read/write Vectors of CRs in
batches of aligned 32-bit chunks (CR0-7, CR8-15). This is to greatly
simplify internal design. If instructions are issued where CR Vectors
do not start on a 32-bit aligned boundary, performance may be affected.

### CR fields as inputs/outputs of vector operations

CRs (or, the arithmetic operations associated with them)
may be marked as Vectorised or Scalar. When Rc=1 in arithmetic
operations that have no explicit EXTRA to cover the CR, the CR is
Vectorised if the destination is Vectorised. Likewise if the
destination is scalar then so is the CR.

When Vectorised, the CR inputs/outputs are sequentially read/written
to 4-bit CR fields. Vectorised Integer results, when Rc=1, will begin
writing to CR8 (TBD evaluate) and increase sequentially from there.
This is so that:

* implementations may rely on the Vector CRs being aligned to 8. This
  means that CRs may be read or written in aligned batches of 32 bits
  (8 CRs per batch), for high performance implementations.
* scalar Rc=1 operations (CR0, CR1) and callee-saved CRs (CR2-4) are not
  overwritten by vector Rc=1 operations except for very large VL
* CR-based predication, from CR32, is also not interfered with
  (except by large VL).

However when the SV result (destination) is marked as a scalar by the
EXTRA field the *standard* v3.0B behaviour applies: the accompanying
CR when Rc=1 is written to. This is CR0 for integer operations and CR1
for FP operations.

Note that yes, the CR Fields are genuinely Vectorised. Unlike in SIMD VSX which
has a single CR (CR6) for a given SIMD result, SV Vectorised OpenPOWER
v3.0B scalar operations produce a **tuple** of element results: the
result of the operation as one part of that element *and a corresponding
CR element*. Greatly simplified pseudocode:

```
for i in range(VL):
    # calculate the vector result of an add
    iregs[RT+i] = iregs[RA+i] + iregs[RB+i]
    # now calculate CR bits
    CRs{8+i}.eq = iregs[RT+i] == 0
    CRs{8+i}.gt = iregs[RT+i] > 0
    ... etc
```

If a "cumulated" CR based analysis of results is desired (a la VSX CR6)
then a followup instruction must be performed, setting "reduce" mode on
the Vector of CRs, using cr ops (crand, crnor) to do so. This provides far
more flexibility in analysing vectors than standard Vector ISAs. Normal
Vector ISAs are typically restricted to "were all results nonzero" and
"were some results nonzero". The application of mapreduce to Vectorised
cr operations allows far more sophisticated analysis, particularly in
conjunction with the new crweird operations: see [[sv/cr_int_predication]].

Note in particular that the use of a separate instruction in this way
ensures that high performance multi-issue OoO implementations do not
have the computation of the cumulative analysis CR as a bottleneck and
hindrance, regardless of the length of VL.

Additionally,
SVP64 [[sv/branches]] may be used, even when the branch itself is to
the following instruction. The combined side-effects of CTR reduction
and VL truncation provide several benefits.

(see [[discussion]]. some alternative schemes are described there)

### Rc=1 when SUBVL!=1

Sub-vectors are effectively a form of Packed SIMD (length 2 to 4). Only
1 bit of predicate is allocated per subvector; likewise only one CR is
allocated per subvector.

This leaves a conundrum as to how to apply CR computation per subvector,
when normally Rc=1 is exclusively applied to scalar elements. A solution
is to perform a bitwise OR or AND of the subvector tests. Given that
OE is ignored in SVP64, this field may (when available) be used to select
OR or AND behaviour.
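
A sketch of the idea (names hypothetical; one CR.eq bit produced for the
whole vec2/3/4 group, with OE repurposed, when available, as the selector):

```
def subvector_cr_eq(sub_results, use_or):
    tests = [r == 0 for r in sub_results] # per-sub-element Rc=1 test
    return any(tests) if use_or else all(tests) # one bit per subvector
```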

#### Table of CR fields

CRn is the notation used by the OpenPower spec to refer to CR field #n,
so FP instructions with Rc=1 write to CR1 (n=1).

CRs are not stored in SPRs: they are registers in their own right.
Therefore context-switching the full set of CRs involves a Vectorised
mfcr or mtcr, using VL=8 to do so. This is exactly how
scalar OpenPOWER context-switches CRs: it is just that there are now
more of them.

The 64 SV CRs are arranged similarly to the way the 128 integer registers
are arranged. TODO: a python program that auto-generates a CSV file
which can be included in a table, which is in a new page (so as not to
overwhelm this one). [[svp64/cr_names]]

## Register Profiles

Instructions are broken down by Register Profiles as listed in the
following auto-generated page: [[opcode_regs_deduped]]. These tables,
despite being auto-generated, are part of the Specification.

## SV pseudocode illustration

### Single-predicated Instruction

Illustration of a normal mode add operation: zeroing not included, elwidth
overrides not included. If there is no predicate, it is set to all 1s.

```
function op_add(rd, rs1, rs2) # add not VADD!
    int i, id=0, irs1=0, irs2=0;
    predval = get_pred_val(FALSE, rd);
    for (i = 0; i < VL; i++)
        STATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
            ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
            if (!int_vec[rd].isvec) break;
        if (rd.isvec) { id += 1; }
        if (rs1.isvec) { irs1 += 1; }
        if (rs2.isvec) { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL) {
            # end VL hardware loop
            STATE.srcoffs = 0; # reset
            return;
        }
```

This has several modes:

* RT.v = RA.v RB.v
* RT.v = RA.v RB.s (and RA.s RB.v)
* RT.v = RA.s RB.s
* RT.s = RA.v RB.v
* RT.s = RA.v RB.s (and RA.s RB.v)
* RT.s = RA.s RB.s

All of these may be predicated. Vector-Vector is straightforward.
When one of the sources is a Vector and the other a Scalar, it is clear that
each element of the Vector source should be added to the Scalar source,
each result placed into the Vector (or, if the destination is a scalar,
only the first nonpredicated result).

The one that is not obvious is RT=vector but both RA/RB=scalar.
Here this acts as a "splat scalar result", copying the same result into
all nonpredicated result elements. If a fixed destination scalar was
intended, then an all-Scalar operation should be used.

See <https://bugs.libre-soc.org/show_bug.cgi?id=552>

## Assembly Annotation

Assembly code annotation is required for SV to be able to successfully
mark instructions as "prefixed".

A reasonable (prototype) starting point:

```
svp64 [field=value]*
```

Fields:

* ew=8/16/32 - element width
* sew=8/16/32 - source element width
* vec=2/3/4 - SUBVL
* mode=mr/satu/sats/crpred
* pred=1\<\<3/r3/~r3/r10/~r10/r30/~r30/lt/gt/le/ge/eq/ne

This is similar to the x86 "REX" prefix.

For the actual assembler:

```
sv.asmcode/mode.vec{N}.ew=8,sw=16,m={pred},sm={pred} reg.v, src.s
```

Qualifiers:

* m={pred}: predicate mask mode
* sm={pred}: source-predicate mask mode (only allowed in Twin-predication)
* vec{N}: vec2 OR vec3 OR vec4 - sets SUBVL=2/3/4
* ew={N}: ew=8/16/32 - sets elwidth override
* sw={N}: sw=8/16/32 - sets source elwidth override
* ff={xx}: see fail-first mode
* pr={xx}: see predicate-result mode
* sat{x}: satu / sats - see saturation mode
* mr: see map-reduce mode
* mrr: map-reduce, reverse-gear (VL-1 downto 0)
* mr.svm: see map-reduce with sub-vector mode
* crm: see map-reduce CR mode
* crm.svm: see map-reduce CR with sub-vector mode
* sz: predication with source-zeroing
* dz: predication with dest-zeroing

For modes:

* pred-result:
  - pm=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* fail-first
  - ff=lt/gt/le/ge/eq/ne/so/ns
  - RC1 mode
* saturation:
  - sats
  - satu
* map-reduce:
  - mr OR crm: "normal" map-reduce mode or CR-mode.
  - mr.svm OR crm.svm: when vec2/3/4 is set, sub-vector mapreduce is enabled

## Parallel-reduction algorithm

The principle of SVP64 is that it is a fully-independent
Abstraction of hardware-looping in between the issue and execute phases,
one that has no relation to the operation it issues.
Additional state cannot be saved on context-switching beyond that
of SVSTATE, making things slightly tricky.

Executable demo pseudocode, full version
[here](https://git.libre-soc.org/?p=libreriscv.git;a=blob;f=openpower/sv/test_preduce.py;hb=HEAD):

```
[[!inline pages="openpower/sv/preduce.py" raw="yes" ]]
```

This algorithm works by noting when data remains in-place rather than
being reduced, and referring to that alternative position on subsequent
layers of reduction. It is re-entrant. If however interrupted and
restored, some implementations may take longer to re-establish the
context.

Its application by default is that:

* RA, FRA or BFA is the first register as the first operand
  (ci index offset in the above pseudocode)
* RB, FRB or BFB is the second (co index offset)
* RT (result) also uses ci **if RA==RT**

For more complex applications a REMAP Schedule must be used.

*Programmer's note: if passed a predicate mask with only one bit set,
this algorithm takes no action, similar to when a predicate mask is
all zero.*

*Implementor's Note: many SIMD-based Parallel Reduction Algorithms are
implemented in hardware with MVs that ensure lane-crossing is minimised.
The mistake which would be catastrophic to SVP64 to make is to then limit
the Reduction Sequence for all implementors based solely and exclusively
on what one specific internal microarchitecture does. In SIMD ISAs
the internal SIMD Architectural design is exposed and imposed on the
programmer. Cray-style Vector ISAs on the other hand provide convenient,
compact and efficient encodings of abstract concepts.* **It is the
Implementor's responsibility to produce a design that complies with the
above algorithm, utilising internal Micro-coding and other techniques to
transparently insert micro-architectural lane-crossing Move operations
if necessary or desired, to give the level of efficiency or performance
required.**

## Element-width overrides <a name="elwidth"> </a>

Element-width overrides are best illustrated with a packed structure
union in the C programming language. The following should be taken
literally, and assume always a little-endian layout:

```
#pragma pack
typedef union {
    uint8_t  b[8];
    uint16_t s[4];
    uint32_t i[2];
    uint64_t l[1];
    uint8_t  actual_bytes[8];
} el_reg_t;

el_reg_t int_regfile[128];
```

Accessing (getting and setting) of registers is defined below, given a
value, a register (in `el_reg_t` form), a bitwidth and an element offset;
all arithmetic, numbering and pseudo-Memory format is LE-endian and
LSB0-numbered:

```
el_reg_t get_polymorphed_reg(el_reg_t const& reg, bitwidth, offset):
    el_reg_t res; // result
    res.l = 0; // TODO: going to need sign-extending / zero-extending
    if !reg.isvec: // scalar access has no element offset
        offset = 0
    if bitwidth == 8:
        res.b = int_regfile[reg].b[offset]
    elif bitwidth == 16:
        res.s = int_regfile[reg].s[offset]
    elif bitwidth == 32:
        res.i = int_regfile[reg].i[offset]
    elif bitwidth == 64:
        res.l = int_regfile[reg].l[offset]
    return res

set_polymorphed_reg(el_reg_t& reg, bitwidth, offset, val):
    if (!reg.isvec):
        # for safety mask out hi bits
        bitmask = (1 << bitwidth) - 1
        val &= bitmask
        # not a vector: first element only, overwrites high bits.
        # and with the *Architectural* definition being LE,
        # storing in the first DWORD works perfectly.
        int_regfile[reg].l[0] = val
    elif bitwidth == 8:
        int_regfile[reg].b[offset] = val
    elif bitwidth == 16:
        int_regfile[reg].s[offset] = val
    elif bitwidth == 32:
        int_regfile[reg].i[offset] = val
    elif bitwidth == 64:
        int_regfile[reg].l[offset] = val
```

In effect the GPR registers r0 to r127 (and corresponding FPRs fp0
to fp127) are reinterpreted to be "starting points" in a byte-addressable
memory. Vectors - which become just a virtual naming construct - effectively
overlap.
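
For example (a sketch under the LE assumptions above; helper name
hypothetical): with an element width of 16 and a vector starting at r4,
element offsets map straight into successive halfwords, crossing into
r5 after four elements:

```
def locate(start_reg, elwidth, el):
    per_reg = 64 // elwidth # elements per 64-bit register
    return start_reg + el // per_reg, el % per_reg

assert locate(4, 16, 0) == (4, 0) # r4, halfword 0
assert locate(4, 16, 5) == (5, 1) # overlaps into r5, halfword 1
```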

It is extremely important for implementors to note that the only circumstance
where upper portions of an underlying 64-bit register are zero'd out is
when the destination is a scalar. The ideal register file has byte-level
write-enable lines, just like most SRAMs, in order to avoid READ-MODIFY-WRITE.

An example ADD operation with predication and element width overrides:

```
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        result = src1 + src2 # actual add here
        set_polymorphed_reg(RT, destwid, ird, result)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
```

Thus it can be clearly seen that elements are packed by their
element width, and the packing starts from the source (or destination)
specified by the instruction.

## Twin (implicit) result operations

Some operations in the Power ISA already target two 64-bit scalar
registers: `lq` for example, and LD with update. Some mathematical
algorithms are more efficient when there are two outputs rather than one,
providing feedback loops between elements (the most well-known being add
with carry). 64-bit multiply for example actually internally produces
a 128 bit result, which clearly cannot be stored in a single 64 bit
register. Some ISAs recommend "macro op fusion": the practice of setting
a convention whereby if two commonly used instructions (mullo, mulhi) use
the same ALU but one selects the low part of an identical operation and
the other selects the high part, then optimised micro-architectures may
"fuse" those two instructions together, using Micro-coding techniques,
internally.

The practice and convention of macro-op fusion however is not compatible
with SVP64 Horizontal-First, because Horizontal Mode may only be applied
to a single instruction at a time, and SVP64 is based on the principle of
strict Program Order even at the element level. Thus it becomes necessary
to add more complex explicit single instructions, with more operands than
would normally be seen in the average RISC ISA (3-in, 2-out, in some
cases). If it was not for Power ISA already having LD/ST with update as
well as Condition Codes and `lq` this would be hard to justify.

With limited space in the `EXTRA` Field, and Power ISA opcodes being only
32 bit, 5 operands is quite an ask. `lq` however sets a precedent: `RTp`
stands for "RT pair". In other words the result is stored in RT and RT+1.
For Scalar operations, following this precedent is perfectly reasonable.
In Scalar mode, `maddedu` therefore stores the two halves of the 128-bit
multiply into RT and RT+1.

What, then, of `sv.maddedu`? If the destination is hard-coded to RT and
RT+1 the instruction is not useful when Vectorised because the output
will be overwritten on the next element. To solve this is easy: define
the destination registers as RT and RT+MAXVL respectively. This makes
it easy for compilers to statically allocate registers even when VL
changes dynamically.

Bearing in mind that both RT and RT+MAXVL are starting points for Vectors,
and that element-width overrides still have to be taken
into consideration, the starting point for the implicit destination is
best illustrated in pseudocode:

```
# demo of maddedu
for (i = 0; i < VL; i++)
    if (predval & 1<<i) # predication
        src1 = get_polymorphed_reg(RA, srcwid, irs1)
        src2 = get_polymorphed_reg(RB, srcwid, irs2)
        src3 = get_polymorphed_reg(RC, srcwid, irs3)
        result = src1*src2 + src3
        destmask = (1<<destwid)-1
        # store two halves of result, both start from RT.
        set_polymorphed_reg(RT, destwid, ird, result&destmask)
        set_polymorphed_reg(RT, destwid, ird+MAXVL, result>>destwid)
        if (!RT.isvec) break
    if (RT.isvec) { ird += 1; }
    if (RA.isvec) { irs1 += 1; }
    if (RB.isvec) { irs2 += 1; }
    if (RC.isvec) { irs3 += 1; }
```

The significant part here is that the second half is not stored
starting from RT+MAXVL at all: it is the *element* index
that is offset by MAXVL, both halves actually starting from RT.
If VL is 3, MAXVL is 5, RT is 1, and dest elwidth is 32 then the elements
RT0 to RT2 are stored:

```
      LSB0: 63:32      31:0
      MSB0: 0:31       32:63
r0          unchanged  unchanged
r1          RT1.lo     RT0.lo
r2          unchanged  RT2.lo
r3          RT0.hi     unchanged
r4          RT2.hi     RT1.hi
r5          unchanged  unchanged
```

Note that all of the LO halves start from r1, but that the HI halves
start from half-way into r3. The reason is that with MAXVL being 5 and
elwidth being 32, this is the 5th element offset (in 32-bit quantities)
counting from r1.
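
A quick arithmetic check of that placement (a sketch under the same
assumptions: RT=1, MAXVL=5, destination elwidth of 32, LE):

```
RT, MAXVL, per_reg = 1, 5, 64 // 32 # two 32-bit elements per GPR
for el in range(3): # VL=3
    off = MAXVL + el # HI halves start at *element* offset MAXVL
    reg, part = RT + off // per_reg, off % per_reg
    print("RT%d.hi -> r%d, 32-bit slot %d" % (el, reg, part))
# RT0.hi -> r3 slot 1; RT1.hi -> r4 slot 0; RT2.hi -> r4 slot 1
```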

*Programmer's note: accessing registers that have been placed starting
on a non-contiguous boundary (half-way along a scalar register) can
be inconvenient: REMAP can provide an offset but it requires extra
instructions to set up. A simple solution is to ensure that MAXVL is
rounded up such that the Vector ends cleanly on a contiguous register
boundary. MAXVL=6 in the above example would achieve that.*

Additional DRAFT Scalar instructions in 3-in 2-out form with an implicit
2nd destination:

* [[isa/svfixedarith]]
* [[isa/svfparith]]

[[!tag standards]]

------

\newpage{}