1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
* Last edited: 21 Jun 2019
6 * Ancillary resource: [[opcodes]]
7 * Ancillary resource: [[sv_prefix_proposal]]
8 * Ancillary resource: [[abridged_spec]]
9 * Ancillary resource: [[vblock_format]]
10
11 With thanks to:
12
13 * Allen Baum
14 * Bruce Hoult
15 * comp.arch
16 * Jacob Bachmeyer
17 * Guy Lemurieux
18 * Jacob Lifshay
19 * Terje Mathisen
20 * The RISC-V Founders, without whom this all would not be possible.
21
22 [[!toc ]]
23
24 # Summary and Background: Rationale
25
Simple-V is a uniform parallelism API for RISC-V hardware. It has several
unplanned side-effects, including code-size reduction and expansion of
HINT space. The reason for creating it is to provide a manageable way
to turn a pre-existing design into a parallel one, in a step-by-step
incremental fashion, without adding any new opcodes, thus allowing the
implementor to focus on adding hardware only where it is needed.
The primary target is mobile-class 3D GPUs and VPUs, with secondary
goals being to reduce executable size (by extending the effectiveness
of RV opcodes, RVC in particular) and to reduce context-switch latency.
34
35 Critically: **No new instructions are added**. The parallelism (if any
36 is implemented) is implicitly added by tagging *standard* scalar registers
37 for redirection. When such a tagged register is used in any instruction,
38 it indicates that the PC shall **not** be incremented; instead a loop
39 is activated where *multiple* instructions are issued to the pipeline
40 (as determined by a length CSR), with contiguously incrementing register
41 numbers starting from the tagged register. When the last "element"
42 has been reached, only then is the PC permitted to move on. Thus
43 Simple-V effectively sits (slots) *in between* the instruction decode phase
44 and the ALU(s).
45
46 The barrier to entry with SV is therefore very low. The minimum
47 compliant implementation is software-emulation (traps), requiring
48 only the CSRs and CSR tables, and that an exception be thrown if an
49 instruction's registers are detected to have been tagged. The looping
50 that would otherwise be done in hardware is thus carried out in software,
instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited for implementation in RV32E, and in situations where
the implementor wishes to focus on certain aspects of SV without putting
unnecessary time and resources into the silicon, whilst also conforming
strictly with the API. A good area to punt to software would be the
polymorphic element width capability, for example.
57
58 Hardware Parallelism, if any, is therefore added at the implementor's
59 discretion to turn what would otherwise be a sequential loop into a
60 parallel one.
61
62 To emphasise that clearly: Simple-V (SV) is *not*:
63
64 * A SIMD system
65 * A SIMT system
66 * A Vectorisation Microarchitecture
67 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
69 * A supercomputer extension
70
71 SV does **not** tell implementors how or even if they should implement
72 parallelism: it is a hardware "API" (Application Programming Interface)
73 that, if implemented, presents a uniform and consistent way to *express*
74 parallelism, at the same time leaving the choice of if, how, how much,
75 when and whether to parallelise operations **entirely to the implementor**.
76
77 # Basic Operation
78
79 The principle of SV is as follows:
80
81 * Standard RV instructions are "prefixed" (extended) through a 48/64
82 bit format (single instruction option) or a variable
83 length VLIW-like prefix (multi or "grouped" option).
84 * The prefix(es) indicate which registers are "tagged" as
85 "vectorised". Predicates can also be added, and element widths
86 overridden on any src or dest register.
87 * A "Vector Length" CSR is set, indicating the span of any future
88 "parallel" operations.
89 * If any operation (a **scalar** standard RV opcode) uses a register
90 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
91 is activated, of length VL, that effectively issues **multiple**
92 identical instructions using contiguous sequentially-incrementing
93 register numbers, based on the "tags".
94 * **Whether they be executed sequentially or in parallel or a
95 mixture of both or punted to software-emulation in a trap handler
96 is entirely up to the implementor**.
97
98 In this way an entire scalar algorithm may be vectorised with
99 the minimum of modification to the hardware and to compiler toolchains.
100
101 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
102 on hidden context that augments *scalar* RISCV instructions.
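
To illustrate the effect of that hidden context, here is a minimal Python
sketch (an illustrative model with hypothetical names, not part of the
specification): a single scalar ADD whose registers have been tagged
turns into VL scalar ADDs.

    # regs models the register file; "tagged" models the hidden context
    # that marks registers as vectorised; VL is the Vector Length CSR.
    regs = [0] * 128
    tagged = {3, 4, 5}   # x3, x4, x5 tagged as vectors
    VL = 4

    def op_add(rd, rs1, rs2):
        if not ({rd, rs1, rs2} & tagged):
            regs[rd] = regs[rs1] + regs[rs2]  # ordinary scalar ADD
            return
        for i in range(VL):  # the PC does not move on until this completes
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]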
103
104 # CSRs <a name="csrs"></a>
105
There is also an optional "reshaping" CSR key-value table which remaps
from a 1D linear shape to 2D or 3D, including full transposition (see
the REMAP and SHAPE CSRs, below).
108
109 There are five additional CSRs, available in any privilege level:
110
111 * MVL (the Maximum Vector Length)
112 * VL (which has different characteristics from standard CSRs)
113 * SUBVL (effectively a kind of SIMD)
114 * STATE (containing copies of MVL, VL and SUBVL as well as context information)
115 * PCVBLK (the current operation being executed within a VBLOCK Group)
116
117 For User Mode there are the following CSRs:
118
* uePCVBLK (a copy of the sub-execution Program Counter, relative
to the start of the current VBLOCK Group, set on a trap).
121 * ueSTATE (useful for saving and restoring during context switch,
122 and for providing fast transitions)
123
124 There are also two additional CSRs for Supervisor-Mode:
125
126 * sePCVBLK
127 * seSTATE
128
129 And likewise for M-Mode:
130
131 * mePCVBLK
132 * meSTATE
133
134 The u/m/s CSRs are treated and handled exactly like their (x)epc
135 equivalents. On entry to or exit from a privilege level, the contents of its (x)eSTATE are swapped with STATE.
136
137 Thus for example, a User Mode trap will end up swapping STATE and ueSTATE
138 (on both entry and exit), allowing User Mode traps to have their own
139 Vectorisation Context set up, separated from and unaffected by normal
140 user applications. If an M Mode trap occurs in the middle of the U Mode trap, STATE is swapped with meSTATE, and restored on exit: the U Mode trap continues unaware that the M Mode trap even occurred.
141
142 Likewise, Supervisor Mode may perform context-switches, safe in the
143 knowledge that its Vectorisation State is unaffected by User Mode.
144
145 The access pattern for these groups of CSRs in each mode follows the
146 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
147
148 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
149 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
150 identical
151 to changing the S-Mode CSRs. Accessing and changing the U-Mode
152 CSRs is permitted.
153 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
154 is prohibited.
155
An interesting side effect of SV STATE being separate and distinct in
S Mode is that Vectorised saving of an entire register file to the
stack is a single instruction (through accidental provision of
LOAD-MULTI semantics). If the SVPrefix P64-LD-type format is used,
LOAD-MULTI may even be done with a single standalone 64 bit opcode
(P64 may set up SUBVL, VL and MVL from an immediate field, to cover the
full regfile). It can even be predicated, which opens up some very
interesting possibilities.
165
166 (x)EPCVBLK CSRs must be treated exactly like their corresponding (x)epc
167 equivalents. See VBLOCK section for details.
168
169 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
170
171 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
172 is variable length and may be dynamically set. MVL is
173 however limited to the regfile bitwidth XLEN (1-32 for RV32,
174 1-64 for RV64 and so on).
175
176 The reason for setting this limit is so that predication registers, when
177 marked as such, may fit into a single register as opposed to fanning
178 out over several registers. This keeps the hardware implementation a
179 little simpler.
180
181 The other important factor to note is that the actual MVL is internally
182 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
183 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
184 return an exception. This is expressed more clearly in the "pseudocode"
185 section, where there are subtle differences between CSRRW and CSRRWI.
186
187 ## Vector Length (VL) <a name="vl" />
188
VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN):
191
192 VL = rd = MIN(vlen, MVL)
193
194 where 1 <= MVL <= XLEN
195
However, just like MVL, it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.
198
199 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
200 to switch the entire bank of registers using a single instruction (see
201 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
202 is down to the fact that predication bits fit into a single register of
203 length XLEN bits.
204
205 The second and most important change is that, within the limits set by
206 MVL, the value passed in **must** be set in VL (and in the
207 destination register).
208
209 This has implication for the microarchitecture, as VL is required to be
210 set (limits from MVL notwithstanding) to the actual value
211 requested. RVV has the option to set VL to an arbitrary value that suits
212 the conditions and the micro-architecture: SV does *not* permit this.
213
214 The reason is so that if SV is to be used for a context-switch or as a
215 substitute for LOAD/STORE-Multiple, the operation can be done with only
216 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
217 single LD/ST operation). If VL does *not* get set to the register file
218 length when VSETVL is called, then a software-loop would be needed.
219 To avoid this need, VL *must* be set to exactly what is requested
220 (limits notwithstanding).
221
222 Therefore, in turn, unlike RVV, implementors *must* provide
223 pseudo-parallelism (using sequential loops in hardware) if actual
224 hardware-parallelism in the ALUs is not deployed. A hybrid is also
225 permitted (as used in Broadcom's VideoCore-IV) however this must be
226 *entirely* transparent to the ISA.
227
228 The third change is that VSETVL is implemented as a CSR, where the
229 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
230 the *new* value in the destination register, **not** the old value.
231 Where context-load/save is to be implemented in the usual fashion
232 by using a single CSRRW instruction to obtain the old value, the
233 *secondary* CSR must be used (STATE). This CSR by contrast behaves
234 exactly as standard CSRs, and contains more than just VL.
235
236 One interesting side-effect of using CSRRWI to set VL is that this
237 may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).
240
Note that when VL is set to 1, vector operations cease (though not
subvector operations: stopping those requires setting SUBVL=1): the
hardware loop is reduced to a single element: scalar operations. This
is in effect the default, normal operating mode. However it is important
to appreciate that this does **not** result in the Register table or
SUBVL being disabled. Only when the Register table is empty (P48/64
prefix fields notwithstanding) would SV have no effect.
248
249 ## SUBVL - Sub Vector Length
250
This is a "group by quantity" that effectively asks each iteration
252 of the hardware loop to load SUBVL elements of width elwidth at a
253 time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1
254 operation issued, SUBVL operations are issued.
255
256 Another way to view SUBVL is that each element in the VL length vector is
257 now SUBVL times elwidth bits in length and now comprises SUBVL discrete
258 sub operations. An inner SUBVL for-loop within a VL for-loop in effect,
259 with the sub-element increased every time in the innermost loop. This
260 is best illustrated in the (simplified) pseudocode example, later.
261
The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D
coordinates X,Y,Z for example may be loaded, multiplied, then stored, per
VL element iteration, rather than having to set VL to three times larger.
265
266 Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit
267 values 0b00 thru 0b11 to represent them).
268
269 Setting this CSR to 0 must raise an exception. Setting it to a value
270 greater than 4 likewise.
271
272 The main effect of SUBVL is that predication bits are applied per
273 **group**, rather than by individual element.
274
275 This saves a not insignificant number of instructions when handling 3D
276 vectors, as otherwise a much longer predicate mask would have to be set
277 up with regularly-repeated bit patterns.
278
279 See SUBVL Pseudocode illustration for details.
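
As a rough sketch of that effect (a self-contained Python model,
illustrative only, not normative): with SUBVL=3, one predicate bit
enables or disables an entire X,Y,Z triple.

    VL, SUBVL = 2, 3
    src = [1, 2, 3, 10, 20, 30]    # two X,Y,Z triples
    dest = [0] * (VL * SUBVL)
    predicate = 0b01               # only the first triple is enabled
    for i in range(VL):
        if predicate & (1 << i):   # ONE bit covers the whole sub-vector
            for s in range(SUBVL):
                dest[i*SUBVL + s] = src[i*SUBVL + s] * 2
    # dest == [2, 4, 6, 0, 0, 0]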
280
281 ## STATE
282
283 This is a standard CSR that contains sufficient information for a
284 full context save/restore. It contains (and permits setting of):
285
286 * MVL
287 * VL
288 * destoffs - the destination element offset of the current parallel
289 instruction being executed
290 * srcoffs - for twin-predication, the source element offset as well.
291 * SUBVL
292 * svdestoffs - the subvector destination element offset of the current
293 parallel instruction being executed
294 * svsrcoffs - for twin-predication, the subvector source element offset
295 as well.
296
Interestingly STATE may hypothetically also be modified to make the
immediately-following instruction skip a certain number of elements,
by playing with destoffs and srcoffs (and the subvector offsets as well).
300
301 Setting destoffs and srcoffs is realistically intended for saving state
302 so that exceptions (page faults in particular) may be serviced and the
303 hardware-loop that was being executed at the time of the trap, from
304 user-mode (or Supervisor-mode), may be returned to and continued from
305 exactly where it left off. The reason why this works is because setting
306 User-Mode STATE will not change (not be used) in M-Mode or S-Mode (and
307 is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE
308 and seSTATE).
309
310 The format of the STATE CSR is as follows:
311
| (29..28) | (27..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5..0) |
313 | ------- | -------- | -------- | -------- | -------- | ------- | ------- |
314 | dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
315
316 When setting this CSR, the following characteristics will be enforced:
317
318 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
319 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
320 * **SUBVL** which sets a SIMD-like quantity, has only 4 values so there
321 are no changes needed
322 * **srcoffs** will be truncated to be within the range 0 to VL-1
323 * **destoffs** will be truncated to be within the range 0 to VL-1
324 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
325 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
326
327 NOTE: if the following instruction is not a twin predicated instruction,
328 and destoffs or dsvoffs has been set to non-zero, subsequent execution
329 behaviour is undefined. **USE WITH CARE**.
330
331 ### Hardware rules for when to increment STATE offsets
332
333 The offsets inside STATE are like the indices in a loop, except
334 in hardware. They are also partially (conceptually) similar to a
335 "sub-execution Program Counter". As such, and to allow proper context
336 switching and to define correct exception behaviour, the following rules
337 must be observed:
338
339 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
340 * Each instruction that contains a "tagged" register shall start
341 execution at the *current* value of srcoffs (and destoffs in the case
342 of twin predication)
343 * Unpredicated bits (in nonzeroing mode) shall cause the element operation
344 to skip, incrementing the srcoffs (or destoffs)
345 * On execution of an element operation, Exceptions shall **NOT** cause
346 srcoffs or destoffs to increment.
347 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs =
348 VL-1 after the last element is executed), both srcoffs and destoffs
349 shall be reset to zero.
350
This latter rule is why srcoffs and destoffs may be stored as values from
0 to XLEN-1 in the STATE CSR: as loop indices they refer to
elements. srcoffs and destoffs never need to be set to VL: their maximum
operating values are limited to 0 to VL-1.
355
356 The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
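
A simplified sketch of how these rules make the hardware loop re-entrant
(illustrative only: predication, twin predication and SUBVL are omitted,
and element_loop/op are hypothetical names):

    # srcoffs persists in STATE across a trap, so the loop resumes
    # exactly where it left off; it is reset to zero only on completion
    def element_loop(op, rd, rs1):
        i = STATE.srcoffs
        while i < STATE.VL:
            regs[rd + i] = op(regs[rs1 + i])  # a trap here leaves i as-is
            i += 1
            STATE.srcoffs = i
        STATE.srcoffs = 0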
357
358 ## MVL and VL Pseudocode
359
The pseudo-code for getting and setting VL and MVL uses the following
internal functions:

    set_mvl_csr(value, rd):
        regs[rd] = STATE.MVL
        STATE.MVL = MIN(value, XLEN) # MVL may not exceed regfile bitwidth

    get_mvl_csr(rd):
        regs[rd] = STATE.MVL

    set_vl_csr(value, rd):
        STATE.VL = MIN(value, STATE.MVL)
        regs[rd] = STATE.VL # yes, returning the new value NOT the old CSR
        return STATE.VL

    get_vl_csr(rd):
        regs[rd] = STATE.VL
        return STATE.VL
378
Note that whereas setting MVL behaves as a normal CSR (returning the old
value), setting VL, unlike standard CSR behaviour, returns the **new**
value of VL, **not** the old one.
382
383 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
384 maximise the effectiveness, an immediate of 0 is used to set VL=1,
385 an immediate of 1 is used to set VL=2 and so on:
386
387 CSRRWI_Set_MVL(value):
388 set_mvl_csr(value+1, x0)
389
390 CSRRWI_Set_VL(value):
391 set_vl_csr(value+1, x0)
392
393 However for CSRRW the following pseudocode is used for MVL and VL,
394 where setting the value to zero will cause an exception to be raised.
395 The reason is that if VL or MVL are set to zero, the STATE CSR is
396 not capable of storing that value.
397
398 CSRRW_Set_MVL(rs1, rd):
399 value = regs[rs1]
400 if value == 0 or value > XLEN:
401 raise Exception
402 set_mvl_csr(value, rd)
403
404 CSRRW_Set_VL(rs1, rd):
405 value = regs[rs1]
406 if value == 0 or value > XLEN:
407 raise Exception
408 set_vl_csr(value, rd)
409
410 In this way, when CSRRW is utilised with a loop variable, the value
411 that goes into VL (and into the destination register) may be used
412 in an instruction-minimal fashion:
413
414 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
415 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
416 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
417 j zerotest # in case loop counter a0 already 0
418 loop:
419 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
420 ld a3, a1 # load 4 registers a3-6 from x
421 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
422 ld a7, a2 # load 4 registers a7-10 from y
423 add a1, a1, t1 # increment pointer to x by vl*8
424 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
425 sub a0, a0, t0 # n -= vl (t0)
426 st a7, a2 # store 4 registers a7-10 to y
427 add a2, a2, t1 # increment pointer to y by vl*8
428 zerotest:
429 bnez a0, loop # repeat if n != 0
430
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, 0b000000 in binary represents
VL==1, 0b000001 represents VL==2 and so on (likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd) # rd receives the *old* STATE
        set_mvl_csr(value[5:0]+1, x0)
        set_vl_csr(value[11:6]+1, x0)
        STATE.srcoffs = value[17:12]
        STATE.destoffs = value[23:18]

    get_state_csr(rd):
        regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
                   (STATE.destoffs)<<18 # SUBVL, subvector offsets omitted
        return regs[rd]
447
448 In both cases, whilst CSR read of VL and MVL return the exact values
449 of VL and MVL respectively, reading and writing the STATE CSR returns
450 those values **minus one**. This is absolutely critical to implement
451 if the STATE CSR is to be used for fast context-switching.
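
By way of example, a context switch may then save and restore the entire
Vectorisation Context in two CSR accesses (a sketch using the standard
csrr/csrw pseudo-instructions; "saved" is a hypothetical scratch value):

    saved = csrr(STATE)    # one read captures MVL, VL, SUBVL and all
                           # four element offsets (MVL/VL held minus one)
    # ... run the other context ...
    csrw(STATE, saved)     # one write re-arms the interrupted loop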
452
453 ## VL, MVL and SUBVL instruction aliases
454
455 This table contains pseudo-assembly instruction aliases. Note the
456 subtraction of 1 from the CSRRWI pseudo variants, to compensate for the
457 reduced range of the 5 bit immediate.
458
459 | alias | CSR |
460 | - | - |
461 | SETVL rd, rs | CSRRW VL, rd, rs |
462 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
463 | GETVL rd | CSRRW VL, rd, x0 |
464 | SETMVL rd, rs | CSRRW MVL, rd, rs |
465 | SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
466 | GETMVL rd | CSRRW MVL, rd, x0 |
467
Note: CSRRC and other bit-setting CSR instructions may still be used; they are however not particularly useful (very obscure).
469
470 ## Register key-value (CAM) table <a name="regcsrtable" />
471
472 *NOTE: in prior versions of SV, this table used to be writable and
473 accessible via CSRs. It is now stored in the VBLOCK instruction format. Note
474 that this table does *not* get applied to the SVPrefix P48/64 format,
475 only to scalar opcodes*
476
477 The purpose of the Register table is three-fold:
478
479 * To mark integer and floating-point registers as requiring "redirection"
480 if it is ever used as a source or destination in any given operation.
481 This involves a level of indirection through a 5-to-7-bit lookup table,
482 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
483 access up to **128** registers.
484 * To indicate whether, after redirection through the lookup table, the
485 register is a vector (or remains a scalar).
486 * To over-ride the implicit or explicit bitwidth that the operation would
487 normally give the register.
488
Note: clearly, if an RVC operation uses a 3 bit spec'd register (x8-x15)
and the Register table contains entries that only refer to registers
x1-x7 or x16-x31, such operations will *never* activate the VL hardware
loop!

If however the (16 bit) Register table does contain such an entry (x8-x15
or x2 in the case of LWSP), that src or dest reg may be redirected
anywhere to the *full* 128 register range. Thus, RVC becomes far more
powerful and has many more opportunities to reduce code size than in
Standard RV32/RV64 executables.
499
500 16 bit format:
501
502 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
503 | ------ | | - | - | - | ------ | ------- |
504 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
505 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
506 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
507 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
508
509 8 bit format:
510
511 | RegCAM | | 7 | (6..5) | (4..0) |
512 | ------ | | - | ------ | ------- |
513 | 0 | | i/f | vew0 | regnum |
514
515 Showing the mapping (relationship) between 8-bit and 16-bit format:
516
517 | RegCAM | 15 | (14..8) | 7 | (6..5) | (4..0) |
518 | ------ | - | - | - | ------ | ------- |
519 | 0 | isvec=1 | regnum0<<2 | i/f | vew0 | regnum0 |
520 | 1 | isvec=1 | regnum1<<2 | i/f | vew1 | regnum1 |
521 | 2 | isvec=1 | regnum2<<2 | i/f | vew2 | regnum2 |
| 3 | isvec=1 | regnum3<<2 | i/f | vew3 | regnum3 |
523
524 i/f is set to "1" to indicate that the redirection/tag entry is to
525 be applied to integer registers; 0 indicates that it is relevant to
526 floating-point registers.
527
528 The 8 bit format is used for a much more compact expression. "isvec"
529 is implicit and, similar to [[sv-prefix-proposal]], the target vector
530 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
531 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
532 optionally set "scalar" mode.
533
534 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
535 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
536 get the actual (7 bit) register number to use, there is not enough space
537 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
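
Expressed as a sketch (hypothetical helper name):

    # 8-bit format: the 5-bit regnum is both the key (the register as it
    # appears in the opcode) and, shifted up by 2, the redirection target
    def implicit_target(regnum5):
        return regnum5 << 2    # e.g. a2 (x12) redirects to register 48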
538
539 vew has the following meanings, indicating that the instruction's
540 operand size is "over-ridden" in a polymorphic fashion:
541
542 | vew | bitwidth |
543 | --- | ------------------- |
544 | 00 | default (XLEN/FLEN) |
545 | 01 | 8 bit |
546 | 10 | 16 bit |
547 | 11 | 32 bit |
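
As a rough sketch of what an element-width override means for element
addressing (assuming, as covered in the Appendix, that elements pack
contiguously into the underlying registers; the helper name is
hypothetical):

    # e.g. vew=8-bit on RV64: eight elements fit in each 64-bit register
    def element_location(base_reg, i, elwidth_bytes, xlen_bytes=8):
        reg = base_reg + (i * elwidth_bytes) // xlen_bytes
        off = (i * elwidth_bytes) % xlen_bytes  # byte offset within reg
        return reg, off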
548
As the Register key-value table is a CAM it may be appropriate
(faster, implementation-wise) to expand it as follows:
551
552 struct vectorised fp_vec[32], int_vec[32];
553
554 for (i = 0; i < len; i++) // from VBLOCK Format
555 tb = int_vec if CSRvec[i].type == 0 else fp_vec
556 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
557 tb[idx].elwidth = CSRvec[i].elwidth
558 tb[idx].regidx = CSRvec[i].regidx // indirection
559 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
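
At decode time, every register field in an opcode is then looked up in
the expanded table (a sketch reusing the structures above; the "enabled"
field is assumed to be set only for entries present in the VBLOCK):

    def redirect(opcode_reg, is_int):
        tb = int_vec if is_int else fp_vec
        if not tb[opcode_reg].enabled:  # untagged: plain scalar behaviour
            return opcode_reg, False, 0 # 0 = default element width
        e = tb[opcode_reg]
        return e.regidx, e.isvector, e.elwidth # 7-bit real register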
560
561 ## Predication Table <a name="predication_csr_table"></a>
562
563 *NOTE: in prior versions of SV, this table used to be writable and
564 accessible via CSRs. It is now stored in the VBLOCK instruction format.
565 The table does **not** apply to SVPrefix opcodes*
566
567 The Predication Table is a key-value store indicating whether, if a
568 given destination register (integer or floating-point) is referred to
569 in an instruction, it is to be predicated. Like the Register table, it
570 is an indirect lookup that allows the RV opcodes to not need modification.
571
572 It is particularly important to note
573 that the *actual* register used can be *different* from the one that is
574 in the instruction, due to the redirection through the lookup table.
575
576 * regidx is the register that in combination with the
577 i/f flag, if that integer or floating-point register is referred to in a
578 (standard RV) instruction results in the lookup table being referenced
579 to find the predication mask to use for this operation.
580 * predidx is the *actual* (full, 7 bit) register to be used for the
581 predication mask.
582 * inv indicates that the predication mask bits are to be inverted
583 prior to use *without* actually modifying the contents of the
584 register from which those bits originated.
585 * zeroing is either 1 or 0, and if set to 1, the operation must
586 place zeros in any element position where the predication mask is
587 set to zero. If zeroing is set to 0, unpredicated elements *must*
588 be left alone. Some microarchitectures may choose to interpret
589 this as skipping the operation entirely. Others which wish to
590 stick more closely to a SIMD architecture may choose instead to
591 interpret unpredicated elements as an internal "copy element"
592 operation (which would be necessary in SIMD microarchitectures
593 that perform register-renaming)
594 * ffirst is a special mode that stops sequential element processing when
595 a data-dependent condition occurs, whether a trap or a conditional test.
596 The handling of each (trap or conditional test) is slightly different:
597 see Instruction sections for further details
598
599 16 bit format:
600
601 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
602 | ----- | - | - | - | - | ------- | ------- |
603 | 0 | predidx | zero0 | inv0 | i/f | regidx | ffirst0 |
604 | 1 | predidx | zero1 | inv1 | i/f | regidx | ffirst1 |
605 | 2 | predidx | zero2 | inv2 | i/f | regidx | ffirst2 |
606 | 3 | predidx | zero3 | inv3 | i/f | regidx | ffirst3 |
607
608 Note: predidx=x0, zero=1, inv=1 is a RESERVED encoding. Its use must
609 generate an illegal instruction trap.
610
611 8 bit format:
612
613 | PrCSR | 7 | 6 | 5 | (4..0) |
614 | ----- | - | - | - | ------- |
615 | 0 | zero0 | inv0 | i/f | regnum |
616
The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
described above.
622
623 Thus if we map from 8 to 16 bit format, the table becomes:
624
625 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
626 | ----- | - | - | - | - | ------- | ------- |
627 | 0 | x9 | zero0 | inv0 | i/f | regnum | ff=0 |
628 | 1 | x10 | zero1 | inv1 | i/f | regnum | ff=0 |
629 | 2 | x11 | zero2 | inv2 | i/f | regnum | ff=0 |
630 | 3 | x12 | zero3 | inv3 | i/f | regnum | ff=0 |
631
632 The 16 bit Predication CSR Table is a key-value store, so
633 implementation-wise it will be faster to turn the table around (maintain
634 topologically equivalent state):
635
636 struct pred {
637 bool zero; // zeroing
638 bool inv; // register at predidx is inverted
639 bool ffirst; // fail-on-first
640 bool enabled; // use this to tell if the table-entry is active
641 int predidx; // redirection: actual int register to use
642 }
643
644 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
645 struct pred int_pred_reg[32]; // 64 in future (bank=1)
646
    for (i = 0; i < len; i++) // number of Predication entries in VBLOCK
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero    = CSRpred[i].zero
        tb[idx].inv     = CSRpred[i].inv
        tb[idx].ffirst  = CSRpred[i].ffirst
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true
655
656 So when an operation is to be predicated, it is the internal state that
657 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
658 pseudo-code for operations is given, where p is the explicit (direct)
659 reference to the predication register to be used:
660
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
666
667 This instead becomes an *indirect* reference using the *internal* state
668 table generated from the Predication CSR key-value store, which is used
669 as follows.
670
    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    predicate, zeroing = get_pred_val(type(iop) == INT, rd)
    for (int i=0; i<vl; ++i)
        if (predicate & (1<<i))
            result = iop(s1 ? regfile[rs1+i] : regfile[rs1],
                         s2 ? regfile[rs2+i] : regfile[rs2]);
            (d ? regfile[rd+i] : regfile[rd]) = result
            if preg.ffirst and result == 0:
                VL = i # result was zero, end loop early, return VL
                return
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
687
688 Note:
689
690 * d, s1 and s2 are booleans indicating whether destination,
691 source1 and source2 are vector or scalar
692 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
693 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
694 register-level redirection (from the Register table) if they are
695 vectors.
696 * fail-on-first mode stops execution early whenever an operation
697 returns a zero value. floating-point results count both
698 positive-zero as well as negative-zero as "fail".
699
700 If written as a function, obtaining the predication mask (and whether
701 zeroing takes place) may be done as follows:
702
    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate // invert ALL bits
        return predicate, tb[reg].zero
715
716 Note here, critically, that **only** if the register is marked
717 in its **register** table entry as being "active" does the testing
718 proceed further to check if the **predicate** table entry is
719 also active.
720
721 Note also that this is in direct contrast to branch operations
722 for the storage of comparisions: in these specific circumstances
723 the requirement for there to be an active *register* entry
724 is removed.
725
726 ## Fail-on-First Mode <a name="ffirst-mode"></a>
727
728 ffirst is a special data-dependent predicate mode. There are two
729 variants: one is for faults: typically for LOAD/STORE operations,
730 which may encounter end of page faults during a series of operations.
731 The other variant is comparisons such as FEQ (or the augmented behaviour
732 of Branch), and any operation that returns a result of zero (whether
733 integer or floating-point). In the FP case, this includes negative-zero.
734
735 Note that the execution order must "appear" to be sequential for ffirst
736 mode to work correctly. An in-order architecture must execute the element
737 operations in sequence, whilst an out-of-order architecture must *commit*
738 the element operations in sequence (giving the appearance of in-order
739 execution).
740
741 Note also, that if ffirst mode is needed without predication, a special
742 "always-on" Predicate Table Entry may be constructed by setting
743 inverse-on and using x0 as the predicate register. This
744 will have the effect of creating a mask of all ones, allowing ffirst
745 to be set.
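
This falls directly out of the get_pred_val logic shown above:

    predicate = intreg[0]   # x0 is hardwired to zero
    predicate = ~predicate  # inv=1: all ones, i.e. "always-on"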
746
747 ### Fail-on-first traps
748
749 Except for the first element, ffault stops sequential element processing
750 when a trap occurs. The first element is treated normally (as if ffirst
751 is clear). Should any subsequent element instruction require a trap,
752 instead it and subsequent indexed elements are ignored (or cancelled in
753 out-of-order designs), and VL is set to the *last* instruction that did
754 not take the trap.
755
756 Note that predicated-out elements (where the predicate mask bit is zero)
757 are clearly excluded (i.e. the trap will not occur). However, note that
758 the loop still had to test the predicate bit: thus on return,
759 VL is set to include elements that did not take the trap *and* includes
760 the elements that were predicated (masked) out (not tested up to the
761 point where the trap occurred).
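
A sketch of the trap variant (illustrative Python only; element_op and
MemFault are hypothetical names):

    for i in range(VL):
        try:
            if predicate & (1 << i):  # masked-out elements still counted
                element_op(i)
        except MemFault:
            if i == 0:
                raise                 # first element traps as normal
            VL = i                    # truncate: trapping element excluded
            break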
762
If SUBVL is being used (SUBVL!=1), the first *sub-group* of elements
will cause a trap as normal (as if ffirst is not set); in subsequent
sub-groups the trap is not taken: instead the loop ends early and VL is
truncated, as described above. SUBVL itself will **NOT** be modified.
767
768 Given that predication bits apply to SUBVL groups, the same rules apply
769 to predicated-out (masked-out) sub-groups in calculating the value that VL
770 is set to.
771
772 ### Fail-on-first conditional tests
773
ffirst stops sequential element conditional testing on the first element
result being zero. VL is set to the number of elements that were processed
before the fail-condition was encountered.
777
778 Note that just as with traps, if SUBVL!=1, the first of any of the *sub-group*
779 will cause the processing to end, and, even if there were elements within
780 the *sub-group* that passed the test, that sub-group is still (entirely)
781 excluded from the count (from setting VL). i.e. VL is set to the total
782 number of *sub-groups* that had no fail-condition up until execution was
783 stopped.
784
785 Note again that, just as with traps, predicated-out (masked-out) elements
786 are included in the count leading up to the fail-condition, even though they
787 were not tested.
788
789 The pseudo-code for Predication makes this clearer and simpler than it is
790 in words (the loop ends, VL is set to the current element index, "i").
791
792 ## REMAP CSR <a name="remap" />
793
794 (Note: both the REMAP and SHAPE sections are best read after the
795 rest of the document has been read)
796
797 There is one 32-bit CSR which may be used to indicate which registers,
798 if used in any operation, must be "reshaped" (re-mapped) from a linear
799 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
800 access to elements within a register.
801
802 The 32-bit REMAP CSR may reshape up to 3 registers:
803
804 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
805 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
806 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
807
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits wide.
When set to zero (referring to x0), reshaping is "disabled" for that
entry: clearly, reshaping x0 would be pointless.
shape0-2 refer to one of the three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
814
It is anticipated that these specialist CSRs will not be used very often.
816 Unlike the CSR Register and Predication tables, the REMAP CSRs use
817 the full 7-bit regidx so that they can be set once and left alone,
818 whilst the CSR Register entries pointing to them are disabled, instead.
819
820 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
821
822 (Note: both the REMAP and SHAPE sections are best read after the
823 rest of the document has been read)
824
825 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
826 which have the same format. When each SHAPE CSR is set entirely to zeros,
827 remapping is disabled: the register's elements are a linear (1D) vector.
828
829 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
830 | ------- | -- | ------- | -- | ------- | -- | ------- |
831 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
832
833 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
834 is added to the element index during the loop calculation.
835
836 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
837 that the array dimensionality for that dimension is 1. A value of xdimsz=2
838 would indicate that in the first dimension there are 3 elements in the
839 array. The format of the array is therefore as follows:
840
841 array[xdim+1][ydim+1][zdim+1]
842
843 However whilst illustrative of the dimensionality, that does not take the
844 "permute" setting into account. "permute" may be any one of six values
845 (0-5, with values of 6 and 7 being reserved, and not legal). The table
846 below shows how the permutation dimensionality order works:
847
848 | permute | order | array format |
849 | ------- | ----- | ------------------------ |
850 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
851 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
852 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
853 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
854 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
855 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
856
857 In other words, the "permute" option changes the order in which
858 nested for-loops over the array would be done. The algorithm below
859 shows this more clearly, and may be executed as a python program:
860
    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0] # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0 # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print() # newline each time a dimension wraps around
            idxs[order[i]] = 0
880
881 Here, it is assumed that this algorithm be run within all pseudo-code
882 throughout this document where a (parallelism) for-loop would normally
883 run from 0 to VL-1 to refer to contiguous register
884 elements; instead, where REMAP indicates to do so, the element index
885 is run through the above algorithm to work out the **actual** element
886 index, instead. Given that there are three possible SHAPE entries, up to
887 three separate registers in any given operation may be simultaneously
888 remapped:
889
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }
902
903 By changing remappings, 2D matrices may be transposed "in-place" for one
904 operation, followed by setting a different permutation order without
905 having to move the values in the registers to or from memory. Also,
906 the reason for having REMAP separate from the three SHAPE CSRs is so
907 that in a chain of matrix multiplications and additions, for example,
908 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
909 changed to target different registers.
910
911 Note that:
912
913 * Over-running the register file clearly has to be detected and
914 an illegal instruction exception thrown
915 * When non-default elwidths are set, the exact same algorithm still
916 applies (i.e. it offsets elements *within* registers rather than
917 entire registers).
918 * If permute option 000 is utilised, the actual order of the
919 reindexing does not change!
920 * If two or more dimensions are set to zero, the actual order does not change!
921 * The above algorithm is pseudo-code **only**. Actual implementations
922 will need to take into account the fact that the element for-looping
923 must be **re-entrant**, due to the possibility of exceptions occurring.
924 See MSTATE CSR, which records the current element index.
925 * Twin-predicated operations require **two** separate and distinct
926 element offsets. The above pseudo-code algorithm will be applied
927 separately and independently to each, should each of the two
928 operands be remapped. *This even includes C.LDSP* and other operations
929 in that category, where in that case it will be the **offset** that is
930 remapped (see Compressed Stack LOAD/STORE section).
931 * Offset is especially useful, on its own, for accessing elements
932 within the middle of a register. Without offsets, it is necessary
933 to either use a predicated MV, skipping the first elements, or
934 performing a LOAD/STORE cycle to memory.
935 With offsets, the data does not have to be moved.
936 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
937 less than MVL is **perfectly legal**, albeit very obscure. It permits
938 entries to be regularly presented to operands **more than once**, thus
939 allowing the same underlying registers to act as an accumulator of
940 multiple vector or matrix operations, for example.
941
942 Clearly here some considerable care needs to be taken as the remapping
943 could hypothetically create arithmetic operations that target the
944 exact same underlying registers, resulting in data corruption due to
945 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
946 register-renaming will have an easier time dealing with this than
947 DSP-style SIMD micro-architectures.
948
949 # Instruction Execution Order
950
951 Simple-V behaves as if it is a hardware-level "macro expansion system",
952 substituting and expanding a single instruction into multiple sequential
953 instructions with contiguous and sequentially-incrementing registers.
954 As such, it does **not** modify - or specify - the behaviour and semantics of
955 the execution order: that may be deduced from the **existing** RV
956 specification in each and every case.
957
958 So for example if a particular micro-architecture permits out-of-order
959 execution, and it is augmented with Simple-V, then wherever instructions
960 may be out-of-order then so may the "post-expansion" SV ones.
961
962 If on the other hand there are memory guarantees which specifically
963 prevent and prohibit certain instructions from being re-ordered
964 (such as the Atomicity Axiom, or FENCE constraints), then clearly
965 those constraints **MUST** also be obeyed "post-expansion".
966
It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
970 It is **purely** about compacting what would otherwise be contiguous
971 instructions that use sequentially-increasing register numbers down
972 to the **one** instruction.
973
974 # Instructions <a name="instructions" />
975
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all RVV opcodes, and
with the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.
986
987 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
988 equivalents, so are left out of Simple-V. VSELECT could be included if
989 there existed a MV.X instruction in RV (MV.X is a hypothetical
990 non-immediate variant of MV that would allow another register to
991 specify which register was to be copied). Note that if any of these three
992 instructions are added to any given RV extension, their functionality
993 will be inherently parallelised.
994
995 With some exceptions, where it does not make sense or is simply too
996 challenging, all RV-Base instructions are parallelised:
997
998 * CSR instructions, whilst a case could be made for fast-polling of
999 a CSR into multiple registers, or for being able to copy multiple
1000 contiguously addressed CSRs into contiguous registers, and so on,
1001 are the fundamental core basis of SV. If parallelised, extreme
1002 care would need to be taken. Additionally, CSR reads are done
using x0, and it is *really* inadvisable to tag x0.
1004 * LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
1005 left as scalar.
1006 * LR/SC could hypothetically be parallelised however their purpose is
1007 single (complex) atomic memory operations where the LR must be followed
1008 up by a matching SC. A sequence of parallel LR instructions followed
1009 by a sequence of parallel SC instructions therefore is guaranteed to
1010 not be useful. Not least: the guarantees of a Multi-LR/SC
1011 would be impossible to provide if emulated in a trap.
1012 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
1013 paralleliseable anyway.
1014
1015 All other operations using registers are automatically parallelised.
1016 This includes AMOMAX, AMOSWAP and so on, where particular care and
1017 attention must be paid.
1018
1019 Example pseudo-code for an integer ADD operation (including scalar
1020 operations). Floating-point uses the FP Register Table.
1021
    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
        rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
        rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }
1036
1037 Note that for simplicity there is quite a lot missing from the above
1038 pseudo-code: element widths, zeroing on predication, dimensional
1039 reshaping and offsets and so on. However it demonstrates the basic
1040 principle. Augmentations that produce the full pseudo-code are covered in
1041 other sections.
1042
1043 ## SUBVL Pseudocode <a name="subvl-pseudocode"></a>
1044
Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by "(i * SUBVL + s)", predicate bits are
indexed by "(i)".
1051
    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
        rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
        rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            for (s = 0; s < SUBVL; s++)
                xSTATE.ssvoffs = s # save context
                if (predval & 1<<i) # predication uses intregs
                    # actual add is here (at last)
                    ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                    if (!int_vec[rd ].isvector) break;
                if (int_vec[rd ].isvector)  { id += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
                if (id == VL or irs1 == VL or irs2 == VL) {
                    # end VL hardware loop
                    xSTATE.srcoffs = 0; # reset
                    xSTATE.ssvoffs = 0; # reset
                    return;
                }
1075
1076
1077 NOTE: pseudocode simplified greatly: zeroing, proper predicate handling,
1078 elwidth handling etc. all left out.
1079
1080 ## Instruction Format
1081
1082 It is critical to appreciate that there are
1083 **no operations added to SV, at all**.
1084
1085 Instead, by using CSRs to tag registers as an indication of "changed
1086 behaviour", SV *overloads* pre-existing branch operations into predicated
1087 variants, and implicitly overloads arithmetic operations, MV, FCVT, and
1088 LOAD/STORE depending on CSR configurations for bitwidth and predication.
1089 **Everything** becomes parallelised. *This includes Compressed
1090 instructions* as well as any future instructions and Custom Extensions.
1091
1092 Note: CSR tags to change behaviour of instructions is nothing new, including
1093 in RISC-V. UXL, SXL and MXL change the behaviour so that XLEN=32/64/128.
1094 FRM changes the behaviour of the floating-point unit, to alter the rounding
1095 mode. Other architectures change the LOAD/STORE byte-order from big-endian
1096 to little-endian on a per-instruction basis. SV is just a little more...
1097 comprehensive in its effect on instructions.
1098
1099 ## Branch Instructions
1100
1101 Branch operations are augmented slightly to be a little more like FP
1102 Compares (FEQ, FNE etc.), by permitting the cumulation (and storage)
1103 of multiple comparisons into a register (taken indirectly from the predicate
1104 table). As such, "ffirst" - fail-on-first - condition mode can be enabled.
1105 See ffirst mode in the Predication Table section.
1106
1107 ### Standard Branch <a name="standard_branch"></a>
1108
1109 Branch operations use standard RV opcodes that are reinterpreted to
1110 be "predicate variants" in the instance where either of the two src
1111 registers are marked as vectors (active=1, vector=1).
1112
1113 Note that the predication register to use (if one is enabled) is taken from
1114 the *first* src register, and that this is used, just as with predicated
1115 arithmetic operations, to mask whether the comparison operations take
1116 place or not. The target (destination) predication register
1117 to use (if one is enabled) is taken from the *second* src register.
1118
1119 If either of src1 or src2 are scalars (whether by there being no
1120 CSR register entry or whether by the CSR entry specifically marking
1121 the register as "scalar") the comparison goes ahead as vector-scalar
1122 or scalar-vector.
1123
1124 In instances where no vectorisation is detected on either src registers
1125 the operation is treated as an absolutely standard scalar branch operation.
1126 Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
1129
1130 Note that when zero-predication is enabled (from source rs1),
1131 a cleared bit in the predicate indicates that the result
1132 of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
1134 when zeroing is not set: bits in the destination predicate are
1135 only *set*; they are **not** cleared. This is important to appreciate,
1136 as there may be an expectation that, going into the hardware-loop,
1137 the destination predicate is always expected to be set to zero:
1138 this is **not** the case. The destination predicate is only set
1139 to zero if **zeroing** is enabled.
1140
1141 Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
1143 src1 and src2.
1144
1145 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
1146 for predicated compare operations of function "cmp":
1147
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                              s2 ? vreg[rs2][i] : sreg[rs2]);
1152
1153 With associated predication, vector-length adjustments and so on,
1154 and temporarily ignoring bitwidth (which makes the comparisons more
1155 complex), this becomes:
1156
    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i]:reg[src1],
                    s2 ? reg[src2+i]:reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch
1194
1195 Notes:
1196
1197 * Predicated SIMD comparisons would break src1 and src2 further down
1198 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
1199 Reordering") setting Vector-Length times (number of SIMD elements) bits
1200 in Predicate Register rd, as opposed to just Vector-Length bits.
1201 * The execution of "parallelised" instructions **must** be implemented
1202 as "re-entrant" (to use a term from software). If an exception (trap)
1203 occurs during the middle of a vectorised
1204 Branch (now a SV predicated compare) operation, the partial results
1205 of any comparisons must be written out to the destination
1206 register before the trap is permitted to begin. If however there
1207 is no predicate, the **entire** set of comparisons must be **restarted**,
1208 with the offset loop indices set back to zero. This is because
1209 there is no place to store the temporary result during the handling
1210 of traps.
1211
1212 TODO: predication now taken from src2. also branch goes ahead
1213 if all compares are successful.
1214
1215 Note also that where normally, predication requires that there must
1216 also be a CSR register entry for the register being used in order
1217 for the **predication** CSR register entry to also be active,
1218 for branches this is **not** the case. src2 does **not** have
1219 to have its CSR register entry marked as active in order for
1220 predication on src2 to be active.
1221
1222 Also note: SV Branch operations are **not** twin-predicated
1223 (see Twin Predication section). This would require three
1224 element offsets: one to track src1, one to track src2 and a third
1225 to track where to store the accumulation of the results. Given
1226 that the element offsets need to be exposed via CSRs so that
1227 the parallel hardware looping may be made re-entrant on traps
1228 and exceptions, the decision was made not to make SV Branches
1229 twin-predicated.
1230
1231 ### Floating-point Comparisons
1232
There are no floating-point branch operations, only compares.
Interestingly, no change is needed to the instruction format, because
FP Compare already stores a 1 or a zero in its "rd" integer register
target: i.e. it is not actually a Branch at all, it is a compare.
1237
1238 In RV (scalar) Base, a branch on a floating-point compare is
1239 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
1240 This does extend to SV, as long as x1 (in the example sequence given)
1241 is vectorised. When that is the case, x1..x(1+VL-1) will also be
1242 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
1243 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
1244 so on. Consequently, unlike integer-branch, FP Compare needs no
1245 modification in its behaviour.
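
As an illustrative sketch only (not spec pseudo-code), the effective
behaviour of that sequence when x1 is vectorised with VL=3 may be
written out as follows, assuming the rule (noted earlier) that the
branch only goes ahead if all element compares are successful:

    // sketch: "FEQ x1, f0, f5; BEQ x1, x0, loc" with x1 vectorised, VL=3
    x1 = (f0 == f5) ? 1 : 0;   // FEQ, element 0
    x2 = (f1 == f6) ? 1 : 0;   // FEQ, element 1
    x3 = (f2 == f7) ? 1 : 0;   // FEQ, element 2
    if (x1 == 0 && x2 == 0 && x3 == 0)
        goto loc;              // BEQ: every element compares equal to x0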
1246
In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine, because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms it
has more of an impact. To deal with this, SV's predication has
had "invert" added to it.
1253
1254 Also: note that FP Compare may be predicated, using the destination
1255 integer register (rd) to determine the predicate. FP Compare is **not**
1256 a twin-predication operation, as, again, just as with SV Branches,
1257 there are three registers involved: FP src1, FP src2 and INT rd.
1258
1259 Also: note that ffirst (fail first mode) applies directly to this operation.
1260
1261 ### Compressed Branch Instruction
1262
1263 Compressed Branch instructions are, just like standard Branch instructions,
1264 reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beq a0, x0, the optional target
to store the results of the comparisons is taken from the CSR
predication table entries for **x0**.
1269
The specific required use of x0 is, with a little thought, quite obvious,
though at first counterintuitive. Clearly it is **not** recommended to redirect
1272 x0 with a CSR register entry, however as a means to opaquely obtain
1273 a predication target it is the only sensible option that does not involve
1274 additional special CSRs (or, worse, additional special opcodes).
1275
1276 Note also that, just as with standard branches, the 2nd source
1277 (in this case x0 rather than src2) does **not** have to have its CSR
1278 register table marked as "active" in order for predication to work.
1279
1280 ## Vectorised Dual-operand instructions
1281
1282 There is a series of 2-operand instructions involving copying (and
1283 sometimes alteration):
1284
1285 * C.MV
1286 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1287 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1288 * LOAD(-FP) and STORE(-FP)
1289
1290 All of these operations follow the same two-operand pattern, so it is
1291 *both* the source *and* destination predication masks that are taken into
1292 account. This is different from
1293 the three-operand arithmetic instructions, where the predication mask
1294 is taken from the *destination* register, and applied uniformly to the
1295 elements of the source register(s), element-for-element.
1296
1297 The pseudo-code pattern for twin-predicated operations is as
1298 follows:
1299
    function op(rd, rs):
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            xSTATE.srcoffs = i # save context
            xSTATE.destoffs = j # save context
            reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break
1313
1314 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1315 and vector-vector, and predicated variants of all of those.
1316 Zeroing is not presently included (TODO). As such, when compared
1317 to RVV, the twin-predicated variants of C.MV and FMV cover
1318 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1319 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1320
1321 Note that:
1322
1323 * elwidth (SIMD) is not covered in the pseudo-code above
1324 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1325 not covered
1326 * zero predication is also not shown (TODO).
1327
1328 ### C.MV Instruction <a name="c_mv"></a>
1329
There is no MV instruction in RV; however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).
1333
1334 If either the source or the destination register are marked as vectors
1335 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1336 move operation. The actual instruction's format does not change:
1337
1338 [[!table data="""
1339 15 12 | 11 7 | 6 2 | 1 0 |
1340 funct4 | rd | rs | op |
1341 4 | 5 | 5 | 2 |
1342 C.MV | dest | src | C0 |
1343 """]]
1344
1345 A simplified version of the pseudocode for this operation is as follows:
1346
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            xSTATE.srcoffs = i # save context
            xSTATE.destoffs = j # save context
            ireg[rd+j] <= ireg[rs+i];
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break
1360
1361 There are several different instructions from RVV that are covered by
1362 this one opcode:
1363
1364 [[!table data="""
1365 src | dest | predication | op |
1366 scalar | vector | none | VSPLAT |
1367 scalar | vector | destination | sparse VSPLAT |
1368 scalar | vector | 1-bit dest | VINSERT |
1369 vector | scalar | 1-bit? src | VEXTRACT |
1370 vector | vector | none | VCOPY |
1371 vector | vector | src | Vector Gather |
1372 vector | vector | dest | Vector Scatter |
1373 vector | vector | src & dest | Gather/Scatter |
1374 vector | vector | src == dest | sparse VCOPY |
1375 """]]
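
By way of a usage sketch (illustrative register choices, not from the
spec): with x16 marked as vectorised, VL=4, x3 scalar and no predication,
a single c.mv performs the VSPLAT row of the table above:

    // sketch: c.mv x16, x3 with x16 vectorised (VL=4), x3 scalar
    for (int j = 0; j < 4; j++)
        ireg[16 + j] = ireg[3];   // x16..x19 all receive a copy of x3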
1376
1377 Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
1378 operations with inversion on the src and dest predication for one of the
1379 two C.MV operations.
1380
Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].
1386
1387 ### FMV, FNEG and FABS Instructions
1388
These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is
implicitly and automatically converted to a (vectorised) floating-point
type-conversion operation of the appropriate size covering the source
and destination register bitwidths.
1395
1396 (Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1397
### FCVT Instructions
1399
1400 These are again identical in form to C.MV, except that they cover
1401 floating-point to integer and integer to floating-point. When element
1402 width in each vector is set to default, the instructions behave exactly
1403 as they are defined for standard RV (scalar) operations, except vectorised
1404 in exactly the same fashion as outlined in C.MV.
1405
1406 However when the source or destination element width is not set to default,
1407 the opcode's explicit element widths are *over-ridden* to new definitions,
1408 and the opcode's element width is taken as indicative of the SIMD width
1409 (if applicable i.e. if packed SIMD is requested) instead.
1410
For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 32-bit single-precision floating-point
number in rd.
1413 If however the source rs1 is set to be a vector, where elwidth is set to
1414 default/2 and "packed SIMD" is enabled, then the first 32 bits of
1415 rs1 are converted to a floating-point number to be stored in rd's
1416 first element and the higher 32-bits *also* converted to floating-point
1417 and stored in the second. The 32 bit size comes from the fact that
1418 FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
1419 divide that by two it means that rs1 element width is to be taken as 32.
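
A brief C sketch of that example (purely illustrative; rs1_val stands
in for the raw 64-bit register content):

    // sketch: FCVT.S.L with rs1 elwidth = default/2 (32-bit), packed SIMD
    uint64_t rs1_val = ...;                             // 64-bit register content
    float e0 = (float)(int32_t)(uint32_t)(rs1_val & 0xffffffff); // low half
    float e1 = (float)(int32_t)(uint32_t)(rs1_val >> 32);        // high half
    // e0 is stored in rd's first element, e1 in the second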
1420
1421 Similar rules apply to the destination register.
1422
1423 ## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>
1424
1425 An earlier draft of SV modified the behaviour of LOAD/STORE (modified
1426 the interpretation of the instruction fields). This
1427 actually undermined the fundamental principle of SV, namely that there
1428 be no modifications to the scalar behaviour (except where absolutely
1429 necessary), in order to simplify an implementor's task if considering
1430 converting a pre-existing scalar design to support parallelism.
1431
So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
does not change in SV; however, just as with C.MV, it is important to note
that dual-predication is possible.
1435
1436 In vectorised architectures there are usually at least two different modes
1437 for LOAD/STORE:
1438
1439 * Read (or write for STORE) from sequential locations, where one
1440 register specifies the address, and the one address is incremented
1441 by a fixed amount. This is usually known as "Unit Stride" mode.
1442 * Read (or write) from multiple indirected addresses, where the
1443 vector elements each specify separate and distinct addresses.
1444
1445 To support these different addressing modes, the CSR Register "isvector"
1446 bit is used. So, for a LOAD, when the src register is set to
1447 scalar, the LOADs are sequentially incremented by the src register
1448 element width, and when the src register is set to "vector", the
1449 elements are treated as indirection addresses. Simplified
1450 pseudo-code would look like this:
1451
    function op_ld(rd, rs) # LD not VLD!
        rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            if (int_csr[rs].isvec) # src register selects the mode
                # indirect mode (multi mode)
                srcbase = ireg[rsv+i];
            else
                # unit stride mode
                srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
            ireg[rdv+j] <= mem[srcbase + imm_offs];
            if (!int_csr[rs].isvec &&
                !int_csr[rd].isvec) break # scalar-scalar LD
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;
1471
1472 Notes:
1473
1474 * For simplicity, zeroing and elwidth is not included in the above:
1475 the key focus here is the decision-making for srcbase; vectorised
1476 rs means use sequentially-numbered registers as the indirection
1477 address, and scalar rs is "offset" mode.
1478 * The test towards the end for whether both source and destination are
1479 scalar is what makes the above pseudo-code provide the "standard" RV
1480 Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
(8 bytes), and also whether the element width is over-ridden
(see special element width section).
1485
1486 ## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>
1487
1488 C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
1489 where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
1490 It is therefore possible to use predicated C.LWSP to efficiently
1491 pop registers off the stack (by predicating x2 as the source), cherry-picking
1492 which registers to store to (by predicating the destination). Likewise
1493 for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.
1494
1495 The two modes ("unit stride" and multi-indirection) are still supported,
1496 as with standard LD/ST. Essentially, the only difference is that the
1497 use of x2 is hard-coded into the instruction.
1498
1499 **Note**: it is still possible to redirect x2 to an alternative target
1500 register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
1501 general-purpose LOAD/STORE operations.
1502
1503 ## Compressed LOAD / STORE Instructions
1504
Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.
1510
1511 # Element bitwidth polymorphism <a name="elwidth"></a>
1512
1513 Element bitwidth is best covered as its own special section, as it
1514 is quite involved and applies uniformly across-the-board. SV restricts
1515 bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.
1516
The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, each register
effectively now looks like this:
1522
1523 typedef union {
1524 uint8_t b[8];
1525 uint16_t s[4];
1526 uint32_t i[2];
1527 uint64_t l[1];
1528 } reg_t;
1529
1530 // integer table: assume maximum SV 7-bit regfile size
1531 reg_t int_regfile[128];
1532
1533 where the CSR Register table entry (not the instruction alone) determines
1534 which of those union entries is to be used on each operation, and the
1535 VL element offset in the hardware-loop specifies the index into each array.
1536
However a naive interpretation of the data structure above masks the
fact that, when the bitwidth is 8 and VL is set greater than 8 (for
example), accessing one specific register "spills over" to the following
parts of the register file in a sequential fashion. So a much more
accurate way to reflect this would be:
1542
1543 typedef union {
1544 uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
1545 uint8_t b[0]; // array of type uint8_t
1546 uint16_t s[0];
1547 uint32_t i[0];
1548 uint64_t l[0];
1549 uint128_t d[0];
1550 } reg_t;
1551
1552 reg_t int_regfile[128];
1553
where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software /
pseudo-code representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if any attempt is made to access beyond the
"real" register bytes.
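
A minimal sketch of such a bounds check (not part of the spec; raise_trap
is a hypothetical helper standing in for the implementor-defined
exception mechanism):

    // sketch: checked byte-element access over the flattened regfile
    uint8_t read_elem_b(unsigned regnum, unsigned offset) {
        size_t idx = regnum * sizeof(reg_t) + offset;
        if (idx >= sizeof(int_regfile))  // beyond the "real" register bytes
            raise_trap();                // hypothetical: throw the exception
        return int_regfile[regnum].b[offset]; // deliberate "overspill"
    }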
1563
Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non"-polymorphic versions (above):
1567
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            ...
            ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
            ...
            ...
1590
1591 So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
1592 following sequentially on respectively from the same) are "type-cast"
1593 to 8-bit; for 16-bit entries likewise and so on.
1594
1595 However that only covers the case where the element widths are the same.
1596 Where the element widths are different, the following algorithm applies:
1597
1598 * Analyse the bitwidth of all source operands and work out the
1599 maximum. Record this as "maxsrcbitwidth"
* If any given source operand requires sign-extension or zero-extension
(lb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
sign-extension / zero-extension or whatever is specified in the standard
RV specification, **change** that to sign-extending from the respective
individual source operand's bitwidth from the CSR table out to
"maxsrcbitwidth" (previously calculated).
* Following separate and distinct (optional) sign/zero-extension of all
source operands as specifically required for that operation, carry out the
operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
this may be a "null" (copy) operation, and that with FCVT, the changes
to the source and destination bitwidths may also turn FCVT effectively
into a copy).
* If the destination operand requires sign-extension or zero-extension,
instead of a mandatory fixed size (typically 32-bit for arithmetic,
for subw for example, and otherwise various: 8-bit for sb, 16-bit for
sh etc.), overload the RV specification with the bitwidth from the
destination register's elwidth entry.
1617 * Finally, store the (optionally) sign/zero-extended value into its
1618 destination: memory for sb/sw etc., or an offset section of the register
1619 file for an arithmetic operation.
1620
1621 In this way, polymorphic bitwidths are achieved without requiring a
1622 massive 64-way permutation of calculations **per opcode**, for example
1623 (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
1624 rd bitwidths). The pseudo-code is therefore as follows:
1625
1626 typedef union {
1627 uint8_t b;
1628 uint16_t s;
1629 uint32_t i;
1630 uint64_t l;
1631 } el_reg_t;
1632
    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen / 4
        // elwidth == 3:
        return 8
1642
1643 get_max_elwidth(rs1, rs2):
1644 return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
1645 bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1646
    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res
1659
1660 set_polymorphed_reg(reg, bitwidth, offset, val):
1661 if (!int_csr[reg].isvec):
1662 # sign/zero-extend depending on opcode requirements, from
1663 # the reg's bitwidth out to the full bitwidth of the regfile
1664 val = sign_or_zero_extend(val, bitwidth, xlen)
1665 int_regfile[reg].l[0] = val
1666 elif bitwidth == 8:
1667 int_regfile[reg].b[offset] = val
1668 elif bitwidth == 16:
1669 int_regfile[reg].s[offset] = val
1670 elif bitwidth == 32:
1671 int_regfile[reg].i[offset] = val
1672 elif bitwidth == 64:
1673 int_regfile[reg].l[offset] = val
1674
    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = bw(int_csr[rd].elwidth)     # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
1694
Whilst the specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:
1698
1699 * the source operands are extended out to the maximum bitwidth of all
1700 source operands
1701 * the operation takes place at that maximum source bitwidth (the
1702 destination bitwidth is not involved at this point, at all)
1703 * the result is extended (or potentially even, truncated) before being
1704 stored in the destination. i.e. truncation (if required) to the
1705 destination width occurs **after** the operation **not** before.
1706 * when the destination is not marked as "vectorised", the **full**
1707 (standard, scalar) register file entry is taken up, i.e. the
1708 element is either sign-extended or zero-extended to cover the
1709 full register bitwidth (XLEN) if it is not already XLEN bits long.
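
A tiny C illustration of this ordering (a sketch: an 8-bit rs1, 16-bit
rs2 and 8-bit rd, using add, which zero-extends its sources as described
in the walk-throughs later):

    uint16_t s1 = (uint8_t)src1_el;  // zero-extend 8 -> 16 (maxsrcbitwidth)
    uint16_t s2 = src2_el;           // already at maxsrcbitwidth
    uint16_t r  = s1 + s2;           // operate at maxsrcbitwidth
    uint8_t  d  = (uint8_t)r;        // truncate to rd's width AFTER the op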
1710
1711 Implementors are entirely free to optimise the above, particularly
1712 if it is specifically known that any given operation will complete
1713 accurately in less bits, as long as the results produced are
1714 directly equivalent and equal, for all inputs and all outputs,
1715 to those produced by the above algorithm.
1716
1717 ## Polymorphic floating-point operation exceptions and error-handling
1718
1719 For floating-point operations, conversion takes place without
1720 raising any kind of exception. Exactly as specified in the standard
1721 RV specification, NAN (or appropriate) is stored if the result
1722 is beyond the range of the destination, and, again, exactly as
1723 with the standard RV specification just as with scalar
1724 operations, the floating-point flag is raised (FCSR). And, again, just as
1725 with scalar operations, it is software's responsibility to check this flag.
1726 Given that the FCSR flags are "accrued", the fact that multiple element
1727 operations could have occurred is not a problem.
1728
1729 Note that it is perfectly legitimate for floating-point bitwidths of
1730 only 8 to be specified. However whilst it is possible to apply IEEE 754
1731 principles, no actual standard yet exists. Implementors wishing to
1732 provide hardware-level 8-bit support rather than throw a trap to emulate
1733 in software should contact the author of this specification before
1734 proceeding.
1735
1736 ## Polymorphic shift operators
1737
1738 A special note is needed for changing the element width of left and right
1739 shift operators, particularly right-shift. Even for standard RV base,
1740 in order for correct results to be returned, the second operand RS2 must
1741 be truncated to be within the range of RS1's bitwidth. spike's implementation
1742 of sll for example is as follows:
1743
1744 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1745
1746 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1747 range 0..31 so that RS1 will only be left-shifted by the amount that
1748 is possible to fit into a 32-bit register. Whilst this appears not
1749 to matter for hardware, it matters greatly in software implementations,
1750 and it also matters where an RV64 system is set to "RV32" mode, such
1751 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1752 each.
1753
1754 For SV, where each operand's element bitwidth may be over-ridden, the
1755 rule about determining the operation's bitwidth *still applies*, being
1756 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1757 **also applies to the truncation of RS2**. In other words, *after*
1758 determining the maximum bitwidth, RS2's range must **also be truncated**
1759 to ensure a correct answer. Example:
1760
1761 * RS1 is over-ridden to a 16-bit width
1762 * RS2 is over-ridden to an 8-bit width
1763 * RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit, i.e. max(8, 16)
1765 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1766
1767 Pseudocode (in spike) for this example would therefore be:
1768
1769 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1770
1771 This example illustrates that considerable care therefore needs to be
1772 taken to ensure that left and right shift operations are implemented
correctly. The key is that:
1774
1775 * The operation bitwidth is determined by the maximum bitwidth
1776 of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
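
Expressed as a C sketch of the worked example above (RS1 elwidth=16,
RS2 elwidth=8, RD elwidth=64; illustrative only):

    unsigned shamt = rs2_el & (16 - 1);            // RS2 truncated to 0..15
    uint16_t shifted = (uint16_t)rs1_el << shamt;  // shift at max(8,16) bits
    int64_t  rd_el = (int16_t)shifted;             // sign-extend out to rd's width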
1778
1779 ## Polymorphic MULH/MULHU/MULHSU
1780
1781 MULH is designed to take the top half MSBs of a multiply that
1782 does not fit within the range of the source operands, such that
1783 smaller width operations may produce a full double-width multiply
1784 in two cycles. The issue is: SV allows the source operands to
1785 have variable bitwidth.
1786
1787 Here again special attention has to be paid to the rules regarding
1788 bitwidth, which, again, are that the operation is performed at
1789 the maximum bitwidth of the **source** registers. Therefore:
1790
1791 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1792 be shifted down by 8 bits
1793 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1794 be shifted down by 16 bits (top 8 bits being zero)
1795 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1796 be shifted down by 16 bits
1797 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1798 be shifted down by 32 bits
1799 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1800 be shifted down by 32 bits
1801
1802 So again, just as with shift-left and shift-right, the result
1803 is shifted down by the maximum of the two source register bitwidths.
1804 And, exactly again, truncation or sign-extension is performed on the
1805 result. If sign-extension is to be carried out, it is performed
1806 from the same maximum of the two source register bitwidths out
1807 to the result element's bitwidth.
1808
1809 If truncation occurs, i.e. the top MSBs of the result are lost,
1810 this is "Officially Not Our Problem", i.e. it is assumed that the
1811 programmer actually desires the result to be truncated. i.e. if the
1812 programmer wanted all of the bits, they would have set the destination
1813 elwidth to accommodate them.
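
As a C sketch of the first case above (both sources at 8-bit elwidth,
signed MULH; illustrative only):

    int16_t prod = (int16_t)((int8_t)rs1_el * (int8_t)rs2_el); // 16-bit result
    int8_t  hi   = (int8_t)(prod >> 8);  // top half of the 8x8 multiply
    // sign-extension out to rd's elwidth (if wider) then follows as usual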
1814
1815 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1816
1817 Polymorphic element widths in vectorised form means that the data
1818 being loaded (or stored) across multiple registers needs to be treated
1819 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1820 the source register's element width is **independent** from the destination's.
1821
1822 This makes for a slightly more complex algorithm when using indirection
1823 on the "addressed" register (source for LOAD and destination for STORE),
1824 particularly given that the LOAD/STORE instruction provides important
1825 information about the width of the data to be reinterpreted.
1826
Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, with i being the loop index from 0 to VL-1:
1829
1830 srcbase = ireg[rs+i];
1831 return mem[srcbase + imm]; // returns XLEN bits
1832
1833 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1834 chunks are taken from the source memory location addressed by the current
1835 indexed source address register, and only when a full 32-bits-worth
1836 are taken will the index be moved on to the next contiguous source
1837 address register:
1838
1839 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1840 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1841 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1842 offs = i % elsperblock; // modulo
1843 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1844
1845 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1846 and 128 for LQ.
1847
1848 The principle is basically exactly the same as if the srcbase were pointing
1849 at the memory of the *register* file: memory is re-interpreted as containing
1850 groups of elwidth-wide discrete elements.
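
In plain C, one consistent reading of that address calculation (with
offs scaled by the element size in bytes once the pointer is re-cast)
is, for LW with a source elwidth of 16:

    // sketch: LW (opwidth=32), source elwidth=16, so elsperblock = 2
    int elsperblock = 32 / 16;                       // elements per 32-bit block
    uint64_t srcbase = ireg[rs + i / elsperblock];   // integer divide
    int offs = i % elsperblock;                      // modulo
    uint16_t el = ((uint16_t*)&mem[srcbase + imm])[offs]; // elwidth-wide chunk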
1851
1852 When storing the result from a load, it's important to respect the fact
1853 that the destination register has its *own separate element width*. Thus,
1854 when each element is loaded (at the source element width), any sign-extension
1855 or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing side follows the exact same algorithm as
above: in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) that is used.
1859
1860 One issue remains: when the source element width is **greater** than
1861 the width of the operation, it is obvious that a single LB for example
1862 cannot possibly obtain 16-bit-wide data. This condition may be detected
1863 where, when using integer divide, elsperblock (the width of the LOAD
1864 divided by the bitwidth of the element) is zero.
1865
The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1869
1870 The elements, if the element bitwidth is larger than the LD operation's
1871 size, will then be sign/zero-extended to the full LD operation size, as
1872 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1873 being passed on to the second phase.
1874
1875 As LOAD/STORE may be twin-predicated, it is important to note that
1876 the rules on twin predication still apply, except where in previous
1877 pseudo-code (elwidth=default for both source and target) it was
1878 the *registers* that the predication was applied to, it is now the
1879 **elements** that the predication is applied to.
1880
1881 Thus the full pseudocode for all LD operations may be written out
1882 as follows:
1883
1884 function LBU(rd, rs):
1885 load_elwidthed(rd, rs, 8, true)
1886 function LB(rd, rs):
1887 load_elwidthed(rd, rs, 8, false)
1888 function LH(rd, rs):
1889 load_elwidthed(rd, rs, 16, false)
1890 ...
1891 ...
1892 function LQ(rd, rs):
1893 load_elwidthed(rd, rs, 128, false)
1894
    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16, etc.
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
1903
    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth) # destination element width
        srcwid = bw(int_csr[rs].elwidth)  # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, srcwid))
            else:
                val = sign_extend(val, min(opwidth, srcwid))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;
1921
1922 Note:
1923
1924 * when comparing against for example the twin-predicated c.mv
1925 pseudo-code, the pattern of independent incrementing of rd and rs
1926 is preserved unchanged.
1927 * just as with the c.mv pseudocode, zeroing is not included and must be
1928 taken into account (TODO).
1929 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1930 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1931 VSCATTER characteristics.
1932 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1933 a destination that is not vectorised (marked as scalar) will
1934 result in the element being fully sign-extended or zero-extended
1935 out to the full register file bitwidth (XLEN). When the source
1936 is also marked as scalar, this is how the compatibility with
1937 standard RV LOAD/STORE is preserved by this algorithm.
1938
1939 ### Example Tables showing LOAD elements
1940
1941 This section contains examples of vectorised LOAD operations, showing
1942 how the two stage process works (three if zero/sign-extension is included).
1943
1944
#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1946
1947 This is:
1948
1949 * a 64-bit load, with an offset of zero
1950 * with a source-address elwidth of 16-bit
1951 * into a destination-register with an elwidth of 32-bit
1952 * where VL=7
1953 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1954 * RV64, where XLEN=64 is assumed.
1955
First, the memory table: because the element width is 16 and the
operation is LD (64-bit), the 64 bits loaded from memory are subdivided
into groups of **four** elements. And, with VL being 7 (deliberately, to
illustrate that this is reasonable and possible), the first four are
sourced from the offset addresses pointed to by x5, and the next three
from the offset addresses pointed to by the next contiguous register, x6:
1963
1964 [[!table data="""
1965 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1966 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1967 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1968 """]]
1969
1970 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1971 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1972
[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]
1984
1985 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1986 byte-addressable "memory". That "memory" happens to cover registers
1987 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1988
1989 [[!table data="""
1990 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1991 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1992 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1993 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1994 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1995 """]]
1996
1997 Thus we have data that is loaded from the **addresses** pointed to by
1998 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1999 x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.
2003
2004 Note that whilst the memory addressing table is shown left-to-right byte order,
2005 the registers are shown in right-to-left (MSB) order. This does **not**
2006 imply that bit or byte-reversal is carried out: it's just easier to visualise
2007 memory as being contiguous bytes, and emphasises that registers are not
2008 really actually "memory" as such.
2009
2010 ## Why SV bitwidth specification is restricted to 4 entries
2011
The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more and allow 64, 128 and some other options besides. The answer here
is that it gets too complex, no RV128 implementation yet exists, and
RV64's default elwidth is already 64 bit, so the 4 major element widths
are covered anyway.
2022
There is an absolutely crucial aspect of SV here that explicitly
needs spelling out: whether the "vectorised" bit is set in
the register's CSR entry.
2026
2027 If "vectorised" is clear (not set), this indicates that the operation
2028 is "scalar". Under these circumstances, when set on a destination (RD),
2029 then sign-extension and zero-extension, whilst changed to match the
2030 override bitwidth (if set), will erase the **full** register entry
2031 (64-bit if RV64).
2032
2033 When vectorised is *set*, this indicates that the operation now treats
2034 **elements** as if they were independent registers, so regardless of
2035 the length, any parts of a given actual register that are not involved
2036 in the operation are **NOT** modified, but are **PRESERVED**.
2037
2038 For example:
2039
2040 * when the vector bit is clear and elwidth set to 16 on the destination
2041 register, operations are truncated to 16 bit and then sign or zero
2042 extended to the *FULL* XLEN register width.
2043 * when the vector bit is set, elwidth is 16 and VL=1 (or other value where
2044 groups of elwidth sized elements do not fill an entire XLEN register),
2045 the "top" bits of the destination register do *NOT* get modified, zero'd
2046 or otherwise overwritten.
2047
SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.
2051
2052 Other microarchitectures may choose to provide byte-level write-enable
2053 lines on the register file, such that each 64 bit register in an RV64
2054 system requires 8 WE lines. Scalar RV64 operations would require
2055 activation of all 8 lines, where SV elwidth based operations would
2056 activate the required subset of those byte-level write lines.
2057
2058 Example:
2059
2060 * rs1, rs2 and rd are all set to 8-bit
2061 * VL is set to 3
2062 * RV64 architecture is set (UXL=64)
2063 * add operation is carried out
2064 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
2065 concatenated with similar add operations on bits 15..8 and 7..0
2066 * bits 24 through 63 **remain as they originally were**.
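
A minimal C sketch of the byte-level write-enable approach, applied to
the example above (write_reg_bytes, regfile and add_results are
illustrative names, not part of the spec):

    // sketch: VL=3, elwidth=8, RV64: WE lines 0..2 active, bits 24..63 preserved
    void write_reg_bytes(uint64_t *rd, uint64_t val, uint8_t we) {
        for (int b = 0; b < 8; b++)
            if (we & (1u << b)) {                 // write-enable line b active
                uint64_t m = 0xffULL << (8 * b);
                *rd = (*rd & ~m) | (val & m);
            }
    }
    // for the example: write_reg_bytes(&regfile[rd], add_results, 0x07);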
2067
2068 Example SIMD micro-architectural implementation:
2069
2070 * SIMD architecture works out the nearest round number of elements
2071 that would fit into a full RV64 register (in this case: 8)
2072 * SIMD architecture creates a hidden predicate, binary 0b00000111
2073 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
2074 * SIMD architecture goes ahead with the add operation as if it
2075 was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
(which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
and stores them in rd.
2080
2081 This requires a read on rd, however this is required anyway in order
2082 to support non-zeroing mode.
2083
2084 ## Polymorphic floating-point
2085
2086 Standard scalar RV integer operations base the register width on XLEN,
2087 which may be changed (UXL in USTATUS, and the corresponding MXL and
2088 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
2089 arithmetic operations are therefore restricted to an active XLEN bits,
2090 with sign or zero extension to pad out the upper bits when XLEN has
2091 been dynamically set to less than the actual register size.
2092
2093 For scalar floating-point, the active (used / changed) bits are
2094 specified exclusively by the operation: ADD.S specifies an active
2095 32-bits, with the upper bits of the source registers needing to
2096 be all 1s ("NaN-boxed"), and the destination upper bits being
2097 *set* to all 1s (including on LOAD/STOREs).
2098
2099 Where elwidth is set to default (on any source or the destination)
2100 it is obvious that this NaN-boxing behaviour can and should be
2101 preserved. When elwidth is non-default things are less obvious,
2102 so need to be thought through. Here is a normal (scalar) sequence,
2103 assuming an RV64 which supports Quad (128-bit) FLEN:
2104
2105 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
2106 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
2107 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
2108 top 64 MSBs ignored.
2109
2110 Therefore it makes sense to mirror this behaviour when, for example,
2111 elwidth is set to 32. Assume elwidth set to 32 on all source and
2112 destination registers:
2113
2114 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
2115 floating-point numbers.
2116 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
2117 in bits 0-31 and the second in bits 32-63.
2118 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
2119
2120 Here's the thing: it does not make sense to overwrite the top 64 MSBs
2121 of the registers either during the FLD **or** the ADD.D. The reason
2122 is that, effectively, the top 64 MSBs actually represent a completely
2123 independent 64-bit register, so overwriting it is not only gratuitous
2124 but may actually be harmful for a future extension to SV which may
2125 have a way to directly access those top 64 bits.
2126
The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
2130 elwidth is set to default **and** isvec is false will the standard
2131 RV behaviour be followed, namely that the upper bits be modified.
2132
2133 Ultimately if elwidth is default and isvec false on *all* source
2134 and destination registers, a SimpleV instruction defaults completely
2135 to standard RV scalar behaviour (this holds true for **all** operations,
2136 right across the board).
2137
2138 The nice thing here is that ADD.S, ADD.D and ADD.Q when elwidth are
2139 non-default values are effectively all the same: they all still perform
2140 multiple ADD operations, just at different widths. A future extension
2141 to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.
2144
In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however simpler SIMD-style
microarchitectures may not have the infrastructure in place to know the
difference, such that when VL=8 and an ADD.D instruction is issued, it
completes in 2 cycles (or more) rather than the single cycle that an
ADD.Q would take on the same microarchitecture.
2155
2156 ## Specific instruction walk-throughs
2157
2158 This section covers walk-throughs of the above-outlined procedure
2159 for converting standard RISC-V scalar arithmetic operations to
2160 polymorphic widths, to ensure that it is correct.
2161
2162 ### add
2163
2164 Standard Scalar RV32/RV64 (xlen):
2165
2166 * RS1 @ xlen bits
2167 * RS2 @ xlen bits
2168 * add @ xlen bits
2169 * RD @ xlen bits
2170
2171 Polymorphic variant:
2172
2173 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
2174 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
2175 * add @ max(rs1, rs2) bits
2176 * RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
2177
2178 Note here that polymorphic add zero-extends its source operands,
2179 where addw sign-extends.
2180
2181 ### addw
2182
2183 The RV Specification specifically states that "W" variants of arithmetic
2184 operations always produce 32-bit signed values. In a polymorphic
2185 environment it is reasonable to assume that the signed aspect is
2186 preserved, where it is the length of the operands and the result
2187 that may be changed.
2188
2189 Standard Scalar RV64 (xlen):
2190
2191 * RS1 @ xlen bits
2192 * RS2 @ xlen bits
2193 * add @ xlen bits
2194 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
2195
2196 Polymorphic variant:
2197
2198 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
2199 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
2200 * add @ max(rs1, rs2) bits
2201 * RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
2202
2203 Note here that polymorphic addw sign-extends its source operands,
2204 where add zero-extends.
2205
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extension will occur. It is
only where the bitwidths of rs1 and rs2 differ that the lesser-width
operand will be sign-extended.
2210
2211 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
2212 where for add they are both zero-extended. This holds true for all arithmetic
2213 operations ending with "W".
2214
2215 ### addiw
2216
2217 Standard Scalar RV64I:
2218
2219 * RS1 @ xlen bits, truncated to 32-bit
2220 * immed @ 12 bits, sign-extended to 32-bit
2221 * add @ 32 bits
* RD @ xlen bits: the 32-bit result is sign-extended out to xlen.
2223
2224 Polymorphic variant:
2225
2226 * RS1 @ rs1 bits
2227 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
2228 * add @ max(rs1, 12) bits
2229 * RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
2230
2231 # Predication Element Zeroing
2232
The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
2237 do not have this issue: they simply do not pass the element through to
2238 the ALU at all, and therefore do not store it back in the destination.
2239 More complex non-lane-based micro-architectures can, when zeroing is
2240 not set, use the predication bits to simply avoid sending element-based
2241 operations to the ALUs, entirely: thus, over the long term, potentially
2242 keeping all ALUs 100% occupied even when elements are predicated out.
2243
2244 SimpleV's design principle is not based on or influenced by
2245 microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
2249 decision was taken to add support for both.
2250
2251 ## Single-predication (based on destination register)
2252
2253 Zeroing on predication for arithmetic operations is taken from
2254 the destination register's predicate. i.e. the predication *and*
2255 zeroing settings to be applied to the whole operation come from the
2256 CSR Predication table entry for the destination register.
2257 Thus when zeroing is set on predication of a destination element,
2258 if the predication bit is clear, then the destination element is *set*
2259 to zero (twin-predication is slightly different, and will be covered
2260 next).
2261
2262 Thus the pseudo-code loop for a predicated arithmetic operation
2263 is modified to as follows:
2264
    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation: skip predicated-out elements
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { id += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
            if i == VL:
                return
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if int_vec[rd].ffirst and result == 0:
                VL = i # result was zero, end loop early, return VL
                return
            if (!int_vec[rd].isvector) return
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) return
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
        if (id == VL or irs1 == VL or irs2 == VL): return
2291
2292 The optimisation to skip elements entirely is only possible for certain
2293 micro-architectures when zeroing is not set. However for lane-based
2294 micro-architectures this optimisation may not be practical, as it
2295 implies that elements end up in different "lanes". Under these
2296 circumstances it is perfectly fine to simply have the lanes
2297 "inactive" for predicated elements, even though it results in
2298 less than 100% ALU utilisation.
2299
2300 ## Twin-predication (based on source and destination register)
2301
Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.
2306
When, with twin-predication, zeroing is set on the source and not
the destination, a predicate bit of zero indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).
2313
2314 When zeroing is set on the destination and not the source, then just
2315 as with single-predicated operations, a zero is stored into the destination
2316 element (or target memory address for a STORE).
2317
Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: only where both the source
predicate AND the destination predicate are set to 1 does real data pass
through, so where either is 0, a zero element will ultimately end up in
the destination register.
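
To summarise as a sketch (per-operation details may differ, as noted
next):

    // destination content per predicate-bit pair, zeroing on BOTH:
    //   ps=1, pd=1  ->  source data element is copied through
    //   ps=0, pd=1  ->  zero (source zeroing)
    //   ps=x, pd=0  ->  zero (destination zeroing)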
2322
2323 However: this may not necessarily be the case for all operations;
2324 implementors, particularly of custom instructions, clearly need to
2325 think through the implications in each and every case.
2326
2327 Here is pseudo-code for a twin zero-predicated operation:
2328
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;
2352
2353 Note that in the instance where the destination is a scalar, the hardware
2354 loop is ended the moment a value *or a zero* is placed into the destination
2355 register/element. Also note that, for clarity, variable element widths
2356 have been left out of the above.
2357
2358 # Exceptions
2359
2360 TODO: expand. Exceptions may occur at any time, in any given underlying
2361 scalar operation. This implies that context-switching (traps) may
2362 occur, and operation must be returned to where it left off. That in
2363 turn implies that the full state - including the current parallel
2364 element being processed - has to be saved and restored. This is
2365 what the **STATE** CSR is for.
2366
2367 The implications are that all underlying individual scalar operations
2368 "issued" by the parallelisation have to appear to be executed sequentially.
2369 The further implications are that if two or more individual element
2370 operations are underway, and one with an earlier index causes an exception,
2371 it may be necessary for the microarchitecture to **discard** or terminate
2372 operations with higher indices.
2373
2374 This being somewhat dissatisfactory, an "opaque predication" variant
2375 of the STATE CSR is being considered.
2376
2377 # Hints
2378
2379 A "HINT" is an operation that has no effect on architectural state,
2380 where its use may, by agreed convention, give advance notification
2381 to the microarchitecture: branch prediction notification would be
2382 a good example. Usually HINTs are where rd=x0.
2383
2384 With Simple-V being capable of issuing *parallel* instructions where
2385 rd=x0, the space for possible HINTs is expanded considerably. VL
2386 could be used to indicate different hints. In addition, if predication
2387 is set, the predication register itself could hypothetically be passed
2388 in as a *parameter* to the HINT operation.
2389
No specific hints are yet defined in Simple-V.
2391
2392 # Vector Block Format <a name="vliw-format"></a>
2393
2394 See ancillary resource: [[vblock_format]]
2395
2396 # Subsets of RV functionality
2397
2398 This section describes the differences when SV is implemented on top of
2399 different subsets of RV.
2400
2401 ## Common options
2402
2403 It is permitted to only implement SVprefix and not the VBLOCK instruction
2404 format option, and vice-versa. UNIX Platforms **MUST** raise illegal
2405 instruction on seeing an unsupported VBLOCK or SVprefix opcode, so that
2406 traps may emulate the format.
2407
It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX Platforms
*MUST* raise illegal instruction on implementations that do not support
VL or SUBVL.
2412
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However, going
below the mandatory limits set in the RV standard will result in
non-compliance with the SV Specification.
2417
2418 ## RV32 / RV32F
2419
2420 When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
2421 maximum limit for predication is also restricted to 32 bits. Whilst not
2422 actually specifically an "option" it is worth noting.
2423
2424 ## RV32G
2425
Normally in standard RV32 it does not make much sense to have
RV32G: the critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.
2430
2431 In an earlier draft of SV, it was possible to specify an elwidth
2432 of double the standard register size: this had to be dropped,
2433 and may be reintroduced in future revisions.
2434
2435 ## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
2436
2437 When floating-point is not implemented, the size of the User Register and
2438 Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
2439 per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.
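
A minimal pseudocode sketch of that software fallback (all helper
names are illustrative):

    # Illustrative only: trap-based emulation of the VL hardware loop.
    def illegal_instruction_handler(instr):
        rd, rs1, rs2 = decode_registers(instr)
        if not (vectorised(rd) or vectorised(rs1) or vectorised(rs2)):
            raise GenuinelyIllegalInstruction()
        for i in range(get_VL()):
            # redirection: contiguously increment vectorised registers
            execute_scalar(instr,
                           rd  + (i if vectorised(rd)  else 0),
                           rs1 + (i if vectorised(rs1) else 0),
                           rs2 + (i if vectorised(rs2) else 0))
        advance_pc()    # only now does the PC move on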

## RV128

RV128 has not been especially considered here; however it has some
extremely large possibilities: double the element width implies
256-bit operands, each spanning two 128-bit registers, and predication
of total length 128 bits, given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the
recommendation (see the sketch after this list) is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.
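
A pseudocode sketch of the integer-register rule, for 3 16-bit elements
packed into a 64-bit register (the function name is illustrative):

    # Illustrative only: el[-1], the used element closest to the MSB,
    # is sign-extended through the unused top bits on write.
    def write_grouped_int(el, xlen=64, elwidth=16):
        packed = 0
        for i, v in enumerate(el):    # e.g. el = [e0, e1, e2]
            packed |= (v & ((1 << elwidth) - 1)) << (i * elwidth)
        used = len(el) * elwidth      # 48 of the 64 bits in use
        if el[-1] & (1 << (elwidth - 1)):    # MSB-most element negative?
            packed |= ((1 << (xlen - used)) - 1) << used
        return packed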

---

Info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least 1 XLEN word,
> which can be read from a read-only CSR that can also be used for info
> like the kind and width of hw parallelism supported (128-bit SIMD,
> minimal virtual parallelism, etc.) and other things (like maybe the
> number of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

---

> And what about instructions like JALR?

Answer: they're not vectorised, so not a problem.

---

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are XLEN
  if elwidth == default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are *32*
  if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of the info register. 00=32, 01=64, 10=128, 11=reserved.
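
A trivial sketch of decoding that proposed 2-bit field (purely
illustrative, since the encoding itself is still a TODO):

    # Proposed encoding: 00=32, 01=64, 10=128, 11=reserved.
    def decode_regfile_width(bits):
        return {0b00: 32, 0b01: 64, 0b10: 128}.get(bits)  # None = reserved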

---

TODO: update to remove the RegCam and PredCam CSRs, and just use the
SVprefix and VBLOCK formats.

---

Could the 8-bit Register VBLOCK format use regnum<<1 instead, only
accessing regs 0 to 64?

---

Expand the range of SUBVL and its associated svsrcoffs and svdestoffs by
adding a 2nd STATE CSR (or extending STATE to 64 bits). Future version?

---

TODO: evaluate strncpy and strlen
<https://groups.google.com/forum/m/#!msg/comp.arch/bGBeaNjAKvc/_vbqyxTUAQAJ>

RVV strncpy version: <a name="strncpy"></a>

    strncpy:
        mv a3, a0               # Copy dst
    loop:
        setvli x0, a2, vint8    # Vectors of bytes.
        vlbff.v v1, (a1)        # Get src bytes
        vseq.vi v0, v1, 0       # Flag zero bytes
        vmfirst a4, v0          # Zero found?
        vmsif.v v0, v0          # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t    # Write out bytes
        bgez a4, exit           # Done
        csrr t1, vl             # Get number of bytes fetched
        add a1, a1, t1          # Bump src pointer
        sub a2, a2, t1          # Decrement count.
        add a3, a3, t1          # Bump dst pointer
        bnez a2, loop           # Anymore?
    exit:
        ret

SV version (WIP):

    strncpy:
        mv a3, a0                 # Copy dst
        SETMVLI 8                 # set max vector to 8
        RegCSR[a3] = 8bit, a3, scalar
        RegCSR[a1] = 8bit, a1, scalar
        RegCSR[t0] = 8bit, t0, vector
        PredTb[t0] = ffirst, x0, inv
    loop:
        SETVLI a2, t4             # t4 and VL now 1..8
        ldb t0, (a1)              # t0 fail first mode
        bne t0, x0, allnonzero    # still ff
        # VL points to last nonzero
        GETVL t4                  # from bne tests
        addi t4, t4, 1            # include zero
        SETVL t4                  # set exactly to t4
        stb t0, (a3)              # store incl zero
        ret                       # end subroutine
    allnonzero:
        stb t0, (a3)              # VL legal range
        GETVL t4                  # from bne tests
        add a1, a1, t4            # Bump src pointer
        sub a2, a2, t4            # Decrement count.
        add a3, a3, t4            # Bump dst pointer
        bnez a2, loop             # Anymore?
    exit:
        ret

Notes:

* Setting MVL to 8 is just an example. If enough registers are spare it
  may be set to XLEN, which will require a bank of 8 scalar registers for
  a1, a3 and t0.
* Obviously if that is done, t0 is no longer separated by 8 full registers,
  and would overwrite t1 through t7. x80, as an example, would work well
  instead.
* With the exception of the GETVL (a pseudocode alias for csrr), every
  single instruction above may use RVC.
* RVC C.BNEZ can be used because rs1' may be extended to the full 128
  registers through redirection.
* RVC C.LW and C.SW may be used because the W format may be overridden by
  the 8-bit format. All of t0, a3 and a1 are overridden to make that work.
* With the exception of the GETVL, all Vector Context may be done in
  VBLOCK form.
* Setting predication to x0 (zero) with invert on t0 is a trick to enable
  just ffirst on t0.
* ldb and bne both use t0, both in ffirst mode.
* ldb will end on illegal mem and reduce VL, but will have copied all
  sorts of stuff into t0 up to that point.
* bne t0 x0 tests up to the NEW VL for nonzero, vector t0 against scalar x0.
* However, as t0 is in ffirst mode, the first failed compare will ALSO stop
  the compares, and reduce VL as well.
* The branch only goes to allnonzero if all tests succeed.
* If it did not, we can safely increment VL by 1 (using t4) to include
  the zero.
* SETVL sets *exactly* the requested amount into VL.
* The stb just after the allnonzero label is needed in case the ldb ffirst
  activates but the bne does not: VL having already been reduced by the
  ldb, the stb copies only up to the end of the legal memory.
* Of course, on the next loop the ldb would throw a trap, as a1 now points
  to the first illegal mem location.

RVV strlen version:

    mv a3, a0                # Save start
    loop:
        setvli a1, x0, vint8 # byte vec, x0 (Zero reg) => use max hardware len
        vldbff.v v1, (a3)    # Get bytes
        csrr a1, vl          # Get bytes actually read, e.g. if fault
        vseq.vi v0, v1, 0    # Set v0[i] where v1[i] = 0
        add a3, a3, a1       # Bump pointer
        vmfirst a2, v0       # Find first set bit in mask, returns -1 if none
        bltz a2, loop        # Not found?
        add a0, a0, a1       # Sum start + bump
        add a3, a3, a2       # Add index of zero byte
        sub a0, a3, a0       # Subtract (start address + bump)
        ret