1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
5 * Last edited: 21 jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, without adding any new opcodes, thus allowing
28 the implementor to focus on adding hardware where it is needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size (by extending the effectiveness of RV opcodes, RVC in particular) and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
50 where the implementor wishes to focus on certain aspects of SV, without
51 pouring unnecessary time and resources into the silicon, whilst also
52 conforming strictly with the API. A good area to punt to software
53 would be the polymorphic element width capability, for example.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
65 * A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths
83 overridden on any src or dest register.
84 * A "Vector Length" CSR is set, indicating the span of any future
85 "parallel" operations.
86 * If any operation (a **scalar** standard RV opcode) uses a register
87 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
88 is activated, of length VL, that effectively issues **multiple**
89 identical instructions using contiguous sequentially-incrementing
90 register numbers, based on the "tags".
91 * **Whether they be executed sequentially or in parallel or a
92 mixture of both or punted to software-emulation in a trap handler
93 is entirely up to the implementor**.
94
95 In this way an entire scalar algorithm may be vectorised with
96 the minimum of modification to the hardware and to compiler toolchains.
97
98 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
99 on hidden context that augments *scalar* RISCV instructions.
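The tagging-plus-loop mechanism can be sketched in a few lines. This is a minimal illustrative model only, not normative behaviour: the register numbers, the `tagged` set and the `execute_add` helper are all invented for the example.

```python
# Sketch: how a single *scalar* ADD is macro-unrolled when any of its
# registers is tagged.  Only tagged registers receive contiguously
# incrementing register numbers; untagged operands stay scalar.
VL = 4
tagged = {3, 7}                    # x3 (dest) and x7 (src) are "vectorised"

def execute_add(regs, rd, rs1, rs2):
    if not ({rd, rs1, rs2} & tagged):
        regs[rd] = regs[rs1] + regs[rs2]   # plain scalar behaviour
        return
    # tagged: issue VL element operations before the PC may move on
    for i in range(VL):
        d  = rd  + i if rd  in tagged else rd
        s1 = rs1 + i if rs1 in tagged else rs1
        s2 = rs2 + i if rs2 in tagged else rs2
        regs[d] = regs[s1] + regs[s2]

regs = list(range(32))             # stand-in register file: x[i] = i
execute_add(regs, 3, 7, 7)         # x3..x6 = (x7..x10) + (x7..x10)
```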
100
101 # CSRs <a name="csrs"></a>
102
103 There is an optional "reshaping" CSR key-value table which remaps from
104 a 1D linear shape to 2D or 3D, including full transposition.
105
106 There are five additional CSRs, available in any privilege level:
107
108 * MVL (the Maximum Vector Length)
109 * VL (which has different characteristics from standard CSRs)
110 * SUBVL (effectively a kind of SIMD)
111 * STATE (containing copies of MVL, VL and SUBVL as well as context information)
112 * PCVBLK (the current operation being executed within a VBLOCK Group)
113
114 For User Mode there are the following CSRs:
115
116 * uePCVBLK (a copy of the sub-execution Program Counter, that is relative
117 to the start of the current VBLOCK Group, set on a trap).
118 * ueSTATE (useful for saving and restoring during context switch,
119 and for providing fast transitions)
120
121 There are also two additional CSRs for Supervisor-Mode:
122
123 * sePCVBLK
124 * seSTATE
125
126 And likewise for M-Mode:
127
128 * mePCVBLK
129 * meSTATE
130
131 The u/m/s CSRs are treated and handled exactly like their (x)epc
132 equivalents. On entry to or exit from a privilege level, the contents of its (x)eSTATE are swapped with STATE.
133
134 Thus for example, a User Mode trap will end up swapping STATE and ueSTATE
135 (on both entry and exit), allowing User Mode traps to have their own
136 Vectorisation Context set up, separated from and unaffected by normal
137 user applications. If an M Mode trap occurs in the middle of the U Mode trap, STATE is swapped with meSTATE, and restored on exit: the U Mode trap continues unaware that the M Mode trap even occurred.
138
139 Likewise, Supervisor Mode may perform context-switches, safe in the
140 knowledge that its Vectorisation State is unaffected by User Mode.
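The swap semantics can be modelled in a few lines, assuming a simple dictionary stands in for the CSR file (all names and values here are invented for illustration):

```python
# Sketch: STATE <-> (x)eSTATE swap on privilege transitions.  Each
# level's trap handler gets its own Vectorisation Context, and nested
# traps restore the outer context on exit.
csrs = {"STATE": "user-ctx", "ueSTATE": "utrap-ctx", "meSTATE": "mtrap-ctx"}

def trap_transition(level):           # entry and exit: identical swap
    key = level + "eSTATE"
    csrs["STATE"], csrs[key] = csrs[key], csrs["STATE"]

trap_transition("u")                  # U-mode trap: STATE <-> ueSTATE
trap_transition("m")                  # nested M-mode trap
trap_transition("m")                  # M-mode exit: swap restored
assert csrs["STATE"] == "utrap-ctx"   # U-mode trap context is intact
trap_transition("u")                  # U-mode trap exit
```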
141
142 The access pattern for these groups of CSRs in each mode follows the
143 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
144
145 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
146 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
147 identical
148 to changing the S-Mode CSRs. Accessing and changing the U-Mode
149 CSRs is permitted.
150 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
151 is prohibited.
152
153 An interesting side effect of SV STATE being
154 separate and distinct in S Mode
155 is that
156 Vectorised saving of an entire register file to the stack is a single
157 instruction (through accidental provision of LOAD-MULTI semantics). If the
158 SVPrefix P64-LD-type format is used, LOAD-MULTI may even be done with a
159 single standalone 64 bit opcode (P64 may set up SUBVL, VL and MVL from an
160 immediate field, to cover the full regfile). It can even be predicated, which opens up some very
161 interesting possibilities.
162
163 (x)EPCVBLK CSRs must be treated exactly like their corresponding (x)epc
164 equivalents. See VBLOCK section for details.
165
166 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
167
168 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
169 is variable length and may be dynamically set. MVL is
170 however limited to the regfile bitwidth XLEN (1-32 for RV32,
171 1-64 for RV64 and so on).
172
173 The reason for setting this limit is so that predication registers, when
174 marked as such, may fit into a single register as opposed to fanning
175 out over several registers. This keeps the hardware implementation a
176 little simpler.
177
178 The other important factor to note is that the actual MVL is internally
179 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
180 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
181 raise an exception. This is expressed more clearly in the "pseudocode"
182 section, where there are subtle differences between CSRRW and CSRRWI.
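The offset-by-one storage can be illustrated as follows (a sketch: `encode_mvl` and `decode_mvl` are invented helper names, not part of the specification):

```python
XLEN = 64                            # RV64: MVL may be 1..64

def encode_mvl(mvl):
    assert 1 <= mvl <= XLEN          # MVL=0 must raise an exception
    return mvl - 1                   # stored offset-by-one: fits 6 bits

def decode_mvl(stored):
    return stored + 1

# MVL=64 stores as 0b111111: the full range 1..XLEN fits in 6 bits
```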
183
184 ## Vector Length (VL) <a name="vl" />
185
186 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
187 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
188
189 VL = rd = MIN(vlen, MVL)
190
191 where 1 <= MVL <= XLEN
192
193 However just like MVL it is important to note that the range for VL has
194 subtle design implications, covered in the "CSR pseudocode" section
195
196 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
197 to switch the entire bank of registers using a single instruction (see
198 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
199 is down to the fact that predication bits fit into a single register of
200 length XLEN bits.
201
202 The second and most important change is that, within the limits set by
203 MVL, the value passed in **must** be set in VL (and in the
204 destination register).
205
206 This has implications for the microarchitecture, as VL is required to be
207 set (limits from MVL notwithstanding) to the actual value
208 requested. RVV has the option to set VL to an arbitrary value that suits
209 the conditions and the micro-architecture: SV does *not* permit this.
210
211 The reason is so that if SV is to be used for a context-switch or as a
212 substitute for LOAD/STORE-Multiple, the operation can be done with only
213 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
214 single LD/ST operation). If VL does *not* get set to the register file
215 length when VSETVL is called, then a software-loop would be needed.
216 To avoid this need, VL *must* be set to exactly what is requested
217 (limits notwithstanding).
218
219 Therefore, in turn, unlike RVV, implementors *must* provide
220 pseudo-parallelism (using sequential loops in hardware) if actual
221 hardware-parallelism in the ALUs is not deployed. A hybrid is also
222 permitted (as used in Broadcom's VideoCore-IV) however this must be
223 *entirely* transparent to the ISA.
224
225 The third change is that VSETVL is implemented as a CSR, where the
226 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
227 the *new* value in the destination register, **not** the old value.
228 Where context-load/save is to be implemented in the usual fashion
229 by using a single CSRRW instruction to obtain the old value, the
230 *secondary* CSR must be used (STATE). This CSR by contrast behaves
231 exactly as standard CSRs, and contains more than just VL.
232
233 One interesting side-effect of using CSRRWI to set VL is that this
234 may be done with a single instruction, useful particularly for a
235 context-load/save. There are however limitations: CSRRWI's immediate
236 is limited to 0-31 (representing VL=1-32).
237
238 Note that when VL is set to 1, vector operations cease (but not subvector
239 operations: that requires setting SUBVL=1): the hardware loop is reduced
240 to a single element: scalar operations. This is in effect the default,
241 normal operating mode. However it is important to appreciate that this
242 does **not** result in the Register table or SUBVL being disabled. Only
243 when the Register table is empty (P48/64 prefix fields notwithstanding)
244 would SV have no effect.
245
246 ## SUBVL - Sub Vector Length
247
248 This is a "group by quantity" that effectively asks each iteration
249 of the hardware loop to load SUBVL elements of width elwidth at a
250 time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1
251 operation issued, SUBVL operations are issued.
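The SIMD-multiplier view amounts to an inner loop over sub-elements. This is a rough illustrative sketch with invented names (the VL and SUBVL values are arbitrary), not the normative pseudocode given later:

```python
VL, SUBVL = 2, 3                      # e.g. two 3D (X, Y, Z) groups

def vector_op(dst, src, op):
    for i in range(VL):               # outer loop: VL element groups
        for j in range(SUBVL):        # inner loop: SUBVL sub-elements
            idx = i * SUBVL + j
            dst[idx] = op(src[idx])

src = [1, 2, 3, 10, 20, 30]           # two packed (X, Y, Z) triples
dst = [0] * (VL * SUBVL)
vector_op(dst, src, lambda x: x * 2)  # one "instruction", 6 sub-ops
```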
252
253 Another way to view SUBVL is that each element in the VL length vector is
254 now SUBVL times elwidth bits in length and now comprises SUBVL discrete
255 sub operations. An inner SUBVL for-loop within a VL for-loop in effect,
256 with the sub-element increased every time in the innermost loop. This
257 is best illustrated in the (simplified) pseudocode example, later.
258
259 The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D
260 coordinates X,Y,Z for example may be loaded, multiplied, then stored, per
261 VL element iteration, rather than having to set VL to three times larger.
262
263 Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit
264 values 0b00 thru 0b11 to represent them).
265
266 Setting this CSR to 0 must raise an exception. Setting it to a value
267 greater than 4 likewise.
268
269 The main effect of SUBVL is that predication bits are applied per
270 **group**, rather than by individual element.
271
272 This saves a not insignificant number of instructions when handling 3D
273 vectors, as otherwise a much longer predicate mask would have to be set
274 up with regularly-repeated bit patterns.
275
276 See SUBVL Pseudocode illustration for details.
277
278 ## STATE
279
280 This is a standard CSR that contains sufficient information for a
281 full context save/restore. It contains (and permits setting of):
282
283 * MVL
284 * VL
285 * destoffs - the destination element offset of the current parallel
286 instruction being executed
287 * srcoffs - for twin-predication, the source element offset as well.
288 * SUBVL
289 * svdestoffs - the subvector destination element offset of the current
290 parallel instruction being executed
291 * svsrcoffs - for twin-predication, the subvector source element offset
292 as well.
293
294 Interestingly STATE may hypothetically also be modified to make the
295 immediately-following instruction skip a certain number of elements,
296 by playing with destoffs and srcoffs (and the subvector offsets as well).
297
298 Setting destoffs and srcoffs is realistically intended for saving state
299 so that exceptions (page faults in particular) may be serviced and the
300 hardware-loop that was being executed at the time of the trap, from
301 user-mode (or Supervisor-mode), may be returned to and continued from
302 exactly where it left off. The reason why this works is that
303 User-Mode STATE is neither changed nor used in M-Mode or S-Mode (and
304 is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE
305 and seSTATE).
306
307 The format of the STATE CSR is as follows:
308
309 | (29..28) | (27..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5..0) |
310 | ------- | -------- | -------- | -------- | -------- | ------- | ------- |
311 | dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
312
313 When setting this CSR, the following characteristics will be enforced:
314
315 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
316 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
317 * **SUBVL**, which sets a SIMD-like quantity, has only 4 values, so
318 no truncation is needed
319 * **srcoffs** will be truncated to be within the range 0 to VL-1
320 * **destoffs** will be truncated to be within the range 0 to VL-1
321 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
322 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
323
324 NOTE: if the following instruction is not a twin predicated instruction,
325 and destoffs or dsvoffs has been set to non-zero, subsequent execution
326 behaviour is undefined. **USE WITH CARE**.
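The bit layout and minus-one encodings above can be sketched as a pair of pack/unpack helpers. This is an illustrative model of the table (invented function names), not normative CSR behaviour:

```python
# VL, MVL and SUBVL are stored minus one; the offsets are stored as-is.
def pack_state(mvl, vl, srcoffs, destoffs, subvl, ssvoffs, dsvoffs):
    return ((mvl - 1)
            | (vl - 1)    << 6
            | srcoffs     << 12
            | destoffs    << 18
            | (subvl - 1) << 24
            | ssvoffs     << 26
            | dsvoffs     << 28)

def unpack_state(s):
    return dict(mvl=(s & 0x3f) + 1,
                vl=((s >> 6) & 0x3f) + 1,
                srcoffs=(s >> 12) & 0x3f,
                destoffs=(s >> 18) & 0x3f,
                subvl=((s >> 24) & 0x3) + 1,
                ssvoffs=(s >> 26) & 0x3,
                dsvoffs=(s >> 28) & 0x3)

u = unpack_state(pack_state(64, 4, 2, 3, 3, 1, 0))
```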
327
328 ### Hardware rules for when to increment STATE offsets
329
330 The offsets inside STATE are like the indices in a loop, except
331 in hardware. They are also partially (conceptually) similar to a
332 "sub-execution Program Counter". As such, and to allow proper context
333 switching and to define correct exception behaviour, the following rules
334 must be observed:
335
336 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
337 * Each instruction that contains a "tagged" register shall start
338 execution at the *current* value of srcoffs (and destoffs in the case
339 of twin predication)
340 * Unpredicated bits (in nonzeroing mode) shall cause the element operation
341 to skip, incrementing the srcoffs (or destoffs)
342 * On execution of an element operation, Exceptions shall **NOT** cause
343 srcoffs or destoffs to increment.
344 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs =
345 VL-1 after the last element is executed), both srcoffs and destoffs
346 shall be reset to zero.
347
348 This latter is why srcoffs and destoffs may be stored as values from
349 0 to XLEN-1 in the STATE CSR, because as loop indices they refer to
350 elements. srcoffs and destoffs never need to be set to VL: their maximum
351 operating values are limited to 0 to VL-1.
352
353 The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
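The resume-after-trap behaviour that these rules enable can be modelled as follows. A minimal sketch with invented names: `MemoryError` stands in for a page fault, and the "trap handler" is just an except clause:

```python
class State:
    def __init__(self, vl):
        self.vl, self.srcoffs = vl, 0

def run_element_loop(state, element_op):
    i = state.srcoffs                 # start at the *current* offset
    while i < state.vl:
        try:
            element_op(i)
        except MemoryError:           # trap: offset is NOT incremented
            state.srcoffs = i         # progress saved for the re-entry
            raise
        i += 1
    state.srcoffs = 0                 # full Vector Loop done: reset

done, faulted = [], []
def op(i):
    if i == 2 and not faulted:        # fault once, on element 2
        faulted.append(i)
        raise MemoryError
    done.append(i)

st = State(vl=4)
try:
    run_element_loop(st, op)          # runs elements 0, 1; traps on 2
except MemoryError:
    pass                              # ...service the page fault here...
run_element_loop(st, op)              # re-entry resumes at element 2
```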
354
355 ## MVL and VL Pseudocode
356
357 The pseudo-code for get and set of VL and MVL uses the following
358 internal functions:
359
360 set_mvl_csr(value, rd):
361 regs[rd] = STATE.MVL
362 STATE.MVL = MIN(value, XLEN)
363
364 get_mvl_csr(rd):
365 regs[rd] = STATE.MVL
366
367 set_vl_csr(value, rd):
368 STATE.VL = MIN(value, STATE.MVL)
369 regs[rd] = STATE.VL # yes returning the new value NOT the old CSR
370 return STATE.VL
371
372 get_vl_csr(rd):
373 regs[rd] = STATE.VL
374 return STATE.VL
375
376 Note that where setting MVL behaves as a normal CSR (returns the old
377 value), unlike standard CSR behaviour, setting VL will return the **new**
378 value of VL **not** the old one.
379
380 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
381 maximise the effectiveness, an immediate of 0 is used to set VL=1,
382 an immediate of 1 is used to set VL=2 and so on:
383
384 CSRRWI_Set_MVL(value):
385 set_mvl_csr(value+1, x0)
386
387 CSRRWI_Set_VL(value):
388 set_vl_csr(value+1, x0)
389
390 However for CSRRW the following pseudocode is used for MVL and VL,
391 where setting the value to zero will cause an exception to be raised.
392 The reason is that if VL or MVL are set to zero, the STATE CSR is
393 not capable of storing that value.
394
395 CSRRW_Set_MVL(rs1, rd):
396 value = regs[rs1]
397 if value == 0 or value > XLEN:
398 raise Exception
399 set_mvl_csr(value, rd)
400
401 CSRRW_Set_VL(rs1, rd):
402 value = regs[rs1]
403 if value == 0 or value > XLEN:
404 raise Exception
405 set_vl_csr(value, rd)
406
407 In this way, when CSRRW is utilised with a loop variable, the value
408 that goes into VL (and into the destination register) may be used
409 in an instruction-minimal fashion:
410
411 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
412 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
413 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
414 j zerotest # in case loop counter a0 already 0
415 loop:
416 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
417 ld a3, a1 # load 4 registers a3-6 from x
418 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
419 ld a7, a2 # load 4 registers a7-10 from y
420 add a1, a1, t1 # increment pointer to x by vl*8
421 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
422 sub a0, a0, t0 # n -= vl (t0)
423 st a7, a2 # store 4 registers a7-10 to y
424 add a2, a2, t1 # increment pointer to y by vl*8
425 zerotest:
426 bnez a0, loop # repeat if n != 0
427
428 With the STATE CSR, just like with CSRRWI, in order to maximise the
429 utilisation of the limited bitspace, "000000" in binary represents
430 VL==1, "000001" represents VL==2 and so on (likewise for MVL):
431
432 CSRRW_Set_SV_STATE(rs1, rd):
433 value = regs[rs1]
434 get_state_csr(rd) # rd receives the *old* STATE
435 set_mvl_csr(value[5:0]+1, x0)
436 set_vl_csr(value[11:6]+1, x0)
437 STATE.srcoffs = value[17:12]
438 STATE.destoffs = value[23:18]
439
440 get_state_csr(rd):
441 regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
442 (STATE.destoffs)<<18
443 return regs[rd]
444
445 In both cases, whilst CSR read of VL and MVL return the exact values
446 of VL and MVL respectively, reading and writing the STATE CSR returns
447 those values **minus one**. This is absolutely critical to implement
448 if the STATE CSR is to be used for fast context-switching.
449
450 ## VL, MVL and SUBVL instruction aliases
451
452 This table contains pseudo-assembly instruction aliases. Note the
453 subtraction of 1 from the CSRRWI pseudo variants, to compensate for the
454 reduced range of the 5 bit immediate.
455
456 | alias | CSR |
457 | - | - |
458 | SETVL rd, rs | CSRRW VL, rd, rs |
459 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
460 | GETVL rd | CSRRW VL, rd, x0 |
461 | SETMVL rd, rs | CSRRW MVL, rd, rs |
462 | SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
463 | GETMVL rd | CSRRW MVL, rd, x0 |
464
465 Note: CSRRC and other bit-setting operations may still be used; they are however not particularly useful (very obscure).
466
467 ## Register key-value (CAM) table <a name="regcsrtable" />
468
469 *NOTE: in prior versions of SV, this table used to be writable and
470 accessible via CSRs. It is now stored in the VBLOCK instruction format. Note
471 that this table does *not* get applied to the SVPrefix P48/64 format,
472 only to scalar opcodes*
473
474 The purpose of the Register table is three-fold:
475
476 * To mark integer and floating-point registers as requiring "redirection"
477 if it is ever used as a source or destination in any given operation.
478 This involves a level of indirection through a 5-to-7-bit lookup table,
479 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
480 access up to **128** registers.
481 * To indicate whether, after redirection through the lookup table, the
482 register is a vector (or remains a scalar).
483 * To over-ride the implicit or explicit bitwidth that the operation would
484 normally give the register.
485
486 Note: clearly, if an RVC operation uses a 3 bit spec'd register (x8-x15)
487 and the Register table contains entries that only refer to registers
488 x1-x7 or x16-x31, such operations will *never* activate the VL hardware
489 loop!
490
491 If however the (16 bit) Register table does contain such an entry (x8-x15
492 or x2 in the case of LWSP), that src or dest reg may be redirected
493 anywhere to the *full* 128 register range. Thus, RVC becomes far more
494 powerful and has many more opportunities to reduce code size than in
495 Standard RV32/RV64 executables.
496
497 16 bit format:
498
499 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
500 | ------ | | - | - | - | ------ | ------- |
501 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
502 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
503 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
504 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
505
506 8 bit format:
507
508 | RegCAM | | 7 | (6..5) | (4..0) |
509 | ------ | | - | ------ | ------- |
510 | 0 | | i/f | vew0 | regnum |
511
512 i/f is set to "1" to indicate that the redirection/tag entry is to
513 be applied to integer registers; 0 indicates that it is relevant to
514 floating-point
515 registers.
516
517 The 8 bit format is used for a much more compact expression. "isvec"
518 is implicit and, similar to [[sv-prefix-proposal]], the target vector
519 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
520 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
521 optionally set "scalar" mode.
522
523 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
524 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
525 get the actual (7 bit) register number to use, there is not enough space
526 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
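The difference in target-register computation between the two formats can be sketched as follows (invented helper names, for illustration only):

```python
def target_reg_16bit(regidx):
    # 16-bit format: the 7-bit target is named explicitly (bits 8..14)
    return regidx

def target_reg_8bit(regnum):
    # 8-bit format: only 5 bits available, so the target vector is
    # implicitly regnum<<2 within the full 128-register file
    return regnum << 2

top = target_reg_8bit(0b11111)       # reaches the top of the 128-reg range
```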
527
528 vew has the following meanings, indicating that the instruction's
529 operand size is "over-ridden" in a polymorphic fashion:
530
531 | vew | bitwidth |
532 | --- | ------------------- |
533 | 00 | default (XLEN/FLEN) |
534 | 01 | 8 bit |
535 | 10 | 16 bit |
536 | 11 | 32 bit |
537
538 As the above table is a CAM (key-value store) it may be appropriate
539 (faster, implementation-wise) to expand it as follows:
540
541 struct vectorised fp_vec[32], int_vec[32];
542
543 for (i = 0; i < len; i++) // from VBLOCK Format
544 tb = int_vec if CSRvec[i].type == 0 else fp_vec
545 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
546 tb[idx].elwidth = CSRvec[i].elwidth
547 tb[idx].regidx = CSRvec[i].regidx // indirection
548 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
549
550 ## Predication Table <a name="predication_csr_table"></a>
551
552 *NOTE: in prior versions of SV, this table used to be writable and
553 accessible via CSRs. It is now stored in the VBLOCK instruction format.
554 The table does **not** apply to SVPrefix opcodes*
555
556 The Predication Table is a key-value store indicating whether, if a
557 given destination register (integer or floating-point) is referred to
558 in an instruction, it is to be predicated. Like the Register table, it
559 is an indirect lookup that allows the RV opcodes to not need modification.
560
561 It is particularly important to note
562 that the *actual* register used can be *different* from the one that is
563 in the instruction, due to the redirection through the lookup table.
564
565 * regidx is the register which, in combination with the i/f flag,
566 causes the lookup table to be referenced when that integer or
567 floating-point register is referred to in a (standard RV) instruction,
568 to find the predication mask to use for this operation.
569 * predidx is the *actual* (full, 7 bit) register to be used for the
570 predication mask.
571 * inv indicates that the predication mask bits are to be inverted
572 prior to use *without* actually modifying the contents of the
573 register from which those bits originated.
574 * zeroing is either 1 or 0, and if set to 1, the operation must
575 place zeros in any element position where the predication mask is
576 set to zero. If zeroing is set to 0, unpredicated elements *must*
577 be left alone. Some microarchitectures may choose to interpret
578 this as skipping the operation entirely. Others which wish to
579 stick more closely to a SIMD architecture may choose instead to
580 interpret unpredicated elements as an internal "copy element"
581 operation (which would be necessary in SIMD microarchitectures
582 that perform register-renaming)
583 * ffirst is a special mode that stops sequential element processing when
584 a data-dependent condition occurs, whether a trap or a conditional test.
585 The handling of each (trap or conditional test) is slightly different:
586 see Instruction sections for further details
587
588 16 bit format:
589
590 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
591 | ----- | - | - | - | - | ------- | ------- |
592 | 0 | predidx | zero0 | inv0 | i/f | regidx | ffirst0 |
593 | 1 | predidx | zero1 | inv1 | i/f | regidx | ffirst1 |
594 | 2 | predidx | zero2 | inv2 | i/f | regidx | ffirst2 |
595 | 3 | predidx | zero3 | inv3 | i/f | regidx | ffirst3 |
596
597 Note: predidx=x0, zero=1, inv=1 is a RESERVED encoding. Its use must
598 generate an illegal instruction trap.
599
600 8 bit format:
601
602 | PrCSR | 7 | 6 | 5 | (4..0) |
603 | ----- | - | - | - | ------- |
604 | 0 | zero0 | inv0 | i/f | regnum |
605
606 The 8 bit format is a compact and less expressive variant of the full
607 16 bit format. Using the 8 bit format is very different: the predicate
608 register to use is implicit, and numbering begins implicitly from x9. The
609 regnum is still used to "activate" predication, in the same fashion as
610 described above.
611
612 Thus if we map from 8 to 16 bit format, the table becomes:
613
614 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
615 | ----- | - | - | - | - | ------- | ------- |
616 | 0 | x9 | zero0 | inv0 | i/f | regnum | ff=0 |
617 | 1 | x10 | zero1 | inv1 | i/f | regnum | ff=0 |
618 | 2 | x11 | zero2 | inv2 | i/f | regnum | ff=0 |
619 | 3 | x12 | zero3 | inv3 | i/f | regnum | ff=0 |
620
621 The 16 bit Predication CSR Table is a key-value store, so
622 implementation-wise it will be faster to turn the table around (maintain
623 topologically equivalent state):
624
625 struct pred {
626 bool zero; // zeroing
627 bool inv; // register at predidx is inverted
628 bool ffirst; // fail-on-first
629 bool enabled; // use this to tell if the table-entry is active
630 int predidx; // redirection: actual int register to use
631 }
632
633 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
634 struct pred int_pred_reg[32]; // 64 in future (bank=1)
635
636 for (i = 0; i < len; i++) // number of Predication entries in VBLOCK
637 tb = int_pred_reg if PredicateTable[i].type == 0 else fp_pred_reg;
638 idx = PredicateTable[i].regidx
639 tb[idx].zero = CSRpred[i].zero
640 tb[idx].inv = CSRpred[i].inv
641 tb[idx].ffirst = CSRpred[i].ffirst
642 tb[idx].predidx = CSRpred[i].predidx
643 tb[idx].enabled = true
644
645 So when an operation is to be predicated, it is the internal state that
646 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
647 pseudo-code for operations is given, where p is the explicit (direct)
648 reference to the predication register to be used:
649
650 for (int i=0; i<vl; ++i)
651 if ([!]preg[p][i])
652 (d ? vreg[rd][i] : sreg[rd]) =
653 iop(s1 ? vreg[rs1][i] : sreg[rs1],
654 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
655
656 This instead becomes an *indirect* reference using the *internal* state
657 table generated from the Predication CSR key-value store, which is used
658 as follows.
659
660 if type(iop) == INT:
661 preg = int_pred_reg[rd]
662 else:
663 preg = fp_pred_reg[rd]
664
665 for (int i=0; i<vl; ++i)
666 predicate, zeroing = get_pred_val(type(iop) == INT, rd)
667 if (predicate & (1<<i))
668 result = iop(s1 ? regfile[rs1+i] : regfile[rs1],
669 s2 ? regfile[rs2+i] : regfile[rs2]);
670 (d ? regfile[rd+i] : regfile[rd]) = result
671 if preg.ffirst and result == 0:
672 VL = i # result was zero, end loop early, return VL
673 return
674 else if (zeroing)
675 (d ? regfile[rd+i] : regfile[rd]) = 0
676
677 Note:
678
679 * d, s1 and s2 are booleans indicating whether destination,
680 source1 and source2 are vector or scalar
681 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
682 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
683 register-level redirection (from the Register table) if they are
684 vectors.
685 * fail-on-first mode stops execution early whenever an operation
686 returns a zero value. floating-point results count both
687 positive-zero as well as negative-zero as "fail".
688
689 If written as a function, obtaining the predication mask (and whether
690 zeroing takes place) may be done as follows:
691
692 def get_pred_val(bool is_int_op, int reg):
693 tb = int_reg if is_int_op else fp_reg
694 if (!tb[reg].enabled):
695 return ~0x0, False // all enabled; no zeroing
696 tb = int_pred if is_int_op else fp_pred
697 if (!tb[reg].enabled):
698 return ~0x0, False // all enabled; no zeroing
699 predidx = tb[reg].predidx // redirection occurs HERE
700 predicate = intreg[predidx] // actual predicate HERE
701 if (tb[reg].inv):
702 predicate = ~predicate // invert ALL bits
703 return predicate, tb[reg].zero
704
705 Note here, critically, that **only** if the register is marked
706 in its **register** table entry as being "active" does the testing
707 proceed further to check if the **predicate** table entry is
708 also active.
709
Note also that this is in direct contrast to branch operations
for the storage of comparisons: in those specific circumstances
the requirement for there to be an active *register* entry
is removed.
714
715 ## Fail-on-First Mode <a name="ffirst-mode"></a>
716
717 ffirst is a special data-dependent predicate mode. There are two
718 variants: one is for faults: typically for LOAD/STORE operations,
719 which may encounter end of page faults during a series of operations.
720 The other variant is comparisons such as FEQ (or the augmented behaviour
721 of Branch), and any operation that returns a result of zero (whether
722 integer or floating-point). In the FP case, this includes negative-zero.
723
724 Note that the execution order must "appear" to be sequential for ffirst
725 mode to work correctly. An in-order architecture must execute the element
726 operations in sequence, whilst an out-of-order architecture must *commit*
727 the element operations in sequence (giving the appearance of in-order
728 execution).
729
730 Note also, that if ffirst mode is needed without predication, a special
731 "always-on" Predicate Table Entry may be constructed by setting
732 inverse-on and using x0 as the predicate register. This
733 will have the effect of creating a mask of all ones, allowing ffirst
734 to be set.
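The "always-on" trick can be confirmed in a few lines of Python: x0 always
reads as zero, so inverting it yields an all-ones mask. This is an
illustrative sketch only; the function name and the choice of a 64-bit
register width are assumptions, not part of the specification:

```python
XLEN = 64  # assumed register width, for illustration only
MASK = (1 << XLEN) - 1

def always_on_predicate():
    x0 = 0                 # x0 is hard-wired to zero in RV
    inv = True             # Predicate Table entry has inverse-on set
    predicate = x0
    if inv:
        predicate = ~predicate & MASK  # invert ALL bits: all ones
    return predicate

# every element is enabled, so the ffirst logic is reached for all of them
assert always_on_predicate() == MASK
```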
735
736 ### Fail-on-first traps
737
Except for the first element, ffault stops sequential element processing
when a trap occurs. The first element is treated normally (as if ffirst
were clear). Should any subsequent element require a trap,
it and all subsequent elements are instead ignored (or cancelled in
out-of-order designs), and VL is set to the number of elements that
were processed before the trapping element.
744
Note that predicated-out elements (where the predicate mask bit is zero)
are clearly excluded (i.e. the trap will not occur). However, note that
the loop still had to test the predicate bit: thus on return,
VL is set to include both the elements that did not take the trap *and*
the elements that were predicated (masked) out, up to the
point where the trap occurred.
751
If SUBVL is in use (SUBVL!=1), a trap in the first *sub-group* of
elements is taken as normal (as if ffirst were not set); in subsequent
*sub-groups* the trap must not be taken: VL is truncated instead.
SUBVL will **NOT** be modified.
756
757 Given that predication bits apply to SUBVL groups, the same rules apply
758 to predicated-out (masked-out) sub-groups in calculating the value that VL
759 is set to.
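As a sketch of the VL calculation only (not of real trap delivery), the
rules above can be modelled in a few lines. The function name, and the use
of a Python exception to stand in for a hardware trap, are illustrative
assumptions:

```python
def ffirst_trap_vl(vl, predicate, traps):
    """Sketch of the ffirst-trap VL calculation. `traps` is the set of
    element indices whose operation would trap; returns the new VL."""
    first_active_seen = False
    for i in range(vl):
        if predicate & (1 << i):
            if i in traps:
                if not first_active_seen:
                    # first (active) element: trap is taken as normal
                    raise MemoryError("trap taken on first element")
                return i    # count includes earlier masked-out elements
            first_active_seen = True
        # predicated-out elements are not tested, but are still counted
    return vl

# trap at element 4: elements 0-3 survive, including masked-out 1 and 2
assert ffirst_trap_vl(8, 0b11111001, {4}) == 4
```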
760
761 ### Fail-on-first conditional tests
762
763 ffault stops sequential element conditional testing on the first element result
764 being zero. VL is set to the number of elements that were processed before
765 the fail-condition was encountered.
766
Note that just as with traps, if SUBVL!=1, a fail on any element of a
*sub-group* will cause the processing to end, and, even if there were
elements within the *sub-group* that passed the test, that sub-group is
still (entirely) excluded from the count (from setting VL). i.e. VL is
set to the total number of *sub-groups* that had no fail-condition up
until execution was stopped.
773
774 Note again that, just as with traps, predicated-out (masked-out) elements
775 are included in the count leading up to the fail-condition, even though they
776 were not tested.
777
778 The pseudo-code for Predication makes this clearer and simpler than it is
779 in words (the loop ends, VL is set to the current element index, "i").
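The sub-group exclusion rule can indeed be modelled directly. The sketch
below is illustrative only (the function name and calling convention are
not part of the specification):

```python
def ffirst_subvl(vl, subvl, predicate, results):
    """Sketch of fail-on-first conditional testing with SUBVL groups.
    results[i * subvl + s] is the result for sub-element s of group i;
    predicate bits index whole groups.  Returns the new VL (in groups)."""
    for i in range(vl):
        if not (predicate & (1 << i)):
            continue          # masked-out groups are counted, not tested
        for s in range(subvl):
            if results[i * subvl + s] == 0:
                return i      # the whole group is excluded from the count
    return vl

# group 1 contains a zero in its second sub-element: VL becomes 1,
# even though the group's first sub-element passed the test
assert ffirst_subvl(3, 2, 0b111, [5, 6, 7, 0, 8, 9]) == 1
```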
780
781 ## REMAP CSR <a name="remap" />
782
783 (Note: both the REMAP and SHAPE sections are best read after the
784 rest of the document has been read)
785
786 There is one 32-bit CSR which may be used to indicate which registers,
787 if used in any operation, must be "reshaped" (re-mapped) from a linear
788 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
789 access to elements within a register.
790
791 The 32-bit REMAP CSR may reshape up to 3 registers:
792
793 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
794 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
795 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
796
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits wide.
A value of zero would refer to x0, and reshaping x0 is clearly pointless,
so zero is used to indicate "disabled".
shape0-2 each refer to one of three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
803
It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
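The field packing follows directly from the bit layout above. The helper
names below are illustrative, not part of the specification:

```python
def encode_remap(regidx0, regidx1, regidx2, shape0, shape1, shape2):
    """Pack the REMAP CSR fields per the bit layout above (a sketch)."""
    assert all(0 <= r < 128 for r in (regidx0, regidx1, regidx2))
    assert all(0 <= s <= 2 for s in (shape0, shape1, shape2)), "0b11 reserved"
    return (regidx0            # bits 6..0
            | (regidx1 << 8)   # bits 14..8  (bit 7 reserved, zero)
            | (regidx2 << 16)  # bits 22..16 (bit 15 reserved, zero)
            | (shape0 << 24)   # bits 25..24 (bit 23 reserved, zero)
            | (shape1 << 26)   # bits 27..26
            | (shape2 << 28))  # bits 29..28 (bits 30-31 reserved)

def decode_remap(csr):
    return {
        "regidx0": csr & 0x7f,
        "regidx1": (csr >> 8) & 0x7f,
        "regidx2": (csr >> 16) & 0x7f,
        "shape0": (csr >> 24) & 0x3,
        "shape1": (csr >> 26) & 0x3,
        "shape2": (csr >> 28) & 0x3,
    }

csr = encode_remap(regidx0=5, regidx1=0, regidx2=33,
                   shape0=1, shape1=0, shape2=2)
assert decode_remap(csr)["regidx2"] == 33
assert decode_remap(csr)["shape2"] == 2
```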
808
809 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
810
811 (Note: both the REMAP and SHAPE sections are best read after the
812 rest of the document has been read)
813
814 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
815 which have the same format. When each SHAPE CSR is set entirely to zeros,
816 remapping is disabled: the register's elements are a linear (1D) vector.
817
818 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
819 | ------- | -- | ------- | -- | ------- | -- | ------- |
820 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
821
822 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
823 is added to the element index during the loop calculation.
824
825 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
826 that the array dimensionality for that dimension is 1. A value of xdimsz=2
827 would indicate that in the first dimension there are 3 elements in the
828 array. The format of the array is therefore as follows:
829
830 array[xdim+1][ydim+1][zdim+1]
831
832 However whilst illustrative of the dimensionality, that does not take the
833 "permute" setting into account. "permute" may be any one of six values
834 (0-5, with values of 6 and 7 being reserved, and not legal). The table
835 below shows how the permutation dimensionality order works:
836
837 | permute | order | array format |
838 | ------- | ----- | ------------------------ |
839 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
840 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
841 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
842 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
843 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
844 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
845
846 In other words, the "permute" option changes the order in which
847 nested for-loops over the array would be done. The algorithm below
848 shows this more clearly, and may be executed as a python program:
849
    # mapidx = REMAP.shape2
    xdim = 3        # SHAPE[mapidx].xdim_sz+1
    ydim = 4        # SHAPE[mapidx].ydim_sz+1
    zdim = 5        # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]  # starting indices
    order = [1, 0, 2] # experiment with different permutations, here
    offs = 0          # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            print()            # dimension wrapped: start a new output row
            idxs[order[i]] = 0
869
Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; where REMAP indicates to do so, the element index is
instead run through the above algorithm to work out the **actual** element
index. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:
878
    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
891
892 By changing remappings, 2D matrices may be transposed "in-place" for one
893 operation, followed by setting a different permutation order without
894 having to move the values in the registers to or from memory. Also,
895 the reason for having REMAP separate from the three SHAPE CSRs is so
896 that in a chain of matrix multiplications and additions, for example,
897 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
898 changed to target different registers.
899
900 Note that:
901
902 * Over-running the register file clearly has to be detected and
903 an illegal instruction exception thrown
904 * When non-default elwidths are set, the exact same algorithm still
905 applies (i.e. it offsets elements *within* registers rather than
906 entire registers).
907 * If permute option 000 is utilised, the actual order of the
908 reindexing does not change!
909 * If two or more dimensions are set to zero, the actual order does not change!
910 * The above algorithm is pseudo-code **only**. Actual implementations
911 will need to take into account the fact that the element for-looping
912 must be **re-entrant**, due to the possibility of exceptions occurring.
913 See MSTATE CSR, which records the current element index.
914 * Twin-predicated operations require **two** separate and distinct
915 element offsets. The above pseudo-code algorithm will be applied
916 separately and independently to each, should each of the two
917 operands be remapped. *This even includes C.LDSP* and other operations
918 in that category, where in that case it will be the **offset** that is
919 remapped (see Compressed Stack LOAD/STORE section).
920 * Offset is especially useful, on its own, for accessing elements
921 within the middle of a register. Without offsets, it is necessary
922 to either use a predicated MV, skipping the first elements, or
923 performing a LOAD/STORE cycle to memory.
924 With offsets, the data does not have to be moved.
925 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
926 less than MVL is **perfectly legal**, albeit very obscure. It permits
927 entries to be regularly presented to operands **more than once**, thus
928 allowing the same underlying registers to act as an accumulator of
929 multiple vector or matrix operations, for example.
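Two of the points above, in-place transposition and accumulator-style
reuse, can be seen directly by running the index algorithm with different
parameters. This is an illustrative sketch (the function name and calling
convention are not part of the specification):

```python
def remap_indices(vl, xdim, ydim=1, zdim=1, order=(0, 1, 2), offs=0):
    """Element indices produced by a hardware loop of length vl, using
    the same carry-propagation algorithm as the python program above."""
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(vl):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

# transpose: a 2x3 row-major matrix (xdim=3, ydim=2) walked with the
# y dimension fastest comes back column-major, i.e. transposed
assert remap_indices(6, 3, 2, order=(1, 0, 2)) == [0, 3, 1, 4, 2, 5]

# accumulator reuse: the dimensions cover only 3 elements but VL=6, so
# the same three registers are presented to the operation twice
assert remap_indices(6, 3) == [0, 1, 2, 0, 1, 2]
```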
930
931 Clearly here some considerable care needs to be taken as the remapping
932 could hypothetically create arithmetic operations that target the
933 exact same underlying registers, resulting in data corruption due to
934 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
935 register-renaming will have an easier time dealing with this than
936 DSP-style SIMD micro-architectures.
937
938 # Instruction Execution Order
939
940 Simple-V behaves as if it is a hardware-level "macro expansion system",
941 substituting and expanding a single instruction into multiple sequential
942 instructions with contiguous and sequentially-incrementing registers.
943 As such, it does **not** modify - or specify - the behaviour and semantics of
944 the execution order: that may be deduced from the **existing** RV
945 specification in each and every case.
946
So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be executed out-of-order, so may the "post-expansion" SV ones.
950
951 If on the other hand there are memory guarantees which specifically
952 prevent and prohibit certain instructions from being re-ordered
953 (such as the Atomicity Axiom, or FENCE constraints), then clearly
954 those constraints **MUST** also be obeyed "post-expansion".
955
It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to **one** instruction.
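The "macro expansion" view can be made concrete in a few lines: one tagged
instruction with VL=3 behaves as three scalar instructions with
incrementing register numbers. The sketch below is illustrative only
(predication, element widths and CSR redirection are left out, and the
textual assembly format is an assumption):

```python
def expand(opcode, rd, rs1, rs2, vl, vectors):
    """Expand one SV-tagged instruction into the scalar sequence it
    behaves as.  `vectors` is the set of register numbers tagged as
    vectors; untagged registers stay scalar (repeated each iteration)."""
    seq = []
    for i in range(vl):
        seq.append("%s x%d, x%d, x%d" % (
            opcode,
            rd + (i if rd in vectors else 0),
            rs1 + (i if rs1 in vectors else 0),
            rs2 + (i if rs2 in vectors else 0)))
    return seq

# rd and rs1 tagged as vectors, rs2 scalar, VL=3
assert expand("add", 3, 10, 20, 3, vectors={3, 10}) == [
    "add x3, x10, x20", "add x4, x11, x20", "add x5, x12, x20"]
```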
962
963 # Instructions <a name="instructions" />
964
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all RVV opcodes,
with the exception of CLIP and VSELECT.X
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever gained
a MV.X as well as an FCLIP, the full functionality of RVV-Base would
be obtained in SV.
975
976 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
977 equivalents, so are left out of Simple-V. VSELECT could be included if
978 there existed a MV.X instruction in RV (MV.X is a hypothetical
979 non-immediate variant of MV that would allow another register to
980 specify which register was to be copied). Note that if any of these three
981 instructions are added to any given RV extension, their functionality
982 will be inherently parallelised.
983
984 With some exceptions, where it does not make sense or is simply too
985 challenging, all RV-Base instructions are parallelised:
986
* CSR instructions, whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI and AUIPC are not suitable for parallelising, so are
  left as scalar.
995 * LR/SC could hypothetically be parallelised however their purpose is
996 single (complex) atomic memory operations where the LR must be followed
997 up by a matching SC. A sequence of parallel LR instructions followed
998 by a sequence of parallel SC instructions therefore is guaranteed to
999 not be useful. Not least: the guarantees of a Multi-LR/SC
1000 would be impossible to provide if emulated in a trap.
1001 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
1002 paralleliseable anyway.
1003
1004 All other operations using registers are automatically parallelised.
1005 This includes AMOMAX, AMOSWAP and so on, where particular care and
1006 attention must be paid.
1007
1008 Example pseudo-code for an integer ADD operation (including scalar
1009 operations). Floating-point uses the FP Register Table.
1010
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
1025
1026 Note that for simplicity there is quite a lot missing from the above
1027 pseudo-code: element widths, zeroing on predication, dimensional
1028 reshaping and offsets and so on. However it demonstrates the basic
1029 principle. Augmentations that produce the full pseudo-code are covered in
1030 other sections.
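The pseudo-code above can be made runnable with minimal translation. The
model below keeps the same simplifications (no element widths, zeroing,
reshaping or CSR redirection); the function name and calling convention
are illustrative assumptions:

```python
def op_add_model(ireg, rd, rs1, rs2, vl, isvec, predval):
    """Runnable model of the op_add pseudo-code above.  `isvec` maps a
    register number to True if it is tagged as a vector."""
    id_ = irs1 = irs2 = 0
    for i in range(vl):
        if predval & (1 << i):
            ireg[rd + id_] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not isvec[rd]:
                break               # scalar destination: one result only
        # offsets advance regardless of the predicate bit (non-zeroing)
        if isvec[rd]:  id_ += 1
        if isvec[rs1]: irs1 += 1
        if isvec[rs2]: irs2 += 1

# vector rd (x3) and rs1 (x10), scalar rs2 (x20), no predication
regs = list(range(32))
op_add_model(regs, 3, 10, 20, vl=4,
             isvec={3: True, 10: True, 20: False}, predval=0b1111)
assert regs[3:7] == [30, 31, 32, 33]
```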
1031
1032 ## SUBVL Pseudocode <a name="subvl-pseudocode"></a>
1033
Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by "(i * SUBVL + s)", predicate bits are
indexed by "(i)".
1040
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             # actual add is here (at last)
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector)  { id += 1; }
          if (int_vec[rs1].isvector)  { irs1 += 1; }
          if (int_vec[rs2].isvector)  { irs2 += 1; }
          if (id == VL or irs1 == VL or irs2 == VL) {
            # end VL hardware loop
            xSTATE.srcoffs = 0; # reset
            xSTATE.ssvoffs = 0; # reset
            return;
          }
1064
1065
1066 NOTE: pseudocode simplified greatly: zeroing, proper predicate handling,
1067 elwidth handling etc. all left out.
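The relationship between element index and predicate index can be shown in
a couple of lines. This is an illustrative sketch, not spec pseudo-code:

```python
def subvl_element_indices(vl, subvl, predval):
    """Which element offsets are written, per the indexing rule above:
    elements are indexed by (i * SUBVL + s), predicate bits by (i)."""
    return [i * subvl + s
            for i in range(vl) if predval & (1 << i)
            for s in range(subvl)]

# VL=3, SUBVL=2, middle group masked out: elements 2 and 3 are skipped
assert subvl_element_indices(3, 2, 0b101) == [0, 1, 4, 5]
```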
1068
1069 ## Instruction Format
1070
1071 It is critical to appreciate that there are
1072 **no operations added to SV, at all**.
1073
1074 Instead, by using CSRs to tag registers as an indication of "changed
1075 behaviour", SV *overloads* pre-existing branch operations into predicated
1076 variants, and implicitly overloads arithmetic operations, MV, FCVT, and
1077 LOAD/STORE depending on CSR configurations for bitwidth and predication.
1078 **Everything** becomes parallelised. *This includes Compressed
1079 instructions* as well as any future instructions and Custom Extensions.
1080
Note: using CSR tags to change the behaviour of instructions is nothing new,
including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.
1087
1088 ## Branch Instructions
1089
1090 Branch operations are augmented slightly to be a little more like FP
1091 Compares (FEQ, FNE etc.), by permitting the cumulation (and storage)
1092 of multiple comparisons into a register (taken indirectly from the predicate
1093 table). As such, "ffirst" - fail-on-first - condition mode can be enabled.
1094 See ffirst mode in the Predication Table section.
1095
1096 ### Standard Branch <a name="standard_branch"></a>
1097
1098 Branch operations use standard RV opcodes that are reinterpreted to
1099 be "predicate variants" in the instance where either of the two src
1100 registers are marked as vectors (active=1, vector=1).
1101
1102 Note that the predication register to use (if one is enabled) is taken from
1103 the *first* src register, and that this is used, just as with predicated
1104 arithmetic operations, to mask whether the comparison operations take
1105 place or not. The target (destination) predication register
1106 to use (if one is enabled) is taken from the *second* src register.
1107
1108 If either of src1 or src2 are scalars (whether by there being no
1109 CSR register entry or whether by the CSR entry specifically marking
1110 the register as "scalar") the comparison goes ahead as vector-scalar
1111 or scalar-vector.
1112
In instances where no vectorisation is detected on either src register,
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
1118
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate will always be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
1129
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.
1133
1134 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
1135 for predicated compare operations of function "cmp":
1136
1137 for (int i=0; i<vl; ++i)
1138 if ([!]preg[p][i])
1139 preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
1140 s2 ? vreg[rs2][i] : sreg[rs2]);
1141
1142 With associated predication, vector-length adjustments and so on,
1143 and temporarily ignoring bitwidth (which makes the comparisons more
1144 complex), this becomes:
1145
1146 s1 = reg_is_vectorised(src1);
1147 s2 = reg_is_vectorised(src2);
1148
1149 if not s1 && not s2
1150 if cmp(rs1, rs2) # scalar compare
1151 goto branch
1152 return
1153
1154 preg = int_pred_reg[rd]
1155 reg = int_regfile
1156
1157 ps = get_pred_val(I/F==INT, rs1);
1158 rd = get_pred_val(I/F==INT, rs2); # this may not exist
1159
1160 if not exists(rd) or zeroing:
1161 result = 0
1162 else
1163 result = preg[rd]
1164
    for (int i = 0; i < VL; ++i)
       if (zeroing)
          if not (ps & (1<<i))
             result &= ~(1<<i);
       else if (ps & (1<<i))
          if (cmp(s1 ? reg[src1+i] : reg[src1],
                  s2 ? reg[src2+i] : reg[src2]))
             result |= 1<<i;
          else
             result &= ~(1<<i);
1175
1176 if not exists(rd)
1177 if result == ps
1178 goto branch
1179 else
1180 preg[rd] = result # store in destination
1181 if preg[rd] == ps
1182 goto branch
1183
1184 Notes:
1185
1186 * Predicated SIMD comparisons would break src1 and src2 further down
1187 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
1188 Reordering") setting Vector-Length times (number of SIMD elements) bits
1189 in Predicate Register rd, as opposed to just Vector-Length bits.
1190 * The execution of "parallelised" instructions **must** be implemented
1191 as "re-entrant" (to use a term from software). If an exception (trap)
1192 occurs during the middle of a vectorised
1193 Branch (now a SV predicated compare) operation, the partial results
1194 of any comparisons must be written out to the destination
1195 register before the trap is permitted to begin. If however there
1196 is no predicate, the **entire** set of comparisons must be **restarted**,
1197 with the offset loop indices set back to zero. This is because
1198 there is no place to store the temporary result during the handling
1199 of traps.
1200
1201 TODO: predication now taken from src2. also branch goes ahead
1202 if all compares are successful.
1203
1204 Note also that where normally, predication requires that there must
1205 also be a CSR register entry for the register being used in order
1206 for the **predication** CSR register entry to also be active,
1207 for branches this is **not** the case. src2 does **not** have
1208 to have its CSR register entry marked as active in order for
1209 predication on src2 to be active.
1210
1211 Also note: SV Branch operations are **not** twin-predicated
1212 (see Twin Predication section). This would require three
1213 element offsets: one to track src1, one to track src2 and a third
1214 to track where to store the accumulation of the results. Given
1215 that the element offsets need to be exposed via CSRs so that
1216 the parallel hardware looping may be made re-entrant on traps
1217 and exceptions, the decision was made not to make SV Branches
1218 twin-predicated.
1219
1220 ### Floating-point Comparisons
1221
There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.
1226
1227 In RV (scalar) Base, a branch on a floating-point compare is
1228 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
1229 This does extend to SV, as long as x1 (in the example sequence given)
1230 is vectorised. When that is the case, x1..x(1+VL-1) will also be
1231 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
1232 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
1233 so on. Consequently, unlike integer-branch, FP Compare needs no
1234 modification in its behaviour.
1235
In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms the
lack has more of an impact. To deal with this, SV's predication has
had "invert" added to it.
1242
1243 Also: note that FP Compare may be predicated, using the destination
1244 integer register (rd) to determine the predicate. FP Compare is **not**
1245 a twin-predication operation, as, again, just as with SV Branches,
1246 there are three registers involved: FP src1, FP src2 and INT rd.
1247
1248 Also: note that ffirst (fail first mode) applies directly to this operation.
1249
1250 ### Compressed Branch Instruction
1251
Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beq a0, x0, the optional target
to store the results of the comparisons is taken from the CSR predication
table entries for **x0**.
1258
The specific required use of x0 is, with a little thought, quite logical,
but is initially counterintuitive. Clearly it is **not** recommended to redirect
1261 x0 with a CSR register entry, however as a means to opaquely obtain
1262 a predication target it is the only sensible option that does not involve
1263 additional special CSRs (or, worse, additional special opcodes).
1264
1265 Note also that, just as with standard branches, the 2nd source
1266 (in this case x0 rather than src2) does **not** have to have its CSR
1267 register table marked as "active" in order for predication to work.
1268
1269 ## Vectorised Dual-operand instructions
1270
1271 There is a series of 2-operand instructions involving copying (and
1272 sometimes alteration):
1273
1274 * C.MV
1275 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1276 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1277 * LOAD(-FP) and STORE(-FP)
1278
1279 All of these operations follow the same two-operand pattern, so it is
1280 *both* the source *and* destination predication masks that are taken into
1281 account. This is different from
1282 the three-operand arithmetic instructions, where the predication mask
1283 is taken from the *destination* register, and applied uniformly to the
1284 elements of the source register(s), element-for-element.
1285
1286 The pseudo-code pattern for twin-predicated operations is as
1287 follows:
1288
    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1302
1303 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1304 and vector-vector, and predicated variants of all of those.
1305 Zeroing is not presently included (TODO). As such, when compared
1306 to RVV, the twin-predicated variants of C.MV and FMV cover
1307 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1308 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1309
1310 Note that:
1311
1312 * elwidth (SIMD) is not covered in the pseudo-code above
1313 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1314 not covered
1315 * zero predication is also not shown (TODO).
1316
1317 ### C.MV Instruction <a name="c_mv"></a>
1318
1319 There is no MV instruction in RV however there is a C.MV instruction.
1320 It is used for copying integer-to-integer registers (vectorised FMV
1321 is used for copying floating-point).
1322
1323 If either the source or the destination register are marked as vectors
1324 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1325 move operation. The actual instruction's format does not change:
1326
1327 [[!table data="""
15..12 | 11..7 | 6..2 | 1..0 |
1329 funct4 | rd | rs | op |
1330 4 | 5 | 5 | 2 |
1331 C.MV | dest | src | C0 |
1332 """]]
1333
1334 A simplified version of the pseudocode for this operation is as follows:
1335
    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1349
1350 There are several different instructions from RVV that are covered by
1351 this one opcode:
1352
1353 [[!table data="""
1354 src | dest | predication | op |
1355 scalar | vector | none | VSPLAT |
1356 scalar | vector | destination | sparse VSPLAT |
1357 scalar | vector | 1-bit dest | VINSERT |
1358 vector | scalar | 1-bit? src | VEXTRACT |
1359 vector | vector | none | VCOPY |
1360 vector | vector | src | Vector Gather |
1361 vector | vector | dest | Vector Scatter |
1362 vector | vector | src & dest | Gather/Scatter |
1363 vector | vector | src == dest | sparse VCOPY |
1364 """]]
1365
1366 Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
1367 operations with inversion on the src and dest predication for one of the
1368 two C.MV operations.
1369
Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].
1375
1376 ### FMV, FNEG and FABS Instructions
1377
These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
operation of the appropriate size covering the source and destination
register bitwidths.
1384
1385 (Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1386
### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point conversions. When
element width in each vector is set to default, the instructions behave
exactly as defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However, when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable, i.e. if packed SIMD is requested) instead.

For example, FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 64-bit floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element, and the higher 32 bits are *also* converted to floating-point
and stored in the second. The 32-bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two, rs1's element width is to be taken as 32.

Similar rules apply to the destination register.

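The FCVT.S.L example above can be sketched in Python (a hedged model, not
the normative semantics: `fcvt_s_l_packed` and its argument layout are
illustrative assumptions):

```python
import struct

def fcvt_s_l_packed(rs1_bits):
    """Model FCVT.S.L with rs1 elwidth = default/2 (32-bit) and packed
    SIMD enabled: the 64-bit source is treated as two signed 32-bit
    integers, each converted independently to a float element."""
    lo = rs1_bits & 0xFFFFFFFF
    hi = (rs1_bits >> 32) & 0xFFFFFFFF
    # reinterpret each 32-bit chunk as a signed integer
    lo_s, hi_s = struct.unpack("<ii", struct.pack("<II", lo, hi))
    return [float(lo_s), float(hi_s)]   # element 0, element 1
```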
## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV; however, just as with C.MV, it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
        rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            if (int_csr[rs].isvec)
                # indirect mode (multi mode)
                srcbase = ireg[rsv+i];
            else
                # unit stride mode
                srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
            ireg[rdv+j] <= mem[srcbase + imm_offs];
            if (!int_csr[rs].isvec &&
                !int_csr[rd].isvec) break # scalar-scalar LD
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth is not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).

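The srcbase decision from op_ld can be isolated as a small sketch (names
are assumptions; `ireg` is a plain dict standing in for the integer
register file):

```python
def srcbase_for(ireg, rs, i, rs_isvec, xlen=64):
    """Address of element i: vectorised rs means each element supplies
    its own (indirection) address; scalar rs means one base address
    plus a fixed per-element byte increment (unit stride)."""
    if rs_isvec:
        return ireg[rs + i]                # indirect (multi) mode
    return ireg[rs] + i * (xlen // 8)      # unit-stride mode, bytes

ireg = {5: 0x1000, 6: 0x2000}
```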
## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However, a naive interpretation of the data structure above masks the
fact that, when VL is set greater than 8 (for example) and the bitwidth is 8,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if any attempt to access beyond the "real" register
bytes is ever made.

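The overspill behaviour can be modelled directly in Python by treating the
whole register file as one flat byte array (a sketch; the names and widths
here are assumptions):

```python
XLEN_BYTES = 8                             # RV64
int_regfile = bytearray(128 * XLEN_BYTES)  # 128 registers, flat byte view

def write_elem(reg, elwidth_bits, offset, val):
    """Store element `offset` of width `elwidth_bits`, starting at
    register `reg`; large offsets overspill into following registers."""
    nbytes = elwidth_bits // 8
    addr = reg * XLEN_BYTES + offset * nbytes
    int_regfile[addr:addr + nbytes] = val.to_bytes(nbytes, "little")

def read_elem(reg, elwidth_bits, offset):
    nbytes = elwidth_bits // 8
    addr = reg * XLEN_BYTES + offset * nbytes
    return int.from_bytes(int_regfile[addr:addr + nbytes], "little")
```

Writing element 9 of an 8-bit-elwidth vector starting at register 3 lands
in byte 1 of register 4: exactly the "spill over" described above.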
Now we may modify the pseudo-code of an operation where all element bitwidths
have been set to the same size, where this pseudo-code is otherwise identical
to its "non"-polymorphic version (above):

    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            ...
            ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
            ...
            ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.

However, that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of the mandatory 32-bit
  sign-extension / zero-extension (or whatever is specified in the standard
  RV specification), **change** that to sign-extending from the respective
  individual source operand's bitwidth (from the CSR table) out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sh etc., or an offset section of the register
  file for an arithmetic operation.
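
The steps above can be condensed into a sketch for a single signed add
element (Python; the helper names are illustrative):

```python
def sign_extend_from(val, bits):
    """Interpret the low `bits` of val as a signed two's-complement number."""
    val &= (1 << bits) - 1
    return val - (1 << bits) if val & (1 << (bits - 1)) else val

def poly_add(src1, src2, src1w, src2w, destw):
    """One polymorphic signed add: extend each source from its own
    elwidth out to the maximum source width, operate there, then
    truncate to the destination elwidth."""
    maxw = max(src1w, src2w)               # "maxsrcbitwidth"
    a = sign_extend_from(src1, src1w)
    b = sign_extend_from(src2, src2w)
    result = (a + b) & ((1 << maxw) - 1)   # operation at max source width
    return result & ((1 << destw) - 1)     # truncation happens last
```

For example, an 8-bit source holding 0xFF is sign-extended to -1 at the
16-bit operation width, so adding 1 yields zero.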

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, id, result)
            if (!int_csr[rd].isvec) break
        if (int_csr[rd].isvec)  { id += 1; }
        if (int_csr[rs1].isvec) { irs1 += 1; }
        if (int_csr[rs2].isvec) { irs2 += 1; }

Whilst specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:

* the source operands are extended out to the maximum bitwidth of all
  source operands
* the operation takes place at that maximum source bitwidth (the
  destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
  stored in the destination. i.e. truncation (if required) to the
  destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
  (standard, scalar) register file entry is taken up, i.e. the
  element is either sign-extended or zero-extended to cover the
  full register bitwidth (XLEN) if it is not already XLEN bits long.

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in less bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NAN (or appropriate) is stored if the result
is beyond the range of the destination, and, again exactly as
with the standard RV specification, the floating-point flag is raised
(FCSR), just as with scalar operations. And, again, just as
with scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However, whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31, so that RS1 will only be left-shifted by an amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that:

* The operation bitwidth is determined by the maximum bitwidth
  of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
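
A sketch of this rule (Python; the function name is an assumption):

```python
def poly_sll(rs1, rs2, rs1w, rs2w):
    """Polymorphic shift-left: operate at the maximum source elwidth,
    truncating RS2 to that range, in the style of the spike pseudocode
    above."""
    maxw = max(rs1w, rs2w)
    shamt = rs2 & (maxw - 1)                   # RS2 truncated to 0..maxw-1
    return (rs1 << shamt) & ((1 << maxw) - 1)  # result at maxw bits
```

With RS1 at 16-bit and RS2 at 8-bit, the operation width is 16, so a shift
amount of 20 is truncated to 4.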

## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half MSBs of a multiply that
does not fit within the range of the source operands, such that
smaller-width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
  be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
  be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
  be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
  be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
  be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.
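
The rule can be sketched as follows (Python; signed MULH only, names
assumed):

```python
def sext(val, bits):
    """Interpret the low `bits` of val as signed two's-complement."""
    val &= (1 << bits) - 1
    return val - (1 << bits) if val & (1 << (bits - 1)) else val

def poly_mulh(rs1, rs2, rs1w, rs2w):
    """Signed MULH at polymorphic elwidths: multiply at full precision,
    then shift down by the maximum of the two source elwidths."""
    maxw = max(rs1w, rs2w)
    prod = sext(rs1, rs1w) * sext(rs2, rs2w)
    return (prod >> maxw) & ((1 << maxw) - 1)   # top half, truncated
```

For the 16-bit x 8-bit case, the product is shifted down by 16, matching
the bullet list above.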

## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, with i being the loop index from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.
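
As a concrete sketch (Python; byte-addressed rather than re-cast pointers,
with max(1, ...) guarding the case where the element is wider than the
operation — all names here are assumptions):

```python
def ld_element_addr(ireg, rs, i, opwidth, elwidth_bits, imm=0):
    """Byte address of element i for an elwidth-overridden LOAD:
    elsperblock elements are packed per opwidth-wide block before the
    indirection index advances to the next address register."""
    elsperblock = max(1, opwidth // elwidth_bits)
    srcbase = ireg[rs + i // elsperblock]      # integer divide
    offs = i % elsperblock                     # element within the block
    return srcbase + imm + offs * (elwidth_bits // 8)

ireg = {5: 0x100, 6: 0x200}
```

For a LW (opwidth 32) with 16-bit elements, elements 0 and 1 come from the
address in x5 and element 2 starts at the address in x6.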

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, bitwidth, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Note:

* when comparing against for example the twin-predicated c.mv
  pseudo-code, the pattern of independent incrementing of rd and rs
  is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
  taken into account (TODO).
* due to the use of a twin-predication algorithm, LOAD/STORE also
  takes on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
  VSCATTER characteristics.
* due to the use of the same set\_polymorphed\_reg pseudocode,
  a destination that is not vectorised (marked as scalar) will
  result in the element being fully sign-extended or zero-extended
  out to the full register file bitwidth (XLEN). When the source
  is also marked as scalar, this is how the compatibility with
  standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).

#### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table: because the element width is 16 and the operation
is LD (64), the 64 bits loaded from memory are subdivided into groups of
**four** elements.
And, with VL being 7 (deliberately, to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:

[[!table data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 were also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the
**registers** x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right byte
order, the registers are shown in right-to-left (MSB) order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and emphasises that registers are not
really actually "memory" as such.
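
The register placement shown in the final table can be computed directly
(a sketch; the function name is an assumption):

```python
def dest_placement(rd, dest_elwidth_bits, j, xlen=64):
    """Return (register number, bit offset) where element j lands when
    the destination is treated as contiguous elwidth-wide slots."""
    elsperreg = xlen // dest_elwidth_bits      # elements per register
    return rd + j // elsperreg, (j % elsperreg) * dest_elwidth_bits
```

For the LD example above (rd=x8, destination elwidth 32): element 1 lands
in x8 shifted up 32 bits, and element 6 in the LSBs of x11.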

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more, and allow 64, 128 and some other options besides. The answer here
is that it gets too complex, no RV128 implementation yet exists, and RV64's
default is 64 bit, so the 4 major element widths are covered anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the Register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
then sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will erase the **full** register entry
(64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth is set to 16 on the destination
  register, operations are truncated to 16 bit and then sign or zero
  extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or another value where
  groups of elwidth-sized elements do not fill an entire XLEN register),
  the "top" bits of the destination register do *NOT* get modified, zero'd
  or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64-bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth-based operations would
activate the required subset of those byte-level write lines.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified, to be rs1[23..16] + rs2[23..16]
  concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.
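
The example can be reproduced with a short sketch (Python; the helper
name and argument layout are assumptions):

```python
def elwidth_write(reg_val, elements, elwidth_bits):
    """Write len(elements) elwidth-wide elements into the bottom of a
    64-bit register value, preserving all remaining upper bits."""
    active = len(elements) * elwidth_bits
    keep = reg_val & ~((1 << active) - 1)      # bits beyond VL*elwidth
    out = 0
    for idx, el in enumerate(elements):
        out |= (el & ((1 << elwidth_bits) - 1)) << (idx * elwidth_bits)
    return keep | out
```

With elwidth 8 and VL=3, only bits 0-23 change; bits 24-63 of the previous
register value are carried through untouched.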

Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
  that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111
  i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
  was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
  (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
  and stores them in rd.

This requires a read on rd; however that read is required anyway in order
to support non-zeroing mode.
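The hidden-predicate trick described above reduces to a one-line mask computation. A sketch (the function name is invented):

```python
# Sketch of the hidden predicate used by a SIMD micro-architecture:
# for VL elements in a register holding `lanes` elements, only the
# bottom VL lanes are write-enabled.
def hidden_predicate(vl: int, lanes: int = 8) -> int:
    assert 0 < vl <= lanes
    return (1 << vl) - 1   # e.g. VL=3, lanes=8 -> 0b00000111
```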

## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32-bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
  top 64 MSBs ignored.

Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
  floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the adds
  in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q with non-default
elwidth values are effectively all the same: they all still perform
multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however simpler SIMD-style
microarchitectures may not necessarily have the infrastructure in
place to know the difference, such that when VL=8 and an ADD.D instruction
is issued, it completes in 2 cycles (or more) rather than one, where
if an ADD.Q had been issued instead on such simpler microarchitectures
it would complete in one.

## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.

### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits: truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.

Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W".

### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits: sign-extend to rd if rd > 32, otherwise truncate.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate
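The walk-throughs above can be cross-checked with a small executable model. This is a sketch only, with register widths passed in explicitly and all helper names invented; it assumes the carry out of the max-width add is dropped before the result is placed in RD:

```python
def zext(v, frm):
    """Keep only the bottom `frm` bits (zero-extension is then implicit)."""
    return v & ((1 << frm) - 1)

def sext(v, frm, to):
    """Sign-extend a `frm`-bit value to `to` bits (truncates if to < frm)."""
    v &= (1 << frm) - 1
    if v & (1 << (frm - 1)):
        v -= 1 << frm
    return v & ((1 << to) - 1)

def poly_add(rs1v, rs1w, rs2v, rs2w, rdw):
    """add: zero-extend sources, add at max width, zero-extend/truncate to rd."""
    opw = max(rs1w, rs2w)
    s = (zext(rs1v, rs1w) + zext(rs2v, rs2w)) & ((1 << opw) - 1)
    return s & ((1 << rdw) - 1)

def poly_addw(rs1v, rs1w, rs2v, rs2w, rdw):
    """addw: sign-extend sources, add at max width, sign-extend/truncate to rd."""
    opw = max(rs1w, rs2w)
    s = (sext(rs1v, rs1w, opw) + sext(rs2v, rs2w, opw)) & ((1 << opw) - 1)
    return sext(s, opw, rdw)
```

Note how the zero- versus sign-extension difference shows up: with 8-bit sources and a 16-bit rd, `poly_addw(0xFF, ..., 0x01, ...)` computes -1 + 1 = 0, whereas `poly_add` of the same operands gives 0 only because the carry out of the 8-bit add is dropped.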

# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate. i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            # skip elements for which the predicate bit is not set
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { id += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
            if i == VL:
                return
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if int_vec[rd].ffirst and result == 0:
                VL = i # result was zero, end loop early, return VL
                return
            if (!int_vec[rd].isvector) return
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { id += 1; }
        else if (predval & 1<<i) return
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
        if (rd == VL or rs1 == VL or rs2 == VL): return

The optimisation to skip elements entirely is only possible for certain
micro-architectures, and only when zeroing is not set. For lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.
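The zeroing rule can be captured in a compact executable model. This is a sketch only: elwidth, fail-first and the scalar-destination early exit are omitted, and the function name is invented:

```python
def predicated_add(vl, pred, zeroing, src1, src2, dest):
    """Element-wise predicated add. Where the predicate bit is clear:
    zeroing writes 0 to the destination element; non-zeroing leaves
    the existing destination element untouched."""
    for i in range(vl):
        if pred & (1 << i):
            dest[i] = src1[i] + src2[i]
        elif zeroing:
            dest[i] = 0
        # non-zeroing: dest[i] is preserved
    return dest
```

The same predicate thus produces two different destinations: zeroing overwrites the masked-out elements with 0, non-zeroing leaves their prior contents intact.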

## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, if a predicate bit is *not* set it indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: wherever either the source
predicate OR the destination predicate is set to 0,
a zero element will ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.
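The both-set case described above can be sketched per element as follows (illustrative only; the function name is invented):

```python
def twin_zero_element(ps_bit: int, pd_bit: int, data: int) -> int:
    """One element of a twin-predicated MV with zeroing set on both
    source and destination: data survives only where both predicate
    bits are set; everywhere else a zero lands in the destination."""
    if pd_bit and ps_bit:
        return data
    return 0
```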

Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# Vector Block Format <a name="vliw-format"></a>

One issue with a former revision of SV was the setup and teardown
time of the CSRs. The cost of the use of a full CSRRW (requiring LI)
to set up registers and predicates was quite high. A VLIW-like format
therefore makes sense (named VBLOCK), and is conceptually reminiscent of
the ARM Thumb2 "IT" instruction.

The format is:

* the standard RISC-V 80 to 192 bit encoding sequence, with bits
  defining the options to follow within the block
* An optional VL Block (16-bit)
* Optional predicate entries (8/16-bit blocks: see Predicate Table, above)
* Optional register entries (8/16-bit blocks: see Register Table, above)
* finally some 16/32/48 bit standard RV or SVPrefix opcodes follow.

Thus, the variable-length format from Section 1.5 of the RISC-V ISA is used
as follows:

| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx    | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction-length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15    | 14:12 | 11:10 | 9:8   | 7    | 6:0     |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

The VL/MAXVL/SubVL Block format:

| 31-30 | 29:28 | 27:22  | 21:17                | 16  |
| ----- | ----- | ------ | -------------------- | --- |
| 0     | SubVL | VLdest | VLEN                 | vlt |
| 1     | SubVL | VLdest | VLEN (6 bits, 21:16) |     |

Note: this format is very similar to that used in [[sv_prefix_proposal]]

If vlt is 0, VLEN is a 5 bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1 and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VBLOCK
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).
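The immediate (vlt=0) rule above can be sketched as follows (the function name is invented):

```python
def decode_vl_immediate(vlen_field: int, maxvl: int) -> int:
    """vlt=0: the 5-bit VLEN field is offset by one (0b00000 -> VL=1),
    then VL is truncated to MIN(VL, MAXVL)."""
    assert 0 <= vlen_field < 32
    return min(vlen_field + 1, maxvl)
```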

This option will typically be used to start vectorised loops, where
the VBLOCK instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2 and so on.

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.

CSRs needed:

* mepcvblock
* sepcvblock
* uepcvblock
* hepcvblock

Notes:

* Bit 7 specifies if the prefix block format is the full 16 bit format
  (1) or the compact less expressive format (0). In the 8 bit format,
  pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VBLOCK instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 15 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* In any RVC or 32 Bit opcode, any registers within the VBLOCK-prefixed
  format *MUST* have the RegCam and PredCam entries applied to the
  operation (and the Vectorisation loop activated).
* P48 and P64 opcodes do **not** take their Register or predication
  context from the VBLOCK tables: they do however have VL or SUBVL
  applied (unless VLtyp or svlen are set).
* At the end of the VBLOCK Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions, within a VBLOCK.
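The length rule in the notes above (80 + 16 times IL, taken from bits 14:12 of the prefix) can be sketched as a decoder fragment (illustrative only; the function name is invented, the field layout is from the prefix table above):

```python
def vblock_total_bits(prefix16: int) -> int:
    """Extract IL from bits 14:12 of the 16-bit VBLOCK prefix and
    return the total block length in bits: 80 + 16*IL.
    IL=0b111 (nnn=111) is excluded by the encoding."""
    il = (prefix16 >> 12) & 0b111
    assert il != 0b111, "nnn=111 is not a valid VBLOCK length"
    return 80 + 16 * il
```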

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires 3, even 4, 32-bit opcodes:
the CSR itself, a LI, and the setting up of the value into the RS
register of the CSR, which, again, requires a LI / LUI to get the 32
bit data into the CSR. To get 64-bit data into the register in order
to put it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially six to eight 32-bit instructions, just to
establish the Vector State!

Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more
bits if VL needs to be set to greater than 32). Bear in mind that in SV,
both MAXVL and VL need to be set.

By contrast, the VBLOCK prefix is only 16 bits, the VL/MAX/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries
into a VBLOCK format makes a lot of sense.

Bear in mind the warning in an earlier section that use of VLtyp or svlen
in a P48 or P64 opcode within a VBLOCK Group will result in corruption
(use) of the STATE CSR, as the STATE CSR is shared with SVPrefix. To
avoid this situation, the STATE CSR may be copied into a temp register
and restored afterwards.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VBLOCK PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VBLOCK
group as a separate sub-program with its own separate PC. The sub-pc
advances separately whilst the main PC remains pointing at the beginning
of the VBLOCK instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub extension of the xepc set of CSRs. Thus, the xepcvblock CSRs
must be context switched and saved / restored in traps.

The srcoffs and destoffs indices in the STATE CSR may be similarly
regarded as another sub-execution context, giving in effect two sets of
nested sub-levels of the RISC-V Program Counter (actually, three including
SUBVL and ssvoffs).

In addition, as the xepcvblock CSRs are relative to the beginning of the
VBLOCK, branches MUST be restricted to within (relative to) the block,
i.e. addressing is now restricted to the start (and very short) length
of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump, normal branch and a normal function call may only be taken
by letting the VBLOCK group end, returning to "normal" standard RV mode,
and then using standard RVC, 32 bit or P48/64-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to only implement SVprefix and not the VBLOCK instruction
format option, and vice-versa. UNIX Platforms **MUST** raise illegal
instruction on seeing an unsupported VBLOCK or SVprefix opcode, so that
traps may emulate the format.

It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX Platforms
*MUST* raise illegal instruction on implementations that do not support
VL or SUBVL.

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However, reducing
them below the mandatory limits set in the RV standard will result in
non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option" it is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered, here, however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bit given that XLEN is now 128.
# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), recommend:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.

---

info register,

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

answer: they're not vectorised, so not a problem

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove RegCam and PredCam CSRs, just use SVprefix and
VBLOCK format

---

Could the 8 bit Register VBLOCK format use regnum<<1 instead, only accessing regs 0 to 64?

---

Expand the range of SUBVL and its associated svsrcoffs and svdestoffs by
adding a 2nd STATE CSR (or extending STATE to 64 bits). Future version?

---

TODO: evaluate strncpy and strlen
<https://groups.google.com/forum/m/#!msg/comp.arch/bGBeaNjAKvc/_vbqyxTUAQAJ>
RVV version: <a name="strncpy"></a>

    strncpy:
        mv a3, a0               # Copy dst
    loop:
        setvli x0, a2, vint8    # Vectors of bytes.
        vlbff.v v1, (a1)        # Get src bytes
        vseq.vi v0, v1, 0       # Flag zero bytes
        vmfirst a4, v0          # Zero found?
        vmsif.v v0, v0          # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t    # Write out bytes
        bgez a4, exit           # Done
        csrr t1, vl             # Get number of bytes fetched
        add a1, a1, t1          # Bump src pointer
        sub a2, a2, t1          # Decrement count.
        add a3, a3, t1          # Bump dst pointer
        bnez a2, loop           # Anymore?
    exit:
        ret
SV version (WIP):

    strncpy:
        mv a3, a0
        SETMVLI 8               # set max vector to 8
        RegCSR[a3] = 8bit, a3, scalar
        RegCSR[a1] = 8bit, a1, scalar
        RegCSR[t0] = 8bit, t0, vector
        PredTb[t0] = ffirst, x0, inv
    loop:
        SETVLI a2, t4           # t4 and VL now 1..8
        ldb t0, (a1)            # t0 fail first mode
        bne t0, x0, allnonzero  # still ff
        # VL points to last nonzero
        GETVL t4                # from bne tests
        addi t4, t4, 1          # include zero
        SETVL t4                # set exactly to t4
        stb t0, (a3)            # store incl zero
        ret                     # end subroutine
    allnonzero:
        stb t0, (a3)            # VL legal range
        GETVL t4                # from bne tests
        add a1, a1, t4          # Bump src pointer
        sub a2, a2, t4          # Decrement count.
        add a3, a3, t4          # Bump dst pointer
        bnez a2, loop           # Anymore?
    exit:
        ret
Notes:

* Setting MVL to 8 is just an example. If enough registers are spare it
  may be set to XLEN, which will require a bank of 8 scalar registers for
  a1, a3 and t0.
* Obviously if that is done, t0 is not separated by 8 full registers, and
  would overwrite t1 thru t7. x80 would work well, as an example, instead.
* With the exception of the GETVL (a pseudo-code alias for csrr), every
  single instruction above may use RVC.
* RVC C.BNEZ can be used because rs1' may be extended to the full 128
  registers through redirection.
* RVC C.LW and C.SW may be used because the W format may be overridden by
  the 8 bit format. All of t0, a3 and a1 are overridden to make that work.
* With the exception of the GETVL, all Vector Context may be done in
  VBLOCK form.
* Setting predication to x0 (zero) and invert on t0 is a trick to enable
  just ffirst on t0.
* ldb and bne are both using t0, both in ffirst mode.
* ldb will end on illegal mem, reduce VL, but copied all sorts of stuff
  into t0.
* bne t0 x0 tests up to the NEW VL for nonzero, vector t0 against scalar
  x0.
* However, as t0 is in ffirst mode, the first fail will ALSO stop the
  compares, and reduce VL as well.
* The branch only goes to allnonzero if all tests succeed.
* If it did not, we can safely increment VL by 1 (using t4) to include
  the zero.
* SETVL sets *exactly* the requested amount into VL.
* The SETVL just after the allnonzero label is needed in case the ldb
  ffirst activates but the bne to allnonzero does not.
* This would cause the stb to copy up to the end of the legal memory.
* Of course, on the next loop the ldb would throw a trap, as a1 now
  points to the first illegal mem location.
RVV strlen version:

        mv a3, a0               # Save start
    loop:
        setvli a1, x0, vint8    # byte vec, x0 (Zero reg) => use max hardware len
        vldbff.v v1, (a3)       # Get bytes
        csrr a1, vl             # Get bytes actually read e.g. if fault
        vseq.vi v0, v1, 0       # Set v0[i] where v1[i] = 0
        add a3, a3, a1          # Bump pointer
        vmfirst a2, v0          # Find first set bit in mask, returns -1 if none
        bltz a2, loop           # Not found?
        add a0, a0, a1          # Sum start + bump
        add a3, a3, a2          # Add index of zero byte
        sub a0, a3, a0          # Subtract start address+bump
        ret