1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
5 * Last edited: 21 Jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, without adding any new opcodes, thus allowing
28 the implementor to focus on adding hardware where it is needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size (by extending the effectiveness of RV opcodes, RVC in particular) and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
50 where the implementor wishes to focus on certain aspects of SV without
51 investing unnecessary time and resources in silicon, whilst still conforming
52 strictly with the API. A good area to punt to software would be the
53 polymorphic element width capability for example.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
65 A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths
83 overridden on any src or dest register.
84 * A "Vector Length" CSR is set, indicating the span of any future
85 "parallel" operations.
86 * If any operation (a **scalar** standard RV opcode) uses a register
87 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
88 is activated, of length VL, that effectively issues **multiple**
89 identical instructions using contiguous sequentially-incrementing
90 register numbers, based on the "tags".
91 * **Whether they be executed sequentially or in parallel or a
92 mixture of both or punted to software-emulation in a trap handler
93 is entirely up to the implementor**.
94
95 In this way an entire scalar algorithm may be vectorised with
96 the minimum of modification to the hardware and to compiler toolchains.
97
98 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
99 on hidden context that augments *scalar* RISCV instructions.
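As an illustration only (not part of the specification), the implicit macro-unrolling loop described above can be modelled in Python. The function name `sv_issue`, the register list and the tag dictionary are invented names for this sketch:

```python
# Software model of the SV "macro-unrolling" loop: if any operand
# register is tagged as a vector, the scalar operation is issued VL
# times with contiguously incrementing register numbers, and only
# then does the PC move on.
def sv_issue(op, rd, rs1, rs2, regs, tags, VL):
    if not (tags.get(rd) or tags.get(rs1) or tags.get(rs2)):
        regs[rd] = op(regs[rs1], regs[rs2])  # plain scalar behaviour
        return
    for i in range(VL):  # hardware loop: PC does not advance until done
        regs[rd + i] = op(regs[rs1 + i], regs[rs2 + i])

regs = list(range(32))           # toy register file: regs[n] == n
tags = {1: True}                 # x1 tagged as "vectorised"
sv_issue(lambda a, b: a + b, 1, 8, 16, regs, tags, VL=4)
# x1..x4 now hold x8+x16, x9+x17, x10+x18, x11+x19
```

Note how the *same* scalar add opcode produces four element operations purely because one of its registers was tagged.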
100
101 # CSRs <a name="csrs"></a>
102
103 * An optional "reshaping" CSR key-value table which remaps from a 1D
104 linear shape to 2D or 3D, including full transposition.
105
106 There are five additional CSRs, available in any privilege level:
107
108 * MVL (the Maximum Vector Length)
109 * VL (which has different characteristics from standard CSRs)
110 * SUBVL (effectively a kind of SIMD)
111 * STATE (containing copies of MVL, VL and SUBVL as well as context information)
112 * PCVLIW (the current operation being executed within a VLIW Group)
113
114 For User Mode there are the following CSRs:
115
116 * uePCVLIW (a copy of the sub-execution Program Counter, that is relative
117 to the start of the current VLIW Group, set on a trap).
118 * ueSTATE (useful for saving and restoring during context switch,
119 and for providing fast transitions)
120
121 There are also two additional CSRs for Supervisor-Mode:
122
123 * sePCVLIW
124 * seSTATE
125
126 And likewise for M-Mode:
127
128 * mePCVLIW
129 * meSTATE
130
131 The u/m/s CSRs are treated and handled exactly like their (x)epc
132 equivalents. On entry to a privilege level, the contents of its (x)eSTATE
133 and (x)ePCVLIW CSRs are copied into STATE and PCVLIW respectively, and
134 on exit from a priv level the STATE and PCVLIW CSRs are copied to the
135 exited priv level's corresponding CSRs.
136
137 Thus for example, a User Mode trap will end up swapping STATE and ueSTATE
138 (on both entry and exit), allowing User Mode traps to have their own
139 Vectorisation Context set up, separated from and unaffected by normal
140 user applications.
141
142 Likewise, Supervisor Mode may perform context-switches, safe in the
143 knowledge that its Vectorisation State is unaffected by User Mode.
144
145 For this to work, the (x)eSTATE CSR must be saved onto the stack by the
146 trap, just like (x)epc, before modifying the trap atomicity flag (x)ie.
147
148 The access pattern for these groups of CSRs in each mode follows the
149 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
150
151 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
152 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
153 identical to changing the S-Mode CSRs. Accessing and changing the
154 U-Mode CSRs is permitted.
155 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
156 is prohibited.
158
159 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
160 M-Mode MVL, the M-Mode STATE and so on that influences the processor
161 behaviour. Likewise for S-Mode, and likewise for U-Mode.
162
163 This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
164 up, for context-switching to take place, and, on return back to the higher
165 privileged mode, the CSRs of that mode will be exactly as they were.
166 Thus, it becomes possible for example to set up CSRs suited best to aiding
167 and assisting low-latency fast context-switching *once and only once*
168 (for example at boot time), without the need for re-initialising the
169 CSRs needed to do so.
170
171 Another interesting side effect of separate S Mode CSRs is that
172 Vectorised saving of the entire register file to the stack is a single
173 instruction (accidental provision of LOAD-MULTI semantics). If the
174 SVPrefix P64-LD-type format is used, LOAD-MULTI may even be done with a
175 single standalone 64 bit opcode (P64 may set up both VL and MVL from an
176 immediate field). It can even be predicated, which opens up some very
177 interesting possibilities.
178
179 The (x)ePCVLIW CSRs must be treated exactly like their corresponding (x)epc
180 equivalents. See VLIW section for details.
181
182 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
183
184 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
185 is variable length and may be dynamically set. MVL is
186 however limited to the regfile bitwidth XLEN (1-32 for RV32,
187 1-64 for RV64 and so on).
188
189 The reason for setting this limit is so that predication registers, when
190 marked as such, may fit into a single register as opposed to fanning
191 out over several registers. This keeps the hardware implementation a
192 little simpler.
193
194 The other important factor to note is that the actual MVL is internally
195 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
196 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
197 raise an exception. This is expressed more clearly in the "pseudocode"
198 section, where there are subtle differences between CSRRW and CSRRWI.
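The offset-by-one storage can be sketched as follows (illustrative Python, not normative; the function names are invented, and XLEN=64 is assumed):

```python
XLEN = 64

def mvl_to_field(mvl):          # store MVL-1 in the 6-bit field (RV64)
    # MVL=0 is unrepresentable and must raise an exception
    assert 1 <= mvl <= XLEN
    return mvl - 1              # 1..64 -> 0..63, fits in 6 bits

def field_to_mvl(field):        # reverse: add the offset back
    return field + 1

print(mvl_to_field(64))  # -> 63 (fits in 6 bits)
print(field_to_mvl(0))   # -> 1 (MVL can never be zero)
```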
199
200 ## Vector Length (VL) <a name="vl" />
201
202 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
203 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
204
205 VL = rd = MIN(vlen, MVL)
206
207 where 1 <= MVL <= XLEN
208
209 However just like MVL it is important to note that the range for VL has
210 subtle design implications, covered in the "CSR pseudocode" section.
211
212 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
213 to switch the entire bank of registers using a single instruction (see
214 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
215 is down to the fact that predication bits fit into a single register of
216 length XLEN bits.
217
218 The second and most important change is that, within the limits set by
219 MVL, the value passed in **must** be set in VL (and in the
220 destination register).
221
222 This has implication for the microarchitecture, as VL is required to be
223 set (limits from MVL notwithstanding) to the actual value
224 requested. RVV has the option to set VL to an arbitrary value that suits
225 the conditions and the micro-architecture: SV does *not* permit this.
226
227 The reason is so that if SV is to be used for a context-switch or as a
228 substitute for LOAD/STORE-Multiple, the operation can be done with only
229 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
230 single LD/ST operation). If VL does *not* get set to the register file
231 length when VSETVL is called, then a software-loop would be needed.
232 To avoid this need, VL *must* be set to exactly what is requested
233 (limits notwithstanding).
234
235 Therefore, in turn, unlike RVV, implementors *must* provide
236 pseudo-parallelism (using sequential loops in hardware) if actual
237 hardware-parallelism in the ALUs is not deployed. A hybrid is also
238 permitted (as used in Broadcom's VideoCore-IV) however this must be
239 *entirely* transparent to the ISA.
240
241 The third change is that VSETVL is implemented as a CSR, where the
242 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
243 the *new* value in the destination register, **not** the old value.
244 Where context-load/save is to be implemented in the usual fashion
245 by using a single CSRRW instruction to obtain the old value, the
246 *secondary* CSR must be used (STATE). This CSR by contrast behaves
247 exactly as standard CSRs, and contains more than just VL.
248
249 One interesting side-effect of using CSRRWI to set VL is that this
250 may be done with a single instruction, useful particularly for a
251 context-load/save. There are however limitations: CSRRWI's immediate
252 is limited to 0-31 (representing VL=1-32).
253
254 Note that when VL is set to 1, vector operations cease (but not subvector
255 operations: that requires setting SUBVL=1): the hardware loop is reduced
256 to a single element, i.e. scalar operations. This is in effect the default,
257 normal operating mode. However it is important to appreciate that this
258 does **not** result in the Register table or SUBVL being disabled. Only
259 when the Register table is empty (P48/64 prefix fields notwithstanding)
260 would SV have no effect.
261
262 ## SUBVL - Sub Vector Length
263
264 This is a "group by quantity" that effectively asks each iteration
265 of the hardware loop to load SUBVL elements of width elwidth at a
266 time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1
267 operation issued, SUBVL operations are issued.
268
269 Another way to view SUBVL is that each element in the VL length vector is
270 now SUBVL times elwidth bits in length and now comprises SUBVL discrete
271 sub operations. An inner SUBVL for-loop within a VL for-loop in effect,
272 with the sub-element increased every time in the innermost loop. This
273 is best illustrated in the (simplified) pseudocode example, later.
274
275 The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D
276 coordinates X,Y,Z for example may be loaded, multiplied, then stored, per
277 VL element iteration, rather than having to set VL to three times larger.
278
279 Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit
280 values 0b00 thru 0b11 to represent them).
281
282 Setting this CSR to 0 must raise an exception. Setting it to a value
283 greater than 4 likewise.
284
285 The main effect of SUBVL is that predication bits are applied per
286 **group**, rather than by individual element.
287
288 This saves a not insignificant number of instructions when handling 3D
289 vectors, as otherwise a much longer predicate mask would have to be set
290 up with regularly-repeated bit patterns.
291
292 See SUBVL Pseudocode illustration for details.
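The VL outer loop with a SUBVL inner loop, and per-group predication, can be sketched as follows (illustrative Python only; the function and register-file names are invented; the example is a 3D-vector add with VL=2, SUBVL=3):

```python
def sv_subvl_issue(op, rd, rs1, rs2, regs, VL, SUBVL, pred):
    # predication is applied per *group* of SUBVL elements, not per
    # sub-element: one predicate bit covers a whole X,Y,Z triple.
    for i in range(VL):              # outer hardware loop
        if not (pred >> i) & 1:
            continue                 # whole group skipped
        for j in range(SUBVL):       # inner sub-vector loop
            k = i * SUBVL + j
            regs[rd + k] = op(regs[rs1 + k], regs[rs2 + k])

regs = [0] * 8 + [1, 2, 3, 4, 5, 6] + [10, 20, 30, 40, 50, 60] + [0] * 12
# rs1=8 holds two XYZ triples, rs2=14 holds two more; rd=0
sv_subvl_issue(lambda a, b: a + b, 0, 8, 14, regs, VL=2, SUBVL=3, pred=0b11)
# regs[0:6] == [11, 22, 33, 44, 55, 66]
```

With `pred=0b01` the second triple would be skipped as a single group, which is exactly the instruction saving described above.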
293
294 ## STATE
295
296 This is a standard CSR that contains sufficient information for a
297 full context save/restore. It contains (and permits setting of):
298
299 * MVL
300 * VL
301 * destoffs - the destination element offset of the current parallel
302 instruction being executed
303 * srcoffs - for twin-predication, the source element offset as well.
304 * SUBVL
305 * svdestoffs - the subvector destination element offset of the current
306 parallel instruction being executed
307 * svsrcoffs - for twin-predication, the subvector source element offset
308 as well.
309
310 Interestingly, STATE may hypothetically also be modified to make the
311 immediately-following instruction skip a certain number of elements,
312 by playing with destoffs and srcoffs (and the subvector offsets as well).
313
314 Setting destoffs and srcoffs is realistically intended for saving state
315 so that exceptions (page faults in particular) may be serviced and the
316 hardware-loop that was being executed at the time of the trap, from
317 user-mode (or Supervisor-mode), may be returned to and continued from
318 exactly where it left off. The reason why this works is that the
319 User-Mode STATE is neither changed nor used in M-Mode or S-Mode (which
320 is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE
321 and seSTATE).
322
323 The format of the STATE CSR is as follows:
324
325 | (29..28) | (27..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5..0) |
326 | ------- | -------- | -------- | -------- | -------- | ------- | ------- |
327 | dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
328
329 When setting this CSR, the following characteristics will be enforced:
330
331 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
332 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
333 * **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values,
334 so no truncation is needed
335 * **srcoffs** will be truncated to be within the range 0 to VL-1
336 * **destoffs** will be truncated to be within the range 0 to VL-1
337 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
338 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
339
340 NOTE: if the following instruction is not a twin predicated instruction,
341 and destoffs or dsvoffs has been set to non-zero, subsequent execution
342 behaviour is undefined. **USE WITH CARE**.
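As an illustrative sketch (not normative), the STATE layout above can be expressed as pack/unpack helpers; the function names are invented. Note the minus-one storage of MVL, VL and SUBVL:

```python
def pack_state(maxvl, vl, srcoffs, destoffs, subvl, ssvoffs, dsvoffs):
    # MVL, VL and SUBVL are stored offset by one (zero is unrepresentable)
    return ((maxvl - 1)
            | (vl - 1) << 6
            | srcoffs << 12
            | destoffs << 18
            | (subvl - 1) << 24
            | ssvoffs << 26
            | dsvoffs << 28)

def unpack_state(s):
    return dict(maxvl=(s & 0x3f) + 1,
                vl=((s >> 6) & 0x3f) + 1,
                srcoffs=(s >> 12) & 0x3f,
                destoffs=(s >> 18) & 0x3f,
                subvl=((s >> 24) & 0x3) + 1,
                ssvoffs=(s >> 26) & 0x3,
                dsvoffs=(s >> 28) & 0x3)

s = pack_state(maxvl=64, vl=4, srcoffs=2, destoffs=2, subvl=3,
               ssvoffs=1, dsvoffs=1)
assert unpack_state(s)["vl"] == 4 and unpack_state(s)["subvl"] == 3
```

This is why a CSR read of STATE returns MVL, VL and SUBVL **minus one**, whereas direct reads of the VL and MVL CSRs return the exact values.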
343
344 ### Hardware rules for when to increment STATE offsets
345
346 The offsets inside STATE are like the indices in a loop, except
347 in hardware. They are also partially (conceptually) similar to a
348 "sub-execution Program Counter". As such, and to allow proper context
349 switching and to define correct exception behaviour, the following rules
350 must be observed:
351
352 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
353 * Each instruction that contains a "tagged" register shall start
354 execution at the *current* value of srcoffs (and destoffs in the case
355 of twin predication)
356 * Elements whose predicate bit is clear (in non-zeroing mode) shall cause
357 the element operation to be skipped, incrementing srcoffs (or destoffs)
358 * On execution of an element operation, Exceptions shall **NOT** cause
359 srcoffs or destoffs to increment.
360 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs =
361 VL-1 after the last element is executed), both srcoffs and destoffs
362 shall be reset to zero.
363
364 This latter is why srcoffs and destoffs may be stored as values from
365 0 to XLEN-1 in the STATE CSR, because as loop indices they refer to
366 elements. srcoffs and destoffs never need to be set to VL: their maximum
367 operating values are limited to 0 to VL-1.
368
369 The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
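The rules above can be modelled as a small state machine (illustrative Python; the class and method names are invented; a trap is modelled as an exception raised by the element operation, which leaves the offsets intact for later resumption):

```python
class SVState:
    def __init__(self):
        self.VL, self.srcoffs = 1, 0

    def set_vl(self, vl):
        self.VL, self.srcoffs = vl, 0     # setting VL resets the offsets

    def run_tagged_op(self, element_op, pred):
        # resumes at the *current* srcoffs, e.g. after a trap return
        while self.srcoffs < self.VL:
            i = self.srcoffs
            if (pred >> i) & 1:
                element_op(i)             # may raise: offset NOT incremented
            self.srcoffs += 1             # predicated-out elements skip
        self.srcoffs = 0                  # loop complete: reset to zero

st = SVState()
st.set_vl(4)
done = []
st.run_tagged_op(done.append, pred=0b1011)
# elements 0, 1 and 3 executed; element 2 was predicated out;
# srcoffs is back at zero after the full loop
```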
370
371 ## MVL and VL Pseudocode
372
373 The pseudo-code for get and set of VL and MVL uses the following
374 internal functions:
375
376 set_mvl_csr(value, rd):
377 regs[rd] = STATE.MVL
378 STATE.MVL = MIN(value, STATE.MVL)
379
380 get_mvl_csr(rd):
381 regs[rd] = STATE.MVL
382
383 set_vl_csr(value, rd):
384 STATE.VL = MIN(value, STATE.MVL)
385 regs[rd] = STATE.VL # yes returning the new value NOT the old CSR
386 return STATE.VL
387
388 get_vl_csr(rd):
389 regs[rd] = STATE.VL
390 return STATE.VL
391
392 Note that where setting MVL behaves as a normal CSR (returns the old
393 value), unlike standard CSR behaviour, setting VL will return the **new**
394 value of VL **not** the old one.
395
396 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
397 maximise the effectiveness, an immediate of 0 is used to set VL=1,
398 an immediate of 1 is used to set VL=2 and so on:
399
400 CSRRWI_Set_MVL(value):
401 set_mvl_csr(value+1, x0)
402
403 CSRRWI_Set_VL(value):
404 set_vl_csr(value+1, x0)
405
406 However for CSRRW the following pseudocode is used for MVL and VL,
407 where setting the value to zero will cause an exception to be raised.
408 The reason is that if VL or MVL are set to zero, the STATE CSR is
409 not capable of storing that value.
410
411 CSRRW_Set_MVL(rs1, rd):
412 value = regs[rs1]
413 if value == 0 or value > XLEN:
414 raise Exception
415 set_mvl_csr(value, rd)
416
417 CSRRW_Set_VL(rs1, rd):
418 value = regs[rs1]
419 if value == 0 or value > XLEN:
420 raise Exception
421 set_vl_csr(value, rd)
422
423 In this way, when CSRRW is utilised with a loop variable, the value
424 that goes into VL (and into the destination register) may be used
425 in an instruction-minimal fashion:
426
427 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
428 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
429 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
430 j zerotest # in case loop counter a0 already 0
431 loop:
432 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
433 ld a3, a1 # load 4 registers a3-6 from x
434 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
435 ld a7, a2 # load 4 registers a7-10 from y
436 add a1, a1, t1 # increment pointer to x by vl*8
437 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
438 sub a0, a0, t0 # n -= vl (t0)
439 st a7, a2 # store 4 registers a7-10 to y
440 add a2, a2, t1 # increment pointer to y by vl*8
441 zerotest:
442 bnez a0, loop # repeat if n != 0
443
444 With the STATE CSR, just like with CSRRWI, in order to maximise the
445 utilisation of the limited bitspace, "000000" in binary represents
446 VL==1, "000001" represents VL==2 and so on (likewise for MVL):
447
448 CSRRW_Set_SV_STATE(rs1, rd):
449 value = regs[rs1]
450 get_state_csr(rd)
451 set_mvl_csr(value[11:6]+1, x0)
452 set_vl_csr(value[5:0]+1, x0)
453 STATE.destoffs = value[23:18]
454 STATE.srcoffs = value[17:12]
455
456 get_state_csr(rd):
457 regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
458 (STATE.destoffs)<<18
459 return regs[rd]
460
461 In both cases, whilst CSR read of VL and MVL return the exact values
462 of VL and MVL respectively, reading and writing the STATE CSR returns
463 those values **minus one**. This is absolutely critical to implement
464 if the STATE CSR is to be used for fast context-switching.
465
466 ## VL, MVL and SUBVL instruction aliases
467
468 This table contains pseudo-assembly instruction aliases. Note the
469 subtraction of 1 from the CSRRWI pseudo variants, to compensate for the
470 reduced range of the 5 bit immediate.
471
472 | alias | CSR |
473 | - | - |
474 | SETVL rd, rs | CSRRW VL, rd, rs |
475 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
476 | GETVL rd | CSRRW VL, rd, x0 |
477 | SETMVL rd, rs | CSRRW MVL, rd, rs |
478 | SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
479 | GETMVL rd | CSRRW MVL, rd, x0 |
480
481 Note: CSRRC and other bit-setting operations may still be used; they are however not particularly useful (very obscure).
482
483 ## Register key-value (CAM) table <a name="regcsrtable" />
484
485 *NOTE: in prior versions of SV, this table used to be writable and
486 accessible via CSRs. It is now stored in the VLIW instruction format. Note
487 that this table does *not* get applied to the SVPrefix P48/64 format,
488 only to scalar opcodes*
489
490 The purpose of the Register table is three-fold:
491
492 * To mark integer and floating-point registers as requiring "redirection"
493 if it is ever used as a source or destination in any given operation.
494 This involves a level of indirection through a 5-to-7-bit lookup table,
495 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
496 access up to **128** registers.
497 * To indicate whether, after redirection through the lookup table, the
498 register is a vector (or remains a scalar).
499 * To over-ride the implicit or explicit bitwidth that the operation would
500 normally give the register.
501
502 Note: clearly, if an RVC operation uses a 3 bit spec'd register (x8-x15)
503 and the Register table contains entries that only refer to registers
504 x1-x7 or x16-x31, such operations will *never* activate the VL hardware
505 loop!
506
507 If however the (16 bit) Register table does contain such an entry (x8-x15
508 or x2 in the case of LWSP), that src or dest reg may be redirected
509 anywhere to the *full* 128 register range. Thus, RVC becomes far more
510 powerful and has many more opportunities to reduce code size than in
511 standard RV32/RV64 executables.
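The 5-to-7-bit redirection can be sketched as a lookup (illustrative Python; the table layout and all names here are invented for the example):

```python
# redirection table: opcode register number -> tag entry.
# here x3 is tagged and redirected to actual register 64 (of 128),
# marked as a vector.
regtable = {3: dict(isvec=True, regidx=0x40)}

def redirect(reg):
    e = regtable.get(reg)
    if e is None:
        return reg, False           # untagged: unmodified scalar access
    return e["regidx"], e["isvec"]  # tagged: full 128-register range

print(redirect(3))  # -> (64, True): vectorised, redirected
print(redirect(5))  # -> (5, False): plain scalar, untouched
```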
512
513 16 bit format:
514
515 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
516 | ------ | | - | - | - | ------ | ------- |
517 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
518 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
519 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
520 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
521
522 8 bit format:
523
524 | RegCAM | | 7 | (6..5) | (4..0) |
525 | ------ | | - | ------ | ------- |
526 | 0 | | i/f | vew0 | regnum |
527
528 i/f is set to "1" to indicate that the redirection/tag entry is to
529 be applied to integer registers; 0 indicates that it is relevant to
530 floating-point registers.
532
533 The 8 bit format is used for a much more compact expression. "isvec"
534 is implicit and, similar to [[sv_prefix_proposal]], the target vector
535 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
536 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
537 optionally set "scalar" mode.
538
539 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
540 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
541 get the actual (7 bit) register number to use, there is not enough space
542 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
543
544 vew has the following meanings, indicating that the instruction's
545 operand size is "over-ridden" in a polymorphic fashion:
546
547 | vew | bitwidth |
548 | --- | ------------------- |
549 | 00 | default (XLEN/FLEN) |
550 | 01 | 8 bit |
551 | 10 | 16 bit |
552 | 11 | 32 bit |
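A minimal sketch of the vew decode (illustrative only; `decode_vew` is an invented name, and XLEN=64 is assumed for the default):

```python
XLEN = 64

def decode_vew(vew):
    # polymorphic element-width override from the 2-bit vew field
    return {0b00: XLEN, 0b01: 8, 0b10: 16, 0b11: 32}[vew]

print(decode_vew(0b00))  # -> 64 (default: full XLEN/FLEN)
print(decode_vew(0b11))  # -> 32
```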
553
554 As the above table is a CAM (key-value store) it may be appropriate
555 (faster, implementation-wise) to expand it as follows:
556
557 struct vectorised fp_vec[32], int_vec[32];
558
559 for (i = 0; i < len; i++) // from VLIW Format
560 tb = int_vec if CSRvec[i].type == 0 else fp_vec
561 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
562 tb[idx].elwidth = CSRvec[i].elwidth
563 tb[idx].regidx = CSRvec[i].regidx // indirection
564 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
565
566 ## Predication Table <a name="predication_csr_table"></a>
567
568 *NOTE: in prior versions of SV, this table used to be writable and
569 accessible via CSRs. It is now stored in the VLIW instruction format.
570 The table does **not** apply to SVPrefix opcodes*
571
572 The Predication Table is a key-value store indicating whether, if a
573 given destination register (integer or floating-point) is referred to
574 in an instruction, it is to be predicated. Like the Register table, it
575 is an indirect lookup that allows the RV opcodes to not need modification.
576
577 It is particularly important to note
578 that the *actual* register used can be *different* from the one that is
579 in the instruction, due to the redirection through the lookup table.
580
581 * regidx is the register that in combination with the
582 i/f flag, if that integer or floating-point register is referred to in a
583 (standard RV) instruction results in the lookup table being referenced
584 to find the predication mask to use for this operation.
585 * predidx is the *actual* (full, 7 bit) register to be used for the
586 predication mask.
587 * inv indicates that the predication mask bits are to be inverted
588 prior to use *without* actually modifying the contents of the
589 register from which those bits originated.
590 * zeroing is either 1 or 0, and if set to 1, the operation must
591 place zeros in any element position where the predication mask is
592 set to zero. If zeroing is set to 0, unpredicated elements *must*
593 be left alone. Some microarchitectures may choose to interpret
594 this as skipping the operation entirely. Others which wish to
595 stick more closely to a SIMD architecture may choose instead to
596 interpret unpredicated elements as an internal "copy element"
597 operation (which would be necessary in SIMD microarchitectures
598 that perform register-renaming)
599 * ffirst is a special mode that stops sequential element processing when
600 a data-dependent condition occurs, whether a trap or a conditional test.
601 The handling of each (trap or conditional test) is slightly different:
602 see Instruction sections for further details
603
604 16 bit format:
605
606 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
607 | ----- | - | - | - | - | ------- | ------- |
608 | 0 | predidx | zero0 | inv0 | i/f | regidx | ffirst0 |
609 | 1 | predidx | zero1 | inv1 | i/f | regidx | ffirst1 |
610 | 2 | predidx | zero2 | inv2 | i/f | regidx | ffirst2 |
611 | 3 | predidx | zero3 | inv3 | i/f | regidx | ffirst3 |
612
613 Note: predidx=x0, zero=1, inv=1 is a RESERVED encoding. Its use must
614 generate an illegal instruction trap.
615
616 8 bit format:
617
618 | PrCSR | 7 | 6 | 5 | (4..0) |
619 | ----- | - | - | - | ------- |
620 | 0 | zero0 | inv0 | i/f | regnum |
621
622 The 8 bit format is a compact and less expressive variant of the full
623 16 bit format. Use of the 8 bit format is very different: the predicate
624 register to use is implicit, and numbering begins implicitly from x9. The
625 regnum is still used to "activate" predication, in the same fashion as
626 described above.
627
628 Thus if we map from 8 to 16 bit format, the table becomes:
629
630 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
631 | ----- | - | - | - | - | ------- | ------- |
632 | 0 | x9 | zero0 | inv0 | i/f | regnum | ff=0 |
633 | 1 | x10 | zero1 | inv1 | i/f | regnum | ff=0 |
634 | 2 | x11 | zero2 | inv2 | i/f | regnum | ff=0 |
635 | 3 | x12 | zero3 | inv3 | i/f | regnum | ff=0 |
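The implicit predicate-register numbering from x9 can be sketched as follows (illustrative Python; `expand_8bit_pred` is an invented name):

```python
def expand_8bit_pred(entries):
    # 8-bit format: the predicate register is implicit, numbered from
    # x9 upward by table position; ffirst is unavailable (always 0)
    out = []
    for n, e in enumerate(entries):
        out.append(dict(predidx=9 + n, zero=e["zero"], inv=e["inv"],
                        regnum=e["regnum"], ffirst=0))
    return out

t = expand_8bit_pred([dict(zero=0, inv=1, regnum=5)])
# entry 0 implicitly uses x9 as its predicate register
```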
636
637 The 16 bit Predication CSR Table is a key-value store, so
638 implementation-wise it will be faster to turn the table around (maintain
639 topologically equivalent state):
640
641 struct pred {
642 bool zero; // zeroing
643 bool inv; // register at predidx is inverted
644 bool ffirst; // fail-on-first
645 bool enabled; // use this to tell if the table-entry is active
646 int predidx; // redirection: actual int register to use
647 }
648
649 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
650 struct pred int_pred_reg[32]; // 64 in future (bank=1)
651
652 for (i = 0; i < len; i++) // number of Predication entries in VBLOCK
653 tb = int_pred_reg if PredicateTable[i].type == 0 else fp_pred_reg;
654 idx = PredicateTable[i].regidx
655 tb[idx].zero = CSRpred[i].zero
656 tb[idx].inv = CSRpred[i].inv
657 tb[idx].ffirst = CSRpred[i].ffirst
658 tb[idx].predidx = CSRpred[i].predidx
659 tb[idx].enabled = true
660
661 So when an operation is to be predicated, it is the internal state that
662 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
663 pseudo-code for operations is given, where p is the explicit (direct)
664 reference to the predication register to be used:
665
666 for (int i=0; i<vl; ++i)
667 if ([!]preg[p][i])
668 (d ? vreg[rd][i] : sreg[rd]) =
669 iop(s1 ? vreg[rs1][i] : sreg[rs1],
670 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
671
672 This instead becomes an *indirect* reference using the *internal* state
673 table generated from the Predication CSR key-value store, which is used
674 as follows.
675
676 if type(iop) == INT:
677 preg = int_pred_reg[rd]
678 else:
679 preg = fp_pred_reg[rd]
680
681 for (int i=0; i<vl; ++i)
682 predicate, zeroing = get_pred_val(type(iop) == INT, rd)
683 if (predicate & (1<<i))
684 result = iop(s1 ? regfile[rs1+i] : regfile[rs1],
685 s2 ? regfile[rs2+i] : regfile[rs2]);
686 (d ? regfile[rd+i] : regfile[rd]) = result
687 if preg.ffirst and result == 0:
688 VL = i # result was zero, end loop early, return VL
689 return
690 else if (zeroing)
691 (d ? regfile[rd+i] : regfile[rd]) = 0
692
693 Note:
694
695 * d, s1 and s2 are booleans indicating whether destination,
696 source1 and source2 are vector or scalar
697 * key-value CSR-redirection of rd, rs1 and rs2 has NOT been included
698 above, for clarity. rd, rs1 and rs2 must ALSO go through
699 register-level redirection (from the Register table) if they are
700 vectors.
701 * fail-on-first mode stops execution early whenever an operation
702 returns a zero value. Floating-point results count both
703 positive-zero and negative-zero as "fail".
704
705 If written as a function, obtaining the predication mask (and whether
706 zeroing takes place) may be done as follows:
707
    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False      // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False      // all enabled; no zeroing
        predidx = tb[reg].predidx   // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate  // invert ALL bits
        return predicate, tb[reg].zero
720
721 Note here, critically, that **only** if the register is marked
722 in its **register** table entry as being "active" does the testing
723 proceed further to check if the **predicate** table entry is
724 also active.
725
Note also that this is in direct contrast to branch operations
for the storage of comparisons: in those specific circumstances
the requirement for there to be an active *register* entry
is removed.
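The same two-level lookup may be modelled as a short runnable sketch. The
dict-based tables and field names below are illustrative only (they are
not the spec's CSR encoding), but the control flow matches the pseudo-code
above: no active *register* entry, or no active *predicate* entry, yields
an all-ones mask with zeroing disabled.

```python
# Minimal runnable model of get_pred_val (table layout is hypothetical).

XLEN = 64
ALL_ONES = (1 << XLEN) - 1

def get_pred_val(is_fp_op, reg, reg_table, pred_table, intregs):
    # step 1: the *register* table entry must be active...
    if reg_table.get(reg) is None:
        return ALL_ONES, False          # all enabled; no zeroing
    # step 2: ...and only then is the *predicate* table consulted
    p = pred_table.get(reg)
    if p is None:
        return ALL_ONES, False
    predicate = intregs[p["predidx"]]   # redirection to the actual predicate
    if p["inv"]:
        predicate = ~predicate & ALL_ONES  # invert ALL bits
    return predicate, p["zero"]
```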
730
731 ## Fail-on-First Mode <a name="ffirst-mode"></a>
732
ffirst is a special data-dependent predicate mode. There are two
variants: the first is for faults, typically on LOAD/STORE operations,
which may encounter end-of-page faults during a series of operations.
The second is for comparisons such as FEQ (or the augmented behaviour
of Branch), and for any operation that returns a result of zero (whether
integer or floating-point). In the FP case, this includes negative-zero.
739
740 Note that the execution order must "appear" to be sequential for ffirst
741 mode to work correctly. An in-order architecture must execute the element
742 operations in sequence, whilst an out-of-order architecture must *commit*
743 the element operations in sequence (giving the appearance of in-order
744 execution).
745
746 Note also, that if ffirst mode is needed without predication, a special
747 "always-on" Predicate Table Entry may be constructed by setting
748 inverse-on and using x0 as the predicate register. This
749 will have the effect of creating a mask of all ones, allowing ffirst
750 to be set.
751
752 ### Fail-on-first traps
753
Except for the first element, ffirst stops sequential element processing
when a trap occurs. The first element is treated normally (as if ffirst
is clear). Should any subsequent element require a trap,
instead it and subsequent indexed elements are ignored (or cancelled in
out-of-order designs), and VL is set to the *last* element that did
not take the trap.
760
761 Note that predicated-out elements (where the predicate mask bit is zero)
762 are clearly excluded (i.e. the trap will not occur). However, note that
763 the loop still had to test the predicate bit: thus on return,
764 VL is set to include elements that did not take the trap *and* includes
765 the elements that were predicated (masked) out (not tested up to the
766 point where the trap occurred).
767
If SUBVL is being used (SUBVL!=1), the first *sub-group* of elements
will cause a trap as normal (as if ffirst were not set); in subsequent
*sub-groups* of elements, however, the trap must not be taken. SUBVL will
**NOT** be modified.
772
773 Given that predication bits apply to SUBVL groups, the same rules apply
774 to predicated-out (masked-out) sub-groups in calculating the value that VL
775 is set to.
776
777 ### Fail-on-first conditional tests
778
ffirst stops sequential element conditional testing on the first element result
being zero. VL is set to the number of elements that were processed before
the fail-condition was encountered.
782
783 Note that just as with traps, if SUBVL!=1, the first of any of the *sub-group*
784 will cause the processing to end, and, even if there were elements within
785 the *sub-group* that passed the test, that sub-group is still (entirely)
786 excluded from the count (from setting VL). i.e. VL is set to the total
787 number of *sub-groups* that had no fail-condition up until execution was
788 stopped.
789
790 Note again that, just as with traps, predicated-out (masked-out) elements
791 are included in the count leading up to the fail-condition, even though they
792 were not tested.
793
794 The pseudo-code for Predication makes this clearer and simpler than it is
795 in words (the loop ends, VL is set to the current element index, "i").
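The sub-group counting rules can also be sketched as a small hypothetical
helper (not part of the spec's pseudo-code): a sub-group fails if *any*
element in it returned zero, masked-out groups are never tested but still
count towards VL, and the returned value is the new VL.

```python
# Illustrative sketch of ffirst VL truncation with SUBVL sub-groups.

def ffirst_vl(results, vl, subvl=1, predicate=None):
    """results[i*subvl + s] holds element results; returns the new VL."""
    if predicate is None:
        predicate = (1 << vl) - 1        # "always-on" predicate
    for i in range(vl):
        if not (predicate >> i) & 1:
            continue                     # masked out: untested, still counted
        group = results[i*subvl:(i+1)*subvl]
        if any(r == 0 for r in group):
            return i                     # VL excludes the failing sub-group
    return vl
```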
796
797 ## REMAP CSR <a name="remap" />
798
799 (Note: both the REMAP and SHAPE sections are best read after the
800 rest of the document has been read)
801
802 There is one 32-bit CSR which may be used to indicate which registers,
803 if used in any operation, must be "reshaped" (re-mapped) from a linear
804 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
805 access to elements within a register.
806
807 The 32-bit REMAP CSR may reshape up to 3 registers:
808
809 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
810 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
811 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
812
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits wide.
Given that reshaping x0 is pointless, a value of zero (referring to x0)
is used to indicate "disabled".
817 shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
818 Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
819
It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
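The field packing may be illustrated with a pair of hypothetical helper
functions. These are not part of the spec, merely a transcription of the
bit-layout table above (shape0-2 in bits 25..24, 27..26, 29..28;
regidx0-2 in bits 6..0, 14..8, 22..16; bits 7, 15, 23, 30 and 31 zero):

```python
# Hypothetical pack/unpack helpers for the 32-bit REMAP CSR layout.

def remap_pack(regidx, shape):
    assert all(s != 0x3 for s in shape), "shape value 0x3 is reserved"
    csr = 0
    for n in range(3):
        assert 0 <= regidx[n] < 128      # 7-bit *real* register index
        csr |= regidx[n] << (8 * n)      # bits 7, 15, 23 remain zero
        csr |= shape[n] << (24 + 2 * n)
    return csr

def remap_unpack(csr):
    regidx = [(csr >> (8 * n)) & 0x7f for n in range(3)]
    shape = [(csr >> (24 + 2 * n)) & 0x3 for n in range(3)]
    return regidx, shape
```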
824
825 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
826
827 (Note: both the REMAP and SHAPE sections are best read after the
828 rest of the document has been read)
829
830 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
831 which have the same format. When each SHAPE CSR is set entirely to zeros,
832 remapping is disabled: the register's elements are a linear (1D) vector.
833
834 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
835 | ------- | -- | ------- | -- | ------- | -- | ------- |
836 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
837
838 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
839 is added to the element index during the loop calculation.
840
841 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
842 that the array dimensionality for that dimension is 1. A value of xdimsz=2
843 would indicate that in the first dimension there are 3 elements in the
844 array. The format of the array is therefore as follows:
845
846 array[xdim+1][ydim+1][zdim+1]
847
848 However whilst illustrative of the dimensionality, that does not take the
849 "permute" setting into account. "permute" may be any one of six values
850 (0-5, with values of 6 and 7 being reserved, and not legal). The table
851 below shows how the permutation dimensionality order works:
852
853 | permute | order | array format |
854 | ------- | ----- | ------------------------ |
855 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
856 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
857 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
858 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
859 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
860 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
861
862 In other words, the "permute" option changes the order in which
863 nested for-loops over the array would be done. The algorithm below
864 shows this more clearly, and may be executed as a python program:
865
    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]  # starting indices
    order = [1, 0, 2] # experiment with different permutations, here
    offs = 0          # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0
885
Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index. Given that there are three possible SHAPE entries, up to
892 three separate registers in any given operation may be simultaneously
893 remapped:
894
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd].isvector) break;
            if (int_vec[rd].isvector)  { id += 1; }
            if (int_vec[rs1].isvector) { irs1 += 1; }
            if (int_vec[rs2].isvector) { irs2 += 1; }
907
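As a cross-check, the index calculation may be transcribed into a
self-contained function. Here dims, order and offs stand for the
(+1-adjusted) SHAPE dimension fields, the permute order and the offset;
this is an illustrative model, not normative:

```python
# Illustrative remap of a linear element index through one SHAPE entry.

def remap(element_idx, dims, order, offs=0):
    """dims = (xdim, ydim, zdim), already +1-adjusted sizes;
    order = permutation of (0, 1, 2), e.g. (1, 0, 2) for a 2D transpose."""
    xdim, ydim, zdim = dims
    idxs = [0, 0, 0]
    for _ in range(element_idx):         # step the nested counters forward
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != dims[order[i]]:
                break
            idxs[order[i]] = 0
    return offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
```

For a 2x3 array, permute order (1, 0, 2) walks the elements column-first,
i.e. an in-place transpose of the element ordering.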
908 By changing remappings, 2D matrices may be transposed "in-place" for one
909 operation, followed by setting a different permutation order without
910 having to move the values in the registers to or from memory. Also,
911 the reason for having REMAP separate from the three SHAPE CSRs is so
912 that in a chain of matrix multiplications and additions, for example,
913 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
914 changed to target different registers.
915
916 Note that:
917
918 * Over-running the register file clearly has to be detected and
919 an illegal instruction exception thrown
920 * When non-default elwidths are set, the exact same algorithm still
921 applies (i.e. it offsets elements *within* registers rather than
922 entire registers).
923 * If permute option 000 is utilised, the actual order of the
924 reindexing does not change!
925 * If two or more dimensions are set to zero, the actual order does not change!
926 * The above algorithm is pseudo-code **only**. Actual implementations
927 will need to take into account the fact that the element for-looping
928 must be **re-entrant**, due to the possibility of exceptions occurring.
929 See MSTATE CSR, which records the current element index.
930 * Twin-predicated operations require **two** separate and distinct
931 element offsets. The above pseudo-code algorithm will be applied
932 separately and independently to each, should each of the two
933 operands be remapped. *This even includes C.LDSP* and other operations
934 in that category, where in that case it will be the **offset** that is
935 remapped (see Compressed Stack LOAD/STORE section).
936 * Offset is especially useful, on its own, for accessing elements
937 within the middle of a register. Without offsets, it is necessary
938 to either use a predicated MV, skipping the first elements, or
939 performing a LOAD/STORE cycle to memory.
940 With offsets, the data does not have to be moved.
941 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
942 less than MVL is **perfectly legal**, albeit very obscure. It permits
943 entries to be regularly presented to operands **more than once**, thus
944 allowing the same underlying registers to act as an accumulator of
945 multiple vector or matrix operations, for example.
946
947 Clearly here some considerable care needs to be taken as the remapping
948 could hypothetically create arithmetic operations that target the
949 exact same underlying registers, resulting in data corruption due to
950 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
951 register-renaming will have an easier time dealing with this than
952 DSP-style SIMD micro-architectures.
953
954 # Instruction Execution Order
955
956 Simple-V behaves as if it is a hardware-level "macro expansion system",
957 substituting and expanding a single instruction into multiple sequential
958 instructions with contiguous and sequentially-incrementing registers.
959 As such, it does **not** modify - or specify - the behaviour and semantics of
960 the execution order: that may be deduced from the **existing** RV
961 specification in each and every case.
962
963 So for example if a particular micro-architecture permits out-of-order
964 execution, and it is augmented with Simple-V, then wherever instructions
965 may be out-of-order then so may the "post-expansion" SV ones.
966
967 If on the other hand there are memory guarantees which specifically
968 prevent and prohibit certain instructions from being re-ordered
969 (such as the Atomicity Axiom, or FENCE constraints), then clearly
970 those constraints **MUST** also be obeyed "post-expansion".
971
It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
975 It is **purely** about compacting what would otherwise be contiguous
976 instructions that use sequentially-increasing register numbers down
977 to the **one** instruction.
978
979 # Instructions <a name="instructions" />
980
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all explicit vector opcodes,
with the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G were ever to gain
a MV.X as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.
991
992 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
993 equivalents, so are left out of Simple-V. VSELECT could be included if
994 there existed a MV.X instruction in RV (MV.X is a hypothetical
995 non-immediate variant of MV that would allow another register to
996 specify which register was to be copied). Note that if any of these three
997 instructions are added to any given RV extension, their functionality
998 will be inherently parallelised.
999
1000 With some exceptions, where it does not make sense or is simply too
1001 challenging, all RV-Base instructions are parallelised:
1002
1003 * CSR instructions, whilst a case could be made for fast-polling of
1004 a CSR into multiple registers, or for being able to copy multiple
1005 contiguously addressed CSRs into contiguous registers, and so on,
are the fundamental core basis of SV. If parallelised, extreme
care would need to be taken. Additionally, CSR reads are done
using x0, and it is *really* inadvisable to tag x0.
1009 * LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
1010 left as scalar.
1011 * LR/SC could hypothetically be parallelised however their purpose is
1012 single (complex) atomic memory operations where the LR must be followed
1013 up by a matching SC. A sequence of parallel LR instructions followed
1014 by a sequence of parallel SC instructions therefore is guaranteed to
1015 not be useful. Not least: the guarantees of a Multi-LR/SC
1016 would be impossible to provide if emulated in a trap.
1017 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
1018 paralleliseable anyway.
1019
1020 All other operations using registers are automatically parallelised.
1021 This includes AMOMAX, AMOSWAP and so on, where particular care and
1022 attention must be paid.
1023
1024 Example pseudo-code for an integer ADD operation (including scalar
1025 operations). Floating-point uses the FP Register Table.
1026
    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        rd  = int_vec[rd].isvector  ? int_vec[rd].regidx  : rd;
        rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
        rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                if (!int_vec[rd].isvector) break;
            if (int_vec[rd].isvector)  { id += 1; }
            if (int_vec[rs1].isvector) { irs1 += 1; }
            if (int_vec[rs2].isvector) { irs2 += 1; }
1041
1042 Note that for simplicity there is quite a lot missing from the above
1043 pseudo-code: element widths, zeroing on predication, dimensional
1044 reshaping and offsets and so on. However it demonstrates the basic
1045 principle. Augmentations that produce the full pseudo-code are covered in
1046 other sections.
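The same loop may be exercised as an executable Python model. The
register file and vector-tagging are plain Python structures here, purely
for illustration (predication, but not zeroing, elwidths or reshaping, is
modelled):

```python
# Executable model of the op_add loop: id/irs1/irs2 are element offsets.

def op_add(rd, rs1, rs2, ireg, isvec, VL, predval=~0):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not isvec(rd):
                break                 # scalar destination: single result
        if isvec(rd):
            id += 1                   # offsets advance even when masked out
        if isvec(rs1):
            irs1 += 1
        if isvec(rs2):
            irs2 += 1
```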
1047
1048 ## SUBVL Pseudocode <a name="subvl-pseudocode"></a>
1049
Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by "(i * SUBVL + s)", predicate bits are
indexed by "(i)".
1056
    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        rd  = int_vec[rd].isvector  ? int_vec[rd].regidx  : rd;
        rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
        rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            for (s = 0; s < SUBVL; s++)
                xSTATE.ssvoffs = s # save context
                if (predval & 1<<i) # predication uses intregs
                    # actual add is here (at last)
                    ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                    if (!int_vec[rd].isvector) break;
                if (int_vec[rd].isvector)  { id += 1; }
                if (int_vec[rs1].isvector) { irs1 += 1; }
                if (int_vec[rs2].isvector) { irs2 += 1; }
                if (id == VL or irs1 == VL or irs2 == VL) {
                    # end VL hardware loop
                    xSTATE.srcoffs = 0; # reset
                    xSTATE.ssvoffs = 0; # reset
                    return;
                }
1080
1081
1082 NOTE: pseudocode simplified greatly: zeroing, proper predicate handling,
1083 elwidth handling etc. all left out.
1084
1085 ## Instruction Format
1086
1087 It is critical to appreciate that there are
1088 **no operations added to SV, at all**.
1089
1090 Instead, by using CSRs to tag registers as an indication of "changed
1091 behaviour", SV *overloads* pre-existing branch operations into predicated
1092 variants, and implicitly overloads arithmetic operations, MV, FCVT, and
1093 LOAD/STORE depending on CSR configurations for bitwidth and predication.
1094 **Everything** becomes parallelised. *This includes Compressed
1095 instructions* as well as any future instructions and Custom Extensions.
1096
1097 Note: CSR tags to change behaviour of instructions is nothing new, including
1098 in RISC-V. UXL, SXL and MXL change the behaviour so that XLEN=32/64/128.
1099 FRM changes the behaviour of the floating-point unit, to alter the rounding
1100 mode. Other architectures change the LOAD/STORE byte-order from big-endian
1101 to little-endian on a per-instruction basis. SV is just a little more...
1102 comprehensive in its effect on instructions.
1103
1104 ## Branch Instructions
1105
1106 Branch operations are augmented slightly to be a little more like FP
1107 Compares (FEQ, FNE etc.), by permitting the cumulation (and storage)
1108 of multiple comparisons into a register (taken indirectly from the predicate
1109 table). As such, "ffirst" - fail-on-first - condition mode can be enabled.
1110 See ffirst mode in the Predication Table section.
1111
1112 ### Standard Branch <a name="standard_branch"></a>
1113
1114 Branch operations use standard RV opcodes that are reinterpreted to
1115 be "predicate variants" in the instance where either of the two src
1116 registers are marked as vectors (active=1, vector=1).
1117
1118 Note that the predication register to use (if one is enabled) is taken from
1119 the *first* src register, and that this is used, just as with predicated
1120 arithmetic operations, to mask whether the comparison operations take
1121 place or not. The target (destination) predication register
1122 to use (if one is enabled) is taken from the *second* src register.
1123
1124 If either of src1 or src2 are scalars (whether by there being no
1125 CSR register entry or whether by the CSR entry specifically marking
1126 the register as "scalar") the comparison goes ahead as vector-scalar
1127 or scalar-vector.
1128
In instances where no vectorisation is detected on either src register,
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
1134
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
1145
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.
1149
1150 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
1151 for predicated compare operations of function "cmp":
1152
1153 for (int i=0; i<vl; ++i)
1154 if ([!]preg[p][i])
1155 preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
1156 s2 ? vreg[rs2][i] : sreg[rs2]);
1157
1158 With associated predication, vector-length adjustments and so on,
1159 and temporarily ignoring bitwidth (which makes the comparisons more
1160 complex), this becomes:
1161
    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch
1199
1200 Notes:
1201
1202 * Predicated SIMD comparisons would break src1 and src2 further down
1203 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
1204 Reordering") setting Vector-Length times (number of SIMD elements) bits
1205 in Predicate Register rd, as opposed to just Vector-Length bits.
1206 * The execution of "parallelised" instructions **must** be implemented
1207 as "re-entrant" (to use a term from software). If an exception (trap)
1208 occurs during the middle of a vectorised
1209 Branch (now a SV predicated compare) operation, the partial results
1210 of any comparisons must be written out to the destination
1211 register before the trap is permitted to begin. If however there
1212 is no predicate, the **entire** set of comparisons must be **restarted**,
1213 with the offset loop indices set back to zero. This is because
1214 there is no place to store the temporary result during the handling
1215 of traps.
1216
1217 TODO: predication now taken from src2. also branch goes ahead
1218 if all compares are successful.
1219
1220 Note also that where normally, predication requires that there must
1221 also be a CSR register entry for the register being used in order
1222 for the **predication** CSR register entry to also be active,
1223 for branches this is **not** the case. src2 does **not** have
1224 to have its CSR register entry marked as active in order for
1225 predication on src2 to be active.
1226
1227 Also note: SV Branch operations are **not** twin-predicated
1228 (see Twin Predication section). This would require three
1229 element offsets: one to track src1, one to track src2 and a third
1230 to track where to store the accumulation of the results. Given
1231 that the element offsets need to be exposed via CSRs so that
1232 the parallel hardware looping may be made re-entrant on traps
1233 and exceptions, the decision was made not to make SV Branches
1234 twin-predicated.
1235
1236 ### Floating-point Comparisons
1237
There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.
1242
1243 In RV (scalar) Base, a branch on a floating-point compare is
1244 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
1245 This does extend to SV, as long as x1 (in the example sequence given)
1246 is vectorised. When that is the case, x1..x(1+VL-1) will also be
1247 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
1248 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
1249 so on. Consequently, unlike integer-branch, FP Compare needs no
1250 modification in its behaviour.
1251
1252 In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
1253 and whilst in ordinary branch code this is fine because the standard
1254 RVF compare can always be followed up with an integer BEQ or a BNE (or
1255 a compressed comparison to zero or non-zero), in predication terms that
1256 becomes more of an impact. To deal with this, SV's predication has
1257 had "invert" added to it.
1258
1259 Also: note that FP Compare may be predicated, using the destination
1260 integer register (rd) to determine the predicate. FP Compare is **not**
1261 a twin-predication operation, as, again, just as with SV Branches,
1262 there are three registers involved: FP src1, FP src2 and INT rd.
1263
1264 Also: note that ffirst (fail first mode) applies directly to this operation.
1265
1266 ### Compressed Branch Instruction
1267
Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beq a0, x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.
1274
1275 The specific required use of x0 is, with a little thought, quite obvious,
1276 but is counterintuitive. Clearly it is **not** recommended to redirect
1277 x0 with a CSR register entry, however as a means to opaquely obtain
1278 a predication target it is the only sensible option that does not involve
1279 additional special CSRs (or, worse, additional special opcodes).
1280
1281 Note also that, just as with standard branches, the 2nd source
1282 (in this case x0 rather than src2) does **not** have to have its CSR
1283 register table marked as "active" in order for predication to work.
1284
1285 ## Vectorised Dual-operand instructions
1286
1287 There is a series of 2-operand instructions involving copying (and
1288 sometimes alteration):
1289
1290 * C.MV
1291 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1292 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1293 * LOAD(-FP) and STORE(-FP)
1294
1295 All of these operations follow the same two-operand pattern, so it is
1296 *both* the source *and* destination predication masks that are taken into
1297 account. This is different from
1298 the three-operand arithmetic instructions, where the predication mask
1299 is taken from the *destination* register, and applied uniformly to the
1300 elements of the source register(s), element-for-element.
1301
1302 The pseudo-code pattern for twin-predicated operations is as
1303 follows:
1304
    function op(rd, rs):
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            xSTATE.srcoffs = i # save context
            xSTATE.destoffs = j # save context
            reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break
1318
1319 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1320 and vector-vector, and predicated variants of all of those.
1321 Zeroing is not presently included (TODO). As such, when compared
1322 to RVV, the twin-predicated variants of C.MV and FMV cover
1323 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1324 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1325
1326 Note that:
1327
1328 * elwidth (SIMD) is not covered in the pseudo-code above
1329 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1330 not covered
1331 * zero predication is also not shown (TODO).
1332
1333 ### C.MV Instruction <a name="c_mv"></a>
1334
1335 There is no MV instruction in RV however there is a C.MV instruction.
1336 It is used for copying integer-to-integer registers (vectorised FMV
1337 is used for copying floating-point).
1338
1339 If either the source or the destination register are marked as vectors
1340 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1341 move operation. The actual instruction's format does not change:
1342
1343 [[!table data="""
1344 15 12 | 11 7 | 6 2 | 1 0 |
1345 funct4 | rd | rs | op |
1346 4 | 5 | 5 | 2 |
1347 C.MV | dest | src | C0 |
1348 """]]
1349
1350 A simplified version of the pseudocode for this operation is as follows:
1351
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            xSTATE.srcoffs = i # save context
            xSTATE.destoffs = j # save context
            ireg[rd+j] <= ireg[rs+i];
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break
1365
1366 There are several different instructions from RVV that are covered by
1367 this one opcode:
1368
1369 [[!table data="""
1370 src | dest | predication | op |
1371 scalar | vector | none | VSPLAT |
1372 scalar | vector | destination | sparse VSPLAT |
1373 scalar | vector | 1-bit dest | VINSERT |
1374 vector | scalar | 1-bit? src | VEXTRACT |
1375 vector | vector | none | VCOPY |
1376 vector | vector | src | Vector Gather |
1377 vector | vector | dest | Vector Scatter |
1378 vector | vector | src & dest | Gather/Scatter |
1379 vector | vector | src == dest | sparse VCOPY |
1380 """]]

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
operation of the appropriate size covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a single-precision (32-bit) floating-point
number in rd. If however the source rs1 is set to be a vector, where
elwidth is set to default/2 and "packed SIMD" is enabled, then the first
32 bits of rs1 are converted to a floating-point number to be stored in
rd's first element and the higher 32 bits *also* converted to floating-point
and stored in the second. The 32-bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1's element width is to be taken as 32.

Similar rules apply to the destination register.
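
A sketch of this packed-SIMD example in Python (the register is modelled as
a 64-bit integer; `struct` reinterprets it as two signed 32-bit elements,
each converted to floating-point, with Python floats standing in for the
single-precision results):

```python
import struct

def fcvt_s_l_packed(rs1):
    # rs1 elwidth = default/2 on RV64: treat the 64-bit register as two
    # signed 32-bit integer elements, and convert each to floating-point.
    lo, hi = struct.unpack('<ii', struct.pack('<Q', rs1))
    return [float(lo), float(hi)]

elems = fcvt_s_l_packed((7 << 32) | 3)   # element 0 = 3, element 1 = 7
```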

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (that is,
it modified the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and that one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
        rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            if (int_csr[rs].isvec)
                # indirect mode (multi mode)
                srcbase = ireg[rsv+i];
            else
                # unit stride mode
                srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
            ireg[rdv+j] <= mem[srcbase + imm_offs];
            if (!int_csr[rs].isvec &&
                !int_csr[rd].isvec) break # scalar-scalar LD
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).
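
The srcbase decision above can be sketched in Python. This is an assumption-laden
simplification: predication, zeroing and elwidth are omitted, `mem` is a dict
from byte address to loaded value, and an 8-byte (LD) stride is assumed:

```python
def op_ld_sketch(mem, regs, rd, rs, imm, vl, rd_isvec, rs_isvec):
    for i in range(vl):
        if rs_isvec:
            srcbase = regs[rs + i]       # indirect: each element is an address
        else:
            srcbase = regs[rs] + i * 8   # unit stride: XLEN/8 = 8 bytes per LD
        regs[rd + (i if rd_isvec else 0)] = mem[srcbase + imm]
        if not rs_isvec and not rd_isvec:
            break                        # scalar-scalar: standard RV LD

# unit-stride example: scalar address register x5, vector destination
regs = [0] * 32
regs[5] = 0x100
mem = {0x100: 11, 0x108: 22, 0x110: 33}
op_ld_sketch(mem, regs, rd=10, rs=5, imm=0, vl=3, rd_isvec=True, rs_isvec=False)
```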

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to load into (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.
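
The same union view can be illustrated in Python with the `struct` module,
reinterpreting one RV64 register value at each element width:

```python
import struct

reg = 0x0123456789ABCDEF              # one RV64 register
raw = struct.pack('<Q', reg)          # its 8 bytes, little-endian
b = list(raw)                         # eight  8-bit elements (.b[8])
s = list(struct.unpack('<4H', raw))   # four  16-bit elements (.s[4])
w = list(struct.unpack('<2I', raw))   # two   32-bit elements (.i[2])
```

At elwidth 8, element 0 is the lowest byte (0xEF); at elwidth 32, element 1
is the upper half (0x01234567), exactly as the union indexing implies.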

However a naive interpretation of the data structure above masks the
fact that, with VL greater than 8, for example, when the bitwidth is 8,
accessing one specific register "spills over" into the following entries
of the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt is ever made to access beyond the
"real" register bytes.
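
A Python model of this flat, byte-addressable register file makes both the
overspill and the required bounds check explicit (names and sizes here are
illustrative, not normative):

```python
XLEN_BYTES = 8
regfile = bytearray(32 * XLEN_BYTES)   # 32 RV64 registers as one flat array

def get_elem(reg, elbytes, offset):
    # element `offset` of width `elbytes`, starting at register `reg`;
    # large offsets transparently "overspill" into following registers
    base = reg * XLEN_BYTES + offset * elbytes
    if base + elbytes > len(regfile):
        # the implementor's trap: access beyond the real register bytes
        raise IndexError("access beyond real register file")
    return int.from_bytes(regfile[base:base + elbytes], 'little')

regfile[2 * XLEN_BYTES + 9] = 0x42     # physically the second byte of x3
spilled = get_elem(2, 1, 9)            # 8-bit element 9 of "register" x2
```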

Now we may modify the pseudo-code of an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non-polymorphic" versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            ...
            ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
            ...
            ...

So here we can see clearly: for 8-bit entries, rd, rs1 and rs2 (and the
registers following sequentially on from each of them) are "type-cast"
to 8-bit; for 16-bit entries likewise, and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
  sign-extension / zero-extension or whatever is specified in the standard
  RV specification, **change** that to sign-extending from the respective
  individual source operand's bitwidth from the CSR table out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sw etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = bw(int_csr[rd].elwidth) # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, id, result)
            if (!int_csr[rd].isvec) break
        if (int_csr[rd].isvec)  { id += 1; }
        if (int_csr[rs1].isvec) { irs1 += 1; }
        if (int_csr[rs2].isvec) { irs2 += 1; }

Whilst specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:

* the source operands are extended out to the maximum bitwidth of all
  source operands
* the operation takes place at that maximum source bitwidth (the
  destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
  stored in the destination. i.e. truncation (if required) to the
  destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
  (standard, scalar) register file entry is taken up, i.e. the
  element is either sign-extended or zero-extended to cover the
  full register bitwidth (XLEN) if it is not already XLEN bits long.
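
The rules above can be checked with a short worked example in Python: rs1
elwidth is 8-bit, rs2 is 16-bit and rd is 8-bit, so the add is carried out at
max(8, 16) = 16 bits and the result truncated to the destination width only
afterwards (values chosen purely for illustration):

```python
def sext(val, bits):
    # sign-extend an unsigned `bits`-wide value to a plain Python int
    m = 1 << (bits - 1)
    return ((val & ((1 << bits) - 1)) ^ m) - m

src1 = sext(0x80, 8)              # 8-bit source element 0x80 = -128
src2 = sext(0x0010, 16)           # 16-bit source element = 16
result = (src1 + src2) & 0xFFFF   # operate at maxsrcbitwidth (16): 0xFF90
dest = result & 0xFF              # truncate *after* the operation: 0x90
```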

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in fewer bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or the appropriate value) is stored if the result
is beyond the range of the destination, and, again exactly as with
standard RV scalar operations, the floating-point flag is raised (FCSR).
And, again just as with scalar operations, it is software's responsibility
to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that

* the operation bitwidth is determined by the maximum bitwidth
  of the *source registers*, **not** the destination register bitwidth
* the result is then sign-extended (or truncated) as appropriate.
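
The worked shift example above can be sketched in Python: with RS1 at 16-bit
and RS2 at 8-bit elwidth the operation width is 16, so a shift amount of 33
is first masked down to 33 & 15 = 1 (element values are illustrative):

```python
opwidth = max(16, 8)            # max of the *source* elwidths, not RD's
rs1_el = 0x0001                 # 16-bit source element
rs2_el = 33                     # shift amount: out of range for 16 bits
# mask the shift amount to 0..15 *before* shifting, then keep 16 bits
shifted = (rs1_el << (rs2_el & (opwidth - 1))) & ((1 << opwidth) - 1)
```

Without the `& (opwidth - 1)` truncation a software implementation would
shift by 33 and return 0 instead of the correct in-range result.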

## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half MSBs of a multiply that
does not fit within the range of the source operands, such that
smaller width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
  be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
  be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
  be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
  be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
  be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.
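
For instance, as a Python sketch, an unsigned 8-bit x 8-bit MULHU under these
rules produces the full 16-bit product and shifts it down by max(8, 8) = 8:

```python
a, b = 0x40, 0x40        # two 8-bit source elements (64 each)
full = a * b             # full 16-bit intermediate product: 0x1000
mulh = full >> max(8, 8) # MULH-style high half: 0x10
mul = full & 0xFF        # MUL-style low half (truncated): 0x00
```

Together the two halves reconstruct the double-width product, which is the
whole point of the MULH family.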

## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, and i is the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.
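
The index arithmetic can be sketched in Python: for LD (64-bit) with 16-bit
source elements, elsperblock is 4, so elements 0-3 come from the first
address register and element 4 moves on to the next, matching the example
tables later in this section:

```python
def split_index(i, opwidth, elwidth):
    # which address register to use (block), and which element within it;
    # max(1, ...) guards the case where elwidth exceeds the operation width
    elsperblock = max(1, opwidth // elwidth)
    return i // elsperblock, i % elsperblock

# LD (64-bit operation), source elwidth=16, VL=7
blocks = [split_index(i, 64, 16) for i in range(7)]
```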

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth) # destination element width
        srcwid = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, srcwid))
            else:
                val = sign_extend(val, min(opwidth, srcwid))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Note:

* when comparing against for example the twin-predicated c.mv
  pseudo-code, the pattern of independent incrementing of rd and rs
  is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
  taken into account (TODO).
* that due to the use of a twin-predication algorithm, LOAD/STORE also
  take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
  VSCATTER characteristics.
* that due to the use of the same set\_polymorphed\_reg pseudocode,
  a destination that is not vectorised (marked as scalar) will
  result in the element being fully sign-extended or zero-extended
  out to the full register file bitwidth (XLEN). When the source
  is also marked as scalar, this is how the compatibility with
  standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).


#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table: due to the
element width being 16 and the operation being LD (64), the 64 bits
loaded from memory are subdivided into groups of **four** elements.
And, with VL being 7 (deliberately to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:

[[!table data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the seven elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 was also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right byte
order, the registers are shown in right-to-left (MSB) order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and emphasises that registers are not
really actually "memory" as such.
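
The whole example can be checked with a short Python sketch: seven 16-bit
elements (given illustrative values 1..7 here) are zero-extended to 32 bits
and packed, as bytes, into the x8-x11 "memory":

```python
import struct

elems16 = [1, 2, 3, 4, 5, 6, 7]                    # the 7 loaded 16-bit elements
widened = [struct.pack('<I', e) for e in elems16]  # zero-extend 16 -> 32 bit
packed = b''.join(widened)                         # 28 bytes: x8..x10 + half x11
x8 = struct.unpack('<2I', packed[0:8])             # elements 0 and 1
x11_low = struct.unpack('<I', packed[24:28])[0]    # element 6, LSBs of x11
```

28 bytes fill x8, x9 and x10 completely and only the low half of x11, which
is why the top 32 bits of x11 remain unmodified.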

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more and allow 64, 128 and some other options besides. The answer here
is that it gets too complex, no RV128 implementation yet exists, and RV64's
default is 64 bit, so the 4 major element widths are covered anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when an elwidth override is set
on a destination (RD), sign-extension and zero-extension, whilst changed
to match the override bitwidth (if set), will erase the **full** register
entry (64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth set to 16 on the destination
  register, operations are truncated to 16 bit and then sign or zero
  extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or another value where
  groups of elwidth-sized elements do not fill an entire XLEN register),
  the "top" bits of the destination register do *NOT* get modified, zero'd
  or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.
2056
2057 Other microarchitectures may choose to provide byte-level write-enable
2058 lines on the register file, such that each 64 bit register in an RV64
2059 system requires 8 WE lines. Scalar RV64 operations would require
2060 activation of all 8 lines, where SV elwidth based operations would
2061 activate the required subset of those byte-level write lines.
2062
2063 Example:
2064
2065 * rs1, rs2 and rd are all set to 8-bit
2066 * VL is set to 3
2067 * RV64 architecture is set (UXL=64)
2068 * add operation is carried out
2069 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
2070 concatenated with similar add operations on bits 15..8 and 7..0
2071 * bits 24 through 63 **remain as they originally were**.
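
The example above can be modelled in a few lines. This is an illustrative
sketch only (the function name and Python modelling are not part of the
specification): note how only the bottom VL elements of rd are replaced,
with bits 24 through 63 preserved:

```python
# Sketch of the example above: VL=3, elwidth=8 on rs1, rs2 and rd (RV64).
# Bits 24-63 of rd are preserved; only bits 0-23 are replaced by the
# three 8-bit element adds.  Names here are illustrative, not normative.
def sv_add_8bit(rd_old: int, rs1: int, rs2: int, vl: int = 3) -> int:
    result = rd_old
    for i in range(vl):
        shift = i * 8
        a = (rs1 >> shift) & 0xFF
        b = (rs2 >> shift) & 0xFF
        elem = (a + b) & 0xFF          # 8-bit wraparound add
        result &= ~(0xFF << shift)     # clear only this element's bits
        result |= elem << shift        # insert the element result
    return result & 0xFFFFFFFFFFFFFFFF
```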
2072
2073 Example SIMD micro-architectural implementation:
2074
2075 * SIMD architecture works out the nearest round number of elements
2076 that would fit into a full RV64 register (in this case: 8)
2077 * SIMD architecture creates a hidden predicate, binary 0b00000111
2078 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
2079 * SIMD architecture goes ahead with the add operation as if it
2080 was a full 8-wide batch of 8 adds
2081 * SIMD architecture passes top 5 elements through the adders
2082 (which are "disabled" due to zero-bit predication)
2083 * SIMD architecture gets the top 5 unmodified 8-bit elements back
2084 and stores them in rd.
2085
2086 This requires a read of rd; however, this read is required anyway in order
2087 to support non-zeroing mode.
2088
2089 ## Polymorphic floating-point
2090
2091 Standard scalar RV integer operations base the register width on XLEN,
2092 which may be changed (UXL in USTATUS, and the corresponding MXL and
2093 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
2094 arithmetic operations are therefore restricted to an active XLEN bits,
2095 with sign or zero extension to pad out the upper bits when XLEN has
2096 been dynamically set to less than the actual register size.
2097
2098 For scalar floating-point, the active (used / changed) bits are
2099 specified exclusively by the operation: ADD.S specifies an active
2100 32-bits, with the upper bits of the source registers needing to
2101 be all 1s ("NaN-boxed"), and the destination upper bits being
2102 *set* to all 1s (including on LOAD/STOREs).
2103
2104 Where elwidth is set to default (on any source or the destination)
2105 it is obvious that this NaN-boxing behaviour can and should be
2106 preserved. When elwidth is non-default things are less obvious,
2107 so need to be thought through. Here is a normal (scalar) sequence,
2108 assuming an RV64 which supports Quad (128-bit) FLEN:
2109
2110 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
2111 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
2112 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
2113 top 64 MSBs ignored.
2114
2115 Therefore it makes sense to mirror this behaviour when, for example,
2116 elwidth is set to 32. Assume elwidth set to 32 on all source and
2117 destination registers:
2118
2119 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
2120 floating-point numbers.
2121 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
2122 in bits 0-31 and the second in bits 32-63.
2123 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
2124
2125 Here's the thing: it does not make sense to overwrite the top 64 MSBs
2126 of the registers either during the FLD **or** the ADD.D. The reason
2127 is that, effectively, the top 64 MSBs actually represent a completely
2128 independent 64-bit register, so overwriting it is not only gratuitous
2129 but may actually be harmful for a future extension to SV which may
2130 have a way to directly access those top 64 bits.
2131
2132 The decision is therefore **not** to touch the upper parts of floating-point
2133 registers wherever elwidth is set to non-default values, including
2134 when "isvec" is false in a given register's CSR entry. Only when the
2135 elwidth is set to default **and** isvec is false will the standard
2136 RV behaviour be followed, namely that the upper bits be modified.
2137
2138 Ultimately if elwidth is default and isvec false on *all* source
2139 and destination registers, a SimpleV instruction defaults completely
2140 to standard RV scalar behaviour (this holds true for **all** operations,
2141 right across the board).
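
The write rule just described can be sketched as follows, assuming a
128-bit (Quad) FLEN; the function name, structure and regfile model are
illustrative assumptions, not part of the specification:

```python
# Sketch of the FP register write rule decided above, for FLEN=128.
# Only when elwidth is default AND the register is scalar (isvec False)
# is the standard RV NaN-boxing applied (upper bits set to all 1s); in
# every other case the upper bits are preserved.  Illustrative only.
FLEN = 128

def fp_reg_write(old: int, value: int, opwidth: int,
                 elwidth_default: bool, isvec: bool) -> int:
    mask = (1 << opwidth) - 1
    if elwidth_default and not isvec:
        # standard scalar RV behaviour: NaN-box above the operation width
        upper = ((1 << (FLEN - opwidth)) - 1) << opwidth
        return upper | (value & mask)
    # SV non-default / vectorised behaviour: upper bits untouched
    return (old & ~mask) | (value & mask)
```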
2142
2143 The nice thing here is that ADD.S, ADD.D and ADD.Q with elwidth set to
2144 non-default values are effectively all the same: they all still perform
2145 multiple ADD operations, just at different widths. A future extension
2146 to SimpleV may actually allow ADD.S to access the upper bits of the
2147 register, effectively breaking down a 128-bit register into a bank
2148 of 4 independently-accessible 32-bit registers.
2149
2150 In the meantime, although when e.g. setting VL to 8 it would technically
2151 make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
2152 using ADD.Q may be an easy way to signal to the microarchitecture that
2153 it is to receive a higher VL value. On a superscalar OoO architecture
2154 there may be absolutely no difference; however, simpler SIMD-style
2155 microarchitectures may not have the infrastructure in place
2156 to know the difference, such that when VL=8 and an ADD.D instruction
2157 is issued, it completes in 2 cycles (or more) rather than one, where
2158 an ADD.Q issued instead on such a simpler microarchitecture
2159 would complete in one.
2160
2161 ## Specific instruction walk-throughs
2162
2163 This section covers walk-throughs of the above-outlined procedure
2164 for converting standard RISC-V scalar arithmetic operations to
2165 polymorphic widths, to ensure that it is correct.
2166
2167 ### add
2168
2169 Standard Scalar RV32/RV64 (xlen):
2170
2171 * RS1 @ xlen bits
2172 * RS2 @ xlen bits
2173 * add @ xlen bits
2174 * RD @ xlen bits
2175
2176 Polymorphic variant:
2177
2178 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
2179 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
2180 * add @ max(rs1, rs2) bits
2181 * RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
2182
2183 Note here that polymorphic add zero-extends its source operands,
2184 where addw sign-extends.
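
A minimal sketch of the polymorphic add rules above (the function name and
Python modelling are illustrative, not normative):

```python
# Sketch of polymorphic add: zero-extend both sources to the wider of the
# two source element widths, add at that width, then zero-extend or
# truncate to the destination element width.  Illustrative only.
def poly_add(rs1_val: int, rs1_w: int,
             rs2_val: int, rs2_w: int, rd_w: int) -> int:
    opw = max(rs1_w, rs2_w)
    a = rs1_val & ((1 << rs1_w) - 1)   # zero-extend rs1 to opw
    b = rs2_val & ((1 << rs2_w) - 1)   # zero-extend rs2 to opw
    result = (a + b) & ((1 << opw) - 1)
    # zero-extend to rd if rd > opw, otherwise truncate
    return result & ((1 << rd_w) - 1)
```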
2185
2186 ### addw
2187
2188 The RV Specification specifically states that "W" variants of arithmetic
2189 operations always produce 32-bit signed values. In a polymorphic
2190 environment it is reasonable to assume that the signed aspect is
2191 preserved, where it is the length of the operands and the result
2192 that may be changed.
2193
2194 Standard Scalar RV64 (xlen):
2195
2196 * RS1 @ xlen bits
2197 * RS2 @ xlen bits
2198 * add @ xlen bits
2199 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
2200
2201 Polymorphic variant:
2202
2203 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
2204 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
2205 * add @ max(rs1, rs2) bits
2206 * RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
2207
2208 Note here that polymorphic addw sign-extends its source operands,
2209 where add zero-extends.
2210
2211 This requires a little more in-depth analysis. Where the bitwidth of
2212 rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
2213 only where the bitwidths of rs1 and rs2 differ that the
2214 lesser-width operand will be sign-extended.
2215
2216 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
2217 where for add they are both zero-extended. This holds true for all arithmetic
2218 operations ending with "W".
2219
2220 ### addiw
2221
2222 Standard Scalar RV64I:
2223
2224 * RS1 @ xlen bits, truncated to 32-bit
2225 * immed @ 12 bits, sign-extended to 32-bit
2226 * add @ 32 bits
2227 * RD @ xlen bits. truncate add to 32-bit and sign-extend to xlen.
2228
2229 Polymorphic variant:
2230
2231 * RS1 @ rs1 bits
2232 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
2233 * add @ max(rs1, 12) bits
2234 * RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
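
The addiw rules above can be sketched as follows; the helper and function
names are illustrative assumptions only:

```python
# Sketch of polymorphic addiw: the 12-bit immediate is sign-extended to
# max(rs1, 12) bits; the result is sign-extended to rd if rd is wider,
# otherwise truncated.  Illustrative only.
def sign_ext(val: int, frm: int, to: int) -> int:
    val &= (1 << frm) - 1
    if val & (1 << (frm - 1)):          # negative: extend with 1s
        val |= ((1 << to) - 1) & ~((1 << frm) - 1)
    return val & ((1 << to) - 1)

def poly_addiw(rs1_val: int, rs1_w: int, imm12: int, rd_w: int) -> int:
    opw = max(rs1_w, 12)
    a = rs1_val & ((1 << rs1_w) - 1)
    b = sign_ext(imm12, 12, opw)        # sign-extend immediate to opw
    result = (a + b) & ((1 << opw) - 1)
    if rd_w > opw:                      # sign-extend to rd
        return sign_ext(result, opw, rd_w)
    return result & ((1 << rd_w) - 1)   # otherwise truncate
```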
2235
2236 # Predication Element Zeroing
2237
2238 The introduction of zeroing on traditional vector predication is usually
2239 intended as an optimisation for lane-based microarchitectures with register
2240 renaming to be able to save power by avoiding a register read on elements
2241 that are passed through en-masse through the ALU. Simpler microarchitectures
2242 do not have this issue: they simply do not pass the element through to
2243 the ALU at all, and therefore do not store it back in the destination.
2244 More complex non-lane-based micro-architectures can, when zeroing is
2245 not set, use the predication bits to simply avoid sending element-based
2246 operations to the ALUs, entirely: thus, over the long term, potentially
2247 keeping all ALUs 100% occupied even when elements are predicated out.
2248
2249 SimpleV's design principle is not based on or influenced by
2250 microarchitectural design factors: it is a hardware-level API.
2251 Therefore, looking purely at whether zeroing is *useful* or not,
2252 (whether less instructions are needed for certain scenarios),
2253 given that a case can be made for zeroing *and* non-zeroing, the
2254 decision was taken to add support for both.
2255
2256 ## Single-predication (based on destination register)
2257
2258 Zeroing on predication for arithmetic operations is taken from
2259 the destination register's predicate. i.e. the predication *and*
2260 zeroing settings to be applied to the whole operation come from the
2261 CSR Predication table entry for the destination register.
2262 Thus when zeroing is set on predication of a destination element,
2263 if the predication bit is clear, then the destination element is *set*
2264 to zero (twin-predication is slightly different, and will be covered
2265 next).
2266
2267 Thus the pseudo-code loop for a predicated arithmetic operation
2268 is modified to as follows:
2269
2270     for (i = 0; i < VL; i++)
2271        if not zeroing: # an optimisation
2272           while (!(predval & 1<<i) && i < VL)
2273              if (int_vec[rd ].isvector)  { ird  += 1; }
2274              if (int_vec[rs1].isvector)  { irs1 += 1; }
2275              if (int_vec[rs2].isvector)  { irs2 += 1; }
2276           if i == VL:
2277              return
2278        if (predval & 1<<i)
2279           src1 = ....
2280           src2 = ...
2281           result = src1 + src2 # actual add (or other op) here
2282           set_polymorphed_reg(rd, destwid, ird, result)
2283           if int_vec[rd].ffirst and result == 0:
2284              VL = i # result was zero, end loop early, return VL
2285              return
2286           if (!int_vec[rd].isvector) return
2287        else if zeroing:
2288           result = 0
2289           set_polymorphed_reg(rd, destwid, ird, result)
2290        if (int_vec[rd ].isvector)  { ird += 1; }
2291        else if (predval & 1<<i) return
2292        if (int_vec[rs1].isvector)  { irs1 += 1; }
2293        if (int_vec[rs2].isvector)  { irs2 += 1; }
2294        if (ird == VL or irs1 == VL or irs2 == VL): return
2296
2297 The optimisation to skip elements entirely is only possible for certain
2298 micro-architectures when zeroing is not set. However for lane-based
2299 micro-architectures this optimisation may not be practical, as it
2300 implies that elements end up in different "lanes". Under these
2301 circumstances it is perfectly fine to simply have the lanes
2302 "inactive" for predicated elements, even though it results in
2303 less than 100% ALU utilisation.
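
The zeroing versus non-zeroing distinction can be condensed into a minimal
model (all three registers vectorised with the same VL; the function name
and list-based regfile are illustrative assumptions):

```python
# Minimal model of a single-predicated add with and without zeroing.
# Predication (and zeroing) is taken from the destination register's
# predicate, as described above.  Illustrative only.
def pred_add(vl, pred, rs1, rs2, rd_old, zeroing):
    rd = list(rd_old)
    for i in range(vl):
        if pred & (1 << i):
            rd[i] = rs1[i] + rs2[i]
        elif zeroing:
            rd[i] = 0      # zeroing: masked-out element is set to zero
        # non-zeroing: masked-out element is left untouched
    return rd
```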
2304
2305 ## Twin-predication (based on source and destination register)
2306
2307 Twin-predication is not that much different, except that
2308 the source is independently zero-predicated from the destination.
2309 This means that the source may be zero-predicated *or* the
2310 destination zero-predicated *or both*, or neither.
2311
2312 When, with twin-predication, zeroing is set on the source and not
2313 the destination, a clear predicate bit indicates that a zero
2314 data element is passed through the operation (the exception being:
2315 if the source data element is to be treated as an address - a LOAD -
2316 then the data returned *from* the LOAD is zero, rather than looking up an
2317 *address* of zero).
2318
2319 When zeroing is set on the destination and not the source, then just
2320 as with single-predicated operations, a zero is stored into the destination
2321 element (or target memory address for a STORE).
2322
2323 Zeroing on both source and destination effectively results in a bitwise
2324 NAND of the source and destination predicates: the result is that
2325 where either the source predicate OR the destination predicate is 0,
2326 a zero element will ultimately end up in the destination register.
2327
2328 However: this may not necessarily be the case for all operations;
2329 implementors, particularly of custom instructions, clearly need to
2330 think through the implications in each and every case.
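
As a sketch of the case where zeroing is set on both source and
destination (both registers vectorised, same VL; names and structure are
illustrative assumptions), a zero lands wherever either predicate bit is
clear:

```python
# Model of where zero elements land when zeroing is enabled on BOTH the
# source and destination predicates of a twin-predicated MV.  An element
# passes through only where both predicate bits are set; everywhere else
# a zero is stored.  Illustrative only.
def twin_zero_mv(vl, ps, pd, src):
    dst = []
    for i in range(vl):
        if (ps & (1 << i)) and (pd & (1 << i)):
            dst.append(src[i])   # both bits set: data passes through
        else:
            dst.append(0)        # either bit clear: zero is stored
    return dst
```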
2331
2332 Here is pseudo-code for a twin zero-predicated operation:
2333
2334     function op_mv(rd, rs) # MV not VMV!
2335         rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
2336         rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
2337         ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
2338         pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
2339         for (int i = 0, int j = 0; i < VL && j < VL):
2340             if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
2341             if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
2342             if ((pd & 1<<j))
2343                 if ((ps & 1<<i))
2344                     sourcedata = ireg[rs+i];
2345                 else
2346                     sourcedata = 0
2347                 ireg[rd+j] <= sourcedata
2348             else if (zerodst)
2349                 ireg[rd+j] <= 0
2350             if (int_csr[rs].isvec)
2351                 i++;
2352             if (int_csr[rd].isvec)
2353                 j++;
2354             else
2355                 if ((pd & 1<<j))
2356                     break;
2357
2358 Note that in the instance where the destination is a scalar, the hardware
2359 loop is ended the moment a value *or a zero* is placed into the destination
2360 register/element. Also note that, for clarity, variable element widths
2361 have been left out of the above.
2362
2363 # Exceptions
2364
2365 TODO: expand. Exceptions may occur at any time, in any given underlying
2366 scalar operation. This implies that context-switching (traps) may
2367 occur, and operation must be returned to where it left off. That in
2368 turn implies that the full state - including the current parallel
2369 element being processed - has to be saved and restored. This is
2370 what the **STATE** CSR is for.
2371
2372 The implications are that all underlying individual scalar operations
2373 "issued" by the parallelisation have to appear to be executed sequentially.
2374 The further implications are that if two or more individual element
2375 operations are underway, and one with an earlier index causes an exception,
2376 it may be necessary for the microarchitecture to **discard** or terminate
2377 operations with higher indices.
2378
2379 This being somewhat dissatisfactory, an "opaque predication" variant
2380 of the STATE CSR is being considered.
2381
2382 # Hints
2383
2384 A "HINT" is an operation that has no effect on architectural state,
2385 where its use may, by agreed convention, give advance notification
2386 to the microarchitecture: branch prediction notification would be
2387 a good example. Usually HINTs are where rd=x0.
2388
2389 With Simple-V being capable of issuing *parallel* instructions where
2390 rd=x0, the space for possible HINTs is expanded considerably. VL
2391 could be used to indicate different hints. In addition, if predication
2392 is set, the predication register itself could hypothetically be passed
2393 in as a *parameter* to the HINT operation.
2394
2395 No specific hints are yet defined in Simple-V.
2396
2397 # Vector Block Format <a name="vliw-format"></a>
2398
2399 One issue with a former revision of SV was the setup and teardown
2400 time of the CSRs. The cost of the use of a full CSRRW (requiring LI)
2401 to set up registers and predicates was quite high. A VLIW-like format
2402 therefore makes sense, and is conceptually reminiscent of the ARM Thumb2
2403 "IT" instruction.
2404
2405 The format is:
2406
2407 * the standard RISC-V 80 to 192 bit encoding sequence, with bits
2408 defining the options to follow within the block
2409 * An optional VL Block (16-bit)
2410 * Optional predicate entries (8/16-bit blocks: see Predicate Table, above)
2411 * Optional register entries (8/16-bit blocks: see Register Table, above)
2412 * finally some 16/32/48 bit standard RV or SVPrefix opcodes follow.
2413
2414 Thus, the variable-length format from Section 1.5 of the RISC-V ISA is used
2415 as follows:
2416
2417 | base+4 ... base+2 | base | number of bits |
2418 | -------------------------- | ---------------- | -------------------------- |
2419 | ..xxxx xxxxxxxxxxxxxxxx | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
2420 | {ops}{Pred}{Reg}{VL Block} | SV Prefix | |
2421
2422 A suitable prefix, which fits the Expanded Instruction-Length encoding
2423 for "(80 + 16 times instruction-length)", as defined in Section 1.5
2424 of the RISC-V ISA, is as follows:
2425
2426 | 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
2427 | - | ----- | ----- | ----- | --- | ------- |
2428 | vlset | 16xil | pplen | rplen | mode | 1111111 |
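
A field decoder for the prefix table above might look as follows; the bit
positions are taken directly from the table, and everything else (the
function name, return shape) is an illustrative assumption:

```python
# Sketch decoding the 16-bit SV Prefix fields from the table above.
# Bits 6:0 must be 0b1111111 (the Expanded Instruction-Length encoding).
# Illustrative only.
def decode_sv_prefix(insn16: int) -> dict:
    assert insn16 & 0x7F == 0b1111111, "not an SV Prefix opcode"
    return {
        "mode":  (insn16 >> 7)  & 0b1,    # 16-bit (1) or 8-bit (0) blocks
        "rplen": (insn16 >> 8)  & 0b11,   # number of RegCam entries
        "pplen": (insn16 >> 10) & 0b11,   # number of PredCam entries
        "16xil": (insn16 >> 12) & 0b111,  # length: 80 + 16 * IL bits
        "vlset": (insn16 >> 15) & 0b1,    # VL Block present
    }
```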
2429
2430 The VL/MAXVL/SubVL Block format:
2431
2432 | 31-30 | 29:28 | 27:22 | 21:17 - 16 |
2433 | ----- | ----- | ------ | ---------- |
2434 | 0 | SubVL | VLdest | VLEN vlt |
2435 | 1 | SubVL | VLdest | VLEN |
2436
2437 Note: this format is very similar to that used in [[sv_prefix_proposal]]
2438
2439 If vlt is 0, VLEN is a 5 bit immediate value, offset by one (i.e.
2440 a bit sequence of 0b00000 represents VL=1 and so on). If vlt is 1,
2441 it specifies the scalar register from which VL is set by this VLIW
2442 instruction group. VL, whether set from the register or the immediate,
2443 is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
2444 in the scalar register specified in VLdest. If VLdest is zero, no store
2445 in the regfile occurs (however VL is still set).
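
The VL-setting semantics just described can be sketched as follows (the
function name and list-based regfile model are illustrative assumptions):

```python
# Sketch of the VL Block semantics: VL is set from the immediate (offset
# by one) or from a scalar register, clamped to MAXVL, and the result
# optionally written back to the VLdest scalar register.  Illustrative.
def vl_block_setvl(vlen_field, vlt, maxvl, vldest, regfile):
    if vlt == 0:
        vl = vlen_field + 1        # 5-bit immediate, offset by one
    else:
        vl = regfile[vlen_field]   # VL taken from a scalar register
    vl = min(vl, maxvl)            # truncate to MIN(VL, MAXVL)
    if vldest != 0:                # VLdest=0: VL still set, no store
        regfile[vldest] = vl
    return vl
```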
2446
2447 This option will typically be used to start vectorised loops, where
2448 the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
2449 sequence (in compact form).
2450
2451 When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
2452 VLEN (again, offset by one), which is 6 bits in length, and the same
2453 value stored in scalar register VLdest (if that register is nonzero).
2454 A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
2455 set MAXVL=VL=2 and so on.
2456
2457 This option will typically not be used so much for loops as it will be
2458 for one-off instructions such as saving the entire register file to the
2459 stack with a single one-off Vectorised and predicated LD/ST, or as a way
2460 to save or restore registers in a function call with a single instruction.
2461
2462 CSRs needed:
2463
2464 * mepcvliw
2465 * sepcvliw
2466 * uepcvliw
2467 * hepcvliw
2468
2469 Notes:
2470
2471 * Bit 7 specifies if the prefix block format is the full 16 bit format
2472 (1) or the compact less expressive format (0). In the 8 bit format,
2473 pplen is multiplied by 2.
2474 * 8 bit format predicate numbering is implicit and begins from x9. Thus
2475 it is critical to put blocks in the correct order as required.
2476 * Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
2477 (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
2478 of entries is needed, the last may be set to 0x00, indicating "unused".
2479 * Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
2480 immediately follows the VLIW instruction Prefix
2481 * Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 15 is 1,
2482 otherwise 0 to 6) follow the (optional) VL Block.
2483 * Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
2484 otherwise 0 to 6) follow the (optional) RegCam entries
2485 * Bits 14 to 12 (IL) define the actual length of the instruction: total
2486 number of bits is 80 + 16 times IL. Standard RV32, RVC and also
2487 SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
2488 (optional) VL / RegCam / PredCam entries
2489 * In any RVC or 32 Bit opcode, any registers within the VLIW-prefixed
2490 format *MUST* have the RegCam and PredCam entries applied to the
2491 operation (and the Vectorisation loop activated)
2492 * P48 and P64 opcodes do **not** take their Register or predication
2493 context from the VLIW Block tables: they do however have VL or SUBVL
2494 applied (unless VLtyp or svlen are set).
2495 * At the end of the VLIW Group, the RegCam and PredCam entries
2496 *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
2497 the values set by the last instruction (whether a CSRRW or the VL
2498 Block header).
2499 * Although an inefficient use of resources, it is fine to set the MAXVL,
2500 VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
2501
2502 All this would greatly reduce the amount of space utilised by Vectorised
2503 instructions, given that 64-bit CSRRW requires 3, even 4 32-bit opcodes:
2504 the CSR itself, a LI, and the setting up of the value into the RS
2505 register of the CSR, which, again, requires a LI / LUI to get the 32
2506 bit data into the CSR. To get 64-bit data into the register in order
2507 to put it into the CSR(s), LOAD operations from memory are needed!
2508
2509 Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
2510 entries), that's potentially six to eight 32-bit instructions, just to
2511 establish the Vector State!
2512
2513 Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more
2514 bits if VL needs to be set to greater than 32). Bear in mind that in SV,
2515 both MAXVL and VL need to be set.
2516
2517 By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
2518 only 16 bits, and as long as not too many predicates and register vector
2519 qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
2520 the format. If the full flexibility of the 16 bit block formats is not
2521 needed, more space is saved by using the 8 bit formats.
2522
2523 In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries
2524 into a VLIW format makes a lot of sense.
2525
2526 Bear in mind the warning in an earlier section that use of VLtyp or svlen
2527 in a P48 or P64 opcode within a VLIW Group will result in corruption
2528 (use) of the STATE CSR, as the STATE CSR is shared with SVPrefix. To
2529 avoid this situation, the STATE CSR may be copied into a temp register
2530 and restored afterwards.
2531
2532 Open Questions:
2533
2534 * Is it necessary to stick to the RISC-V 1.5 format? Why not go with
2535 using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
2536 limit to 256 bits (16 times 0-11).
2537 * Could a "hint" be used to set which operations are parallel and which
2538 are sequential?
2539 * Could a new sub-instruction opcode format be used, one that does not
2540 conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
2541 no need for byte or bit-alignment
2542 * Could a hardware compression algorithm be deployed? Quite likely,
2543 because of the sub-execution context (sub-VLIW PC)
2544
2545 ## Limitations on instructions.
2546
2547 To greatly simplify implementations, it is required to treat the VLIW
2548 group as a separate sub-program with its own separate PC. The sub-PC
2549 advances separately whilst the main PC remains pointing at the beginning
2550 of the VLIW instruction (not to be confused with how VL works, which
2551 is exactly the same principle, except it is VStart in the STATE CSR
2552 that increments).
2553
2554 This has implications, namely that a new set of CSRs identical to xepc
2555 (mepc, sepc, hepc and uepc) must be created and managed and respected
2556 as being a sub extension of the xepc set of CSRs. Thus, xepcvliw CSRs
2557 must be context switched and saved / restored in traps.
2558
2559 The srcoffs and destoffs indices in the STATE CSR may be similarly
2560 regarded as another sub-execution context, giving in effect two sets of
2561 nested sub-levels of the RISC-V Program Counter (actually, three including
2562 SUBVL and ssvoffs).
2563
2564 In addition, as xepcvliw CSRs are relative to the beginning of the VLIW
2565 block, branches MUST be restricted to within (relative to) the block,
2566 i.e. addressing is now restricted to the start (and very short) length
2567 of the block.
2568
2569 Also: calling subroutines is inadvisable, unless they can be entirely
2570 accomplished within a block.
2571
2572 A normal jump, normal branch and a normal function call may only be taken
2573 by letting the VLIW group end, returning to "normal" standard RV mode,
2574 and then using standard RVC, 32 bit or P48/64-\*-type opcodes.
2575
2576 ## Links
2577
2578 * <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>
2579
2580 # Subsets of RV functionality
2581
2582 This section describes the differences when SV is implemented on top of
2583 different subsets of RV.
2584
2585 ## Common options
2586
2587 It is permitted to only implement SVprefix and not the VLIW instruction
2588 format option, and vice-versa. UNIX Platforms **MUST** raise illegal
2589 instruction on seeing an unsupported VLIW or SVprefix opcode, so that
2590 traps may emulate the format.
2591
2592 It is permitted in SVprefix to either not implement VL or not implement
2593 SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX Platforms
2594 *MUST* raise illegal instruction on implementations that do not support
2595 VL or SUBVL.
2596
2597 It is permitted to limit the size of either (or both) the register files
2598 down to the original size of the standard RV architecture. However, reducing
2599 them below the mandatory limits set in the RV standard will result in
2600 non-compliance with the SV Specification.
2601
2602 ## RV32 / RV32F
2603
2604 When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
2605 maximum limit for predication is also restricted to 32 bits. Whilst not
2606 actually specifically an "option" it is worth noting.
2607
2608 ## RV32G
2609
2610 Normally in standard RV32 it does not make much sense to have
2611 RV32G. The critical instructions that are missing in standard RV32
2612 are those for moving data to and from the double-width floating-point
2613 registers into the integer ones, as well as the FCVT routines.
2614
2615 In an earlier draft of SV, it was possible to specify an elwidth
2616 of double the standard register size: this had to be dropped,
2617 and may be reintroduced in future revisions.
2618
2619 ## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
2620
2621 When floating-point is not implemented, the size of the User Register and
2622 Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
2623 per table).
2624
2625 ## RV32E
2626
2627 In embedded scenarios the User Register and Predication CSRs may be
2628 dropped entirely, or optionally limited to 1 CSR, such that the combined
2629 number of entries from the M-Mode CSR Register table plus U-Mode
2630 CSR Register table is either 4 16-bit entries or (if the U-Mode is
2631 zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
2632 the Predication CSR tables.
2633
2634 RV32E is the most likely candidate for simply detecting that registers
2635 are marked as "vectorised", and generating an appropriate exception
2636 for the VL loop to be implemented in software.
2637
2638 ## RV128
2639
2640 RV128 has not been especially considered, here, however it has some
2641 extremely large possibilities: double the element width implies
2642 256-bit operands, spanning 2 128-bit registers each, and predication
2643 of total length 128 bit given that XLEN is now 128.
2644
2645 # Under consideration <a name="issues"></a>
2646
2647 for element-grouping, if there is unused space within a register
2648 (3 16-bit elements in a 64-bit register for example), recommend:
2649
2650 * For the unused elements in an integer register, the used element
2651 closest to the MSB is sign-extended on write and the unused elements
2652 are ignored on read.
2653 * The unused elements in a floating-point register are treated as-if
2654 they are set to all ones on write and are ignored on read, matching the
2655 existing standard for storing smaller FP values in larger registers.
2656
2657 ---
2658
2659 info register,
2660
2661 > One solution is to just not support LR/SC wider than a fixed
2662 > implementation-dependent size, which must be at least
2663 > 1 XLEN word, which can be read from a read-only CSR
2664 > that can also be used for info like the kind and width of
2665 > hw parallelism supported (128-bit SIMD, minimal virtual
2666 > parallelism, etc.) and other things (like maybe the number
2667 > of registers supported).
2668
2669 > That CSR would have to have a flag to make a read trap so
2670 > a hypervisor can simulate different values.
2671
2672 ----
2673
2674 > And what about instructions like JALR? 
2675
2676 answer: they're not vectorised, so not a problem
2677
2678 ----
2679
2680 * if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
2681 XLEN if elwidth==default
2682 * if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
2683 *32* if elwidth == default
2684
2685 ---
2686
2687 TODO: document different lengths for INT / FP regfiles, and provide
2688 as part of info register. 00=32, 01=64, 10=128, 11=reserved.
2689
2690 ---
2691
2692 TODO, update to remove RegCam and PredCam CSRs, just use SVprefix and
2693 VLIW format
2694
2695 ---
2696
2697 Could the 8 bit Register VLIW format use regnum<<1 instead, only accessing regs 0 to 64?
2698
2699 --
2700
2701 Expand the range of SUBVL and its associated svsrcoffs and svdestoffs by
2702 adding a 2nd STATE CSR (or extending STATE to 64 bits). Future version?
2703
2704 --
2705
2706 TODO evaluate strncpy and strlen
2707 <https://groups.google.com/forum/m/#!msg/comp.arch/bGBeaNjAKvc/_vbqyxTUAQAJ>
2708
2709 RVV version: <a name="strncpy"></a>
2710
2711     strncpy:
2712         mv a3, a0               # Copy dst
2713     loop:
2714         setvli x0, a2, vint8    # Vectors of bytes.
2715         vlbff.v v1, (a1)        # Get src bytes
2716         vseq.vi v0, v1, 0       # Flag zero bytes
2717         vmfirst a4, v0          # Zero found?
2718         vmsif.v v0, v0          # Set mask up to and including zero byte.
2719         vsb.v v1, (a3), v0.t    # Write out bytes
2720         bgez a4, exit           # Done
2721         csrr t1, vl             # Get number of bytes fetched
2722         add a1, a1, t1          # Bump src pointer
2723         sub a2, a2, t1          # Decrement count.
2724         add a3, a3, t1          # Bump dst pointer
2725         bnez a2, loop           # Anymore?
2726
2727     exit:
2728         ret
2729
2730 SV version (WIP):
2731
2732     strncpy:
2733         mv a3, a0
2734         SETMVLI 8               # set max vector to 8
2735         RegCSR[a3] = 8bit, a3, vector
2736         RegCSR[a1] = 8bit, a1, vector
2737         PredTb[t0] = ffirst, x0, inv
2738         add t2, x0, x0          # t2 = 0
2739     loop:
2740         SETVLI a2, t4           # t4 and VL now 1..8
2741         ldb t0, (a1)            # t0 fail first mode
2742         bne t0, x0, allnonzero  # still ff
2743         # VL points to last nonzero
2744         GETVL t4                # from bne tests
2745         addi t4, t4, 1          # include zero
2746         SETVL t4                # set exactly to t4
2747         stb t0, (a3)            # store incl zero
2748         ret                     # end subroutine
2749     allnonzero:
2750         stb t0, (a3)            # VL legal range
2751         GETVL t4                # from bne tests
2752         add a1, a1, t4          # Bump src pointer
2753         sub a2, a2, t4          # Decrement count.
2754         add a3, a3, t4          # Bump dst pointer
2755         bnez a2, loop           # Anymore?
2756     exit:
2757         ret
2758
2759 Notes:
2760
2761 * ldb and bne are both using t0, both in ffirst mode
2762 * ldb will end on illegal mem, reduce VL, but copied all sorts of stuff into t0
2763 * bne behaviour modified to do multiple tests (more like FNE).
2764 * bne t0 x0 tests up to the NEW VL for nonzero, vector t0 against scalar x0
2765 * however as t0 is in ffirst mode, the first fail will ALSO stop the compares, and reduce VL as well
2766 * the branch only goes to allnonzero if all tests succeed
2767 * if it did not, we can safely increment VL by 1 (using t4) to include the zero.
2768 * SETVL sets *exactly* the requested amount into VL.
2769 * the SETVL on the not-taken path is needed in case the ldb ffirst activates but the bne does not branch to allnonzero.
2770 * this would cause the stb to copy up to the end of the legal memory
2771 * of course, on the next loop the ldb would throw a trap, as a1 points to the first illegal mem location.
2772
2773 RVV version:
2774
2775         mv a3, a0             # Save start
2776     loop:
2777         setvli a1, x0, vint8  # byte vec, x0 (Zero reg) => use max hardware len
2778         vldbff.v v1, (a3)     # Get bytes
2779         csrr a1, vl           # Get bytes actually read e.g. if fault
2780         vseq.vi v0, v1, 0     # Set v0[i] where v1[i] = 0
2781         add a3, a3, a1        # Bump pointer
2782         vmfirst a2, v0        # Find first set bit in mask, returns -1 if none
2783         bltz a2, loop         # Not found?
2784         add a0, a0, a1        # Sum start + bump
2785         add a3, a3, a2        # Add index of zero byte
2786         sub a0, a3, a0        # Subtract start address+bump
2787         ret