1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
5 * Last edited: 21 jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
creating it is to provide a manageable way to turn a pre-existing
design into a parallel one, in a step-by-step incremental fashion,
without adding any new opcodes, thus allowing the implementor to focus
on adding hardware where it is needed and necessary. The primary
target is mobile-class 3D GPUs and VPUs, with secondary goals being to
reduce executable size (by extending the effectiveness of RV opcodes,
RVC in particular) and to reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
where the implementor wishes to focus on certain aspects of SV without
investing unnecessary time and resources in silicon, whilst also
conforming strictly to the API. A good area to punt to software would be the
53 polymorphic element width capability for example.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel-processor microarchitecture of any kind
* A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths
83 overridden on any src or dest register.
84 * A "Vector Length" CSR is set, indicating the span of any future
85 "parallel" operations.
86 * If any operation (a **scalar** standard RV opcode) uses a register
87 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
88 is activated, of length VL, that effectively issues **multiple**
89 identical instructions using contiguous sequentially-incrementing
90 register numbers, based on the "tags".
91 * **Whether they be executed sequentially or in parallel or a
92 mixture of both or punted to software-emulation in a trap handler
93 is entirely up to the implementor**.
94
95 In this way an entire scalar algorithm may be vectorised with
96 the minimum of modification to the hardware and to compiler toolchains.
97
To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
on hidden context that augments *scalar* RISC-V instructions.
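
The hidden-context mechanism can be sketched in ordinary software. The
following is an illustrative model only, not part of the specification:
the names `expand` and `tagged` are hypothetical, standing in for the
Register table lookup described later.

```python
# Illustrative model (NOT the spec): how one tagged scalar instruction
# "macro-unrolls" into VL element operations. "tagged" is a hypothetical
# set of vectorised register numbers; scalar operands are broadcast.

def expand(op, rd, rs1, rs2, tagged, VL):
    if not ({rd, rs1, rs2} & tagged):
        return [(op, rd, rs1, rs2)]      # untagged: plain scalar, PC moves on
    ops = []
    for i in range(VL):                  # PC is held until the loop completes
        ops.append((op,
                    rd + i if rd in tagged else rd,
                    rs1 + i if rs1 in tagged else rs1,
                    rs2 + i if rs2 in tagged else rs2))
    return ops
```

With x3 and x10 tagged and VL=4, a single `add x3, x10, x20` issues four
element adds on x3-x6 and x10-x13, the scalar x20 being broadcast to each.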
100
101 # CSRs <a name="csrs"></a>
102
There is an optional "reshaping" CSR key-value table which remaps from a 1D
linear shape to 2D or 3D, including full transposition (see the REMAP
section, below).

There are five additional CSRs, available in any privilege level:
107
108 * MVL (the Maximum Vector Length)
109 * VL (which has different characteristics from standard CSRs)
110 * SUBVL (effectively a kind of SIMD)
111 * STATE (containing copies of MVL, VL and SUBVL as well as context information)
112 * PCVLIW (the current operation being executed within a VLIW Group)
113
114 For User Mode there are the following CSRs:
115
116 * uePCVLIW (a copy of the sub-execution Program Counter, that is relative
117 to the start of the current VLIW Group, set on a trap).
118 * ueSTATE (useful for saving and restoring during context switch,
119 and for providing fast transitions)
120
121 There are also two additional CSRs for Supervisor-Mode:
122
123 * sePCVLIW
124 * seSTATE
125
126 And likewise for M-Mode:
127
128 * mePCVLIW
129 * meSTATE
130
131 The u/m/s CSRs are treated and handled exactly like their (x)epc
132 equivalents. On entry to a privilege level, the contents of its (x)eSTATE
133 and (x)ePCVLIW CSRs are copied into STATE and PCVLIW respectively, and
134 on exit from a priv level the STATE and PCVLIW CSRs are copied to the
135 exited priv level's corresponding CSRs.
136
137 Thus for example, a User Mode trap will end up swapping STATE and ueSTATE
138 (on both entry and exit), allowing User Mode traps to have their own
139 Vectorisation Context set up, separated from and unaffected by normal
140 user applications.
141
142 Likewise, Supervisor Mode may perform context-switches, safe in the
143 knowledge that its Vectorisation State is unaffected by User Mode.
144
145 For this to work, the (x)eSTATE CSR must be saved onto the stack by the
146 trap, just like (x)epc, before modifying the trap atomicity flag (x)ie.
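
The entry/exit copying amounts to an exchange. A minimal sketch follows,
under the assumption that a dict `csr` stands in for the CSR file (the
function name is hypothetical):

```python
# Sketch only: the (x)eSTATE/(x)ePCVLIW exchange on privilege entry and
# exit. "csr" is a hypothetical stand-in for the CSR file; lvl is one of
# "ue", "se", "me". Performing the same exchange on entry and again on
# exit restores the original pairing, which is what gives traps their
# own Vectorisation Context, unaffected by normal applications.

def sv_priv_swap(csr, lvl):
    csr["STATE"], csr[lvl + "STATE"] = csr[lvl + "STATE"], csr["STATE"]
    csr["PCVLIW"], csr[lvl + "PCVLIW"] = csr[lvl + "PCVLIW"], csr["PCVLIW"]
```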
147
148 The access pattern for these groups of CSRs in each mode follows the
149 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
150
151 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is transparently
identical to changing the S-Mode CSRs. Accessing and changing the
U-Mode CSRs is permitted.
156 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
157 is prohibited.
158
159 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
160 M-Mode MVL, the M-Mode STATE and so on that influences the processor
161 behaviour. Likewise for S-Mode, and likewise for U-Mode.
162
163 This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
164 up, for context-switching to take place, and, on return back to the higher
165 privileged mode, the CSRs of that mode will be exactly as they were.
166 Thus, it becomes possible for example to set up CSRs suited best to aiding
167 and assisting low-latency fast context-switching *once and only once*
168 (for example at boot time), without the need for re-initialising the
169 CSRs needed to do so.
170
171 Another interesting side effect of separate S Mode CSRs is that
172 Vectorised saving of the entire register file to the stack is a single
173 instruction (accidental provision of LOAD-MULTI semantics). If the
174 SVPrefix P64-LD-type format is used, LOAD-MULTI may even be done with a
175 single standalone 64 bit opcode (P64 may set up both VL and MVL from an
176 immediate field). It can even be predicated, which opens up some very
177 interesting possibilities.
178
The (x)ePCVLIW CSRs must be treated exactly like their corresponding (x)epc
equivalents. See the VLIW section for details.
181
182 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
183
184 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
185 is variable length and may be dynamically set. MVL is
186 however limited to the regfile bitwidth XLEN (1-32 for RV32,
187 1-64 for RV64 and so on).
188
189 The reason for setting this limit is so that predication registers, when
190 marked as such, may fit into a single register as opposed to fanning
191 out over several registers. This keeps the hardware implementation a
192 little simpler.
193
194 The other important factor to note is that the actual MVL is internally
195 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
196 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
197 return an exception. This is expressed more clearly in the "pseudocode"
198 section, where there are subtle differences between CSRRW and CSRRWI.
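
The offset-by-one storage can be shown directly (a sketch; `XLEN` is
assumed to be 64 here, i.e. RV64):

```python
# MVL is held in STATE offset by one: for RV64, 6 bits encode 1..64.
XLEN = 64

def mvl_store(mvl):
    if not (1 <= mvl <= XLEN):
        raise ValueError("MVL of 0 (or greater than XLEN) raises an exception")
    return mvl - 1              # 1..64 -> 0..63: fits in 6 bits

def mvl_load(bits):
    return bits + 1             # stored 0..63 -> actual 1..64
```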
199
200 ## Vector Length (VL) <a name="vl" />
201
202 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
203 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
204
205 VL = rd = MIN(vlen, MVL)
206
207 where 1 <= MVL <= XLEN
208
209 However just like MVL it is important to note that the range for VL has
210 subtle design implications, covered in the "CSR pseudocode" section
211
212 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
213 to switch the entire bank of registers using a single instruction (see
214 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
215 is down to the fact that predication bits fit into a single register of
216 length XLEN bits.
217
218 The second and most important change is that, within the limits set by
219 MVL, the value passed in **must** be set in VL (and in the
220 destination register).
221
222 This has implication for the microarchitecture, as VL is required to be
223 set (limits from MVL notwithstanding) to the actual value
224 requested. RVV has the option to set VL to an arbitrary value that suits
225 the conditions and the micro-architecture: SV does *not* permit this.
226
227 The reason is so that if SV is to be used for a context-switch or as a
228 substitute for LOAD/STORE-Multiple, the operation can be done with only
229 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
230 single LD/ST operation). If VL does *not* get set to the register file
231 length when VSETVL is called, then a software-loop would be needed.
232 To avoid this need, VL *must* be set to exactly what is requested
233 (limits notwithstanding).
234
235 Therefore, in turn, unlike RVV, implementors *must* provide
236 pseudo-parallelism (using sequential loops in hardware) if actual
237 hardware-parallelism in the ALUs is not deployed. A hybrid is also
238 permitted (as used in Broadcom's VideoCore-IV) however this must be
239 *entirely* transparent to the ISA.
240
241 The third change is that VSETVL is implemented as a CSR, where the
242 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
243 the *new* value in the destination register, **not** the old value.
244 Where context-load/save is to be implemented in the usual fashion
245 by using a single CSRRW instruction to obtain the old value, the
246 *secondary* CSR must be used (STATE). This CSR by contrast behaves
247 exactly as standard CSRs, and contains more than just VL.
248
249 One interesting side-effect of using CSRRWI to set VL is that this
250 may be done with a single instruction, useful particularly for a
251 context-load/save. There are however limitations: CSRWI's immediate
252 is limited to 0-31 (representing VL=1-32).
253
Note that when VL is set to 1, vector operations cease (though not
subvector operations: disabling those requires setting SUBVL=1): the
hardware loop is reduced to a single element, i.e. to scalar operations.
This is in effect the default, normal operating mode. However it is
important to appreciate that this does **not** result in the Register
table or SUBVL being disabled. Only when the Register table is empty
(P48/64 prefix fields notwithstanding) would SV have no effect.
261
262 ## SUBVL - Sub Vector Length
263
This is a "group by quantity" that effectively asks each iteration
of the hardware loop to load SUBVL elements of width elwidth at a
time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1
operation issued, SUBVL operations are issued.
268
269 Another way to view SUBVL is that each element in the VL length vector is
270 now SUBVL times elwidth bits in length and now comprises SUBVL discrete
271 sub operations. An inner SUBVL for-loop within a VL for-loop in effect,
272 with the sub-element increased every time in the innermost loop. This
273 is best illustrated in the (simplified) pseudocode example, later.
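
The nesting above can be sketched as an inner SUBVL loop within the VL
loop (illustrative only):

```python
# Illustrative: SUBVL as an inner loop. Each VL element i comprises
# SUBVL sub-elements; the flat sub-element index is i*SUBVL + s.

def subvl_iteration_order(VL, SUBVL):
    return [(i, s, i * SUBVL + s)
            for i in range(VL)          # vector element loop
            for s in range(SUBVL)]      # SIMD-like sub-element inner loop
```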
274
The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D
coordinates X,Y,Z for example may be loaded, multiplied and then stored,
per VL element iteration, rather than having to set VL three times larger.
278
279 Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit
280 values 0b00 thru 0b11 to represent them).
281
282 Setting this CSR to 0 must raise an exception. Setting it to a value
283 greater than 4 likewise.
284
285 The main effect of SUBVL is that predication bits are applied per
286 **group**, rather than by individual element.
287
288 This saves a not insignificant number of instructions when handling 3D
289 vectors, as otherwise a much longer predicate mask would have to be set
290 up with regularly-repeated bit patterns.
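
The saving can be seen by expanding a per-group predicate into the
per-element mask it replaces (a sketch; the function name is hypothetical):

```python
# One predicate bit covers a whole SUBVL-group. Expanding it shows the
# regularly-repeated per-element pattern that would otherwise have to
# be constructed by hand.

def expand_group_pred(pred, VL, SUBVL):
    mask = 0
    for i in range(VL):
        if (pred >> i) & 1:                           # bit i covers group i
            mask |= ((1 << SUBVL) - 1) << (i * SUBVL)
    return mask
```

With VL=3 and SUBVL=3, the 3-bit predicate 0b101 stands in for the 9-bit
per-element mask 0b111000111.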
291
292 See SUBVL Pseudocode illustration for details.
293
294 ## STATE
295
296 This is a standard CSR that contains sufficient information for a
297 full context save/restore. It contains (and permits setting of):
298
299 * MVL
300 * VL
301 * destoffs - the destination element offset of the current parallel
302 instruction being executed
303 * srcoffs - for twin-predication, the source element offset as well.
304 * SUBVL
305 * svdestoffs - the subvector destination element offset of the current
306 parallel instruction being executed
307 * svsrcoffs - for twin-predication, the subvector source element offset
308 as well.
309
Interestingly, STATE may hypothetically also be modified to make the
immediately-following instruction skip a certain number of elements,
by playing with destoffs and srcoffs (and the subvector offsets as well).
313
314 Setting destoffs and srcoffs is realistically intended for saving state
315 so that exceptions (page faults in particular) may be serviced and the
316 hardware-loop that was being executed at the time of the trap, from
317 user-mode (or Supervisor-mode), may be returned to and continued from
exactly where it left off. The reason why this works is that the
User-Mode STATE is neither used nor changed in M-Mode or S-Mode (which
is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE
and seSTATE).
322
323 The format of the STATE CSR is as follows:
324
| (30..29) | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5..0) |
| -------- | -------- | -------- | -------- | -------- | ------- | ------ |
| dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
328
329 When setting this CSR, the following characteristics will be enforced:
330
* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values:
no truncation is needed
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
* **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
* **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
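
A sketch of packing and unpacking per the field table above. This assumes
(by analogy with the minus-one storage of VL and MVL stated elsewhere in
this section) that SUBVL is likewise held minus one:

```python
# Sketch of STATE field packing, following the bit layout in the table.
# MVL, VL and SUBVL are held minus one; the offsets are held as-is.

def pack_state(mvl, vl, srcoffs, destoffs, subvl, ssvoffs, dsvoffs):
    return ((mvl - 1)            # bits 5..0
            | (vl - 1) << 6      # bits 11..6
            | srcoffs << 12      # bits 17..12
            | destoffs << 18     # bits 23..18
            | (subvl - 1) << 24  # bits 26..24
            | ssvoffs << 27      # bits 28..27
            | dsvoffs << 29)     # bits 30..29

def unpack_state(v):
    return dict(mvl=(v & 0x3f) + 1, vl=((v >> 6) & 0x3f) + 1,
                srcoffs=(v >> 12) & 0x3f, destoffs=(v >> 18) & 0x3f,
                subvl=((v >> 24) & 0x7) + 1,
                ssvoffs=(v >> 27) & 0x3, dsvoffs=(v >> 29) & 0x3)
```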
339
340 NOTE: if the following instruction is not a twin predicated instruction,
341 and destoffs or dsvoffs has been set to non-zero, subsequent execution
342 behaviour is undefined. **USE WITH CARE**.
343
344 ### Hardware rules for when to increment STATE offsets
345
346 The offsets inside STATE are like the indices in a loop, except
347 in hardware. They are also partially (conceptually) similar to a
348 "sub-execution Program Counter". As such, and to allow proper context
349 switching and to define correct exception behaviour, the following rules
350 must be observed:
351
352 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
353 * Each instruction that contains a "tagged" register shall start
354 execution at the *current* value of srcoffs (and destoffs in the case
355 of twin predication)
356 * Unpredicated bits (in nonzeroing mode) shall cause the element operation
357 to skip, incrementing the srcoffs (or destoffs)
358 * On execution of an element operation, Exceptions shall **NOT** cause
359 srcoffs or destoffs to increment.
360 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs =
361 VL-1 after the last element is executed), both srcoffs and destoffs
362 shall be reset to zero.
363
364 This latter is why srcoffs and destoffs may be stored as values from
365 0 to XLEN-1 in the STATE CSR, because as loop indices they refer to
366 elements. srcoffs and destoffs never need to be set to VL: their maximum
367 operating values are limited to 0 to VL-1.
368
369 The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
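
The rules condense into a sketch of the hardware loop (illustrative only;
`state` and `do_element` are hypothetical stand-ins, and twin-predication
and SUBVL are omitted for brevity):

```python
# Sketch of the offset rules: resume from the saved offset, step past
# unpredicated (non-zeroing) elements, leave the offset pointing at a
# faulting element, and reset to zero on completion of the Vector Loop.

def run_vector_loop(state, VL, pred, do_element):
    for i in range(state["srcoffs"], VL):
        state["srcoffs"] = i      # a trap taken here resumes at element i
        if (pred >> i) & 1:
            do_element(i)         # an exception does NOT advance srcoffs
        # a zero predicate bit simply skips to the next element
    state["srcoffs"] = 0          # full Vector Loop complete: reset
```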
370
371 ## MVL and VL Pseudocode
372
373 The pseudo-code for get and set of VL and MVL use the following internal
374 functions as follows:
375
    set_mvl_csr(value, rd):
       regs[rd] = STATE.MVL
       STATE.MVL = MIN(value, XLEN)

    get_mvl_csr(rd):
       regs[rd] = STATE.MVL

    set_vl_csr(value, rd):
       STATE.VL = MIN(value, STATE.MVL)
       regs[rd] = STATE.VL # yes: this returns the new value, NOT the old one
       return STATE.VL

    get_vl_csr(rd):
       regs[rd] = STATE.VL
       return STATE.VL
391
Note that whereas setting MVL behaves as a normal CSR (returning the
old value), setting VL departs from standard CSR behaviour: it returns
the **new** value of VL, **not** the old one.
395
396 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
397 maximise the effectiveness, an immediate of 0 is used to set VL=1,
398 an immediate of 1 is used to set VL=2 and so on:
399
400 CSRRWI_Set_MVL(value):
401 set_mvl_csr(value+1, x0)
402
403 CSRRWI_Set_VL(value):
404 set_vl_csr(value+1, x0)
405
406 However for CSRRW the following pseudocode is used for MVL and VL,
407 where setting the value to zero will cause an exception to be raised.
408 The reason is that if VL or MVL are set to zero, the STATE CSR is
409 not capable of storing that value.
410
411 CSRRW_Set_MVL(rs1, rd):
412 value = regs[rs1]
413 if value == 0 or value > XLEN:
414 raise Exception
415 set_mvl_csr(value, rd)
416
417 CSRRW_Set_VL(rs1, rd):
418 value = regs[rs1]
419 if value == 0 or value > XLEN:
420 raise Exception
421 set_vl_csr(value, rd)
422
423 In this way, when CSRRW is utilised with a loop variable, the value
424 that goes into VL (and into the destination register) may be used
425 in an instruction-minimal fashion:
426
427 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
428 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
429 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
430 j zerotest # in case loop counter a0 already 0
431 loop:
432 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
433 ld a3, a1 # load 4 registers a3-6 from x
434 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
435 ld a7, a2 # load 4 registers a7-10 from y
436 add a1, a1, t1 # increment pointer to x by vl*8
437 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
438 sub a0, a0, t0 # n -= vl (t0)
439 st a7, a2 # store 4 registers a7-10 to y
440 add a2, a2, t1 # increment pointer to y by vl*8
441 zerotest:
442 bnez a0, loop # repeat if n != 0
443
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
447
    CSRRW_Set_SV_STATE(rs1, rd):
       value = regs[rs1]
       get_state_csr(rd)
       STATE.MVL = value[5:0]+1
       STATE.VL = value[11:6]+1
       STATE.srcoffs = value[17:12]
       STATE.destoffs = value[23:18]

    get_state_csr(rd):
       regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
                  (STATE.destoffs)<<18
       return regs[rd]
460
461 In both cases, whilst CSR read of VL and MVL return the exact values
462 of VL and MVL respectively, reading and writing the STATE CSR returns
463 those values **minus one**. This is absolutely critical to implement
464 if the STATE CSR is to be used for fast context-switching.
465
466 ## VL, MVL and SUBVL instruction aliases
467
468 This table contains pseudo-assembly instruction aliases. Note the
469 subtraction of 1 from the CSRRWI pseudo variants, to compensate for the
470 reduced range of the 5 bit immediate.
471
472 | alias | CSR |
473 | - | - |
474 | SETVL rd, rs | CSRRW VL, rd, rs |
475 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
476 | GETVL rd | CSRRW VL, rd, x0 |
477 | SETMVL rd, rs | CSRRW MVL, rd, rs |
478 | SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
479 | GETMVL rd | CSRRW MVL, rd, x0 |
480
Note: CSRRC and other bit-setting operations may still be used; they are
however not particularly useful (very obscure).
482
483 ## Register key-value (CAM) table <a name="regcsrtable" />
484
485 *NOTE: in prior versions of SV, this table used to be writable and
486 accessible via CSRs. It is now stored in the VLIW instruction format. Note
487 that this table does *not* get applied to the SVPrefix P48/64 format,
488 only to scalar opcodes*
489
490 The purpose of the Register table is three-fold:
491
492 * To mark integer and floating-point registers as requiring "redirection"
493 if it is ever used as a source or destination in any given operation.
494 This involves a level of indirection through a 5-to-7-bit lookup table,
495 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
496 access up to **128** registers.
497 * To indicate whether, after redirection through the lookup table, the
498 register is a vector (or remains a scalar).
499 * To over-ride the implicit or explicit bitwidth that the operation would
500 normally give the register.
501
Note: clearly, if an RVC operation uses a 3-bit spec'd register (x8-x15)
and the Register table contains entries that only refer to registers
outside that range (x0-x7 or x16-x31), such operations will *never*
activate the VL hardware loop!
506
If however the (16 bit) Register table does contain such an entry (x8-x15
or x2 in the case of LWSP), that src or dest reg may be redirected
anywhere in the *full* 128 register range. Thus, RVC becomes far more
powerful and has many more opportunities to reduce code size than in
standard RV32/RV64 executables.
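
A sketch of the redirection for a 3-bit RVC register field. The dict
`regtab` and the function name are hypothetical; in reality the lookup
goes through the Register table described below:

```python
# Illustrative only: an RVC 3-bit field names x8..x15. If that register
# has a Register-table entry it is redirected to the full 7-bit
# (128-register) range; otherwise it stays a plain scalar register and
# the VL hardware loop is never activated.

def rvc_resolve(field3, regtab):
    key = 8 + field3                    # RVC compressed regs are x8..x15
    entry = regtab.get(key)
    if entry is None:
        return key, False               # untagged: scalar x8..x15
    return entry["regidx"], entry["isvec"]
```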
512
513 16 bit format:
514
| RegCAM | 15 | (14..8) | 7 | (6..5) | (4..0) |
| ------ | ------- | -------- | --- | ------ | ------ |
| 0 | isvec0 | regidx0 | i/f | vew0 | regkey |
| 1 | isvec1 | regidx1 | i/f | vew1 | regkey |
| .. | isvec.. | regidx.. | i/f | vew.. | regkey |
| 15 | isvec15 | regidx15 | i/f | vew15 | regkey |

8 bit format:

| RegCAM | 7 | (6..5) | (4..0) |
| ------ | --- | ------ | ------ |
| 0 | i/f | vew0 | regnum |
527
i/f is set to "1" to indicate that the redirection/tag entry is to
be applied to integer registers; 0 indicates that it is relevant to
floating-point registers.
532
533 The 8 bit format is used for a much more compact expression. "isvec"
534 is implicit and, similar to [[sv-prefix-proposal]], the target vector
535 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
536 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
537 optionally set "scalar" mode.
538
539 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
540 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
541 get the actual (7 bit) register number to use, there is not enough space
542 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
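
The two target-register calculations can be contrasted directly (a sketch;
the function names are illustrative):

```python
# 16-bit format: the target register is explicit (7-bit regidx field,
# bits 8..14). 8-bit format: only 5 bits of regnum are available, so
# the target is implicitly regnum << 2, which still spans the full
# 128-register file, albeit only at multiples of 4.

def target_16bit(regidx):
    return regidx                  # explicit: 0..127

def target_8bit(regnum):
    return regnum << 2             # implicit: 0, 4, 8, ... 124
```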
543
544 vew has the following meanings, indicating that the instruction's
545 operand size is "over-ridden" in a polymorphic fashion:
546
547 | vew | bitwidth |
548 | --- | ------------------- |
549 | 00 | default (XLEN/FLEN) |
550 | 01 | 8 bit |
551 | 10 | 16 bit |
552 | 11 | 32 bit |
553
554 As the above table is a CAM (key-value store) it may be appropriate
555 (faster, implementation-wise) to expand it as follows:
556
557 struct vectorised fp_vec[32], int_vec[32];
558
559 for (i = 0; i < len; i++) // from VLIW Format
560 tb = int_vec if CSRvec[i].type == 0 else fp_vec
561 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
562 tb[idx].elwidth = CSRvec[i].elwidth
563 tb[idx].regidx = CSRvec[i].regidx // indirection
564 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
565
566 ## Predication Table <a name="predication_csr_table"></a>
567
568 *NOTE: in prior versions of SV, this table used to be writable and
569 accessible via CSRs. It is now stored in the VLIW instruction format.
570 The table does **not** apply to SVPrefix opcodes*
571
572 The Predication Table is a key-value store indicating whether, if a
573 given destination register (integer or floating-point) is referred to
574 in an instruction, it is to be predicated. Like the Register table, it
575 is an indirect lookup that allows the RV opcodes to not need modification.
576
577 It is particularly important to note
578 that the *actual* register used can be *different* from the one that is
579 in the instruction, due to the redirection through the lookup table.
580
* regidx is the register which, in combination with the i/f flag, is
looked up when that integer or floating-point register is referred to
in a (standard RV) instruction; the lookup yields the predication mask
to use for this operation.
* predidx is the *actual* (full, 7 bit) register to be used for the
predication mask.
* inv indicates that the predication mask bits are to be inverted
prior to use, *without* actually modifying the contents of the
register from which those bits originated.
591 * zeroing is either 1 or 0, and if set to 1, the operation must
592 place zeros in any element position where the predication mask is
593 set to zero. If zeroing is set to 0, unpredicated elements *must*
594 be left alone. Some microarchitectures may choose to interpret
595 this as skipping the operation entirely. Others which wish to
596 stick more closely to a SIMD architecture may choose instead to
597 interpret unpredicated elements as an internal "copy element"
598 operation (which would be necessary in SIMD microarchitectures
599 that perform register-renaming)
600
601 16 bit format:
602
603 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
604 | ----- | - | - | - | - | ------- | ------- |
605 | 0 | predkey | zero0 | inv0 | i/f | regidx | rsrvd |
606 | 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
607 | ... | predkey | ..... | .... | i/f | ....... | ....... |
608 | 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
609
610
611 8 bit format:
612
613 | PrCSR | 7 | 6 | 5 | (4..0) |
614 | ----- | - | - | - | ------- |
615 | 0 | zero0 | inv0 | i/f | regnum |
616
The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
described above.
622
623 The 16 bit Predication CSR Table is a key-value store, so
624 implementation-wise it will be faster to turn the table around (maintain
625 topologically equivalent state):
626
627 struct pred {
628 bool zero;
629 bool inv;
630 bool enabled;
631 int predidx; // redirection: actual int register to use
632 }
633
634 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
635 struct pred int_pred_reg[32]; // 64 in future (bank=1)
636
637 for (i = 0; i < 16; i++)
638 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
639 idx = CSRpred[i].regidx
640 tb[idx].zero = CSRpred[i].zero
641 tb[idx].inv = CSRpred[i].inv
642 tb[idx].predidx = CSRpred[i].predidx
643 tb[idx].enabled = true
644
645 So when an operation is to be predicated, it is the internal state that
646 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
647 pseudo-code for operations is given, where p is the explicit (direct)
648 reference to the predication register to be used:
649
650 for (int i=0; i<vl; ++i)
651 if ([!]preg[p][i])
652 (d ? vreg[rd][i] : sreg[rd]) =
653 iop(s1 ? vreg[rs1][i] : sreg[rs1],
654 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
655
656 This instead becomes an *indirect* reference using the *internal* state
657 table generated from the Predication CSR key-value store, which is used
658 as follows.
659
660 if type(iop) == INT:
661 preg = int_pred_reg[rd]
662 else:
663 preg = fp_pred_reg[rd]
664
665 for (int i=0; i<vl; ++i)
      predicate, zeroing = get_pred_val(type(iop) == INT, rd)
667 if (predicate && (1<<i))
668 (d ? regfile[rd+i] : regfile[rd]) =
669 iop(s1 ? regfile[rs1+i] : regfile[rs1],
670 s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
671 else if (zeroing)
672 (d ? regfile[rd+i] : regfile[rd]) = 0
673
674 Note:
675
676 * d, s1 and s2 are booleans indicating whether destination,
677 source1 and source2 are vector or scalar
678 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
679 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
680 register-level redirection (from the Register table) if they are
681 vectors.
682
683 If written as a function, obtaining the predication mask (and whether
684 zeroing takes place) may be done as follows:
685
    def get_pred_val(bool is_int_op, int reg):
       tb = int_vec if is_int_op else fp_vec # Register table, expanded above
       if (!tb[reg].enabled):
           return ~0x0, False // all enabled; no zeroing
       tb = int_pred_reg if is_int_op else fp_pred_reg
       if (!tb[reg].enabled):
           return ~0x0, False // all enabled; no zeroing
       predidx = tb[reg].predidx // redirection occurs HERE
       predicate = intreg[predidx] // actual predicate HERE
       if (tb[reg].inv):
           predicate = ~predicate // invert ALL bits
       return predicate, tb[reg].zero
698
699 Note here, critically, that **only** if the register is marked
700 in its **register** table entry as being "active" does the testing
701 proceed further to check if the **predicate** table entry is
702 also active.
703
Note also that this is in direct contrast to branch operations
for the storage of comparisons: in those specific circumstances
the requirement for there to be an active *register* entry
is removed.
708
709 ## REMAP CSR <a name="remap" />
710
711 (Note: both the REMAP and SHAPE sections are best read after the
712 rest of the document has been read)
713
714 There is one 32-bit CSR which may be used to indicate which registers,
715 if used in any operation, must be "reshaped" (re-mapped) from a linear
716 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
717 access to elements within a register.
718
719 The 32-bit REMAP CSR may reshape up to 3 registers:
720
721 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
722 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
723 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
724
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits wide.
A value of zero would refer to x0, and reshaping x0 is clearly pointless,
so zero is used to indicate "disabled".
shape0-2 refer to one of the three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
731
It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled instead.
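The field layout above can be illustrated with a small packing/unpacking sketch. The helper names and the list-based interface are invented for illustration; only the bit positions come from the table.

```python
# Pack/unpack the 32-bit REMAP CSR layout shown above: regidx fields
# at bits 6..0, 14..8 and 22..16; shape fields at bits 25..24, 27..26
# and 29..28; reserved bits (7, 15, 23, 30, 31) stay zero.

def remap_pack(regidx, shape):
    val = 0
    for n in range(3):
        val |= (regidx[n] & 0x7F) << (8 * n)
        val |= (shape[n] & 0x3) << (24 + 2 * n)
    return val

def remap_unpack(val):
    regidx = [(val >> (8 * n)) & 0x7F for n in range(3)]
    shape = [(val >> (24 + 2 * n)) & 0x3 for n in range(3)]
    return regidx, shape

# reshape real register x5 using SHAPE2 (shape0 field = 2):
csr = remap_pack([5, 0, 0], [2, 0, 0])
```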
736
737 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
738
739 (Note: both the REMAP and SHAPE sections are best read after the
740 rest of the document has been read)
741
742 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
743 which have the same format. When each SHAPE CSR is set entirely to zeros,
744 remapping is disabled: the register's elements are a linear (1D) vector.
745
746 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
747 | ------- | -- | ------- | -- | ------- | -- | ------- |
748 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
749
750 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
751 is added to the element index during the loop calculation.
752
753 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
754 that the array dimensionality for that dimension is 1. A value of xdimsz=2
755 would indicate that in the first dimension there are 3 elements in the
756 array. The format of the array is therefore as follows:
757
758 array[xdim+1][ydim+1][zdim+1]
759
760 However whilst illustrative of the dimensionality, that does not take the
761 "permute" setting into account. "permute" may be any one of six values
762 (0-5, with values of 6 and 7 being reserved, and not legal). The table
763 below shows how the permutation dimensionality order works:
764
765 | permute | order | array format |
766 | ------- | ----- | ------------------------ |
767 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
768 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
769 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
770 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
771 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
772 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
773
774 In other words, the "permute" option changes the order in which
775 nested for-loops over the array would be done. The algorithm below
776 shows this more clearly, and may be executed as a python program:
777
    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]  # starting indices
    order = [1, 0, 2] # experiment with different permutations, here
    offs = 0          # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=" ")
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0
797
798 Here, it is assumed that this algorithm be run within all pseudo-code
799 throughout this document where a (parallelism) for-loop would normally
800 run from 0 to VL-1 to refer to contiguous register
801 elements; instead, where REMAP indicates to do so, the element index
802 is run through the above algorithm to work out the **actual** element
803 index, instead. Given that there are three possible SHAPE entries, up to
804 three separate registers in any given operation may be simultaneously
805 remapped:
806
    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
819
820 By changing remappings, 2D matrices may be transposed "in-place" for one
821 operation, followed by setting a different permutation order without
822 having to move the values in the registers to or from memory. Also,
823 the reason for having REMAP separate from the three SHAPE CSRs is so
824 that in a chain of matrix multiplications and additions, for example,
825 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
826 changed to target different registers.
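To make the "in-place transpose" point concrete, the index algorithm above can be applied to a 2x3 row-major matrix. The function name and register list below are illustrative stand-ins, not part of the specification.

```python
# Reading a 2x3 row-major matrix in transposed order purely by
# changing the index permutation; no register contents are moved.

def remap_indices(xdim, ydim, zdim, order, offs=0):
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

regs = [10, 11, 12, 20, 21, 22]   # 2 rows x 3 cols, row-major
# permute order 1,0,2 walks the y index fastest: column-major access
transposed = [regs[k] for k in remap_indices(3, 2, 1, order=[1, 0, 2])]
```

With the identity order [0, 1, 2] the indices come out linear, matching the note above that permute option 000 leaves the order unchanged.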
827
828 Note that:
829
830 * Over-running the register file clearly has to be detected and
831 an illegal instruction exception thrown
832 * When non-default elwidths are set, the exact same algorithm still
833 applies (i.e. it offsets elements *within* registers rather than
834 entire registers).
835 * If permute option 000 is utilised, the actual order of the
836 reindexing does not change!
837 * If two or more dimensions are set to zero, the actual order does not change!
838 * The above algorithm is pseudo-code **only**. Actual implementations
839 will need to take into account the fact that the element for-looping
840 must be **re-entrant**, due to the possibility of exceptions occurring.
841 See MSTATE CSR, which records the current element index.
842 * Twin-predicated operations require **two** separate and distinct
843 element offsets. The above pseudo-code algorithm will be applied
844 separately and independently to each, should each of the two
845 operands be remapped. *This even includes C.LDSP* and other operations
846 in that category, where in that case it will be the **offset** that is
847 remapped (see Compressed Stack LOAD/STORE section).
848 * Offset is especially useful, on its own, for accessing elements
849 within the middle of a register. Without offsets, it is necessary
850 to either use a predicated MV, skipping the first elements, or
851 performing a LOAD/STORE cycle to memory.
852 With offsets, the data does not have to be moved.
853 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
854 less than MVL is **perfectly legal**, albeit very obscure. It permits
855 entries to be regularly presented to operands **more than once**, thus
856 allowing the same underlying registers to act as an accumulator of
857 multiple vector or matrix operations, for example.
858
Clearly, considerable care needs to be taken here, as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.
865
866 # Instruction Execution Order
867
868 Simple-V behaves as if it is a hardware-level "macro expansion system",
869 substituting and expanding a single instruction into multiple sequential
870 instructions with contiguous and sequentially-incrementing registers.
871 As such, it does **not** modify - or specify - the behaviour and semantics of
872 the execution order: that may be deduced from the **existing** RV
873 specification in each and every case.
874
875 So for example if a particular micro-architecture permits out-of-order
876 execution, and it is augmented with Simple-V, then wherever instructions
877 may be out-of-order then so may the "post-expansion" SV ones.
878
879 If on the other hand there are memory guarantees which specifically
880 prevent and prohibit certain instructions from being re-ordered
881 (such as the Atomicity Axiom, or FENCE constraints), then clearly
882 those constraints **MUST** also be obeyed "post-expansion".
883
It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to **one** instruction.
890
891 # Instructions <a name="instructions" />
892
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, although
xBitManip becomes a critical dependency for efficient manipulation of
predication masks (as a bit-field). With the exception of CLIP and
VSELECT.X, *all instructions from RVV Base are topologically re-mapped
and retain their complete functionality, intact*. Note that if RV64G
ever gained a MV.X as well as FCLIP, the full functionality of RVV-Base
would be obtained in SV.
903
904 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
905 equivalents, so are left out of Simple-V. VSELECT could be included if
906 there existed a MV.X instruction in RV (MV.X is a hypothetical
907 non-immediate variant of MV that would allow another register to
908 specify which register was to be copied). Note that if any of these three
909 instructions are added to any given RV extension, their functionality
910 will be inherently parallelised.
911
912 With some exceptions, where it does not make sense or is simply too
913 challenging, all RV-Base instructions are parallelised:
914
* CSR instructions are the fundamental core basis of SV, so although
a case could be made for fast-polling of a CSR into multiple registers,
or for being able to copy multiple contiguously addressed CSRs into
contiguous registers, and so on, extreme care would need to be taken
if they were parallelised. Additionally, CSR reads are done
using x0, and it is *really* inadvisable to tag x0.
921 * LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
922 left as scalar.
923 * LR/SC could hypothetically be parallelised however their purpose is
924 single (complex) atomic memory operations where the LR must be followed
925 up by a matching SC. A sequence of parallel LR instructions followed
926 by a sequence of parallel SC instructions therefore is guaranteed to
927 not be useful. Not least: the guarantees of a Multi-LR/SC
928 would be impossible to provide if emulated in a trap.
929 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
930 paralleliseable anyway.
931
932 All other operations using registers are automatically parallelised.
933 This includes AMOMAX, AMOSWAP and so on, where particular care and
934 attention must be paid.
935
Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point operations use the FP CSRs instead.
938
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
953
954 Note that for simplicity there is quite a lot missing from the above
955 pseudo-code: element widths, zeroing on predication, dimensional
956 reshaping and offsets and so on. However it demonstrates the basic
957 principle. Augmentations that produce the full pseudo-code are covered in
958 other sections.
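The hardware loop above can be modelled in a few lines of executable Python. Predication, REMAP and elwidth are deliberately omitted, and the vector-tag dictionary is an illustrative stand-in for the CSR register table.

```python
# Toy model of the SV expansion of a scalar ADD: registers tagged as
# vectors advance each iteration; scalar operands stay pinned.
VL = 4
regfile = list(range(32))            # toy integer register file
isvector = {10: True, 20: True}      # regs tagged as vectors

def op_add(rd, rs1, rs2):
    id_ = irs1 = irs2 = 0
    for i in range(VL):
        regfile[rd + id_] = regfile[rs1 + irs1] + regfile[rs2 + irs2]
        if not isvector.get(rd, False):
            break                    # scalar rd: issue only once
        id_ += 1                     # rd is a vector: advance
        if isvector.get(rs1, False): irs1 += 1
        if isvector.get(rs2, False): irs2 += 1

op_add(10, 20, 30)   # vector rd, vector rs1, scalar rs2 (reg 30)
```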
959
960 ## SUBVL Pseudocode
961
Adding support for SUBVL is a matter of adding an extra inner for-loop, where the register src and dest are still incremented inside the inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by (i * SUBVL + s), predicate bits are indexed by i.
965
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             # actual add is here (at last)
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector)  { id += 1; }
          if (int_vec[rs1].isvector)  { irs1 += 1; }
          if (int_vec[rs2].isvector)  { irs2 += 1; }
          if (id == VL or irs1 == VL or irs2 == VL) {
            # end VL hardware loop
            xSTATE.srcoffs = 0; # reset
            xSTATE.ssvoffs = 0; # reset
            return;
          }
989
990
NOTE: the pseudocode is greatly simplified: zeroing, proper predicate handling, elwidth handling etc. are all left out.
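The nesting can be modelled executably as follows. All operands are assumed to be vectors, the same simplifications as the pseudocode apply, and the point to note is that the predicate bit is selected by i only:

```python
# Toy SUBVL model: registers advance per sub-element, but masking is
# applied per *group* of SUBVL elements (per outer index i).
VL, SUBVL = 3, 2
ireg = list(range(32))

def op_add_subvl(rd, rs1, rs2, predval):
    idx = 0                              # element counter (id/irs*)
    for i in range(VL):
        for s in range(SUBVL):
            if predval & (1 << i):       # predicate bit: i, not s
                ireg[rd + idx] = ireg[rs1 + idx] + ireg[rs2 + idx]
            idx += 1                     # advance per sub-element

op_add_subvl(1, 10, 20, predval=0b101)   # group i=1 masked out entirely
```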
992
993 ## Instruction Format
994
995 It is critical to appreciate that there are
996 **no operations added to SV, at all**.
997
998 Instead, by using CSRs to tag registers as an indication of "changed
999 behaviour", SV *overloads* pre-existing branch operations into predicated
1000 variants, and implicitly overloads arithmetic operations, MV, FCVT, and
1001 LOAD/STORE depending on CSR configurations for bitwidth and predication.
1002 **Everything** becomes parallelised. *This includes Compressed
1003 instructions* as well as any future instructions and Custom Extensions.
1004
Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit,
to alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.
1011
1012 ## Branch Instructions
1013
1014 ### Standard Branch <a name="standard_branch"></a>
1015
1016 Branch operations use standard RV opcodes that are reinterpreted to
1017 be "predicate variants" in the instance where either of the two src
1018 registers are marked as vectors (active=1, vector=1).
1019
1020 Note that the predication register to use (if one is enabled) is taken from
1021 the *first* src register, and that this is used, just as with predicated
1022 arithmetic operations, to mask whether the comparison operations take
1023 place or not. The target (destination) predication register
1024 to use (if one is enabled) is taken from the *second* src register.
1025
1026 If either of src1 or src2 are scalars (whether by there being no
1027 CSR register entry or whether by the CSR entry specifically marking
1028 the register as "scalar") the comparison goes ahead as vector-scalar
1029 or scalar-vector.
1030
In instances where no vectorisation is detected on either src register,
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
1036
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
1047
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.
1051
1052 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
1053 for predicated compare operations of function "cmp":
1054
    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);
1059
1060 With associated predication, vector-length adjustments and so on,
1061 and temporarily ignoring bitwidth (which makes the comparisons more
1062 complex), this becomes:
1063
    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch
1101
1102 Notes:
1103
1104 * Predicated SIMD comparisons would break src1 and src2 further down
1105 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
1106 Reordering") setting Vector-Length times (number of SIMD elements) bits
1107 in Predicate Register rd, as opposed to just Vector-Length bits.
1108 * The execution of "parallelised" instructions **must** be implemented
1109 as "re-entrant" (to use a term from software). If an exception (trap)
1110 occurs during the middle of a vectorised
1111 Branch (now a SV predicated compare) operation, the partial results
1112 of any comparisons must be written out to the destination
1113 register before the trap is permitted to begin. If however there
1114 is no predicate, the **entire** set of comparisons must be **restarted**,
1115 with the offset loop indices set back to zero. This is because
1116 there is no place to store the temporary result during the handling
1117 of traps.
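The zeroing versus non-zeroing distinction in the loop above can be checked with a small executable model. Scalar/vector source selection and bitwidths are stripped out, and the function name is illustrative:

```python
# With zeroing, masked-out result bits are forced to zero; without
# zeroing, masked-out bits of the previous result are left untouched.
VL = 4

def predicated_cmp(src1, src2, ps, zeroing, prev_result=0):
    result = 0 if zeroing else prev_result
    for i in range(VL):
        if zeroing and not (ps & (1 << i)):
            result &= ~(1 << i)      # masked out: cleared
        elif ps & (1 << i):
            if src1[i] == src2[i]:   # stand-in for cmp()
                result |= 1 << i
            else:
                result &= ~(1 << i)
    return result
```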
1118
1119 TODO: predication now taken from src2. also branch goes ahead
1120 if all compares are successful.
1121
1122 Note also that where normally, predication requires that there must
1123 also be a CSR register entry for the register being used in order
1124 for the **predication** CSR register entry to also be active,
1125 for branches this is **not** the case. src2 does **not** have
1126 to have its CSR register entry marked as active in order for
1127 predication on src2 to be active.
1128
1129 Also note: SV Branch operations are **not** twin-predicated
1130 (see Twin Predication section). This would require three
1131 element offsets: one to track src1, one to track src2 and a third
1132 to track where to store the accumulation of the results. Given
1133 that the element offsets need to be exposed via CSRs so that
1134 the parallel hardware looping may be made re-entrant on traps
1135 and exceptions, the decision was made not to make SV Branches
1136 twin-predicated.
1137
1138 ### Floating-point Comparisons
1139
There are no floating-point branch operations, only compares.
Interestingly, no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.
1144
1145 In RV (scalar) Base, a branch on a floating-point compare is
1146 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
1147 This does extend to SV, as long as x1 (in the example sequence given)
1148 is vectorised. When that is the case, x1..x(1+VL-1) will also be
1149 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
1150 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
1151 so on. Consequently, unlike integer-branch, FP Compare needs no
1152 modification in its behaviour.
1153
In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine, because the
standard RVF compare can always be followed up with an integer BEQ or
a BNE (or a compressed comparison to zero or non-zero), in predication
terms the omission has more of an impact. To deal with this, SV's
predication has had "invert" added to it.
1160
1161 Also: note that FP Compare may be predicated, using the destination
1162 integer register (rd) to determine the predicate. FP Compare is **not**
1163 a twin-predication operation, as, again, just as with SV Branches,
1164 there are three registers involved: FP src1, FP src2 and INT rd.
1165
1166 ### Compressed Branch Instruction
1167
Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz rs1 is equivalent to beq rs1, x0, the optional target
in which to store the results of the comparisons is taken from the CSR
predication table entries for **x0**.
1174
The specific required use of x0 is, with a little thought, quite logical,
although initially counterintuitive. Clearly it is **not** recommended
to redirect x0 with a CSR register entry, however as a means to opaquely
obtain a predication target it is the only sensible option that does not
involve additional special CSRs (or, worse, additional special opcodes).
1180
1181 Note also that, just as with standard branches, the 2nd source
1182 (in this case x0 rather than src2) does **not** have to have its CSR
1183 register table marked as "active" in order for predication to work.
1184
1185 ## Vectorised Dual-operand instructions
1186
1187 There is a series of 2-operand instructions involving copying (and
1188 sometimes alteration):
1189
1190 * C.MV
1191 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1192 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1193 * LOAD(-FP) and STORE(-FP)
1194
1195 All of these operations follow the same two-operand pattern, so it is
1196 *both* the source *and* destination predication masks that are taken into
1197 account. This is different from
1198 the three-operand arithmetic instructions, where the predication mask
1199 is taken from the *destination* register, and applied uniformly to the
1200 elements of the source register(s), element-for-element.
1201
1202 The pseudo-code pattern for twin-predicated operations is as
1203 follows:
1204
    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1218
1219 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1220 and vector-vector, and predicated variants of all of those.
1221 Zeroing is not presently included (TODO). As such, when compared
1222 to RVV, the twin-predicated variants of C.MV and FMV cover
1223 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1224 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1225
1226 Note that:
1227
1228 * elwidth (SIMD) is not covered in the pseudo-code above
1229 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1230 not covered
1231 * zero predication is also not shown (TODO).
1232
1233 ### C.MV Instruction <a name="c_mv"></a>
1234
1235 There is no MV instruction in RV however there is a C.MV instruction.
1236 It is used for copying integer-to-integer registers (vectorised FMV
1237 is used for copying floating-point).
1238
1239 If either the source or the destination register are marked as vectors
1240 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1241 move operation. The actual instruction's format does not change:
1242
1243 [[!table data="""
1244 15 12 | 11 7 | 6 2 | 1 0 |
1245 funct4 | rd | rs | op |
1246 4 | 5 | 5 | 2 |
1247 C.MV | dest | src | C0 |
1248 """]]
1249
1250 A simplified version of the pseudocode for this operation is as follows:
1251
    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1265
1266 There are several different instructions from RVV that are covered by
1267 this one opcode:
1268
1269 [[!table data="""
1270 src | dest | predication | op |
1271 scalar | vector | none | VSPLAT |
1272 scalar | vector | destination | sparse VSPLAT |
1273 scalar | vector | 1-bit dest | VINSERT |
1274 vector | scalar | 1-bit? src | VEXTRACT |
1275 vector | vector | none | VCOPY |
1276 vector | vector | src | Vector Gather |
1277 vector | vector | dest | Vector Scatter |
1278 vector | vector | src & dest | Gather/Scatter |
1279 vector | vector | src == dest | sparse VCOPY |
1280 """]]
1281
1282 Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
1283 operations with inversion on the src and dest predication for one of the
1284 two C.MV operations.
1285
Note that in the instance where the Compressed Extension is not
implemented, MV may be used, but that is a pseudo-operation mapping to
addi rd, rs, 0. Note that the behaviour is **different** from C.MV
because with addi the predication mask to use is taken **only** from
rd and is applied against all elements: rd[i] = rs[i].
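A minimal executable model of the twin-predicated move (zeroing omitted; boolean flags stand in for the CSR table lookups, and predicates are assumed to have enough bits set for the skip loops to terminate):

```python
# Toy twin-predicated C.MV: independent src (i) and dest (j) element
# indices, each skipping masked-out positions in its own predicate.
VL = 4
ireg = list(range(32))

def op_mv(rd, rs, rd_isvec, rs_isvec, ps=~0, pd=~0):
    i = j = 0
    while i < VL and j < VL:
        if rs_isvec:
            while not (ps & (1 << i)): i += 1  # skip masked src elements
        if rd_isvec:
            while not (pd & (1 << j)): j += 1  # skip masked dest elements
        ireg[rd + j] = ireg[rs + i]
        if rs_isvec: i += 1
        if rd_isvec: j += 1
        else: break                            # scalar dest: one element

op_mv(10, 5, rd_isvec=True, rs_isvec=False)    # VSPLAT of ireg[5]
```

Swapping the flags (vector src, scalar dest) gives the VEXTRACT-like single-element copy, matching the table above.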
1291
1292 ### FMV, FNEG and FABS Instructions
1293
These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default, the instruction is implicitly
and automatically converted to a (vectorised) floating-point type-conversion
operation of the appropriate size, covering the source and destination
register bitwidths.
1300
1301 (Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1302
### FCVT Instructions
1304
1305 These are again identical in form to C.MV, except that they cover
1306 floating-point to integer and integer to floating-point. When element
1307 width in each vector is set to default, the instructions behave exactly
1308 as they are defined for standard RV (scalar) operations, except vectorised
1309 in exactly the same fashion as outlined in C.MV.
1310
1311 However when the source or destination element width is not set to default,
1312 the opcode's explicit element widths are *over-ridden* to new definitions,
1313 and the opcode's element width is taken as indicative of the SIMD width
1314 (if applicable i.e. if packed SIMD is requested) instead.
1315
1316 For example FCVT.S.L would normally be used to convert a 64-bit
1317 integer in register rs1 to a 64-bit floating-point number in rd.
1318 If however the source rs1 is set to be a vector, where elwidth is set to
1319 default/2 and "packed SIMD" is enabled, then the first 32 bits of
1320 rs1 are converted to a floating-point number to be stored in rd's
1321 first element and the higher 32-bits *also* converted to floating-point
1322 and stored in the second. The 32 bit size comes from the fact that
1323 FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
1324 divide that by two it means that rs1 element width is to be taken as 32.
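The worked example can be checked numerically. This is a sketch with made-up values: real hardware operates on register bits and applies FP rounding, both of which are ignored here.

```python
# FCVT.S.L with rs1 elwidth = default/2 plus packed SIMD: the 64-bit
# source is treated as two independent 32-bit integer elements, each
# converted to one floating-point result element.

rs1 = (7 << 32) | 3                  # element 0 = 3, element 1 = 7
elem0 = rs1 & 0xFFFFFFFF             # low 32 bits
elem1 = (rs1 >> 32) & 0xFFFFFFFF     # high 32 bits
rd = [float(elem0), float(elem1)]    # rd's first and second elements
```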
1325
1326 Similar rules apply to the destination register.
1327
1328 ## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>
1329
1330 An earlier draft of SV modified the behaviour of LOAD/STORE (modified
1331 the interpretation of the instruction fields). This
1332 actually undermined the fundamental principle of SV, namely that there
1333 be no modifications to the scalar behaviour (except where absolutely
1334 necessary), in order to simplify an implementor's task if considering
1335 converting a pre-existing scalar design to support parallelism.
1336
1337 So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
1338 do not change in SV, however just as with C.MV it is important to note
1339 that dual-predication is possible.
1340
1341 In vectorised architectures there are usually at least two different modes
1342 for LOAD/STORE:
1343
1344 * Read (or write for STORE) from sequential locations, where one
1345 register specifies the address, and the one address is incremented
1346 by a fixed amount. This is usually known as "Unit Stride" mode.
1347 * Read (or write) from multiple indirected addresses, where the
1348 vector elements each specify separate and distinct addresses.
1349
1350 To support these different addressing modes, the CSR Register "isvector"
1351 bit is used. So, for a LOAD, when the src register is set to
1352 scalar, the LOADs are sequentially incremented by the src register
1353 element width, and when the src register is set to "vector", the
1354 elements are treated as indirection addresses. Simplified
1355 pseudo-code would look like this:
1356
    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
1376
Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is an LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).
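
As a cross-check of the decision-making above, the srcbase selection can be
modelled in isolation. The following Python sketch (the `srcbase_for` helper
and the register contents are illustrative assumptions, not part of the
specification) shows how a scalar rs gives unit-stride addresses while a
vectorised rs gives per-element indirection:

```python
XLEN = 64

def srcbase_for(ireg, rs_isvec, rsv, i, imm_offs):
    """Effective LOAD address for element i (zeroing and elwidth
    omitted, as in the pseudo-code above)."""
    if rs_isvec:
        # indirect mode: each element of the source vector holds an address
        base = ireg[rsv + i]
    else:
        # unit-stride mode: one scalar base, stepped by the element size
        base = ireg[rsv] + i * (XLEN // 8)
    return base + imm_offs

# hypothetical register contents for x5..x7
ireg = {5: 0x1000, 6: 0x2000, 7: 0x3000}

# scalar rs (unit stride): addresses step by 8 bytes on RV64
assert [srcbase_for(ireg, False, 5, i, 0) for i in range(3)] == \
       [0x1000, 0x1008, 0x1010]
# vectorised rs (indirect): each element supplies its own address
assert [srcbase_for(ireg, True, 5, i, 0) for i in range(3)] == \
       [0x1000, 0x2000, 0x3000]
```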

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However, a naive interpretation of the data structure above masks the
fact that, when for example the bitwidth is 8 and VL is set greater
than 8, accesses to one specific register "spill over" into the following
registers of the register file in a sequential fashion. A much more
accurate way to reflect this would be:

    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0];            // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries, in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an access beyond the "real" register
bytes is ever attempted.
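
The overspill behaviour can be modelled directly in software. This Python
sketch (a hypothetical model, assuming RV64 and little-endian element
packing) treats the register file as one flat byte array, exactly as the
zero-length-array union implies:

```python
XLEN_BYTES = 8   # RV64
NREGS = 128

# the integer register file as one flat byte array
regfile = bytearray(NREGS * XLEN_BYTES)

def write_element(reg, elwidth_bits, index, value):
    """Write element `index` of `reg` at the given element width;
    elements beyond XLEN/elwidth "overspill" into following registers."""
    nbytes = elwidth_bits // 8
    offs = reg * XLEN_BYTES + index * nbytes
    if offs + nbytes > len(regfile):
        raise IndexError("access beyond the real register bytes")
    regfile[offs:offs + nbytes] = value.to_bytes(nbytes, "little")

# ten 8-bit elements starting at x8: elements 8 and 9 land in x9
for i in range(10):
    write_element(8, 8, i, 0x10 + i)

assert regfile[9 * XLEN_BYTES] == 0x18      # element 8 spilled into x9
assert regfile[9 * XLEN_BYTES + 1] == 0x19  # element 9 likewise
```

Note the bounds check: it corresponds to the requirement above that the
implementor raise an exception at the end of the register file.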

We may now modify the pseudo-code for an operation where all element
bitwidths have been set to the same size; this pseudo-code is otherwise
identical to its "non"-polymorphic version (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...

So here we can see clearly: for 8-bit entries, rd, rs1 and rs2 (and the
registers following sequentially on from each of them) are "type-cast"
to 8-bit; for 16-bit entries likewise, and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
  sign-extension / zero-extension or whatever is specified in the standard
  RV specification, **change** that to sign-extending from the respective
  individual source operand's bitwidth from the CSR table out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sw etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth         # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_csr[rd].isvec) break
        if (int_csr[rd ].isvec)  { ird  += 1; }
        if (int_csr[rs1].isvec)  { irs1 += 1; }
        if (int_csr[rs2].isvec)  { irs2 += 1; }

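A minimal runnable model of these width rules may help. This Python sketch
(the `polymorphic_add` helper is hypothetical and works on element values
rather than the register file) performs the add at the maximum source width
and then truncates or extends to the destination elwidth:

```python
def polymorphic_add(src1, w1, src2, w2, destwid, signed=False):
    """Add two elements of widths w1/w2 (bits) at max(w1, w2), then
    truncate or extend the result to destwid bits, per the rules above."""
    opwid = max(w1, w2)
    def mask(v, w):
        return v & ((1 << w) - 1)
    def extend(v, frm, to):
        v = mask(v, frm)
        if signed and (v >> (frm - 1)):       # sign bit set: sign-extend
            v |= mask(~0, to) & ~mask(~0, frm)
        return mask(v, to)
    a = extend(src1, w1, opwid)               # sources extended to opwid
    b = extend(src2, w2, opwid)
    result = mask(a + b, opwid)               # operation at max source width
    return extend(result, opwid, destwid) if destwid > opwid \
        else mask(result, destwid)

# 8-bit 0xFF + 16-bit 0x0001, zero-extended: add happens at 16 bits
assert polymorphic_add(0xFF, 8, 0x0001, 16, 16) == 0x0100
# same sources, destination elwidth 8: result truncated after the add
assert polymorphic_add(0xFF, 8, 0x0001, 16, 8) == 0x00
```

The second assertion illustrates the rule that truncation to the destination
width occurs after the operation, not before.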
Whilst the specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:

* the source operands are extended out to the maximum bitwidth of all
  source operands
* the operation takes place at that maximum source bitwidth (the
  destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
  stored in the destination. i.e. truncation (if required) to the
  destination width occurs **after** the operation, **not** before.
* when the destination is not marked as "vectorised", the **full**
  (standard, scalar) register file entry is taken up, i.e. the
  element is either sign-extended or zero-extended to cover the
  full register bitwidth (XLEN) if it is not already XLEN bits long.

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in less bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NAN (or appropriate) is stored if the result
is beyond the range of the destination, and, again, exactly as in
the standard RV specification for scalar operations, the
floating-point flag is raised (FCSR). And, again, just as
with scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key points are that:

* The operation bitwidth is determined by the maximum bitwidth
  of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
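
A hedged sketch of this rule in Python (the helper name is illustrative;
sign/zero-extension out to the destination elwidth is omitted for brevity):

```python
def polymorphic_sll(rs1, w1, rs2, w2):
    """Shift-left at max(w1, w2) bits: RS2 is truncated to that width's
    shift range, and the result wraps at the operation width (sketch;
    extension to the destination elwidth is omitted)."""
    opwid = max(w1, w2)
    shift = rs2 & (opwid - 1)              # RS2 truncated to 0..opwid-1
    return (rs1 << shift) & ((1 << opwid) - 1)

# RS1 elwidth=16, RS2 elwidth=8: operation width is max(8, 16) = 16
assert polymorphic_sll(0x0001, 16, 20, 8) == 0x0010   # 20 & 15 == 4
assert polymorphic_sll(0x8000, 16, 1, 8) == 0x0000    # MSB lost at 16 bits
```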

## Polymorphic MULH/MULHU/MULHSU

MULH is designed to return the top half (the MSBs) of a multiply
result that does not fit within the range of the source operands, such
that smaller-width operations may produce a full double-width multiply
in two instructions. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
  be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
  be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
  be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
  be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
  be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem": it is assumed that the
programmer actually desires the result to be truncated, as, had they
wanted all of the bits, they would have set the destination
elwidth to accommodate them.
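
For the unsigned case, the shift-down rule can be sketched in a few lines
of Python (the helper is illustrative; MULH/MULHSU additionally need
signed treatment of the operands):

```python
def polymorphic_mulhu(rs1, w1, rs2, w2):
    """Unsigned MULH sketch: multiply at full precision, then keep the
    bits above max(w1, w2), per the shift-down rule above."""
    opwid = max(w1, w2)
    return (rs1 * rs2) >> opwid

# 8-bit x 8-bit: 16-bit product, top half is product >> 8
assert polymorphic_mulhu(0xFF, 8, 0xFF, 8) == 0xFE     # 0xFE01 >> 8
# 16-bit x 8-bit: 24-bit product shifted down by 16 (top 8 bits zero)
assert polymorphic_mulhu(0xFFFF, 16, 0xFF, 8) == 0xFE  # 0xFEFF01 >> 16
```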

## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, with i being the loop index from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth);               // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth           // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs + i / elsperblock]; // integer divide
    offs = i % elsperblock;               // modulo
    return &mem[srcbase + imm + offs];    // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.
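
Worked numbers may help here. The following Python sketch (the helper and
register contents are illustrative assumptions) reproduces the block/offset
arithmetic for an LW with a source elwidth of 8:

```python
def load_element_addr(ireg, rs, imm, i, opwidth, elwidth_bits):
    """Which address register and element offset element i of a
    vectorised LOAD comes from (sketch of the pseudo-code above;
    indirect/vectorised rs assumed)."""
    elsperblock = max(1, opwidth // elwidth_bits)  # at least 1 el./block
    srcbase = ireg[rs + i // elsperblock]
    offs = i % elsperblock
    return srcbase, offs

ireg = {5: 0x1000, 6: 0x2000}   # hypothetical contents of x5, x6

# LW (opwidth=32) with a source elwidth of 8: four elements per address
assert load_element_addr(ireg, 5, 0, 0, 32, 8) == (0x1000, 0)
assert load_element_addr(ireg, 5, 0, 3, 32, 8) == (0x1000, 3)
assert load_element_addr(ireg, 5, 0, 4, 32, 8) == (0x2000, 0)  # next reg
```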

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock has a minimum value of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs + i / elsperblock];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth      # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, bw(destwid), j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Notes:

* when comparing against for example the twin-predicated c.mv
  pseudo-code, the pattern of independent incrementing of rd and rs
  is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
  taken into account (TODO).
* due to the use of a twin-predication algorithm, LOAD/STORE also
  take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
  VSCATTER characteristics.
* due to the use of the same set\_polymorphed\_reg pseudocode,
  a destination that is not vectorised (marked as scalar) will
  result in the element being fully sign-extended or zero-extended
  out to the full register file bitwidth (XLEN). When the source
  is also marked as scalar, this is how the compatibility with
  standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).


#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table. Due to the element width being 16 and the
operation being LD (64-bit), the 64 bits loaded from memory are
subdivided into groups of **four** elements. And, with VL being 7
(deliberately, to illustrate that this is reasonable and possible),
the first four are sourced from the offset addresses pointed to by x5,
and the next three from the offset addresses pointed to by the next
contiguous register, x6:

[[!table data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 was also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the
**registers** x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right byte
order, the registers are shown in right-to-left (MSB) order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and it emphasises that registers are not
really actually "memory" as such.
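
The destination placement in this example can be cross-checked
mechanically. A small Python sketch (the `dest_placement` helper is
illustrative, not part of the specification):

```python
def dest_placement(rd, i, dest_elwidth_bits, xlen=64):
    """Which destination register, and bit offset within it, element i
    lands in, given the destination element width."""
    per_reg = xlen // dest_elwidth_bits     # elements per register
    return rd + i // per_reg, (i % per_reg) * dest_elwidth_bits

# the example above: destination x8, dest elwidth 32, VL=7
placements = [dest_placement(8, i, 32) for i in range(7)]
assert placements[0] == (8, 0)    # elem 0: x8, bits 0-31
assert placements[1] == (8, 32)   # elem 1: x8, bits 32-63
assert placements[6] == (11, 0)   # elem 6: LSBs of x11, rest untouched
```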

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more and allow 64, 128 and some other options besides. The answer here
is that it gets too complex, no RV128 implementation yet exists, and
RV64's default is 64 bit, so the four major element widths are covered
anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when an elwidth override is set
on a destination (RD), sign-extension and zero-extension, whilst changed
to match the override bitwidth, will overwrite the **full** register
entry (64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth set to 16 on the destination
  register, operations are truncated to 16 bit and then sign or zero
  extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or any other value
  where groups of elwidth-sized elements do not fill an entire XLEN
  register), the "top" bits of the destination register do *NOT* get
  modified, zero'd or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64-bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth-based operations would
activate the required subset of those byte-level write lines.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified, to be rs1[23..16] + rs2[23..16]
  concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.

Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
  that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111,
  i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
  was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
  (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
  and stores them in rd.

This requires a read on rd; however this is required anyway in order
to support non-zeroing mode.
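
The hidden-predicate step above can be sketched as follows (illustrative
helper; a single register's worth of elements is assumed):

```python
def hidden_simd_predicate(vl, elwidth_bits, xlen=64):
    """Predicate a SIMD ALU might synthesise to mask off elements
    beyond VL within one full-width register batch."""
    lanes = xlen // elwidth_bits     # nearest round number of elements
    assert vl <= lanes               # single-register batch assumed here
    return (1 << vl) - 1             # bottom VL bits set, top bits clear

# the example above: elwidth=8, VL=3 on RV64 gives 8 lanes, 0b00000111
assert hidden_simd_predicate(3, 8) == 0b00000111
assert hidden_simd_predicate(8, 8) == 0b11111111
```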

## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32-bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).
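
The scalar NaN-boxing rule can be expressed compactly. A Python sketch
(hypothetical helper; FLEN=128 is assumed, matching the Quad-capable
register file discussed in this section):

```python
def nan_box(value, opwidth, flen=128):
    """NaN-box a scalar FP value: all bits above opwidth set to 1s,
    per the scalar rule described above."""
    return value | (((1 << flen) - 1) & ~((1 << opwidth) - 1))

# a 64-bit double held in a 128-bit register: top 64 bits all 1s
assert nan_box(0, 64) == (((1 << 64) - 1) << 64)
# a 32-bit single in the same register: top 96 bits all 1s
assert nan_box(0, 32) == (((1 << 96) - 1) << 32)
```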

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
  top 64 MSBs ignored.

Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
  floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the adds
  in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q, when elwidth is set
to a non-default value, are effectively all the same: they all still
perform multiple ADD operations, just at different widths. A future
extension to SimpleV may actually allow ADD.S to access the upper bits
of the register, effectively breaking down a 128-bit register into a
bank of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in place to know
the difference, such that when VL=8 and an ADD.D instruction is issued,
it completes in 2 cycles (or more) rather than one, where an ADD.Q
issued instead on such simpler microarchitectures would complete in one.

2061 ## Specific instruction walk-throughs
2062
2063 This section covers walk-throughs of the above-outlined procedure
2064 for converting standard RISC-V scalar arithmetic operations to
2065 polymorphic widths, to ensure that it is correct.
2066
2067 ### add
2068
2069 Standard Scalar RV32/RV64 (xlen):
2070
2071 * RS1 @ xlen bits
2072 * RS2 @ xlen bits
2073 * add @ xlen bits
2074 * RD @ xlen bits
2075
2076 Polymorphic variant:
2077
2078 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
2079 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
2080 * add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate
2082
2083 Note here that polymorphic add zero-extends its source operands,
2084 where addw sign-extends.
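The four steps above can be sketched, non-normatively, in Python, with the
per-operand widths passed in explicitly (function and parameter names are
illustrative):

```python
def polymorphic_add(rs1_val, rs1_bits, rs2_val, rs2_bits, rd_bits):
    """Sketch of the polymorphic add rules above (illustrative only)."""
    opwid = max(rs1_bits, rs2_bits)          # operation width
    src1 = rs1_val & ((1 << rs1_bits) - 1)   # zero-extension is implicit
    src2 = rs2_val & ((1 << rs2_bits) - 1)
    result = (src1 + src2) & ((1 << opwid) - 1)  # add @ max(rs1, rs2) bits
    # zero-extend to rd if rd > max(rs1, rs2), otherwise truncate
    return result & ((1 << rd_bits) - 1)
```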
2085
2086 ### addw
2087
2088 The RV Specification specifically states that "W" variants of arithmetic
2089 operations always produce 32-bit signed values. In a polymorphic
2090 environment it is reasonable to assume that the signed aspect is
2091 preserved, where it is the length of the operands and the result
2092 that may be changed.
2093
2094 Standard Scalar RV64 (xlen):
2095
2096 * RS1 @ xlen bits
2097 * RS2 @ xlen bits
2098 * add @ xlen bits
2099 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
2100
2101 Polymorphic variant:
2102
2103 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
2104 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
2105 * add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate
2107
2108 Note here that polymorphic addw sign-extends its source operands,
2109 where add zero-extends.
2110
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.

Effectively, however, both rs1 and rs2 are being sign-extended (or
truncated), where for add they are both zero-extended. This holds true
for all arithmetic operations ending with "W".
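The sign-extending variant can be sketched on the same pattern as the add
sketch (the `sign_extend` helper and all names are illustrative, not
normative):

```python
def sign_extend(val, frombits, tobits):
    """Sign-extend val from frombits to tobits (illustrative helper)."""
    mask = (1 << frombits) - 1
    val &= mask
    if val >> (frombits - 1):        # sign bit set: fill the upper bits
        val |= ((1 << tobits) - 1) & ~mask
    return val

def polymorphic_addw(rs1_val, rs1_bits, rs2_val, rs2_bits, rd_bits):
    """Sketch of the polymorphic addw rules above (illustrative only)."""
    opwid = max(rs1_bits, rs2_bits)
    src1 = sign_extend(rs1_val, rs1_bits, opwid)
    src2 = sign_extend(rs2_val, rs2_bits, opwid)
    result = (src1 + src2) & ((1 << opwid) - 1)
    # sign-extend to rd if rd > max(rs1, rs2), otherwise truncate
    if rd_bits > opwid:
        return sign_extend(result, opwid, rd_bits)
    return result & ((1 << rd_bits) - 1)
```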
2119
2120 ### addiw
2121
2122 Standard Scalar RV64I:
2123
2124 * RS1 @ xlen bits, truncated to 32-bit
2125 * immed @ 12 bits, sign-extended to 32-bit
2126 * add @ 32 bits
* RD @ rd bits: sign-extend to rd if rd > 32, otherwise truncate.
2128
2129 Polymorphic variant:
2130
2131 * RS1 @ rs1 bits
2132 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
2133 * add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate
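A corresponding sketch for addiw, following the same pattern. The
sign-extension of an rs1 narrower than 12 bits is an assumption here
(following the earlier note that all "W" operations sign-extend their
operands); all names are illustrative:

```python
def sign_extend(val, frombits, tobits):
    """Sign-extend val from frombits to tobits (illustrative helper)."""
    mask = (1 << frombits) - 1
    val &= mask
    if val >> (frombits - 1):
        val |= ((1 << tobits) - 1) & ~mask
    return val

def polymorphic_addiw(rs1_val, rs1_bits, imm12, rd_bits):
    """Sketch of the polymorphic addiw rules above (illustrative only)."""
    opwid = max(rs1_bits, 12)
    # assumption: rs1 is sign-extended when narrower than 12 bits,
    # matching the behaviour of the other "W" operations
    src1 = sign_extend(rs1_val, rs1_bits, opwid)
    imm = sign_extend(imm12, 12, opwid)  # immed sign-extended to max(rs1, 12)
    result = (src1 + imm) & ((1 << opwid) - 1)
    # sign-extend to rd if rd > max(rs1, 12), otherwise truncate
    if rd_bits > opwid:
        return sign_extend(result, opwid, rd_bits)
    return result & ((1 << rd_bits) - 1)
```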
2135
2136 # Predication Element Zeroing
2137
The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.
2148
SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.
2155
2156 ## Single-predication (based on destination register)
2157
2158 Zeroing on predication for arithmetic operations is taken from
2159 the destination register's predicate. i.e. the predication *and*
2160 zeroing settings to be applied to the whole operation come from the
2161 CSR Predication table entry for the destination register.
2162 Thus when zeroing is set on predication of a destination element,
2163 if the predication bit is clear, then the destination element is *set*
2164 to zero (twin-predication is slightly different, and will be covered
2165 next).
2166
Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
2192
2193 The optimisation to skip elements entirely is only possible for certain
2194 micro-architectures when zeroing is not set. However for lane-based
2195 micro-architectures this optimisation may not be practical, as it
2196 implies that elements end up in different "lanes". Under these
2197 circumstances it is perfectly fine to simply have the lanes
2198 "inactive" for predicated elements, even though it results in
2199 less than 100% ALU utilisation.
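For clarity, a much-simplified executable model of the loop above, with all
three registers assumed to be vectorised and element-width handling
omitted (the function name and calling convention are illustrative):

```python
def predicated_add(ireg, rd, rs1, rs2, VL, predval, zeroing):
    """Simplified sketch of the single-predicated loop above: all three
    registers are assumed vectors; element widths are omitted."""
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + i] = ireg[rs1 + i] + ireg[rs2 + i]
        elif zeroing:
            ireg[rd + i] = 0   # zeroing: masked-out elements set to zero
        # without zeroing, masked-out destination elements are untouched
```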
2200
2201 ## Twin-predication (based on source and destination register)
2202
Twin-predication is not that much different, except that
the source is zero-predicated independently from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated, *or both*, or neither.
2207
When, with twin-predication, zeroing is set on the source and not
the destination, a clear predicate bit indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).
2214
2215 When zeroing is set on the destination and not the source, then just
2216 as with single-predicated operations, a zero is stored into the destination
2217 element (or target memory address for a STORE).
2218
Zeroing on both source and destination effectively results in the
bitwise AND of the source and destination predicates determining which
elements pass through: where either the source predicate OR the
destination predicate is set to 0, a zero element will ultimately end
up in the destination register.
2223
2224 However: this may not necessarily be the case for all operations;
2225 implementors, particularly of custom instructions, clearly need to
2226 think through the implications in each and every case.
2227
Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;
2253
2254 Note that in the instance where the destination is a scalar, the hardware
2255 loop is ended the moment a value *or a zero* is placed into the destination
2256 register/element. Also note that, for clarity, variable element widths
2257 have been left out of the above.
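An executable sketch of the same twin zero-predicated MV, simplified by
assuming both registers are vectorised (names and the calling convention
are illustrative only):

```python
def twin_pred_mv(ireg, rd, rs, VL, ps, pd, zerosrc, zerodst):
    """Simplified sketch of the twin zero-predicated MV pseudo-code:
    both registers are assumed vectorised; element widths omitted."""
    i = j = 0
    while i < VL and j < VL:
        if not zerosrc:                      # skip masked-out source elements
            while i < VL and not (ps & (1 << i)):
                i += 1
        if not zerodst:                      # skip masked-out dest elements
            while j < VL and not (pd & (1 << j)):
                j += 1
        if i >= VL or j >= VL:
            break
        if pd & (1 << j):
            # source zeroing: a clear source bit passes a zero through
            ireg[rd + j] = ireg[rs + i] if (ps & (1 << i)) else 0
        elif zerodst:
            ireg[rd + j] = 0                 # dest zeroing: store a zero
        i += 1
        j += 1
```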
2258
2259 # Exceptions
2260
2261 TODO: expand. Exceptions may occur at any time, in any given underlying
2262 scalar operation. This implies that context-switching (traps) may
2263 occur, and operation must be returned to where it left off. That in
2264 turn implies that the full state - including the current parallel
2265 element being processed - has to be saved and restored. This is
2266 what the **STATE** CSR is for.
2267
2268 The implications are that all underlying individual scalar operations
2269 "issued" by the parallelisation have to appear to be executed sequentially.
2270 The further implications are that if two or more individual element
2271 operations are underway, and one with an earlier index causes an exception,
2272 it may be necessary for the microarchitecture to **discard** or terminate
2273 operations with higher indices.
2274
2275 This being somewhat dissatisfactory, an "opaque predication" variant
2276 of the STATE CSR is being considered.
2277
2278 # Hints
2279
2280 A "HINT" is an operation that has no effect on architectural state,
2281 where its use may, by agreed convention, give advance notification
2282 to the microarchitecture: branch prediction notification would be
2283 a good example. Usually HINTs are where rd=x0.
2284
2285 With Simple-V being capable of issuing *parallel* instructions where
2286 rd=x0, the space for possible HINTs is expanded considerably. VL
2287 could be used to indicate different hints. In addition, if predication
2288 is set, the predication register itself could hypothetically be passed
2289 in as a *parameter* to the HINT operation.
2290
No specific hints are yet defined in Simple-V.
2292
2293 # Vector Block Format <a name="vliw-format"></a>
2294
2295 One issue with a former revision of SV was the setup and teardown
2296 time of the CSRs. The cost of the use of a full CSRRW (requiring LI)
2297 to set up registers and predicates was quite high. A VLIW-like format
2298 therefore makes sense, and is conceptually reminiscent of the ARM Thumb2
2299 "IT" instruction.
2300
2301 The format is:
2302
2303 * the standard RISC-V 80 to 192 bit encoding sequence, with bits
2304 defining the options to follow within the block
2305 * An optional VL Block (16-bit)
2306 * Optional predicate entries (8/16-bit blocks: see Predicate Table, above)
2307 * Optional register entries (8/16-bit blocks: see Register Table, above)
2308 * finally some 16/32/48 bit standard RV or SVPrefix opcodes follow.
2309
2310 Thus, the variable-length format from Section 1.5 of the RISC-V ISA is used
2311 as follows:
2312
| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx    | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |
2317
2318 A suitable prefix, which fits the Expanded Instruction-Length encoding
2319 for "(80 + 16 times instruction-length)", as defined in Section 1.5
2320 of the RISC-V ISA, is as follows:
2321
2322 | 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
2323 | - | ----- | ----- | ----- | --- | ------- |
2324 | vlset | 16xil | pplen | rplen | mode | 1111111 |
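Field extraction for this prefix can be sketched as follows (the function
and field names are illustrative, not normative; bit positions are taken
from the table above):

```python
def decode_sv_prefix(insn16):
    """Extract the fields of the 16-bit SV prefix (illustrative names)."""
    assert insn16 & 0x7F == 0b1111111        # bits 6:0: the prefix marker
    return dict(
        vlset=(insn16 >> 15) & 1,            # bit 15
        il=(insn16 >> 12) & 0b111,           # bits 14:12 ("16xil")
        pplen=(insn16 >> 10) & 0b11,         # bits 11:10
        rplen=(insn16 >> 8) & 0b11,          # bits 9:8
        mode=(insn16 >> 7) & 1,              # bit 7
    )
```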
2325
2326 The VL/MAXVL/SubVL Block format:
2327
| 31-30 | 29:28 | 27:22  | 21:17 - 16 |
| ----- | ----- | ------ | ---------- |
| 0     | SubVL | VLdest | VLEN vlt   |
| 1     | SubVL | VLdest | VLEN       |
2332
2333 Note: this format is very similar to that used in [[sv_prefix_proposal]]
2334
If vlt is 0, VLEN is a 5-bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1, and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VLIW
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).
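The VL-setting rule above can be sketched as (all names illustrative):

```python
def set_vl_from_block(vlt, vlen_field, regfile, MAXVL):
    """Sketch of the VL-setting rule above (illustrative names).
    vlt=0: vlen_field is a 5-bit immediate, offset by one.
    vlt=1: vlen_field names the scalar register supplying VL."""
    VL = (vlen_field + 1) if vlt == 0 else regfile[vlen_field]
    return min(VL, MAXVL)   # VL is truncated to MIN(VL, MAXVL)
```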
2342
2343 This option will typically be used to start vectorised loops, where
2344 the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
2345 sequence (in compact form).
2346
When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value is stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 sets MAXVL=VL=1, a value of 0b000001 sets
MAXVL=VL=2, and so on.
2352
2353 This option will typically not be used so much for loops as it will be
2354 for one-off instructions such as saving the entire register file to the
2355 stack with a single one-off Vectorised and predicated LD/ST, or as a way
2356 to save or restore registers in a function call with a single instruction.
2357
2358 CSRs needed:
2359
2360 * mepcvliw
2361 * sepcvliw
2362 * uepcvliw
2363 * hepcvliw
2364
2365 Notes:
2366
* Bit 7 specifies if the prefix block format is the full 16 bit format
(1) or the compact, less expressive format (0). In the 8 bit format,
pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
(0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
of entries is needed, the last may be set to 0x00, indicating "unused".
2375 * Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
2376 immediately follows the VLIW instruction Prefix
2377 * Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 15 is 1,
2378 otherwise 0 to 6) follow the (optional) VL Block.
2379 * Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
2380 otherwise 0 to 6) follow the (optional) RegCam entries
2381 * Bits 14 to 12 (IL) define the actual length of the instruction: total
2382 number of bits is 80 + 16 times IL. Standard RV32, RVC and also
2383 SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
2384 (optional) VL / RegCam / PredCam entries
2385 * In any RVC or 32 Bit opcode, any registers within the VLIW-prefixed
2386 format *MUST* have the RegCam and PredCam entries applied to the
2387 operation (and the Vectorisation loop activated)
2388 * P48 and P64 opcodes do **not** take their Register or predication
2389 context from the VLIW Block tables: they do however have VL or SUBVL
2390 applied (unless VLtyp or svlen are set).
2391 * At the end of the VLIW Group, the RegCam and PredCam entries
2392 *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
2393 the values set by the last instruction (whether a CSRRW or the VL
2394 Block header).
2395 * Although an inefficient use of resources, it is fine to set the MAXVL,
2396 VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
2397
All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four, 32-bit
opcodes: the CSR write itself, plus an LI / LUI pair to set up the 32-bit
value in the RS register of the CSR. To get 64-bit data into the register
in order to put it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that is potentially six to eight 32-bit instructions, just to
establish the Vector State!
2408
Not only that: even CSRRW on VL and MAXVL requires 64 bits (even more
bits if VL needs to be set to greater than 32). Bear in mind that in SV,
both MAXVL and VL need to be set.
2412
2413 By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
2414 only 16 bits, and as long as not too many predicates and register vector
2415 qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.
2418
2419 In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries
2420 into a VLIW format makes a lot of sense.
2421
2422 Bear in mind the warning in an earlier section that use of VLtyp or svlen
2423 in a P48 or P64 opcode within a VLIW Group will result in corruption
2424 (use) of the STATE CSR, as the STATE CSR is shared with SVPrefix. To
2425 avoid this situation, the STATE CSR may be copied into a temp register
2426 and restored afterwards.
2427
2428 Open Questions:
2429
2430 * Is it necessary to stick to the RISC-V 1.5 format? Why not go with
2431 using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
2432 limit to 256 bits (16 times 0-11).
2433 * Could a "hint" be used to set which operations are parallel and which
2434 are sequential?
2435 * Could a new sub-instruction opcode format be used, one that does not
2436 conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
2437 no need for byte or bit-alignment
2438 * Could a hardware compression algorithm be deployed? Quite likely,
2439 because of the sub-execution context (sub-VLIW PC)
2440
## Limitations on instructions
2442
2443 To greatly simplify implementations, it is required to treat the VLIW
2444 group as a separate sub-program with its own separate PC. The sub-pc
2445 advances separately whilst the main PC remains pointing at the beginning
2446 of the VLIW instruction (not to be confused with how VL works, which
2447 is exactly the same principle, except it is VStart in the STATE CSR
2448 that increments).
2449
This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub-extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.
2454
The srcoffs and destoffs indices in the STATE CSR may be similarly
regarded as another sub-execution context, giving in effect two sets of
nested sub-levels of the RISC-V Program Counter (actually, three including
SUBVL and ssvoffs).
2459
In addition, as the xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within (relative to) the block,
i.e. addressing is now restricted to the (very short) length of the
block, measured from its start.
2464
Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.
2467
2468 A normal jump, normal branch and a normal function call may only be taken
2469 by letting the VLIW group end, returning to "normal" standard RV mode,
2470 and then using standard RVC, 32 bit or P48/64-\*-type opcodes.
2471
2472 ## Links
2473
2474 * <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>
2475
2476 # Subsets of RV functionality
2477
2478 This section describes the differences when SV is implemented on top of
2479 different subsets of RV.
2480
2481 ## Common options
2482
2483 It is permitted to only implement SVprefix and not the VLIW instruction
2484 format option, and vice-versa. UNIX Platforms **MUST** raise illegal
2485 instruction on seeing an unsupported VLIW or SVprefix opcode, so that
2486 traps may emulate the format.
2487
It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX Platforms
*MUST* raise illegal instruction on implementations that do not support
VL or SUBVL.
2492
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.
2497
2498 ## RV32 / RV32F
2499
2500 When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
2501 maximum limit for predication is also restricted to 32 bits. Whilst not
2502 actually specifically an "option" it is worth noting.
2503
2504 ## RV32G
2505
Normally in standard RV32 it does not make much sense to have RV32G.
The critical instructions missing in standard RV32 are those for moving
data between the double-width floating-point registers and the integer
ones, as well as the FCVT routines.
2510
2511 In an earlier draft of SV, it was possible to specify an elwidth
2512 of double the standard register size: this had to be dropped,
2513 and may be reintroduced in future revisions.
2514
2515 ## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
2516
2517 When floating-point is not implemented, the size of the User Register and
2518 Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
2519 per table).
2520
2521 ## RV32E
2522
2523 In embedded scenarios the User Register and Predication CSRs may be
2524 dropped entirely, or optionally limited to 1 CSR, such that the combined
2525 number of entries from the M-Mode CSR Register table plus U-Mode
2526 CSR Register table is either 4 16-bit entries or (if the U-Mode is
2527 zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
2528 the Predication CSR tables.
2529
2530 RV32E is the most likely candidate for simply detecting that registers
2531 are marked as "vectorised", and generating an appropriate exception
2532 for the VL loop to be implemented in software.
2533
2534 ## RV128
2535
2536 RV128 has not been especially considered, here, however it has some
2537 extremely large possibilities: double the element width implies
2538 256-bit operands, spanning 2 128-bit registers each, and predication
2539 of total length 128 bit given that XLEN is now 128.
2540
2541 # Under consideration <a name="issues"></a>
2542
For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), the recommendation is:
2545
2546 * For the unused elements in an integer register, the used element
2547 closest to the MSB is sign-extended on write and the unused elements
2548 are ignored on read.
2549 * The unused elements in a floating-point register are treated as-if
2550 they are set to all ones on write and are ignored on read, matching the
2551 existing standard for storing smaller FP values in larger registers.
2552
2553 ---
2554
2555 info register,
2556
> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).
2564
2565 > That CSR would have to have a flag to make a read trap so
2566 > a hypervisor can simulate different values.
2567
2568 ----
2569
2570 > And what about instructions like JALR? 
2571
2572 answer: they're not vectorised, so not a problem
2573
2574 ----
2575
2576 * if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
2577 XLEN if elwidth==default
2578 * if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
2579 *32* if elwidth == default
2580
2581 ---
2582
2583 TODO: document different lengths for INT / FP regfiles, and provide
2584 as part of info register. 00=32, 01=64, 10=128, 11=reserved.
2585
2586 ---
2587
2588 TODO, update to remove RegCam and PredCam CSRs, just use SVprefix and
2589 VLIW format
2590
2591 ---
2592
2593 Could the 8 bit Register VLIW format use regnum<<1 instead, only accessing regs 0 to 64?
2594
2595 --
2596
2597 TODO evaluate strncpy and strlen
2598 <https://groups.google.com/forum/m/#!msg/comp.arch/bGBeaNjAKvc/_vbqyxTUAQAJ>
2599
RVV version (strncpy):

    strncpy:
        mv a3, a0              # Copy dst
    loop:
        setvli x0, a2, vint8   # Vectors of bytes.
        vlbff.v v1, (a1)       # Get src bytes
        vseq.vi v0, v1, 0      # Flag zero bytes
        vmfirst a4, v0         # Zero found?
        vmsif.v v0, v0         # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t   # Write out bytes
        bgez a4, exit          # Done
        csrr t1, vl            # Get number of bytes fetched
        add a1, a1, t1         # Bump src pointer
        sub a2, a2, t1         # Decrement count.
        add a3, a3, t1         # Bump dst pointer
        bnez a2, loop          # Anymore?
    exit:
        ret
2620
2621
RVV version (strlen):

        mv a3, a0              # Save start
    loop:
        setvli a1, x0, vint8   # byte vec, x0 (Zero reg) => use max hardware len
        vldbff.v v1, (a3)      # Get bytes
        csrr a1, vl            # Get bytes actually read e.g. if fault
        vseq.vi v0, v1, 0      # Set v0[i] where v1[i] = 0
        add a3, a3, a1         # Bump pointer
        vmfirst a2, v0         # Find first set bit in mask, returns -1 if none
        bltz a2, loop          # Not found?
        add a0, a0, a1         # Sum start + bump
        add a3, a3, a2         # Add index of zero byte
        sub a0, a3, a0         # Subtract start address+bump
        ret