[libreriscv.git] / simple_v_extension / specification.mdwn
# Simple-V (Parallelism Extension Proposal) Specification

* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 21 jun 2019
* Ancillary resource: [[opcodes]]
* Ancillary resource: [[sv_prefix_proposal]]
* Ancillary resource: [[abridged_spec]]
* Ancillary resource: [[vblock_format]]

With thanks to:

* Allen Baum
* Bruce Hoult
* comp.arch
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* Terje Mathisen
* The RISC-V Founders, without whom this all would not be possible.

[[!toc ]]

# Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has
several unplanned side-effects, including code-size reduction, expansion
of HINT space and more. The reason for creating it is to provide a
manageable way to turn a pre-existing design into a parallel one, in a
step-by-step incremental fashion, without adding any new opcodes, thus
allowing the implementor to focus on adding hardware only where it is
needed and necessary. The primary target is mobile-class 3D GPUs and
VPUs, with the secondary goals being to reduce executable size (by
extending the effectiveness of RV opcodes, RVC in particular) and to
reduce context-switch latency.

Critically: **No new instructions are added**. The parallelism (if any
is implemented) is implicitly added by tagging *standard* scalar registers
for redirection. When such a tagged register is used in any instruction,
it indicates that the PC shall **not** be incremented; instead a loop
is activated in which *multiple* instructions are issued to the pipeline
(as determined by a length CSR), with contiguously incrementing register
numbers starting from the tagged register. Only when the last "element"
has been reached is the PC permitted to move on. Thus
Simple-V effectively sits (slots) *in between* the instruction decode phase
and the ALU(s).

The barrier to entry with SV is therefore very low. The minimum
compliant implementation is software-emulation (traps), requiring
only the CSRs and CSR tables, and that an exception be thrown if an
instruction's registers are detected to have been tagged. The looping
that would otherwise be done in hardware is thus carried out in software
instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited to RV32E and to situations where the implementor
wishes to focus on certain aspects of SV without putting unnecessary
time and resources into silicon, whilst still conforming strictly
with the API. The polymorphic element-width capability, for example,
would be a good candidate to punt to software.

Hardware Parallelism, if any, is therefore added at the implementor's
discretion to turn what would otherwise be a sequential loop into a
parallel one.

To emphasise that clearly: Simple-V (SV) is *not*:

* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension

SV does **not** tell implementors how or even if they should implement
parallelism: it is a hardware "API" (Application Programming Interface)
that, if implemented, presents a uniform and consistent way to *express*
parallelism, at the same time leaving the choice of if, how, how much,
when and whether to parallelise operations **entirely to the implementor**.

# Basic Operation

The principle of SV is as follows:

* Standard RV instructions are "prefixed" (extended) through a 48/64
  bit format (single instruction option) or a variable
  length VLIW-like prefix (multi or "grouped" option).
* The prefix(es) indicate which registers are "tagged" as
  "vectorised". Predicates can also be added, and element widths
  overridden on any src or dest register.
* A "Vector Length" CSR is set, indicating the span of any future
  "parallel" operations.
* If any operation (a **scalar** standard RV opcode) uses a register
  that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
  is activated, of length VL, that effectively issues **multiple**
  identical instructions using contiguous sequentially-incrementing
  register numbers, based on the "tags".
* **Whether they be executed sequentially or in parallel or a
  mixture of both or punted to software-emulation in a trap handler
  is entirely up to the implementor**.

In this way an entire scalar algorithm may be vectorised with
the minimum of modification to the hardware and to compiler toolchains.

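The steps above can be sketched in software terms. This is an
illustrative model only (the flat register-file array, the `tags`
dictionary and all function names are assumptions made for the sketch,
not part of the specification):

```python
# Illustrative model of the SV macro-unrolling loop (not normative).
# regfile is a flat register-file array; tags maps register numbers to
# a "vectorised" flag; scalar_op stands in for any scalar RV operation.
def execute(rd, rs1, rs2, regfile, tags, VL, scalar_op):
    if not (tags.get(rd) or tags.get(rs1) or tags.get(rs2)):
        # no tagged registers: ordinary scalar execution, PC moves on
        regfile[rd] = scalar_op(regfile[rs1], regfile[rs2])
        return
    # at least one tagged register: issue VL operations, incrementing
    # only the register numbers that are tagged as vectorised
    for i in range(VL):
        d  = rd  + i if tags.get(rd)  else rd
        s1 = rs1 + i if tags.get(rs1) else rs1
        s2 = rs2 + i if tags.get(rs2) else rs2
        regfile[d] = scalar_op(regfile[s1], regfile[s2])
```

With rd, rs1 and rs2 all tagged and VL=4, a single scalar add behaves
like four adds on contiguous register numbers; with no registers tagged,
the same call degenerates to one ordinary scalar operation.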
To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
on hidden context that augments *scalar* RISC-V instructions.

# CSRs <a name="csrs"></a>

* An optional "reshaping" CSR key-value table which remaps from a 1D
  linear shape to 2D or 3D, including full transposition.

There are five additional CSRs, available in any privilege level:

* MVL (the Maximum Vector Length)
* VL (which has different characteristics from standard CSRs)
* SUBVL (effectively a kind of SIMD)
* STATE (containing copies of MVL, VL and SUBVL as well as context information)
* PCVBLK (the current operation being executed within a VBLOCK Group)

For User Mode there are the following CSRs:

* uePCVBLK (a copy of the sub-execution Program Counter, that is relative
  to the start of the current VBLOCK Group, set on a trap).
* ueSTATE (useful for saving and restoring during context switch,
  and for providing fast transitions)

There are also two additional CSRs for Supervisor-Mode:

* sePCVBLK
* seSTATE

And likewise for M-Mode:

* mePCVBLK
* meSTATE

The u/m/s CSRs are treated and handled exactly like their (x)epc
equivalents. On entry to or exit from a privilege level, the contents
of its (x)eSTATE are swapped with STATE.

Thus for example, a User Mode trap will end up swapping STATE and ueSTATE
(on both entry and exit), allowing User Mode traps to have their own
Vectorisation Context set up, separated from and unaffected by normal
user applications. If an M-Mode trap occurs in the middle of the U-Mode
trap, STATE is swapped with meSTATE, and restored on exit: the U-Mode
trap continues unaware that the M-Mode trap even occurred.

Likewise, Supervisor Mode may perform context-switches, safe in the
knowledge that its Vectorisation State is unaffected by User Mode.

The access pattern for these groups of CSRs in each mode follows the
same pattern as for other CSRs that have M-Mode and S-Mode "mirrors":

* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is transparently
  identical to changing the S-Mode CSRs. Accessing and changing the
  U-Mode CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
  is prohibited.

An interesting side effect of SV STATE being separate and distinct in
S-Mode is that Vectorised saving of an entire register file to the stack
becomes a single instruction (through accidental provision of LOAD-MULTI
semantics). If the SVPrefix P64-LD-type format is used, LOAD-MULTI may
even be done with a single standalone 64 bit opcode (P64 may set up
SUBVL, VL and MVL from an immediate field, to cover the full regfile).
It can even be predicated, which opens up some very interesting
possibilities.

(x)EPCVBLK CSRs must be treated exactly like their corresponding (x)epc
equivalents. See the VBLOCK section for details.

## MAXVECTORLENGTH (MVL) <a name="mvl" />

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
is variable length and may be dynamically set. MVL is
however limited to the regfile bitwidth XLEN (1-32 for RV32,
1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register as opposed to fanning
out over several registers. This keeps the hardware implementation a
little simpler.

The other important factor to note is that the actual MVL is internally
stored **offset by one**, so that it can fit into only 6 bits (for RV64)
and still cover a range up to XLEN bits. Attempts to set MVL to zero will
raise an exception. This is expressed more clearly in the "pseudocode"
section, where there are subtle differences between CSRRW and CSRRWI.

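The offset-by-one storage can be sketched as a hypothetical encode/decode
pair (the helper names are assumptions for illustration, not part of the
specification):

```python
XLEN = 64  # assumption for the sketch: RV64

# MVL is stored offset-by-one: a 6-bit field (0..63) covers MVL = 1..64.
# MVL = 0 is unrepresentable, hence the exception on attempts to set it.
def mvl_to_field(mvl):
    if not (1 <= mvl <= XLEN):
        raise ValueError("MVL must be in the range 1..XLEN")
    return mvl - 1  # 6-bit field value

def field_to_mvl(field):
    return field + 1  # actual MVL
```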
## Vector Length (VL) <a name="vl" />

VSETVL is slightly different from RVV. As in RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN):

    VL = rd = MIN(vlen, MVL)

    where 1 <= MVL <= XLEN

However, just like MVL, it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.

The fixed (specific) setting of VL allows vector LOAD/STORE to be used
to switch the entire bank of registers using a single instruction (see
Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
is down to the fact that the predication bits fit into a single register
of length XLEN bits.

The second and most important change is that, within the limits set by
MVL, the value passed in **must** be set in VL (and in the
destination register).

This has implications for the microarchitecture, as VL is required to be
set (limits from MVL notwithstanding) to the actual value
requested. RVV has the option to set VL to an arbitrary value that suits
the conditions and the micro-architecture: SV does *not* permit this.

The reason is so that if SV is to be used for a context-switch or as a
substitute for LOAD/STORE-Multiple, the operation can be done with only
2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
single LD/ST operation). If VL did *not* get set to the register file
length when VSETVL is called, then a software loop would be needed.
To avoid this need, VL *must* be set to exactly what is requested
(limits notwithstanding).

Therefore, in turn, unlike RVV, implementors *must* provide
pseudo-parallelism (using sequential loops in hardware) if actual
hardware-parallelism in the ALUs is not deployed. A hybrid is also
permitted (as used in Broadcom's VideoCore IV); however this must be
*entirely* transparent to the ISA.

The third change is that VSETVL is implemented as a CSR, where the
behaviour of CSRRW (and CSRRWI) must be changed to specifically store
the *new* value in the destination register, **not** the old value.
Where context-load/save is to be implemented in the usual fashion
by using a single CSRRW instruction to obtain the old value, the
*secondary* CSR must be used (STATE). This CSR, by contrast, behaves
exactly as standard CSRs do, and contains more than just VL.

One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).

Note that when VL is set to 1, vector operations cease (though not
subvector operations: ending those requires setting SUBVL=1): the
hardware loop is reduced to a single element, i.e. scalar operations.
This is in effect the default, normal operating mode. However it is
important to appreciate that this does **not** result in the Register
table or SUBVL being disabled. Only when the Register table is empty
(P48/64 prefix fields notwithstanding) would SV have no effect.

## SUBVL - Sub Vector Length

This is a "group by quantity" that effectively asks each iteration
of the hardware loop to load SUBVL elements of width elwidth at a
time. Effectively, SUBVL is a SIMD multiplier: instead of just 1
operation being issued, SUBVL operations are issued.

Another way to view SUBVL is that each element in the VL-length vector is
now SUBVL times elwidth bits in length, and now comprises SUBVL discrete
sub-operations: an inner SUBVL for-loop within a VL for-loop, in effect,
with the sub-element index increasing on every pass of the innermost
loop. This is best illustrated in the (simplified) pseudocode example,
later.

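The VL x SUBVL nesting can be sketched as follows (deliberately
simplified: a single vectorised source and destination, no predication;
all names here are assumptions for illustration):

```python
# Simplified sketch of the SUBVL inner loop: each of the VL elements
# comprises SUBVL discrete sub-operations on contiguous registers.
def subvl_loop(regfile, rd, rs, VL, SUBVL, op):
    for i in range(VL):            # outer hardware loop, one per element
        for j in range(SUBVL):     # inner loop, one per sub-element
            idx = i * SUBVL + j
            regfile[rd + idx] = op(regfile[rs + idx])
```

With SUBVL=3, each VL iteration covers a whole X,Y,Z coordinate group,
rather than VL having to be set three times larger.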
The primary use case for SUBVL is 3D FP Vectors. A Vector of 3D
coordinates X,Y,Z, for example, may be loaded, multiplied and then
stored, per VL element iteration, rather than having to set VL to
three times larger.

Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit
values 0b00 thru 0b11 to represent them).

Setting this CSR to 0 must raise an exception. Setting it to a value
greater than 4 likewise.

The main effect of SUBVL is that predication bits are applied per
**group**, rather than by individual element.

This saves a not-insignificant number of instructions when handling 3D
vectors, as otherwise a much longer predicate mask would have to be set
up with regularly-repeated bit patterns.

See the SUBVL Pseudocode illustration for details.

## STATE

This is a standard CSR that contains sufficient information for a
full context save/restore. It contains (and permits setting of):

* MVL
* VL
* destoffs - the destination element offset of the current parallel
  instruction being executed
* srcoffs - for twin-predication, the source element offset as well.
* SUBVL
* svdestoffs - the subvector destination element offset of the current
  parallel instruction being executed
* svsrcoffs - for twin-predication, the subvector source element offset
  as well.

Interestingly, STATE may hypothetically also be modified to make the
immediately-following instruction skip a certain number of elements,
by playing with destoffs and srcoffs (and the subvector offsets as well).

Setting destoffs and srcoffs is realistically intended for saving state,
so that exceptions (page faults in particular) may be serviced and the
hardware loop that was being executed at the time of the trap, from
User Mode (or Supervisor Mode), may be returned to and continued from
exactly where it left off. The reason why this works is that User-Mode
STATE is neither changed nor used in M-Mode or S-Mode (which is entirely
why M-Mode and S-Mode have their own STATE CSRs, meSTATE and seSTATE).

The format of the STATE CSR is as follows:

| (29..28) | (27..26) | (25..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | -------- | ------- | ------- |
| dsvoffs  | ssvoffs  | subvl    | destoffs | srcoffs  | vl      | maxvl   |

When setting this CSR, the following characteristics will be enforced:

* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL** which sets a SIMD-like quantity, has only 4 values so there
  are no changes needed
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
* **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
* **dsvoffs** will be truncated to be within the range 0 to SUBVL-1

NOTE: if the following instruction is not a twin predicated instruction,
and destoffs or dsvoffs has been set to non-zero, subsequent execution
behaviour is undefined. **USE WITH CARE**.

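The bit layout above can be sketched as a software pack/unpack pair
(hypothetical helper names; maxvl, vl and subvl are stored offset-by-one,
as described elsewhere in this section):

```python
# Pack/unpack STATE per the bit layout above. Field positions follow the
# table: maxvl (5..0), vl (11..6), srcoffs (17..12), destoffs (23..18),
# subvl (25..24), ssvoffs (27..26), dsvoffs (29..28).
def pack_state(mvl, vl, srcoffs, destoffs, subvl, ssvoffs, dsvoffs):
    return ((mvl - 1)         |   # offset-by-one storage
            (vl - 1)    << 6  |
            srcoffs     << 12 |
            destoffs    << 18 |
            (subvl - 1) << 24 |   # 0b00..0b11 represent SUBVL 1..4
            ssvoffs     << 26 |
            dsvoffs     << 28)

def unpack_state(state):
    return {
        "mvl":      ((state >> 0)  & 0x3f) + 1,
        "vl":       ((state >> 6)  & 0x3f) + 1,
        "srcoffs":   (state >> 12) & 0x3f,
        "destoffs":  (state >> 18) & 0x3f,
        "subvl":    ((state >> 24) & 0x3) + 1,
        "ssvoffs":   (state >> 26) & 0x3,
        "dsvoffs":   (state >> 28) & 0x3,
    }
```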
### Hardware rules for when to increment STATE offsets

The offsets inside STATE are like the indices in a loop, except
in hardware. They are also partially (conceptually) similar to a
"sub-execution Program Counter". As such, and to allow proper context
switching and to define correct exception behaviour, the following rules
must be observed:

* When the VL CSR is set, srcoffs and destoffs are reset to zero.
* Each instruction that contains a "tagged" register shall start
  execution at the *current* value of srcoffs (and destoffs in the case
  of twin predication)
* Predicated-out elements (in nonzeroing mode) shall cause the element
  operation to be skipped, incrementing srcoffs (or destoffs)
* On execution of an element operation, Exceptions shall **NOT** cause
  srcoffs or destoffs to increment.
* On completion of the full Vector Loop (srcoffs = VL-1 or destoffs =
  VL-1 after the last element is executed), both srcoffs and destoffs
  shall be reset to zero.

This last rule is why srcoffs and destoffs may be stored as values from
0 to XLEN-1 in the STATE CSR: as loop indices they refer to elements,
and so never need to be set to VL itself: their maximum operating
values are limited to 0 to VL-1.

The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.

## MVL and VL Pseudocode

The pseudo-code for get and set of VL and MVL uses the following
internal functions:

    set_mvl_csr(value, rd):
       regs[rd] = STATE.MVL
       STATE.MVL = MIN(value, STATE.MVL)

    get_mvl_csr(rd):
       regs[rd] = STATE.MVL

    set_vl_csr(value, rd):
       STATE.VL = MIN(value, STATE.MVL)
       regs[rd] = STATE.VL # yes, returning the new value, NOT the old CSR
       return STATE.VL

    get_vl_csr(rd):
       regs[rd] = STATE.VL
       return STATE.VL

Note that where setting MVL behaves as a normal CSR (returns the old
value), unlike standard CSR behaviour, setting VL will return the **new**
value of VL, **not** the old one.

For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
maximise the effectiveness, an immediate of 0 is used to set VL=1,
an immediate of 1 is used to set VL=2 and so on:

    CSRRWI_Set_MVL(value):
       set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
       set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL,
where setting the value to zero will cause an exception to be raised.
The reason is that if VL or MVL are set to zero, the STATE CSR is
not capable of storing that value.

    CSRRW_Set_MVL(rs1, rd):
       value = regs[rs1]
       if value == 0 or value > XLEN:
          raise Exception
       set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
       value = regs[rs1]
       if value == 0 or value > XLEN:
          raise Exception
       set_vl_csr(value, rd)

In this way, when CSRRW is utilised with a loop variable, the value
that goes into VL (and into the destination register) may be used
in an instruction-minimal fashion:

    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3         # sets MVL == **4** (not 3)
    j zerotest            # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0      # vl = t0 = min(mvl, a0)
    ld a3, a1             # load 4 registers a3-6 from x
    slli t1, t0, 3        # t1 = vl * 8 (in bytes)
    ld a7, a2             # load 4 registers a7-10 from y
    add a1, a1, t1        # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0        # n -= vl (t0)
    st a7, a2             # store 4 registers a7-10 to y
    add a2, a2, t1        # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop         # repeat if n != 0

With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (and likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
       value = regs[rs1]
       get_state_csr(rd)
       set_mvl_csr(value[5:0]+1, x0)
       set_vl_csr(value[11:6]+1, x0)
       STATE.srcoffs = value[17:12]
       STATE.destoffs = value[23:18]

    get_state_csr(rd):
       regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
                  (STATE.destoffs)<<18
       return regs[rd]

In both cases, whilst CSR reads of VL and MVL return the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.

## VL, MVL and SUBVL instruction aliases

This table contains pseudo-assembly instruction aliases. Note the
subtraction of 1 in the CSRRWI pseudo variants, to compensate for the
reduced range of the 5 bit immediate.

| alias          | CSR                  |
| -              | -                    |
| SETVL rd, rs   | CSRRW VL, rd, rs     |
| SETVLi rd, #n  | CSRRWI VL, rd, #n-1  |
| GETVL rd       | CSRRW VL, rd, x0     |
| SETMVL rd, rs  | CSRRW MVL, rd, rs    |
| SETMVLi rd, #n | CSRRWI MVL, rd, #n-1 |
| GETMVL rd      | CSRRW MVL, rd, x0    |

Note: CSRRC and other bit-setting operations may still be used; they are
however not particularly useful (very obscure).

## Register key-value (CAM) table <a name="regcsrtable" />

*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VBLOCK instruction format. Note
that this table does *not* get applied to the SVPrefix P48/64 format,
only to scalar opcodes.*

The purpose of the Register table is three-fold:

* To mark integer and floating-point registers as requiring "redirection"
  if ever used as a source or destination in any given operation.
  This involves a level of indirection through a 5-to-7-bit lookup table,
  such that **unmodified** operands with 5 bits (3 for some RVC ops) may
  access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the
  register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would
  normally give the register.

Note: clearly, if an RVC operation uses a 3 bit spec'd register (x8-x15)
and the Register table contains entries that only refer to registers
x0-x7 or x16-x31, such operations will *never* activate the VL hardware
loop!

If however the (16 bit) Register table does contain such an entry (x8-x15,
or x2 in the case of LWSP), that src or dest reg may be redirected
anywhere within the *full* 128 register range. Thus, RVC becomes far more
powerful and has many more opportunities to reduce code size than in
Standard RV32/RV64 executables.

16 bit format:

| RegCAM | | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| ------ | | -       | -        | -   | ------ | ------ |
| 0      | | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1      | | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..     | | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15     | | isvec15 | regidx15 | i/f | vew15  | regkey |

8 bit format:

| RegCAM | | 7   | (6..5) | (4..0) |
| ------ | | -   | ------ | ------ |
| 0      | | i/f | vew0   | regnum |

i/f is set to "1" to indicate that the redirection/tag entry is to
be applied to integer registers; 0 indicates that it is relevant to
floating-point registers.

The 8 bit format is used for a much more compact expression. "isvec"
is implicit and, similar to [[sv-prefix-proposal]], the target vector
is "regnum<<2", implicitly. Contrast this with the 16-bit format, where
the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
optionally set "scalar" mode.

Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
get the actual (7 bit) register number to use, there is not enough space
in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.

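Decoding the 8-bit format can be sketched as follows (a hypothetical
helper, purely to illustrate the implicit "regnum<<2" target and the
implicit "isvec"):

```python
# Sketch of 8-bit Register-table entry decode: bit 7 is i/f, bits 6..5
# are vew, bits 4..0 are regnum. The vector target is implicitly
# regnum<<2 and "isvec" is implicitly set.
def decode_regcam_8bit(entry):
    regnum = entry & 0x1f
    return {
        "regnum": regnum,            # scalar register being tagged
        "regidx": regnum << 2,       # implicit target in the 128-reg file
        "isvec":  True,              # implicit in the 8-bit format
        "vew":    (entry >> 5) & 0x3,
        "int":    bool((entry >> 7) & 0x1),
    }
```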
vew has the following meanings, indicating that the instruction's
operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |
| 01  | 8 bit               |
| 10  | 16 bit              |
| 11  | 32 bit              |

As the above table is a CAM (key-value store) it may be appropriate
(faster, implementation-wise) to expand it as follows:

    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < len; i++) // from VBLOCK Format
       tb = int_vec if CSRvec[i].type == 0 else fp_vec
       idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
       tb[idx].elwidth  = CSRvec[i].elwidth
       tb[idx].regidx   = CSRvec[i].regidx   // indirection
       tb[idx].isvector = CSRvec[i].isvector // 0=scalar

## Predication Table <a name="predication_csr_table"></a>

*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VBLOCK instruction format.
The table does **not** apply to SVPrefix opcodes.*

The Predication Table is a key-value store indicating whether, if a
given destination register (integer or floating-point) is referred to
in an instruction, it is to be predicated. Like the Register table, it
is an indirect lookup that allows the RV opcodes to remain unmodified.

It is particularly important to note
that the *actual* register used can be *different* from the one that is
in the instruction, due to the redirection through the lookup table.

* regidx is the register that, in combination with the
  i/f flag, if that integer or floating-point register is referred to in a
  (standard RV) instruction, results in the lookup table being referenced
  to find the predication mask to use for this operation.
* predidx is the *actual* (full, 7 bit) register to be used for the
  predication mask.
* inv indicates that the predication mask bits are to be inverted
  prior to use, *without* actually modifying the contents of the
  register from which those bits originated.
* zeroing is either 1 or 0, and if set to 1, the operation must
  place zeros in any element position where the predication mask is
  set to zero. If zeroing is set to 0, unpredicated elements *must*
  be left alone. Some microarchitectures may choose to interpret
  this as skipping the operation entirely. Others which wish to
  stick more closely to a SIMD architecture may choose instead to
  interpret unpredicated elements as an internal "copy element"
  operation (which would be necessary in SIMD microarchitectures
  that perform register-renaming).
* ffirst is a special mode that stops sequential element processing when
  a data-dependent condition occurs, whether a trap or a conditional test.
  The handling of each (trap or conditional test) is slightly different:
  see the Instruction sections for further details.

16 bit format:

| PrCSR | (15..11) | 10    | 9    | 8   | (7..1) | 0       |
| ----- | -        | -     | -    | -   | ------ | ------- |
| 0     | predidx  | zero0 | inv0 | i/f | regidx | ffirst0 |
| 1     | predidx  | zero1 | inv1 | i/f | regidx | ffirst1 |
| 2     | predidx  | zero2 | inv2 | i/f | regidx | ffirst2 |
| 3     | predidx  | zero3 | inv3 | i/f | regidx | ffirst3 |

Note: predidx=x0, zero=1, inv=1 is a RESERVED encoding. Its use must
generate an illegal instruction trap.

8 bit format:

| PrCSR | 7     | 6    | 5   | (4..0) |
| ----- | -     | -    | -   | ------ |
| 0     | zero0 | inv0 | i/f | regnum |

The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
described above.

Thus if we map from the 8 to the 16 bit format, the table becomes:

| PrCSR | (15..11) | 10    | 9    | 8   | (7..1) | 0    |
| ----- | -        | -     | -    | -   | ------ | ---- |
| 0     | x9       | zero0 | inv0 | i/f | regnum | ff=0 |
| 1     | x10      | zero1 | inv1 | i/f | regnum | ff=0 |
| 2     | x11      | zero2 | inv2 | i/f | regnum | ff=0 |
| 3     | x12      | zero3 | inv3 | i/f | regnum | ff=0 |

The 16 bit Predication CSR Table is a key-value store, so
implementation-wise it will be faster to turn the table around (maintain
topologically equivalent state):

    struct pred {
        bool zero;    // zeroing
        bool inv;     // register at predidx is inverted
        bool ffirst;  // fail-on-first
        bool enabled; // use this to tell if the table-entry is active
        int predidx;  // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < len; i++) // number of Predication entries in VBLOCK
      tb = int_pred_reg if PredicateTable[i].type == 0 else fp_pred_reg;
      idx = PredicateTable[i].regidx
      tb[idx].zero    = CSRpred[i].zero
      tb[idx].inv     = CSRpred[i].inv
      tb[idx].ffirst  = CSRpred[i].ffirst
      tb[idx].predidx = CSRpred[i].predidx
      tb[idx].enabled = true

So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:


    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
           (d ? vreg[rd][i] : sreg[rd]) =
            iop(s1 ? vreg[rs1][i] : sreg[rs1],
                s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

This instead becomes an *indirect* reference using the *internal* state
table generated from the Predication CSR key-value store, which is used
as follows.

    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) != INT, rd)
        if (predicate & (1<<i))
           result = iop(s1 ? regfile[rs1+i] : regfile[rs1],
                        s2 ? regfile[rs2+i] : regfile[rs2]);
           (d ? regfile[rd+i] : regfile[rd]) = result
           if preg.ffirst and result == 0:
              VL = i # result was zero, end loop early, return VL
              return
        else if (zeroing)
           (d ? regfile[rd+i] : regfile[rd]) = 0

Note:

* d, s1 and s2 are booleans indicating whether destination,
  source1 and source2 are vector or scalar
* key-value CSR-redirection of rd, rs1 and rs2 has NOT been included
  above, for clarity. rd, rs1 and rs2 must each ALSO go through
  register-level redirection (from the Register table) if they are
  vectors.
* fail-on-first mode stops execution early whenever an operation
  returns a zero value. Floating-point results count both
  positive-zero and negative-zero as "fail".

692 If written as a function, obtaining the predication mask (and whether
693 zeroing takes place) may be done as follows:
694
695 def get_pred_val(bool is_fp_op, int reg):
696 tb = int_reg if is_fp_op else fp_reg
697 if (!tb[reg].enabled):
698 return ~0x0, False // all enabled; no zeroing
699 tb = int_pred if is_fp_op else fp_pred
700 if (!tb[reg].enabled):
701 return ~0x0, False // all enabled; no zeroing
702 predidx = tb[reg].predidx // redirection occurs HERE
703 predicate = intreg[predidx] // actual predicate HERE
704 if (tb[reg].inv):
705 predicate = ~predicate // invert ALL bits
706 return predicate, tb[reg].zero
707
708 Note here, critically, that **only** if the register is marked
709 in its **register** table entry as being "active" does the testing
710 proceed further to check if the **predicate** table entry is
711 also active.
712
713 Note also that this is in direct contrast to branch operations
714 for the storage of comparisions: in these specific circumstances
715 the requirement for there to be an active *register* entry
716 is removed.
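
The two-level lookup above can be modelled in executable form. This is a
toy sketch, not normative: the `Entry` class, the table names and the
register contents are invented purely to illustrate the
register-then-predicate checking order.

```python
# Hypothetical executable model of the two-level predication lookup:
# only if the *register* table entry is active is the *predicate*
# table entry consulted. All table contents here are invented.

class Entry:
    def __init__(self, enabled=False, predidx=0, inv=False, zero=False):
        self.enabled = enabled
        self.predidx = predidx
        self.inv = inv
        self.zero = zero

XLEN = 64
ALL_ONES = (1 << XLEN) - 1

intreg = [0] * 32                        # integer register file
int_reg = [Entry() for _ in range(32)]   # register (CAM) table
int_pred = [Entry() for _ in range(32)]  # predication table

def get_pred_val(reg):
    if not int_reg[reg].enabled:
        return ALL_ONES, False           # all enabled; no zeroing
    if not int_pred[reg].enabled:
        return ALL_ONES, False           # all enabled; no zeroing
    predidx = int_pred[reg].predidx      # redirection occurs HERE
    predicate = intreg[predidx]          # actual predicate HERE
    if int_pred[reg].inv:
        predicate = ~predicate & ALL_ONES  # invert ALL bits
    return predicate, int_pred[reg].zero

# x5 is tagged as active, predicated by the mask held in x9
intreg[9] = 0b1011
int_reg[5] = Entry(enabled=True)
int_pred[5] = Entry(enabled=True, predidx=9, zero=True)

mask, zeroing = get_pred_val(5)
print(mask, zeroing)
```

An untagged register (x6, say) falls straight through to the "all enabled,
no zeroing" default.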

## Fail-on-First Mode <a name="ffirst-mode"></a>

ffirst is a special data-dependent predicate mode. There are two
variants: the first is for faults, typically on LOAD/STORE operations,
which may encounter end-of-page faults during a series of operations.
The second is for comparisons such as FEQ (or the augmented behaviour
of Branch), and for any operation that returns a result of zero (whether
integer or floating-point). In the FP case, this includes negative-zero.

Note that the execution order must "appear" to be sequential for ffirst
mode to work correctly. An in-order architecture must execute the element
operations in sequence, whilst an out-of-order architecture must *commit*
the element operations in sequence (giving the appearance of in-order
execution).

Note also that if ffirst mode is needed without predication, a special
"always-on" Predicate Table Entry may be constructed by setting
inverse-on and using x0 as the predicate register. This
will have the effect of creating a mask of all ones, allowing ffirst
to be set.

### Fail-on-first traps

Except for the first element, ffirst stops sequential element processing
when a trap occurs. The first element is treated normally (as if ffirst
is clear). Should any subsequent element instruction require a trap,
instead it and subsequent indexed elements are ignored (or cancelled in
out-of-order designs), and VL is set to the *last* element that did
not take the trap.

Note that predicated-out elements (where the predicate mask bit is zero)
are clearly excluded (i.e. the trap will not occur). However, note that
the loop still had to test the predicate bit: thus on return,
VL is set to include elements that did not take the trap *and* includes
the elements that were predicated (masked) out (not tested up to the
point where the trap occurred).

If SUBVL is being used (SUBVL!=1), the first *sub-group* of elements
will cause a trap as normal (as if ffirst is not set); in subsequent
*sub-groups*, however, the trap must not actually occur. SUBVL will **NOT**
be modified.

Given that predication bits apply to SUBVL groups, the same rules apply
to predicated-out (masked-out) sub-groups in calculating the value that VL
is set to.

### Fail-on-first conditional tests

ffirst stops sequential element conditional testing on the first element
result being zero. VL is set to the number of elements that were processed
before the fail-condition was encountered.

Note that just as with traps, if SUBVL!=1, the first failure in any of the
elements of a *sub-group* will cause the processing to end, and, even if
there were elements within the *sub-group* that passed the test, that
sub-group is still (entirely) excluded from the count (from setting VL).
i.e. VL is set to the total number of *sub-groups* that had no
fail-condition up until execution was stopped.

Note again that, just as with traps, predicated-out (masked-out) elements
are included in the count leading up to the fail-condition, even though they
were not tested.

The pseudo-code for Predication makes this clearer and simpler than it is
in words (the loop ends, VL is set to the current element index, "i").
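
The truncation rule can also be shown as a short executable sketch (a toy
model: the element operation, mask and vector values here are invented for
illustration, and SUBVL is assumed to be 1):

```python
# Hypothetical model of fail-on-first conditional testing: the loop
# stops at the first zero result, and VL becomes the number of
# elements processed before that element. Predicated-out elements
# are counted in but never actually tested.

def ffirst_loop(vl, predicate, op, src):
    for i in range(vl):
        if predicate & (1 << i):
            result = op(src[i])
            if result == 0:
                return i      # new VL: loop ended early
    return vl                 # no fail-condition encountered

src = [3, 5, 0, 7]
# element 2 yields zero, so VL becomes 2
print(ffirst_loop(4, 0b1111, lambda x: x, src))
# with element 2 masked out it is *not* tested: the loop runs to the end
print(ffirst_loop(4, 0b1011, lambda x: x, src))
```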

## REMAP CSR <a name="remap" />

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There is one 32-bit CSR which may be used to indicate which registers,
if used in any operation, must be "reshaped" (re-mapped) from a linear
form to a 2D or 3D transposed form, or "offset" to permit arbitrary
access to elements within a register.

The 32-bit REMAP CSR may reshape up to 3 registers:

| 29..28 | 27..26 | 25..24 | 23 | 22..16  | 15 | 14..8   | 7  | 6..0    |
| ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
| shape2 | shape1 | shape0 | 0  | regidx2 | 0  | regidx1 | 0  | regidx0 |

regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and consequently are 7 bits wide.
Reshaping x0 would clearly be pointless, so a value of zero (referring
to x0) is used to indicate "disabled".
shape0-2 refer to one of three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.

It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
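
As an illustrative sketch of the field layout, the CSR may be packed and
unpacked as follows (these helper functions are hypothetical, not part of
the specification):

```python
# Hypothetical helpers for packing/unpacking the 32-bit REMAP CSR
# laid out in the table above: three 7-bit regidx fields at bits 0,
# 8 and 16, and three 2-bit shape fields at bits 24, 26 and 28.
# Bits 7, 15, 23, 30 and 31 are reserved (zero).

def remap_pack(regidx, shape):
    assert len(regidx) == 3 and len(shape) == 3
    val = 0
    for i in range(3):
        assert 0 <= regidx[i] < 128     # 7-bit real register number
        assert 0 <= shape[i] < 3        # 0x3 is reserved
        val |= regidx[i] << (8 * i)
        val |= shape[i] << (24 + 2 * i)
    return val

def remap_unpack(val):
    regidx = [(val >> (8 * i)) & 0x7f for i in range(3)]
    shape = [(val >> (24 + 2 * i)) & 0x3 for i in range(3)]
    return regidx, shape

# regidx0=x5 via SHAPE1, regidx1=x16 via SHAPE2, third entry disabled
csr = remap_pack([5, 16, 0], [1, 2, 0])
print(hex(csr))
print(remap_unpack(csr))
```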

## SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
which have the same format. When each SHAPE CSR is set entirely to zeros,
remapping is disabled: the register's elements are a linear (1D) vector.

| 26..24  | 23      | 22..16  | 15      | 14..8   | 7       | 6..0    |
| ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| permute | offs[2] | zdimsz  | offs[1] | ydimsz  | offs[0] | xdimsz  |

offs is a 3-bit field, spread out across bits 7, 15 and 23, which
is added to the element index during the loop calculation.

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
that the array dimensionality for that dimension is 1. A value of xdimsz=2
would indicate that in the first dimension there are 3 elements in the
array. The format of the array is therefore as follows:

    array[xdim+1][ydim+1][zdim+1]

However whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5, with values of 6 and 7 being reserved, and not legal). The table
below shows how the permutation dimensionality order works:

| permute | order | array format             |
| ------- | ----- | ------------------------ |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a python program:

    # mapidx = REMAP.shape2
    xdim = 3        # SHAPE[mapidx].xdim_sz+1
    ydim = 4        # SHAPE[mapidx].ydim_sz+1
    zdim = 5        # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0]  # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0        # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0

Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document where a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index, instead. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

By changing remappings, 2D matrices may be transposed "in-place" for one
operation, followed by setting a different permutation order without
having to move the values in the registers to or from memory. Also,
the reason for having REMAP separate from the three SHAPE CSRs is so
that in a chain of matrix multiplications and additions, for example,
the SHAPE CSRs need only be set up once; only the REMAP CSR need be
changed to target different registers.

Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  to either use a predicated MV, skipping the first elements, or
  performing a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

Clearly here some considerable care needs to be taken as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). With the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions, whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
  left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions therefore is guaranteed to
  not be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  paralleliseable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP Register Table.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.
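
The hardware loop above can be translated into a minimal executable
sketch. The `Entry` class, register-file contents and chosen register
numbers are invented purely for illustration; nothing here is normative.

```python
# Hypothetical executable model of the vectorised-ADD hardware loop:
# registers tagged as vectors are stepped element-by-element, scalars
# stay fixed, and the loop ends after the first write if the
# destination is scalar.

class Entry:
    def __init__(self, isvector=False, regidx=0):
        self.isvector = isvector
        self.regidx = regidx

VL = 3
ireg = list(range(32))                  # x0..x31, invented contents
int_vec = [Entry() for _ in range(32)]

# rd=x16, rs1=x20, rs2=x24 all tagged as vectors (regidx = themselves)
for r in (16, 20, 24):
    int_vec[r] = Entry(isvector=True, regidx=r)

def op_add(rd, rs1, rs2, predval):
    id = irs1 = irs2 = 0
    rd  = int_vec[rd ].regidx if int_vec[rd ].isvector else rd
    rs1 = int_vec[rs1].regidx if int_vec[rs1].isvector else rs1
    rs2 = int_vec[rs2].regidx if int_vec[rs2].isvector else rs2
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not int_vec[rd].isvector:
                break
        if int_vec[rd ].isvector: id += 1
        if int_vec[rs1].isvector: irs1 += 1
        if int_vec[rs2].isvector: irs2 += 1

op_add(16, 20, 24, predval=0b111)
print(ireg[16:19])   # x16..x18 = x20..x22 + x24..x26, element-wise
```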

## SUBVL Pseudocode <a name="subvl-pseudocode"></a>

Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by "(i * SUBVL + s)", predicate bits are
indexed by "(i)"

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             # actual add is here (at last)
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector)  { id += 1; }
          if (int_vec[rs1].isvector)  { irs1 += 1; }
          if (int_vec[rs2].isvector)  { irs2 += 1; }
          if (id == VL or irs1 == VL or irs2 == VL) {
            # end VL hardware loop
            xSTATE.srcoffs = 0; # reset
            xSTATE.ssvoffs = 0; # reset
            return;
          }

NOTE: pseudocode simplified greatly: zeroing, proper predicate handling,
elwidth handling etc. all left out.
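
The element-vs-predicate indexing rule - elements at "(i * SUBVL + s)",
predicate bits at "(i)" - can be demonstrated in isolation (a toy model,
with invented VL, SUBVL and mask values):

```python
# Toy model of SUBVL indexing: elements are addressed by
# i * SUBVL + s, while the predicate bit is looked up by i alone,
# so one predicate bit gates a whole sub-vector group.

VL, SUBVL = 3, 2
predval = 0b101           # groups 0 and 2 enabled, group 1 masked out

executed = []
for i in range(VL):
    for s in range(SUBVL):
        if predval & (1 << i):
            executed.append(i * SUBVL + s)   # element index

print(executed)           # both elements of groups 0 and 2 only
```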

## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV, FCVT, and
LOAD/STORE depending on CSR configurations for bitwidth and predication.
**Everything** becomes parallelised. *This includes Compressed
instructions* as well as any future instructions and Custom Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing new,
including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

Branch operations are augmented slightly to be a little more like FP
Compares (FEQ, FNE etc.), by permitting the cumulation (and storage)
of multiple comparisons into a register (taken indirectly from the predicate
table). As such, "ffirst" - fail-on-first - condition mode can be enabled.
See ffirst mode in the Predication Table section.

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src registers
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
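
The difference between the two update rules can be sketched as follows
(a toy model with invented masks and compare outcomes, not normative):

```python
# Toy model of the destination-predicate update rules for vectorised
# branch-compares: with zeroing, masked-out bits are actively cleared;
# without zeroing, bits are only ever set or cleared where the source
# predicate is active, so stale bits survive in masked-out positions.

def update_dest_pred(dest, ps, results, zeroing, vl):
    for i in range(vl):
        if zeroing and not (ps & (1 << i)):
            dest &= ~(1 << i)          # masked-out: forced to zero
        elif ps & (1 << i):
            if results[i]:
                dest |= 1 << i         # compare succeeded: set
            else:
                dest &= ~(1 << i)      # compare failed: clear
    return dest

stale = 0b1111                          # destination predicate going in
ps = 0b0101                             # source predication mask
results = [True, False, True, False]    # per-element compare outcomes

print(bin(update_dest_pred(stale, ps, results, zeroing=True,  vl=4)))
print(bin(update_dest_pred(stale, ps, results, zeroing=False, vl=4)))
```

With zeroing the masked-out stale bits vanish; without it they persist.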

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering") setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

TODO: predication now taken from src2. also branch goes ahead
if all compares are successful.

Note also that where normally, predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms that
becomes more of an impact. To deal with this, SV's predication has
had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

Also: note that ffirst (fail first mode) applies directly to this operation.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beqz a0,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
but is counterintuitive. Clearly it is **not** recommended to redirect
x0 with a CSR register entry, however as a means to opaquely obtain
a predication target it is the only sensible option that does not involve
additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register are marked as vectors
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table data="""
15    12 | 11   7 | 6  2 | 1  0 |
funct4   | rd     | rs   | op   |
4        | 5      | 5    | 2    |
C.MV     | dest   | src  | C0   |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src    | dest    | predication   | op             |
scalar | vector  | none          | VSPLAT         |
scalar | vector  | destination   | sparse VSPLAT  |
scalar | vector  | 1-bit dest    | VINSERT        |
vector | scalar  | 1-bit? src    | VEXTRACT       |
vector | vector  | none          | VCOPY          |
vector | vector  | src           | Vector Gather  |
vector | vector  | dest          | Vector Scatter |
vector | vector  | src & dest    | Gather/Scatter |
vector | vector  | src == dest   | sparse VCOPY   |
"""]]

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
operation of the appropriate size covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1389
### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example, FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a single-precision (32-bit) floating-point
number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element and the higher 32-bits *also* converted to floating-point
and stored in the second. The 32 bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1 element width is to be taken as 32.

Similar rules apply to the destination register.
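
A sketch of the FCVT.S.L example above, in Python (an illustrative model
only: Python floats are doubles, used here just to stand in for the
single-precision result; the packing layout is the assumption described
in the text, little-end element first):

```python
def fcvt_s_l_packed(rs1_val):
    """Model: rs1 elwidth = default/2 with packed SIMD, so one 64-bit
    register holds two 32-bit integers; convert each to floating-point."""
    lo = rs1_val & 0xFFFFFFFF
    hi = (rs1_val >> 32) & 0xFFFFFFFF
    # reinterpret each 32-bit chunk as a signed integer before converting
    to_signed = lambda x: x - (1 << 32) if x & (1 << 31) else x
    return [float(to_signed(lo)), float(to_signed(hi))]

# element 0 = 3 (low 32 bits), element 1 = 7 (high 32 bits)
packed = (7 << 32) | 3
```

Here `fcvt_s_l_packed(packed)` produces the two converted elements in
order: the low 32 bits first, then the high 32 bits.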

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
        rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            if (int_csr[rs].isvec)
                # indirect mode (multi mode)
                srcbase = ireg[rsv+i];
            else
                # unit stride mode
                srcbase = ireg[rsv] + j * XLEN/8; # offset in bytes
            ireg[rdv+j] <= mem[srcbase + imm_offs];
            if (!int_csr[rs].isvec &&
                !int_csr[rd].isvec) break # scalar-scalar LD
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).
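
The unit-stride / indirection decision may also be expressed as a small
runnable model. This is an illustrative sketch only (predication and
zeroing are omitted; the register file and memory are plain Python
containers; XLEN=64 so unit stride advances 8 bytes per element):

```python
XLEN = 64

def op_ld(ireg, mem, rd, rs, rd_isvec, rs_isvec, imm, VL):
    """LD: vectorised rs selects indirection, scalar rs selects unit stride."""
    for el in range(VL):
        if rs_isvec:
            srcbase = ireg[rs + el]              # each element is an address
        else:
            srcbase = ireg[rs] + el * XLEN // 8  # one base, unit stride
        ireg[rd + el] = mem[srcbase + imm]
        if not rd_isvec:
            break                                # scalar dest: one element
    return ireg

# unit stride: base address in x2, three contiguous 64-bit loads into x4..x6
mem = {0x100: 11, 0x108: 22, 0x110: 33, 0x200: 77}
regs = [0] * 8
regs[2] = 0x100
op_ld(regs, mem, rd=4, rs=2, rd_isvec=True, rs_isvec=False, imm=0, VL=3)
```

Marking rs as vectorised instead turns the same call into a gather, with
each of x2, x3, ... supplying its own address.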

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that, when VL is set greater than 8 with a bitwidth of 8 for example,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt is ever made to access beyond the
"real" register bytes.
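
The "overspill" behaviour of the union above can be modelled in Python
with one flat bytearray standing in for the whole register file (an
illustrative sketch, assuming RV64 and a 32-entry regfile for brevity):

```python
# one flat byte store: 32 x 64-bit registers, byte-addressable
regfile = bytearray(32 * 8)

def set_elem(reg, elwidth_bytes, offset, val):
    """Write element `offset` of width `elwidth_bytes`, starting at `reg`."""
    base = reg * 8 + offset * elwidth_bytes
    if base + elwidth_bytes > len(regfile):
        # implementor's responsibility: trap past the real regfile end
        raise IndexError("access beyond end of register file")
    regfile[base:base + elwidth_bytes] = val.to_bytes(elwidth_bytes, 'little')

def get_elem(reg, elwidth_bytes, offset):
    base = reg * 8 + offset * elwidth_bytes
    return int.from_bytes(regfile[base:base + elwidth_bytes], 'little')

# elwidth=8: the 9th byte-element "of x5" lands in the first byte of x6,
# exactly the spill-over the union-with-zero-length-arrays describes
set_elem(5, 1, 8, 0xAB)
```

Because the store is flat, no special case is needed for the spill: the
arithmetic `reg * 8 + offset * elwidth_bytes` simply walks into the next
register.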

Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non" polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            ...
            ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
        ...
        ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth"
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
  sign-extension / zero-extension or whatever is specified in the standard
  RV specification, **change** that to sign-extending from the respective
  individual source operand's bitwidth from the CSR table out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sh etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, id, result)
            if (!int_csr[rd].isvec) break
        if (int_csr[rd].isvec)  { id += 1; }
        if (int_csr[rs1].isvec) { irs1 += 1; }
        if (int_csr[rs2].isvec) { irs2 += 1; }

Whilst specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:

* the source operands are extended out to the maximum bitwidth of all
  source operands
* the operation takes place at that maximum source bitwidth (the
  destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
  stored in the destination. i.e. truncation (if required) to the
  destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
  (standard, scalar) register file entry is taken up, i.e. the
  element is either sign-extended or zero-extended to cover the
  full register bitwidth (XLEN) if it is not already XLEN bits long.

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in less bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.
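
The four rules above can be condensed into a few lines. This is a
worked sketch of the unsigned case only (the regfile machinery is
stripped away so that the width rules stand out; masks model the
zero-extension and truncation):

```python
def poly_add(src1, src1_wid, src2, src2_wid, dest_wid):
    """Mixed-elwidth unsigned add per the rules above."""
    opwid = max(src1_wid, src2_wid)          # operate at max *source* width
    a = src1 & ((1 << opwid) - 1)            # zero-extend sources to opwid
    b = src2 & ((1 << opwid) - 1)
    result = (a + b) & ((1 << opwid) - 1)    # op happens at opwid
    return result & ((1 << dest_wid) - 1)    # truncate/extend AFTER the op

# 8-bit + 16-bit sources, 8-bit destination:
# the op runs at 16 bits (0xFF + 0x0180 = 0x027F), then truncates to 0x7F
```

The key property this demonstrates: truncation to the destination width
happens only at the final store, never before the operation.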

## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NAN (or appropriate) is stored if the result
is beyond the range of the destination, and, just as
with scalar operations, the floating-point flag is raised (FCSR), and it
is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.
## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit - max(8,16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that:

* The operation bitwidth is determined by the maximum bitwidth
  of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
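
The same two rules, as a runnable sketch (shift-left, unsigned; the
final sign-extension out to the 64-bit destination of the example above
is omitted so that the masking of RS2 is the only thing on show):

```python
def poly_sll(rs1, rs1_wid, rs2, rs2_wid):
    """Shift-left with RS2 truncated to the max *source* bitwidth."""
    opwid = max(rs1_wid, rs2_wid)
    shamt = rs2 & (opwid - 1)                  # truncate RS2 to 0..opwid-1
    return (rs1 << shamt) & ((1 << opwid) - 1)

# RS1 16-bit, RS2 8-bit: opwid is 16, so a shift amount of 17 wraps to 1
```

Note the mask is `opwid - 1`, not `xlen - 1` and not the destination
width: using either of those gives wrong answers for narrow elements.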

## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half (MSBs) of a multiply that
does not fit within the range of the source operands, such that
smaller width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
  be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
  be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
  be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
  be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
  be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.
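
The shift-down rule can be checked with a few lines of Python. This
models the unsigned variant (MULHU) only, as a sketch; the signed
variants differ only in the pre-op extension of the sources:

```python
def poly_mulhu(rs1, rs1_wid, rs2, rs2_wid, dest_wid):
    """MULHU with variable source elwidths: full product shifted down
    by the maximum of the two source bitwidths, then truncated."""
    opwid = max(rs1_wid, rs2_wid)
    product = rs1 * rs2                        # e.g. 8x8 -> up to 16 bits
    return (product >> opwid) & ((1 << dest_wid) - 1)

# 8-bit x 8-bit: 0xFF * 0xFF = 0xFE01, top half is 0xFE (shift down by 8)
# 16-bit x 8-bit: shift down by 16, top 8 bits of the 24-bit product zero
```

This matches the bullet list above: the shift amount tracks the maximum
source width, never the destination width.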

## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, and i is the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.
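
The chunking arithmetic above can be isolated into one small function
(a sketch that mirrors the pseudocode's element-index addressing, with
the clamp to a minimum of one element per block, an assumption discussed
further below, already applied):

```python
def ld_element_addr(ireg, rs, imm, i, opwidth, elwidth_bits):
    """Address of source element i for a LOAD of opwidth bits, with the
    addressed register's elements being elwidth_bits wide."""
    elsperblock = max(1, opwidth // elwidth_bits)
    srcbase = ireg[rs + i // elsperblock]   # which address register
    offs = i % elsperblock                  # element offset within block
    return srcbase + imm + offs

# LW (32-bit) with 16-bit elements: 2 elements per block, so element 2
# moves on to the next contiguous address register
regs = {5: 0x1000, 6: 0x2000}
```

With `opwidth=32` and `elwidth_bits=16`, elements 0 and 1 come from the
address in x5 and element 2 from the address in x6.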

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = int_csr[rd].elwidth # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, bw(destwid), j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Note:

* when comparing against for example the twin-predicated c.mv
  pseudo-code, the pattern of independent incrementing of rd and rs
  is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
  taken into account (TODO).
* that due to the use of a twin-predication algorithm, LOAD/STORE also
  take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
  VSCATTER characteristics.
* that due to the use of the same set\_polymorphed\_reg pseudocode,
  a destination that is not vectorised (marked as scalar) will
  result in the element being fully sign-extended or zero-extended
  out to the full register file bitwidth (XLEN). When the source
  is also marked as scalar, this is how the compatibility with
  standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two stage process works (three if zero/sign-extension is included).

#### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table: due to the
element width being 16 and the operation being LD (64), the 64 bits
loaded from memory are subdivided into groups of **four** elements.
And, with VL being 7 (deliberately to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:

[[!table  data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 was also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the **registers**
x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right byte
order, the registers are shown in right-to-left (MSB) order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and emphasises that registers are not
really actually "memory" as such.
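
The worked example above can be checked mechanically. The following
sketch replays it in Python (addresses and element data values are made
up; a flat bytearray models the register file, exactly as in the
"overspill" discussion earlier):

```python
# build the memory table: four 16-bit elements per 64-bit block,
# one block at the address held in x5 and one at the address in x6
mem = {}
for n, addr in enumerate([0x100, 0x200]):       # @x5, @x6
    for k in range(4):
        mem[addr + 2 * k] = n * 4 + k           # element index as its data

regfile = bytearray(32 * 8)                     # 32 x 64-bit registers
regfile[11 * 8 + 4:12 * 8] = b'\xEE' * 4        # pre-fill top half of x11
                                                # to prove it stays UNMODIFIED
VL = 7
srcaddrs = {0: 0x100, 1: 0x200}                 # contents of x5, x6
for i in range(VL):
    elem = mem[srcaddrs[i // 4] + 2 * (i % 4)]  # 16-bit source element
    dest = 8 * 8 + 4 * i                        # 32-bit dest elements at x8
    regfile[dest:dest + 4] = elem.to_bytes(4, 'little')  # zero-extended
```

After the loop, element 6 sits in the LSBs of x11 and the top half of
x11 still holds its pre-filled bytes, matching the final table.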

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allows three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more and allow 64, 128 and some other options besides. The answer here
is that it gets too complex, no RV128 implementation yet exists, and
RV64's default is 64 bit, so the 4 major element widths are covered anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the Register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
then sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will erase the **full** register entry
(64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth set to 16 on the destination
  register, operations are truncated to 16 bit and then sign or zero
  extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or other value where
  groups of elwidth sized elements do not fill an entire XLEN register),
  the "top" bits of the destination register do *NOT* get modified, zero'd
  or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64 bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth based operations would
activate the required subset of those byte-level write lines.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
  concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.
2059
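As a cross-check of the example above, here is a small Python reference
model (illustrative only: the function name and calling convention are
assumptions, not part of the specification) of the elwidth=8, VL=3 add,
showing bytes 3 through 7 of rd surviving untouched:

```python
def poly_add_bytes(rd_old: int, rs1: int, rs2: int, vl: int) -> int:
    """Add `vl` 8-bit elements of rs1/rs2, merging the results into
    rd_old and leaving all other byte lanes of rd_old untouched."""
    result = rd_old
    for i in range(vl):
        shift = 8 * i
        a = (rs1 >> shift) & 0xFF
        b = (rs2 >> shift) & 0xFF
        s = (a + b) & 0xFF          # 8-bit wraparound add
        result &= ~(0xFF << shift)  # clear only this byte lane
        result |= s << shift        # merge the element back in
    return result & (2**64 - 1)

rd = poly_add_bytes(0xFFFF_FFFF_FFFF_FFFF, 0x0101_0101, 0x0202_0202, 3)
# bytes 0..2 updated to 0x03; bytes 3..7 keep their original all-ones value
assert rd == 0xFFFF_FFFF_FF03_0303
```
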
Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
  that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111
  i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
  was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
  (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
  and stores them in rd.

This requires a read on rd, however this is required anyway in order
to support non-zeroing mode.

## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32 bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s.
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64 bits from the 128-bit-wide register to memory:
  top 64 MSBs ignored.

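The scalar NaN-boxing rule can be sketched as follows (a minimal model,
assuming a 128-bit FLEN register held as a plain integer; the helper name
is illustrative, not from the spec):

```python
FLEN = 128

def nanbox(value: int, width: int) -> int:
    """Write a `width`-bit scalar FP bit-pattern into a FLEN-bit
    register, setting all bits above `width` to 1s (NaN-boxing)."""
    upper_ones = ((1 << (FLEN - width)) - 1) << width
    return upper_ones | (value & ((1 << width) - 1))

# FLD of a 64-bit double: top 64 MSBs all 1s
reg = nanbox(0x4000_0000_0000_0000, 64)
assert reg >> 64 == (1 << 64) - 1
# a 32-bit single is likewise boxed up to the full FLEN
assert nanbox(0x3F80_0000, 32) >> 32 == (1 << 96) - 1
```
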
Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
  floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the results
  in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64 bits from the 128-bit-wide register to memory.

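The elwidth=32 ADD.D behaviour described above can be modelled directly
(a sketch, assuming a 128-bit register held as an integer and IEEE754
single-precision arithmetic via `struct`; the function name is an
assumption for illustration):

```python
import struct

FLEN = 128

def f32_pair_add(reg_old: int, a_pair: int, b_pair: int) -> int:
    """elwidth=32 'ADD.D': two single-precision adds in bits 0..63,
    with bits 64..127 of the destination preserved untouched."""
    def f32(b):                           # bit-pattern -> float
        return struct.unpack('<f', struct.pack('<I', b))[0]
    def bits(f):                          # float -> bit-pattern
        return struct.unpack('<I', struct.pack('<f', f))[0]
    out = reg_old & ~((1 << 64) - 1)      # keep top 64 MSBs as-is
    for i in (0, 1):
        a = f32((a_pair >> 32 * i) & 0xFFFFFFFF)
        b = f32((b_pair >> 32 * i) & 0xFFFFFFFF)
        out |= bits(a + b) << 32 * i
    return out

# 1.0+1.0 and 2.0+2.0 -> 2.0 (0x40000000) and 4.0 (0x40800000);
# the top 64 bits of the destination stay all-ones
r = f32_pair_add((1 << FLEN) - 1, 0x40000000_3F800000, 0x40000000_3F800000)
assert r == (((1 << 64) - 1) << 64) | 0x40800000_40000000
```
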
Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately, if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q, when elwidth is
set to non-default values, are effectively all the same: they all still
perform multiple ADD operations, just at different widths. A future
extension to SimpleV may actually allow ADD.S to access the upper bits
of the register, effectively breaking down a 128-bit register into a
bank of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in place to know the
difference, such that when VL=8 and an ADD.D instruction is issued, it
completes in 2 cycles (or more) rather than one, where if an ADD.Q had
been issued instead on such simpler microarchitectures it would complete
in one.

## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.

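The polymorphic add rule above can be written out as a short reference
model (a sketch; the function name and argument order are assumptions,
with widths given in bits):

```python
def poly_add(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    """Polymorphic add: sources zero-extend to max(rs1, rs2) bits,
    result is truncated (or zero-extended, a no-op) to rd bits."""
    opw = max(rs1_w, rs2_w)
    a = rs1_val & ((1 << rs1_w) - 1)      # zero-extension to opw
    b = rs2_val & ((1 << rs2_w) - 1)
    result = (a + b) & ((1 << opw) - 1)   # add @ max(rs1, rs2) bits
    return result & ((1 << rd_w) - 1)     # truncate to rd width

# 8-bit + 16-bit sources, 32-bit destination:
assert poly_add(0xFF, 8, 0xFF00, 16, 32) == 0xFFFF
```
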
### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extension will occur. It is
only where the bitwidths of rs1 and rs2 differ that the lesser-width
operand will be sign-extended.

Effectively, however, both rs1 and rs2 are being sign-extended (or
truncated), where for add they are both zero-extended. This holds true
for all arithmetic operations ending with "W".

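The sign-extending counterpart can be modelled the same way (a sketch;
`sign_extend` is an illustrative helper, not a spec-defined function):

```python
def sign_extend(value, frm, to):
    """Sign-extend a `frm`-bit value to `to` bits."""
    value &= (1 << frm) - 1
    if value & (1 << (frm - 1)):                      # sign bit set?
        value |= ((1 << to) - 1) ^ ((1 << frm) - 1)   # fill upper bits
    return value

def poly_addw(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    """Polymorphic addw: sources sign-extend (not zero-extend) to
    max(rs1, rs2) bits; result sign-extends to rd if rd is wider."""
    opw = max(rs1_w, rs2_w)
    a = sign_extend(rs1_val, rs1_w, opw)
    b = sign_extend(rs2_val, rs2_w, opw)
    result = (a + b) & ((1 << opw) - 1)
    if rd_w > opw:
        return sign_extend(result, opw, rd_w)
    return result & ((1 << rd_w) - 1)

# 8-bit -1 plus 16-bit 0: sign-extends to 16-bit -1, then to 32-bit -1
assert poly_addw(0xFF, 8, 0x0000, 16, 32) == 0xFFFF_FFFF
```
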
### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits: sign-extend to rd if rd > 32, otherwise truncate.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate

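And the immediate form (a sketch under the same assumptions as the models
above; whether rs1 itself extends when narrower than 12 bits is not stated
in the list, so this model assumes rs1 is at least 12 bits wide):

```python
def sign_extend(value, frm, to):
    """Sign-extend a `frm`-bit value to `to` bits."""
    value &= (1 << frm) - 1
    if value & (1 << (frm - 1)):
        value |= ((1 << to) - 1) ^ ((1 << frm) - 1)
    return value

def poly_addiw(rs1_val, rs1_w, imm12, rd_w):
    """Polymorphic addiw: the 12-bit immediate sign-extends to
    max(rs1, 12) bits; result sign-extends to rd if rd is wider."""
    opw = max(rs1_w, 12)
    a = rs1_val & ((1 << rs1_w) - 1)
    b = sign_extend(imm12, 12, opw)
    result = (a + b) & ((1 << opw) - 1)
    if rd_w > opw:
        return sign_extend(result, opw, rd_w)
    return result & ((1 << rd_w) - 1)

# 16-bit rs1 = 5 plus immediate -1 (0xFFF), 16-bit rd:
assert poly_addiw(5, 16, 0xFFF, 16) == 4
```
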
# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, to be able to save power by avoiding a register read on elements
that are passed through en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate, i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
       if not zeroing: # an optimisation
          while (!(predval & 1<<i) && i < VL)
             if (int_vec[rd ].isvector)  { id += 1; }
             if (int_vec[rs1].isvector)  { irs1 += 1; }
             if (int_vec[rs2].isvector)  { irs2 += 1; }
          if i == VL:
             return
       if (predval & 1<<i)
          src1 = ....
          src2 = ...
          result = src1 + src2 # actual add (or other op) here
          set_polymorphed_reg(rd, destwid, ird, result)
          if int_vec[rd].ffirst and result == 0:
             VL = i # result was zero, end loop early, return VL
             return
          if (!int_vec[rd].isvector) return
       else if zeroing:
          result = 0
          set_polymorphed_reg(rd, destwid, ird, result)
       if (int_vec[rd ].isvector)  { id += 1; }
       else if (predval & 1<<i) return
       if (int_vec[rs1].isvector)  { irs1 += 1; }
       if (int_vec[rs2].isvector)  { irs2 += 1; }
       if (rd == VL or rs1 == VL or rs2 == VL): return

The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.

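The essential skip-versus-zero distinction in the loop above can be
condensed into a few lines (a minimal sketch, assuming elements already
unpacked into Python lists and a simple all-vector case; names are
illustrative, not from the spec):

```python
def pred_add(vl, pred, zeroing, rs1, rs2, rd_old):
    """Single-predicated elementwise add: masked-out elements are
    either skipped (rd keeps its old value) or zeroed."""
    rd = list(rd_old)
    for i in range(vl):
        if pred & (1 << i):
            rd[i] = (rs1[i] + rs2[i]) & 0xFFFFFFFF
        elif zeroing:
            rd[i] = 0       # zeroing: masked-out element set to zero
        # else: element skipped entirely, rd[i] keeps its old value
    return rd

old = [7, 7, 7, 7]
assert pred_add(4, 0b0101, False, [1, 2, 3, 4], [10, 20, 30, 40], old) \
       == [11, 7, 33, 7]
assert pred_add(4, 0b0101, True,  [1, 2, 3, 4], [10, 20, 30, 40], old) \
       == [11, 0, 33, 0]
```
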
## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that the source is
independently zero-predicated from the destination. This means that
the source may be zero-predicated *or* the destination zero-predicated
*or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, if a source predicate bit is *clear* it indicates
that a zero data element is passed through the operation (the exception
being: if the source data element is to be treated as an address - a
LOAD - then the data returned *from* the LOAD is zero, rather than
looking up an *address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: only where both predicate
bits are set is actual data passed through; where either the source
predicate OR the destination predicate is 0, a zero element will
ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.

Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
       rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
       rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
       ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
       pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
       for (int i = 0, int j = 0; i < VL && j < VL):
          if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
          if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
          if ((pd & 1<<j))
             if ((ps & 1<<i))
                sourcedata = ireg[rs+i];
             else
                sourcedata = 0
             ireg[rd+j] <= sourcedata
          else if (zerodst)
             ireg[rd+j] <= 0
          if (int_csr[rs].isvec)
             i++;
          if (int_csr[rd].isvec)
             j++;
          else
             if ((pd & 1<<j))
                break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.

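The both-zeroing case of the pseudo-code above reduces to a simple model
(a sketch under the assumption that both registers are vectors and both
zerosrc and zerodst are set, so i and j advance in lockstep; the function
name is illustrative):

```python
def twin_zpred_mv(vl, ps, pd, src):
    """Twin zero-predicated MV: data moves only where BOTH predicate
    bits are set; everywhere else a zero lands in the destination."""
    dst = []
    for i in range(vl):
        if (pd >> i) & 1:
            dst.append(src[i] if (ps >> i) & 1 else 0)  # zerosrc case
        else:
            dst.append(0)                               # zerodst case
    return dst

# only element 0 has both ps and pd set, so only src[0] survives:
assert twin_zpred_mv(4, 0b0011, 0b0101, [9, 8, 7, 6]) == [9, 0, 0, 0]
```
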
# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# Vector Block Format <a name="vliw-format"></a>

See ancillary resource: [[vblock_format]]

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to only implement SVprefix and not the VBLOCK instruction
format option, and vice-versa. UNIX Platforms **MUST** raise an illegal
instruction exception on seeing an unsupported VBLOCK or SVprefix opcode,
so that traps may emulate the format.

It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX Platforms
**MUST** raise an illegal instruction exception on implementations that
do not support VL or SUBVL.

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However, going
below the mandatory limits set in the RV standard will result in
non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option", it is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data between the double-width floating-point
registers and the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), recommend:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.

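The integer-register recommendation above can be sketched as follows
(illustrative only: 3 16-bit elements packed into 64 bits, with the
element closest to the MSB sign-extended into the unused top 16 bits
on write; the function name is an assumption):

```python
def pack3x16(e0, e1, e2):
    """Pack three 16-bit elements into a 64-bit register, sign-extending
    the highest used element (e2) into the unused top 16 bits."""
    top = 0xFFFF if (e2 & 0x8000) else 0x0000   # sign of element 2
    return (top << 48) | ((e2 & 0xFFFF) << 32) \
         | ((e1 & 0xFFFF) << 16) | (e0 & 0xFFFF)

# negative e2: top 16 bits become all-ones
assert pack3x16(1, 2, 0x8000) == 0xFFFF_8000_0002_0001
# positive e2: top 16 bits stay zero
assert pack3x16(1, 2, 3) == 0x0000_0003_0002_0001
```
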
---

info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

---

> And what about instructions like JALR?

answer: they're not vectorised, so not a problem

---

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove RegCam and PredCam CSRs, just use SVprefix and
VBLOCK format

---

Could the 8-bit Register VBLOCK format use regnum<<1 instead, only
accessing regs 0 to 64?

---

Expand the range of SUBVL and its associated svsrcoffs and svdestoffs by
adding a 2nd STATE CSR (or extending STATE to 64 bits). Future version?

---

TODO: evaluate strncpy and strlen
<https://groups.google.com/forum/m/#!msg/comp.arch/bGBeaNjAKvc/_vbqyxTUAQAJ>

RVV version: <a name="strncpy"></a>

    strncpy:
        mv a3, a0               # Copy dst
    loop:
        setvli x0, a2, vint8    # Vectors of bytes.
        vlbff.v v1, (a1)        # Get src bytes
        vseq.vi v0, v1, 0       # Flag zero bytes
        vmfirst a4, v0          # Zero found?
        vmsif.v v0, v0          # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t    # Write out bytes
        bgez a4, exit           # Done
        csrr t1, vl             # Get number of bytes fetched
        add a1, a1, t1          # Bump src pointer
        sub a2, a2, t1          # Decrement count.
        add a3, a3, t1          # Bump dst pointer
        bnez a2, loop           # Anymore?
    exit:
        ret

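As a cross-check, what both the RVV and SV sequences aim to compute is a
simple early-exit copy: up to n bytes from src, stopping after (and
including) the first zero byte. A plain Python reference model (note:
unlike ISO C strncpy, neither the vector loops nor this model zero-pad
the remainder of the destination):

```python
def strncpy_model(src: bytes, n: int) -> bytes:
    """Copy up to n bytes, stopping after the first zero byte."""
    out = bytearray()
    for i in range(n):
        out.append(src[i])
        if src[i] == 0:
            break               # zero byte copied, then stop
    return bytes(out)

assert strncpy_model(b"hi\x00junk", 7) == b"hi\x00"
assert strncpy_model(b"abcdef", 3) == b"abc"
```
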
SV version (WIP):

    strncpy:
        mv a3, a0
        SETMVLI 8                   # set max vector to 8
        RegCSR[a3] = 8bit, a3, scalar
        RegCSR[a1] = 8bit, a1, scalar
        RegCSR[t0] = 8bit, t0, vector
        PredTb[t0] = ffirst, x0, inv
    loop:
        SETVLI a2, t4               # t4 and VL now 1..8
        ldb t0, (a1)                # t0 fail first mode
        bne t0, x0, allnonzero      # still ff
        # VL points to last nonzero
        GETVL t4                    # from bne tests
        addi t4, t4, 1              # include zero
        SETVL t4                    # set exactly to t4
        stb t0, (a3)                # store incl zero
        ret                         # end subroutine
    allnonzero:
        stb t0, (a3)                # VL legal range
        GETVL t4                    # from bne tests
        add a1, a1, t4              # Bump src pointer
        sub a2, a2, t4              # Decrement count.
        add a3, a3, t4              # Bump dst pointer
        bnez a2, loop               # Anymore?
    exit:
        ret

Notes:

* Setting MVL to 8 is just an example. If enough registers are spare it
  may be set to XLEN, which will require a bank of 8 scalar registers for
  a1, a3 and t0.
* Obviously if that is done, t0 is not separated by 8 full registers, and
  would overwrite t1 thru t7. x80 would work well, as an example, instead.
* With the exception of the GETVL (a pseudo-code alias for csrr), every
  single instruction above may use RVC.
* RVC C.BNEZ can be used because rs1' may be extended to the full 128
  registers through redirection
* RVC C.LW and C.SW may be used because the W format may be overridden by
  the 8-bit format. All of t0, a3 and a1 are overridden to make that work.
* With the exception of the GETVL, all Vector Context may be done in
  VBLOCK form.
* Setting predication to x0 (zero) and invert on t0 is a trick to enable
  just ffirst on t0
* ldb and bne are both using t0, both in ffirst mode
* ldb will end on illegal mem, reduce VL, but copied all sorts of stuff
  into t0
* bne t0 x0 tests up to the NEW VL for nonzero, vector t0 against scalar x0
* However, as t0 is in ffirst mode, the first fail will ALSO stop the
  compares, and reduce VL as well
* The branch only goes to allnonzero if all tests succeed
* If it did not, we can safely increment VL by 1 (using t4) to include
  the zero.
* SETVL sets *exactly* the requested amount into VL.
* The SETVL just after the allnonzero label is needed in case the ldb
  ffirst activates but the bne to allnonzero does not.
* This would cause the stb to copy up to the end of the legal memory
* Of course, on the next loop the ldb would throw a trap, as a1 now
  points to the first illegal mem location.

RVV version:

        mv a3, a0               # Save start
    loop:
        setvli a1, x0, vint8    # byte vec, x0 (Zero reg) => use max hardware len
        vldbff.v v1, (a3)       # Get bytes
        csrr a1, vl             # Get bytes actually read e.g. if fault
        vseq.vi v0, v1, 0       # Set v0[i] where v1[i] = 0
        add a3, a3, a1          # Bump pointer
        vmfirst a2, v0          # Find first set bit in mask, returns -1 if none
        bltz a2, loop           # Not found?
        add a0, a0, a1          # Sum start + bump
        add a3, a3, a2          # Add index of zero byte
        sub a0, a3, a0          # Subtract start address+bump
        ret