1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFT v0.6
* Last edited: 21 Jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
Simple-V is a uniform parallelism API for RISC-V hardware. It has several
unplanned side-effects, including code-size reduction, expansion of
the HINT space, and more. The reason for
creating it is to provide a manageable way to turn a pre-existing design
into a parallel one, in a step-by-step incremental fashion, without adding
any new opcodes, thus allowing the implementor to focus on adding hardware
only where it is needed and necessary.
The primary target is mobile-class 3D GPUs and VPUs, with secondary
goals being to reduce executable size (by extending the effectiveness of
RV opcodes, RVC in particular) and to reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited for implementation in RV32E and also in situations
where the implementor wishes to focus on certain aspects of SV without
investing unnecessary time and resources in silicon, whilst also conforming
strictly with the API. A good area to punt to software would be the
polymorphic element width capability, for example.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths overridden on any src or dest register.
83 * A "Vector Length" CSR is set, indicating the span of any future
84 "parallel" operations.
85 * If any operation (a **scalar** standard RV opcode) uses a register
86 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
87 is activated, of length VL, that effectively issues **multiple**
88 identical instructions using contiguous sequentially-incrementing
89 register numbers, based on the "tags".
90 * **Whether they be executed sequentially or in parallel or a
91 mixture of both or punted to software-emulation in a trap handler
92 is entirely up to the implementor**.
93
94 In this way an entire scalar algorithm may be vectorised with
95 the minimum of modification to the hardware and to compiler toolchains.
96
97 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
98 on hidden context that augments *scalar* RISCV instructions.
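
The principle described above can be sketched in software as follows. This is a simplified, non-normative model: it activates the loop when *any* operand is tagged and increments all operand register numbers, whereas real SV increments only the registers actually tagged (scalar operands stay fixed), and the tag set and register values here are purely illustrative.

```python
# Minimal sketch of the SV "macro-unrolling loop" principle: a single
# scalar ADD whose operands include a tagged register is re-issued VL
# times with contiguously incrementing register numbers.

VL = 4
regs = list(range(32))          # toy integer register file
tagged = {10, 16}               # registers marked "vectorised" (assumed tags)

def issue_add(rd, rs1, rs2):
    if {rd, rs1, rs2} & tagged:           # any tagged operand: activate loop
        for i in range(VL):               # hardware loop of length VL
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]
    else:
        regs[rd] = regs[rs1] + regs[rs2]  # plain scalar behaviour

issue_add(10, 16, 16)   # one "instruction", VL element operations
```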
99
100 # CSRs <a name="csrs"></a>
101
102 * An optional "reshaping" CSR key-value table which remaps from a 1D
103 linear shape to 2D or 3D, including full transposition.
104
105 There are five additional CSRs, available in any privilege level:
106
107 * MVL (the Maximum Vector Length)
108 * VL (which has different characteristics from standard CSRs)
109 * SUBVL (effectively a kind of SIMD)
110 * STATE (containing copies of MVL, VL and SUBVL as well as context information)
111 * PCVLIW (the current operation being executed within a VLIW Group)
112
113 For User Mode there are the following CSRs:
114
115 * uePCVLIW (a copy of the sub-execution Program Counter, that is relative
116 to the start of the current VLIW Group, set on a trap).
117 * ueSTATE (useful for saving and restoring during context switch,
118 and for providing fast transitions)
119
120 There are also two additional CSRs for Supervisor-Mode:
121
122 * sePCVLIW
123 * seSTATE
124
125 And likewise for M-Mode:
126
127 * mePCVLIW
128 * meSTATE
129
130 The u/m/s CSRs are treated and handled exactly like their (x)epc equivalents. On entry to a privilege level, the contents of its (x)eSTATE and (x)ePCVLIW CSRs are copied into STATE and PCVLIW respectively, and on exit from a priv level the STATE and PCVLIW CSRs are copied to the exited priv level's corresponding CSRs.
131
132 Thus for example, a User Mode trap will end up swapping STATE and ueSTATE (on both entry and exit), allowing User Mode traps to have their own Vectorisation Context set up, separated from and unaffected by normal user applications.
133
134 Likewise, Supervisor Mode may perform context-switches, safe in the knowledge that its Vectorisation State is unaffected by User Mode.
135
136 For this to work, the (x)eSTATE CSR must be saved onto the stack by the trap, just like (x)epc, before modifying the trap atomicity flag (x)ie.
137
138 The access pattern for these groups of CSRs in each mode follows the
139 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
140
141 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
142 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
143 identical
144 to changing the S-Mode CSRs. Accessing and changing the U-Mode
145 CSRs is permitted.
146 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
147 is prohibited.
148
149 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
150 M-Mode MVL, the M-Mode STATE and so on that influences the processor
151 behaviour. Likewise for S-Mode, and likewise for U-Mode.
152
153 This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
154 up, for context-switching to take place, and, on return back to the higher
155 privileged mode, the CSRs of that mode will be exactly as they were.
156 Thus, it becomes possible for example to set up CSRs suited best to aiding
157 and assisting low-latency fast context-switching *once and only once*
158 (for example at boot time), without the need for re-initialising the
159 CSRs needed to do so.
160
161 Another interesting side effect of separate S Mode CSRs is that Vectorised
162 saving of the entire register file to the stack is a single instruction
163 (accidental provision of LOAD-MULTI semantics). If the SVPrefix P64-LD-type format is used, LOAD-MULTI may even be done with a single standalone 64 bit opcode (P64 may set up both VL and MVL from an immediate field). It can even be predicated,
164 which opens up some very interesting possibilities.
165
The (x)ePCVLIW CSRs must be treated exactly like their corresponding (x)epc
equivalents. See the VLIW section for details.
168
169 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
170
171 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
172 is variable length and may be dynamically set. MVL is
173 however limited to the regfile bitwidth XLEN (1-32 for RV32,
174 1-64 for RV64 and so on).
175
176 The reason for setting this limit is so that predication registers, when
177 marked as such, may fit into a single register as opposed to fanning out
178 over several registers. This keeps the hardware implementation a little simpler.
179
180 The other important factor to note is that the actual MVL is internally
181 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
182 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
183 return an exception. This is expressed more clearly in the "pseudocode"
184 section, where there are subtle differences between CSRRW and CSRRWI.
185
186 ## Vector Length (VL) <a name="vl" />
187
188 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
189 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
190
191 VL = rd = MIN(vlen, MVL)
192
193 where 1 <= MVL <= XLEN
194
However, just like MVL, it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.
197
198 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
199 to switch the entire bank of registers using a single instruction (see
200 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
201 is down to the fact that predication bits fit into a single register of
202 length XLEN bits.
203
204 The second and most important change is that, within the limits set by
205 MVL, the value passed in **must** be set in VL (and in the
206 destination register).
207
208 This has implication for the microarchitecture, as VL is required to be
209 set (limits from MVL notwithstanding) to the actual value
210 requested. RVV has the option to set VL to an arbitrary value that suits
211 the conditions and the micro-architecture: SV does *not* permit this.
212
213 The reason is so that if SV is to be used for a context-switch or as a
214 substitute for LOAD/STORE-Multiple, the operation can be done with only
215 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
216 single LD/ST operation). If VL does *not* get set to the register file
217 length when VSETVL is called, then a software-loop would be needed.
218 To avoid this need, VL *must* be set to exactly what is requested
219 (limits notwithstanding).
220
221 Therefore, in turn, unlike RVV, implementors *must* provide
222 pseudo-parallelism (using sequential loops in hardware) if actual
223 hardware-parallelism in the ALUs is not deployed. A hybrid is also
224 permitted (as used in Broadcom's VideoCore-IV) however this must be
225 *entirely* transparent to the ISA.
226
227 The third change is that VSETVL is implemented as a CSR, where the
228 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
229 the *new* value in the destination register, **not** the old value.
230 Where context-load/save is to be implemented in the usual fashion
231 by using a single CSRRW instruction to obtain the old value, the
232 *secondary* CSR must be used (STATE). This CSR by contrast behaves
233 exactly as standard CSRs, and contains more than just VL.
234
235 One interesting side-effect of using CSRRWI to set VL is that this
236 may be done with a single instruction, useful particularly for a
237 context-load/save. There are however limitations: CSRWI's immediate
238 is limited to 0-31 (representing VL=1-32).
239
Note that when VL is set to 1, vector operations cease (though subvector
operations do not: disabling those requires setting SUBVL=1): the
hardware loop is reduced to a single element, i.e. scalar operation.
This is in effect the default, normal operating mode. However it is
important to appreciate that this does **not** result in the Register
table or SUBVL being disabled. Only when the Register table is empty
(P48/64 prefix fields notwithstanding) would SV have no effect.
249
250 ## SUBVL - Sub Vector Length
251
This is a "group by quantity" that effectively asks each iteration of the hardware loop to load SUBVL elements of width elwidth at a time. Effectively, SUBVL is like a SIMD multiplier: instead of just one operation being issued, SUBVL operations are issued.
253
Another way to view SUBVL is that each element in the VL-length vector is
now SUBVL times elwidth bits in length, and now comprises SUBVL discrete
sub-operations: in effect, an inner SUBVL for-loop within a VL for-loop,
with the sub-element index incremented in the innermost loop. This is
best illustrated in the (simplified) pseudocode example, later.
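
The nested loop structure can be sketched as follows (illustrative only; predication, element-width overrides and register tagging are omitted, and the register numbers are hypothetical):

```python
# Illustrative sketch of the VL (outer) and SUBVL (inner) loop nesting.
# Each VL iteration processes one "group" of SUBVL sub-elements, so a
# vector add touches VL*SUBVL contiguous registers in total.

VL, SUBVL = 4, 3                 # e.g. four 3D (X, Y, Z) coordinates
regs = [0] * 64
for i in range(24):              # toy source data in regs 16..39
    regs[16 + i] = i

rd, rs1, rs2 = 40, 16, 16        # hypothetical tagged register numbers
for i in range(VL):              # outer hardware loop (VL)
    for j in range(SUBVL):       # inner sub-vector loop (SUBVL)
        k = i * SUBVL + j
        regs[rd + k] = regs[rs1 + k] + regs[rs2 + k]
```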
262
The primary use case for SUBVL is 3D FP Vectors. A Vector of 3D coordinates X,Y,Z, for example, may be loaded, multiplied and then stored, per VL element iteration, rather than having to set VL to be three times larger.
264
265 Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit values 0b00 thru 0b11 to represent them).
266
267 Setting this CSR to 0 must raise an exception. Setting it to a value
268 greater than 4 likewise.
269
270 The main effect of SUBVL is that predication bits are applied per **group**,
271 rather than by individual element.
272
273 This saves a not insignificant number of instructions when handling 3D
274 vectors, as otherwise a much longer predicate mask would have to be set
275 up with regularly-repeated bit patterns.
276
277 See SUBVL Pseudocode illustration for details.
278
279 ## STATE
280
281 This is a standard CSR that contains sufficient information for a
282 full context save/restore. It contains (and permits setting of):
283
284 * MVL
285 * VL
286 * the destination element offset of the current parallel instruction
287 being executed
288 * and, for twin-predication, the source element offset as well.
289 * SUBVL
290 * the subvector destination element offset of the current parallel instruction
291 being executed
292 * and, for twin-predication, the subvector source element offset as well.
293
294 Interestingly STATE may hypothetically also be modified to make the
295 immediately-following instruction to skip a certain number of elements,
296 by playing with destoffs and srcoffs
297 (and the subvector offsets as well)
298
299 Setting destoffs and srcoffs is realistically intended for saving state
300 so that exceptions (page faults in particular) may be serviced and the
301 hardware-loop that was being executed at the time of the trap, from
302 user-mode (or Supervisor-mode), may be returned to and continued from exactly
303 where it left off. The reason why this works is because setting
304 User-Mode STATE will not change (not be used) in M-Mode or S-Mode
305 (and is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE and seSTATE).
306
307 The format of the STATE CSR is as follows:
308
| (30..29) | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5..0) |
310 | ------- | -------- | -------- | -------- | -------- | ------- | ------- |
311 | dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
312
313 When setting this CSR, the following characteristics will be enforced:
314
315 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
316 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only four legal values (1-4), so no truncation is needed
318 * **srcoffs** will be truncated to be within the range 0 to VL-1
319 * **destoffs** will be truncated to be within the range 0 to VL-1
320 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
321 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
322
323 NOTE: if the following instruction is not a twin predicated instruction, and destoffs or dsvoffs has been set to non-zero, subsequent execution behaviour is undefined. **USE WITH CARE**.
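
For concreteness, the STATE packing implied by the table and rules above can be sketched as follows (illustrative; note that MVL, VL and SUBVL are stored offset by one, and that the table allocates subvl bits 26..24 even though only the values 0b00-0b11 are legal):

```python
# Sketch of packing/unpacking the STATE CSR per the bitfield table above.
# MVL, VL and SUBVL are stored minus one; the offsets are stored as-is.

def pack_state(mvl, vl, subvl, srcoffs, destoffs, ssvoffs, dsvoffs):
    return ((mvl - 1)              # (5..0)   maxvl, stored minus one
            | (vl - 1) << 6        # (11..6)  vl, stored minus one
            | srcoffs << 12        # (17..12)
            | destoffs << 18       # (23..18)
            | (subvl - 1) << 24    # (26..24) subvl, stored minus one
            | ssvoffs << 27        # (28..27)
            | dsvoffs << 29)       # (30..29)

def unpack_state(v):
    return ((v & 0x3f) + 1,        # MVL
            ((v >> 6) & 0x3f) + 1, # VL
            ((v >> 24) & 0x7) + 1, # SUBVL (only 0b00..0b11 legal)
            (v >> 12) & 0x3f,      # srcoffs
            (v >> 18) & 0x3f,      # destoffs
            (v >> 27) & 0x3,       # ssvoffs
            (v >> 29) & 0x3)       # dsvoffs
```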
324
325 ### Hardware rules for when to increment STATE offsets
326
327 The offsets inside STATE are like the indices in a loop, except in hardware. They are also partially (conceptually) similar to a "sub-execution Program Counter". As such, and to allow proper context switching and to define correct exception behaviour, the following rules must be observed:
328
329 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
330 * Each instruction that contains a "tagged" register shall start execution at the *current* value of srcoffs (and destoffs in the case of twin predication)
331 * Unpredicated bits (in nonzeroing mode) shall cause the element operation to skip, incrementing the srcoffs (or destoffs)
332 * On execution of an element operation, Exceptions shall **NOT** cause srcoffs or destoffs to increment.
333 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs = VL-1 after the last element is executed), both srcoffs and destoffs shall be reset to zero.
334
335 This latter is why srcoffs and destoffs may be stored as values from 0 to XLEN-1 in the STATE CSR, because as loop indices they refer to elements. srcoffs and destoffs never need to be set to VL: their maximum operating values are limited to 0 to VL-1.
336
The same corresponding rules apply to SUBVL, ssvoffs and dsvoffs.
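
As an illustrative software model of these rules (not normative: twin predication and the SUBVL offsets are omitted, and `TrapToHandler` stands in for a real exception such as a page fault):

```python
# Software model of the destoffs rules: execution resumes at the saved
# offset, the offset is NOT incremented when an element operation raises
# an exception, and it resets to zero once the final element completes.

class TrapToHandler(Exception):
    pass

def run_element_loop(state, VL, predicate, element_op):
    while state["destoffs"] < VL:
        i = state["destoffs"]
        if (predicate >> i) & 1:
            element_op(i)        # may raise: offset is then left as-is
        state["destoffs"] += 1   # element completed (or skipped): increment
    state["destoffs"] = 0        # full vector loop complete: reset

# usage: element 2 faults once; the trap leaves destoffs == 2 for a re-run
state = {"destoffs": 0}
done = []
faulted = [False]

def op(i):
    if i == 2 and not faulted[0]:
        faulted[0] = True
        raise TrapToHandler()    # simulate e.g. a page fault on element 2
    done.append(i)

try:
    run_element_loop(state, 4, 0b1111, op)
except TrapToHandler:
    pass                         # "service" the trap, then resume
run_element_loop(state, 4, 0b1111, op)
```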
338
339 ## MVL and VL Pseudocode
340
341 The pseudo-code for get and set of VL and MVL use the following internal
342 functions as follows:
343
    set_mvl_csr(value, rd):
        regs[rd] = STATE.MVL
        STATE.MVL = MIN(value, XLEN)

    get_mvl_csr(rd):
        regs[rd] = STATE.MVL
350
351 set_vl_csr(value, rd):
352 STATE.VL = MIN(value, STATE.MVL)
353 regs[rd] = STATE.VL # yes returning the new value NOT the old CSR
354 return STATE.VL
355
356 get_vl_csr(rd):
357 regs[rd] = STATE.VL
358 return STATE.VL
359
360 Note that where setting MVL behaves as a normal CSR (returns the old
361 value), unlike standard CSR behaviour, setting VL will return the **new**
362 value of VL **not** the old one.
363
364 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
365 maximise the effectiveness, an immediate of 0 is used to set VL=1,
366 an immediate of 1 is used to set VL=2 and so on:
367
368 CSRRWI_Set_MVL(value):
369 set_mvl_csr(value+1, x0)
370
371 CSRRWI_Set_VL(value):
372 set_vl_csr(value+1, x0)
373
374 However for CSRRW the following pseudocode is used for MVL and VL,
375 where setting the value to zero will cause an exception to be raised.
376 The reason is that if VL or MVL are set to zero, the STATE CSR is
377 not capable of storing that value.
378
379 CSRRW_Set_MVL(rs1, rd):
380 value = regs[rs1]
381 if value == 0 or value > XLEN:
382 raise Exception
383 set_mvl_csr(value, rd)
384
385 CSRRW_Set_VL(rs1, rd):
386 value = regs[rs1]
387 if value == 0 or value > XLEN:
388 raise Exception
389 set_vl_csr(value, rd)
390
391 In this way, when CSRRW is utilised with a loop variable, the value
392 that goes into VL (and into the destination register) may be used
393 in an instruction-minimal fashion:
394
395 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
396 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
397 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
398 j zerotest # in case loop counter a0 already 0
399 loop:
400 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
401 ld a3, a1 # load 4 registers a3-6 from x
402 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
403 ld a7, a2 # load 4 registers a7-10 from y
404 add a1, a1, t1 # increment pointer to x by vl*8
405 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
406 sub a0, a0, t0 # n -= vl (t0)
407 st a7, a2 # store 4 registers a7-10 to y
408 add a2, a2, t1 # increment pointer to y by vl*8
409 zerotest:
410 bnez a0, loop # repeat if n != 0
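
The same strip-mining pattern, modelled in software (illustrative only: the register file and vectorised ld/fmadd/st collapse into plain lists and an inner loop, and MVL=4 mirrors the `CSRRWI MVL, 3` above):

```python
# Software model of the strip-mined DAXPY loop above: each pass sets
# VL = min(MVL, remaining n) and processes VL elements of x and y.

MVL = 4

def daxpy(n, a, x, y):
    i = 0
    while n > 0:                 # bnez a0, loop
        vl = min(MVL, n)         # CSRRW VL, t0, a0: vl = min(mvl, n)
        for j in range(vl):      # vectorised ld / fmadd / st
            y[i + j] += a * x[i + j]
        i += vl                  # pointer increments by vl elements
        n -= vl                  # sub a0, a0, t0
    return y

result = daxpy(6, 2.0, [1.0] * 6, [1.0] * 6)
```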
411
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
415
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        set_mvl_csr(value[11:6]+1, x0)
        set_vl_csr(value[5:0]+1, x0)
        STATE.destoffs = value[23:18]
        STATE.srcoffs = value[17:12]
423
424 get_state_csr(rd):
425 regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
426 (STATE.destoffs)<<18
427 return regs[rd]
428
429 In both cases, whilst CSR read of VL and MVL return the exact values
430 of VL and MVL respectively, reading and writing the STATE CSR returns
431 those values **minus one**. This is absolutely critical to implement
432 if the STATE CSR is to be used for fast context-switching.
433
434 ## VL, MVL and SUBVL instruction aliases
435
436 This table contains pseudo-assembly instruction aliases. Note the subtraction of 1 from the CSRRWI pseudo variants, to compensate for the reduced range of the 5 bit immediate.
437
438 | alias | CSR |
439 | - | - |
440 | SETVL rd, rs | CSRRW VL, rd, rs |
441 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
442 | GETVL rd | CSRRW VL, rd, x0 |
443 | SETMVL rd, rs | CSRRW MVL, rd, rs |
| SETMVLi rd, #n | CSRRWI MVL, rd, #n-1 |
445 | GETMVL rd | CSRRW MVL, rd, x0 |
446
Note: CSRRC and other bit-setting variants may still be used; they are however not particularly useful (very obscure).
448
449 ## Register key-value (CAM) table <a name="regcsrtable" />
450
451 *NOTE: in prior versions of SV, this table used to be writable and
452 accessible via CSRs. It is now stored in the VLIW instruction format. Note that
453 this table does *not* get applied to the SVPrefix P48/64 format, only to scalar opcodes*
454
455 The purpose of the Register table is three-fold:
456
457 * To mark integer and floating-point registers as requiring "redirection"
458 if it is ever used as a source or destination in any given operation.
459 This involves a level of indirection through a 5-to-7-bit lookup table,
460 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
461 access up to **128** registers.
462 * To indicate whether, after redirection through the lookup table, the
463 register is a vector (or remains a scalar).
464 * To over-ride the implicit or explicit bitwidth that the operation would
465 normally give the register.
466
Note: clearly, if an RVC operation uses a 3-bit spec'd register (x8-x15) and the Register table contains entries that only refer to registers x1-x7 or x16-x31, such operations will *never* activate the VL hardware loop!

If however the (16 bit) Register table does contain such an entry (x8-x15, or x2 in the case of LWSP), that src or dest reg may be redirected anywhere in the *full* 128-register range. Thus, RVC becomes far more powerful and has many more opportunities to reduce code size than in Standard RV32/RV64 executables.
470
471 16 bit format:
472
473 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
474 | ------ | | - | - | - | ------ | ------- |
475 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
476 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
477 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
478 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
479
480 8 bit format:
481
482 | RegCAM | | 7 | (6..5) | (4..0) |
483 | ------ | | - | ------ | ------- |
484 | 0 | | i/f | vew0 | regnum |
485
486 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
487 to integer registers; 0 indicates that it is relevant to floating-point
488 registers.
489
490 The 8 bit format is used for a much more compact expression. "isvec"
491 is implicit and, similar to [[sv-prefix-proposal]], the target vector
492 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
493 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
494 optionally set "scalar" mode.
495
496 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
497 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
498 get the actual (7 bit) register number to use, there is not enough space
499 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
500
501 vew has the following meanings, indicating that the instruction's
502 operand size is "over-ridden" in a polymorphic fashion:
503
504 | vew | bitwidth |
505 | --- | ------------------- |
506 | 00 | default (XLEN/FLEN) |
507 | 01 | 8 bit |
508 | 10 | 16 bit |
509 | 11 | 32 bit |
510
511 As the above table is a CAM (key-value store) it may be appropriate
512 (faster, implementation-wise) to expand it as follows:
513
514 struct vectorised fp_vec[32], int_vec[32];
515
516 for (i = 0; i < len; i++) // from VLIW Format
517 tb = int_vec if CSRvec[i].type == 0 else fp_vec
518 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
519 tb[idx].elwidth = CSRvec[i].elwidth
520 tb[idx].regidx = CSRvec[i].regidx // indirection
521 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
522
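
Operand decode through the expanded table might then be sketched as follows (illustrative: the `enabled` flag is an assumption standing in for "entry present in the CAM", and the register numbers are hypothetical):

```python
# Sketch: decoding one 5-bit scalar-opcode operand through the expanded
# (unpacked) Register table. Field and variable names are illustrative.

int_vec = [{"enabled": False, "isvector": False,
            "regidx": i, "elwidth": 0} for i in range(32)]

# suppose a VLIW-format entry tagged scalar opcode register 10 as a
# vector starting at real (7-bit) register 48
int_vec[10] = {"enabled": True, "isvector": True, "regidx": 48, "elwidth": 0}

def decode_operand(opcode_reg):
    e = int_vec[opcode_reg]
    if not e["enabled"]:
        return opcode_reg, False       # untagged: plain scalar register
    return e["regidx"], e["isvector"]  # redirected register, vector flag

# a 5-bit operand may thus reach any of the full 128 registers
```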
523 ## Predication Table <a name="predication_csr_table"></a>
524
525 *NOTE: in prior versions of SV, this table used to be writable and
526 accessible via CSRs. It is now stored in the VLIW instruction format.
527 The table does **not** apply to SVPrefix opcodes*
528
529 The Predication Table is a key-value store indicating whether, if a
530 given destination register (integer or floating-point) is referred to
531 in an instruction, it is to be predicated. Like the Register table, it
532 is an indirect lookup that allows the RV opcodes to not need modification.
533
534 It is particularly important to note
535 that the *actual* register used can be *different* from the one that is
536 in the instruction, due to the redirection through the lookup table.
537
* regidx is the register which, in combination with the i/f flag,
  causes the lookup table to be referenced (to find the predication
  mask to use for this operation) whenever that integer or
  floating-point register is referred to in a (standard RV) instruction.
* predidx is the *actual* (full, 7 bit) register to be used for the
  predication mask.
* inv indicates that the predication mask bits are to be inverted
  prior to use, *without* actually modifying the contents of the
  register from which those bits originated.
548 * zeroing is either 1 or 0, and if set to 1, the operation must
549 place zeros in any element position where the predication mask is
550 set to zero. If zeroing is set to 0, unpredicated elements *must*
551 be left alone. Some microarchitectures may choose to interpret
552 this as skipping the operation entirely. Others which wish to
553 stick more closely to a SIMD architecture may choose instead to
554 interpret unpredicated elements as an internal "copy element"
555 operation (which would be necessary in SIMD microarchitectures
556 that perform register-renaming)
557
558 16 bit format:
559
560 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
561 | ----- | - | - | - | - | ------- | ------- |
| 0 | predkey | zero0 | inv0 | i/f | regidx | rsvd |
563 | 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
564 | ... | predkey | ..... | .... | i/f | ....... | ....... |
565 | 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
566
567
568 8 bit format:
569
570 | PrCSR | 7 | 6 | 5 | (4..0) |
571 | ----- | - | - | - | ------- |
572 | 0 | zero0 | inv0 | i/f | regnum |
573
The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
described above.
579
580 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
581 it will be faster to turn the table around (maintain topologically
582 equivalent state):
583
584 struct pred {
585 bool zero;
586 bool inv;
587 bool enabled;
588 int predidx; // redirection: actual int register to use
589 }
590
591 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
592 struct pred int_pred_reg[32]; // 64 in future (bank=1)
593
594 for (i = 0; i < 16; i++)
595 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
596 idx = CSRpred[i].regidx
597 tb[idx].zero = CSRpred[i].zero
598 tb[idx].inv = CSRpred[i].inv
599 tb[idx].predidx = CSRpred[i].predidx
600 tb[idx].enabled = true
601
602 So when an operation is to be predicated, it is the internal state that
603 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
604 pseudo-code for operations is given, where p is the explicit (direct)
605 reference to the predication register to be used:
606
607 for (int i=0; i<vl; ++i)
608 if ([!]preg[p][i])
609 (d ? vreg[rd][i] : sreg[rd]) =
610 iop(s1 ? vreg[rs1][i] : sreg[rs1],
611 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
612
613 This instead becomes an *indirect* reference using the *internal* state
614 table generated from the Predication CSR key-value store, which is used
615 as follows.
616
617 if type(iop) == INT:
618 preg = int_pred_reg[rd]
619 else:
620 preg = fp_pred_reg[rd]
621
622 for (int i=0; i<vl; ++i)
623 predicate, zeroing = get_pred_val(type(iop) == INT, rd):
624 if (predicate && (1<<i))
625 (d ? regfile[rd+i] : regfile[rd]) =
626 iop(s1 ? regfile[rs1+i] : regfile[rs1],
627 s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
628 else if (zeroing)
629 (d ? regfile[rd+i] : regfile[rd]) = 0
630
631 Note:
632
633 * d, s1 and s2 are booleans indicating whether destination,
634 source1 and source2 are vector or scalar
635 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
636 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
637 register-level redirection (from the Register table) if they are
638 vectors.
639
640 If written as a function, obtaining the predication mask (and whether
641 zeroing takes place) may be done as follows:
642
    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate // invert ALL bits
        return predicate, tb[reg].zero
655
656 Note here, critically, that **only** if the register is marked
657 in its **register** table entry as being "active" does the testing
658 proceed further to check if the **predicate** table entry is
659 also active.
660
661 Note also that this is in direct contrast to branch operations
662 for the storage of comparisions: in these specific circumstances
663 the requirement for there to be an active *register* entry
664 is removed.
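
A runnable sketch of the predication behaviour described above (names and the dict-based table are illustrative; the register-table "active" check is folded into a single `enabled` flag, and the mask register number is hypothetical):

```python
# Sketch of predicated element execution with zeroing, mirroring the
# indirect-lookup pseudocode above. Names are illustrative.

int_pred = {3: {"predidx": 9, "inv": False, "zero": True, "enabled": True}}
regs = [0] * 32
regs[9] = 0b1010                 # predicate mask lives in x9 (redirected)

def get_pred_val(reg):
    e = int_pred.get(reg)
    if e is None or not e["enabled"]:
        return ~0, False         # unpredicated: all elements enabled
    mask = regs[e["predidx"]]    # indirection to the actual mask register
    if e["inv"]:
        mask = ~mask
    return mask, e["zero"]

def vec_add(rd, rs1, rs2, vl):
    mask, zeroing = get_pred_val(rd)
    for i in range(vl):
        if (mask >> i) & 1:
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]
        elif zeroing:
            regs[rd + i] = 0     # zeroing: masked-out elements set to 0

regs[16:20] = [1, 2, 3, 4]
regs[20:24] = [10, 20, 30, 40]
vec_add(3, 16, 20, 4)
```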
665
666 ## REMAP CSR <a name="remap" />
667
668 (Note: both the REMAP and SHAPE sections are best read after the
669 rest of the document has been read)
670
671 There is one 32-bit CSR which may be used to indicate which registers,
672 if used in any operation, must be "reshaped" (re-mapped) from a linear
673 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
674 access to elements within a register.
675
676 The 32-bit REMAP CSR may reshape up to 3 registers:
677
678 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
679 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
680 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
681
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value), and consequently are 7 bits
wide. A value of zero (referring to x0) is used to indicate "disabled",
given that reshaping x0 would be pointless.
shape0-2 each refer to one of three SHAPE CSRs. A value of 0x3 is
reserved. Bits 7, 15, 23, 30 and 31 are also reserved, and must be set
to zero.

It is anticipated that these specialist CSRs will not be used very
often. Unlike the CSR Register and Predication tables, the REMAP CSRs
use the full 7-bit regidx so that they can be set up once and left
alone, whilst the CSR Register entries pointing to them are enabled
and disabled as needed, instead.
693
694 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
695
696 (Note: both the REMAP and SHAPE sections are best read after the
697 rest of the document has been read)
698
699 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
700 which have the same format. When each SHAPE CSR is set entirely to zeros,
701 remapping is disabled: the register's elements are a linear (1D) vector.
702
703 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
704 | ------- | -- | ------- | -- | ------- | -- | ------- |
705 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
706
707 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
708 is added to the element index during the loop calculation.
709
710 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
711 that the array dimensionality for that dimension is 1. A value of xdimsz=2
712 would indicate that in the first dimension there are 3 elements in the
713 array. The format of the array is therefore as follows:
714
715 array[xdim+1][ydim+1][zdim+1]
716
717 However whilst illustrative of the dimensionality, that does not take the
718 "permute" setting into account. "permute" may be any one of six values
719 (0-5, with values of 6 and 7 being reserved, and not legal). The table
720 below shows how the permutation dimensionality order works:
721
722 | permute | order | array format |
723 | ------- | ----- | ------------------------ |
724 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
725 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
726 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
727 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
728 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
729 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
730
731 In other words, the "permute" option changes the order in which
732 nested for-loops over the array would be done. The algorithm below
733 shows this more clearly, and may be executed as a python program:
734
    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]   # starting indices
    order = [1, 0, 2]  # experiment with different permutations, here
    offs = 0           # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if idxs[order[i]] != lims[order[i]]:
                break
            print()
            idxs[order[i]] = 0
754
Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would
normally run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:
763
    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
776
777 By changing remappings, 2D matrices may be transposed "in-place" for one
778 operation, followed by setting a different permutation order without
779 having to move the values in the registers to or from memory. Also,
780 the reason for having REMAP separate from the three SHAPE CSRs is so
781 that in a chain of matrix multiplications and additions, for example,
782 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
783 changed to target different registers.
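The in-place transpose can be seen concretely by wrapping the loop
algorithm above in a function (a hypothetical helper, not part of the
spec) and examining the index walk for a small 2D case: with xdim=3,
ydim=2 and order [1,0,2], a row-major 2x3 matrix is walked
column-first, i.e. transposed, with no data movement at all:

```python
def remap_indices(xdim, ydim, zdim, order, offs=0):
    """Return the remapped element indices produced by the SHAPE/REMAP
    loop algorithm, in the order a hardware loop would visit them."""
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break           # no carry into the next dimension
            idxs[order[i]] = 0  # wrap this dimension, carry onward
    return out

# 2x3 matrix stored row-major; order [1,0,2] walks it column-first
print(remap_indices(3, 2, 1, [1, 0, 2]))  # [0, 3, 1, 4, 2, 5]
```

Order [0,1,2] (permute option 000) reproduces the identity walk
0,1,2,3,4,5, matching the note below that option 000 leaves the
ordering unchanged.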
784
785 Note that:
786
787 * Over-running the register file clearly has to be detected and
788 an illegal instruction exception thrown
789 * When non-default elwidths are set, the exact same algorithm still
790 applies (i.e. it offsets elements *within* registers rather than
791 entire registers).
792 * If permute option 000 is utilised, the actual order of the
793 reindexing does not change!
794 * If two or more dimensions are set to zero, the actual order does not change!
795 * The above algorithm is pseudo-code **only**. Actual implementations
796 will need to take into account the fact that the element for-looping
797 must be **re-entrant**, due to the possibility of exceptions occurring.
798 See MSTATE CSR, which records the current element index.
799 * Twin-predicated operations require **two** separate and distinct
800 element offsets. The above pseudo-code algorithm will be applied
801 separately and independently to each, should each of the two
802 operands be remapped. *This even includes C.LDSP* and other operations
803 in that category, where in that case it will be the **offset** that is
804 remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  either to use a predicated MV, skipping the first elements, or
  to perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
810 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
811 less than MVL is **perfectly legal**, albeit very obscure. It permits
812 entries to be regularly presented to operands **more than once**, thus
813 allowing the same underlying registers to act as an accumulator of
814 multiple vector or matrix operations, for example.
815
816 Clearly here some considerable care needs to be taken as the remapping
817 could hypothetically create arithmetic operations that target the
818 exact same underlying registers, resulting in data corruption due to
819 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
820 register-renaming will have an easier time dealing with this than
821 DSP-style SIMD micro-architectures.
822
823 # Instruction Execution Order
824
825 Simple-V behaves as if it is a hardware-level "macro expansion system",
826 substituting and expanding a single instruction into multiple sequential
827 instructions with contiguous and sequentially-incrementing registers.
828 As such, it does **not** modify - or specify - the behaviour and semantics of
829 the execution order: that may be deduced from the **existing** RV
830 specification in each and every case.
831
832 So for example if a particular micro-architecture permits out-of-order
833 execution, and it is augmented with Simple-V, then wherever instructions
834 may be out-of-order then so may the "post-expansion" SV ones.
835
836 If on the other hand there are memory guarantees which specifically
837 prevent and prohibit certain instructions from being re-ordered
838 (such as the Atomicity Axiom, or FENCE constraints), then clearly
839 those constraints **MUST** also be obeyed "post-expansion".
840
It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.
847
848 # Instructions <a name="instructions" />
849
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however
xBitManip becomes a critical dependency for efficient manipulation of
predication masks (as a bit-field). Despite the removal of all opcodes,
and with the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain
their complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.
860
861 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
862 equivalents, so are left out of Simple-V. VSELECT could be included if
863 there existed a MV.X instruction in RV (MV.X is a hypothetical
864 non-immediate variant of MV that would allow another register to
865 specify which register was to be copied). Note that if any of these three
866 instructions are added to any given RV extension, their functionality
867 will be inherently parallelised.
868
869 With some exceptions, where it does not make sense or is simply too
870 challenging, all RV-Base instructions are parallelised:
871
* CSR instructions are the fundamental core basis of SV. Whilst a
  case could be made for fast-polling of a CSR into multiple registers,
  or for being able to copy multiple contiguously-addressed CSRs into
  contiguous registers, and so on, extreme care would need to be
  taken if these were parallelised. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
878 * LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
879 left as scalar.
880 * LR/SC could hypothetically be parallelised however their purpose is
881 single (complex) atomic memory operations where the LR must be followed
882 up by a matching SC. A sequence of parallel LR instructions followed
883 by a sequence of parallel SC instructions therefore is guaranteed to
884 not be useful. Not least: the guarantees of a Multi-LR/SC
885 would be impossible to provide if emulated in a trap.
886 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
887 paralleliseable anyway.
888
889 All other operations using registers are automatically parallelised.
890 This includes AMOMAX, AMOSWAP and so on, where particular care and
891 attention must be paid.
892
Example pseudo-code for an integer ADD operation (including scalar
operations) is given below. Floating-point operations use the FP CSRs
instead.
895
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
910
911 Note that for simplicity there is quite a lot missing from the above
912 pseudo-code: element widths, zeroing on predication, dimensional
913 reshaping and offsets and so on. However it demonstrates the basic
914 principle. Augmentations that produce the full pseudo-code are covered in
915 other sections.
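The scalar/vector decision-making can be modelled as a runnable
sketch. The 'vec' dict (mapping registers tagged as vectors to their
redirected regidx) and the flat list-of-integers register file are
illustrative assumptions; zeroing, elwidth and reshaping are omitted
just as in the pseudo-code above:

```python
def op_add(rd, rs1, rs2, vec, ireg, VL, predval=~0):
    """Model of the ADD hardware loop: vector operands advance one
    register per element, scalar operands stay fixed."""
    id_ = irs1 = irs2 = 0
    rd_, rs1_, rs2_ = (vec.get(r, r) for r in (rd, rs1, rs2))
    for i in range(VL):
        if predval & (1 << i):           # predication uses intregs
            ireg[rd_ + id_] = ireg[rs1_ + irs1] + ireg[rs2_ + irs2]
        if rd not in vec:                # scalar rd: one result only
            break
        id_ += 1
        if rs1 in vec: irs1 += 1
        if rs2 in vec: irs2 += 1

ireg = [0] * 32
ireg[20:23] = [1, 2, 3]                  # vector src1 at x20
ireg[5] = 10                             # scalar src2
op_add(10, 20, 5, vec={10: 10, 20: 20}, ireg=ireg, VL=3)
print(ireg[10:13])  # [11, 12, 13]
```

The same call with a scalar rd would perform exactly one add and
break, which is what gives the "standard" RV Base behaviour.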
916
917 ## SUBVL Pseudocode
918
Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where the register src and dest are still incremented inside
the inner part. Note that the predication is still taken from the VL
index.

So whilst elements are indexed by (i * SUBVL + s), predicate bits are
indexed by i.
922
    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             # actual add is here (at last)
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
          if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector)  { id += 1; }
          if (int_vec[rs1].isvector)  { irs1 += 1; }
          if (int_vec[rs2].isvector)  { irs2 += 1; }
          if (id == VL or irs1 == VL or irs2 == VL) {
            # end VL hardware loop
            xSTATE.srcoffs = 0; # reset
            xSTATE.ssvoffs = 0; # reset
            return;
          }
946
947
NOTE: the pseudocode above is greatly simplified: zeroing, proper
predicate handling, elwidth handling and so on are all left out.
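The SUBVL indexing rule (elements at i * SUBVL + s, one predicate bit
per group i) can be modelled executably for the all-vector case. The
scalar/redirection handling is deliberately dropped here, and the flat
register-file list is an illustrative assumption:

```python
def op_add_subvl(rd, rs1, rs2, ireg, VL, SUBVL, predval=~0):
    """All-vector ADD with SUBVL: SUBVL contiguous elements per group,
    with the predicate tested per *group* (index i), not per element."""
    for i in range(VL):
        for s in range(SUBVL):
            if predval & (1 << i):   # one predicate bit per group
                e = i * SUBVL + s    # element index (i * SUBVL + s)
                ireg[rd + e] = ireg[rs1 + e] + ireg[rs2 + e]

ireg = [0] * 64
ireg[16:22] = [1, 2, 3, 4, 5, 6]     # rs1: VL=2 groups of SUBVL=3
ireg[32:38] = [10] * 6               # rs2
op_add_subvl(0, 16, 32, ireg, VL=2, SUBVL=3, predval=0b01)
print(ireg[0:6])  # [11, 12, 13, 0, 0, 0] (second group masked out)
```

Note how a single cleared predicate bit masks out all SUBVL elements
of its group at once.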
949
950 ## Instruction Format
951
952 It is critical to appreciate that there are
953 **no operations added to SV, at all**.
954
955 Instead, by using CSRs to tag registers as an indication of "changed
956 behaviour", SV *overloads* pre-existing branch operations into predicated
957 variants, and implicitly overloads arithmetic operations, MV, FCVT, and
958 LOAD/STORE depending on CSR configurations for bitwidth and predication.
959 **Everything** becomes parallelised. *This includes Compressed
960 instructions* as well as any future instructions and Custom Extensions.
961
Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit,
to alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.
968
969 ## Branch Instructions
970
971 ### Standard Branch <a name="standard_branch"></a>
972
Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).
976
977 Note that the predication register to use (if one is enabled) is taken from
978 the *first* src register, and that this is used, just as with predicated
979 arithmetic operations, to mask whether the comparison operations take
980 place or not. The target (destination) predication register
981 to use (if one is enabled) is taken from the *second* src register.
982
983 If either of src1 or src2 are scalars (whether by there being no
984 CSR register entry or whether by the CSR entry specifically marking
985 the register as "scalar") the comparison goes ahead as vector-scalar
986 or scalar-vector.
987
In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch
operation. Where vectorisation is present on either or both src
registers, the branch may still go ahead if and only if *all* tests
succeed (i.e. excluding those tests that are predicated out).
993
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) is set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
1004
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.
1008
1009 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
1010 for predicated compare operations of function "cmp":
1011
1012 for (int i=0; i<vl; ++i)
1013 if ([!]preg[p][i])
1014 preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
1015 s2 ? vreg[rs2][i] : sreg[rs2]);
1016
1017 With associated predication, vector-length adjustments and so on,
1018 and temporarily ignoring bitwidth (which makes the comparisons more
1019 complex), this becomes:
1020
1021 s1 = reg_is_vectorised(src1);
1022 s2 = reg_is_vectorised(src2);
1023
1024 if not s1 && not s2
1025 if cmp(rs1, rs2) # scalar compare
1026 goto branch
1027 return
1028
1029 preg = int_pred_reg[rd]
1030 reg = int_regfile
1031
1032 ps = get_pred_val(I/F==INT, rs1);
1033 rd = get_pred_val(I/F==INT, rs2); # this may not exist
1034
1035 if not exists(rd) or zeroing:
1036 result = 0
1037 else
1038 result = preg[rd]
1039
    for (int i = 0; i < VL; ++i)
       if (zeroing)
          if not (ps & (1<<i))
             result &= ~(1<<i);
       else if (ps & (1<<i))
          if (cmp(s1 ? reg[src1+i] : reg[src1],
                  s2 ? reg[src2+i] : reg[src2]))
             result |= 1<<i;
          else
             result &= ~(1<<i);
1050
1051 if not exists(rd)
1052 if result == ps
1053 goto branch
1054 else
1055 preg[rd] = result # store in destination
1056 if preg[rd] == ps
1057 goto branch
1058
1059 Notes:
1060
1061 * Predicated SIMD comparisons would break src1 and src2 further down
1062 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
1063 Reordering") setting Vector-Length times (number of SIMD elements) bits
1064 in Predicate Register rd, as opposed to just Vector-Length bits.
1065 * The execution of "parallelised" instructions **must** be implemented
1066 as "re-entrant" (to use a term from software). If an exception (trap)
1067 occurs during the middle of a vectorised
1068 Branch (now a SV predicated compare) operation, the partial results
1069 of any comparisons must be written out to the destination
1070 register before the trap is permitted to begin. If however there
1071 is no predicate, the **entire** set of comparisons must be **restarted**,
1072 with the offset loop indices set back to zero. This is because
1073 there is no place to store the temporary result during the handling
1074 of traps.
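The branch decision at the heart of the pseudo-code above can be
sketched executably for its simplest case: no destination predicate
register, no zeroing, vector-vector equality compare. The list-based
register model and function name are illustrative assumptions:

```python
def vec_branch_taken(src1, src2, VL, ps):
    """No-destination case: accumulate per-element compare results into
    a bitmask; the branch is taken iff the mask equals the source
    predicate, i.e. every *active* (non-masked) compare succeeded."""
    result = 0
    for i in range(VL):
        if (ps >> i) & 1 and src1[i] == src2[i]:
            result |= 1 << i    # set only; masked bits stay clear
    return result == ps

print(vec_branch_taken([1, 2, 3], [1, 2, 3], 3, 0b111))  # True
print(vec_branch_taken([1, 2, 4], [1, 2, 3], 3, 0b011))  # True
print(vec_branch_taken([1, 2, 4], [1, 2, 3], 3, 0b111))  # False
```

The second call shows predication at work: element 2 mismatches, but
its predicate bit is clear, so the branch is still taken.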
1075
1076 TODO: predication now taken from src2. also branch goes ahead
1077 if all compares are successful.
1078
1079 Note also that where normally, predication requires that there must
1080 also be a CSR register entry for the register being used in order
1081 for the **predication** CSR register entry to also be active,
1082 for branches this is **not** the case. src2 does **not** have
1083 to have its CSR register entry marked as active in order for
1084 predication on src2 to be active.
1085
1086 Also note: SV Branch operations are **not** twin-predicated
1087 (see Twin Predication section). This would require three
1088 element offsets: one to track src1, one to track src2 and a third
1089 to track where to store the accumulation of the results. Given
1090 that the element offsets need to be exposed via CSRs so that
1091 the parallel hardware looping may be made re-entrant on traps
1092 and exceptions, the decision was made not to make SV Branches
1093 twin-predicated.
1094
1095 ### Floating-point Comparisons
1096
There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.
1101
1102 In RV (scalar) Base, a branch on a floating-point compare is
1103 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
1104 This does extend to SV, as long as x1 (in the example sequence given)
1105 is vectorised. When that is the case, x1..x(1+VL-1) will also be
1106 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
1107 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
1108 so on. Consequently, unlike integer-branch, FP Compare needs no
1109 modification in its behaviour.
1110
In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine because the
standard RVF compare can always be followed up with an integer BEQ or
a BNE (or a compressed comparison to zero or non-zero), in predication
terms it has more of an impact. To deal with this, SV's predication
has had "invert" added to it.
1117
1118 Also: note that FP Compare may be predicated, using the destination
1119 integer register (rd) to determine the predicate. FP Compare is **not**
1120 a twin-predication operation, as, again, just as with SV Branches,
1121 there are three registers involved: FP src1, FP src2 and INT rd.
1122
1123 ### Compressed Branch Instruction
1124
Compressed Branch instructions are, just like standard Branch
instructions, reinterpreted to be vectorised and predicated based on
the source register (rs1s) CSR entries. As however there is only the
one source register, given that c.beqz a0 is equivalent to beq a0,x0,
the optional target in which to store the results of the comparisons
is taken from the CSR predication table entries for **x0**.

The specific required use of x0 is, with a little thought, quite
obvious, although counterintuitive at first. Clearly it is **not**
recommended to redirect x0 with a CSR register entry, however as a
means to opaquely obtain a predication target it is the only sensible
option that does not involve additional special CSRs (or, worse,
additional special opcodes).
1137
1138 Note also that, just as with standard branches, the 2nd source
1139 (in this case x0 rather than src2) does **not** have to have its CSR
1140 register table marked as "active" in order for predication to work.
1141
1142 ## Vectorised Dual-operand instructions
1143
1144 There is a series of 2-operand instructions involving copying (and
1145 sometimes alteration):
1146
1147 * C.MV
1148 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1149 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1150 * LOAD(-FP) and STORE(-FP)
1151
1152 All of these operations follow the same two-operand pattern, so it is
1153 *both* the source *and* destination predication masks that are taken into
1154 account. This is different from
1155 the three-operand arithmetic instructions, where the predication mask
1156 is taken from the *destination* register, and applied uniformly to the
1157 elements of the source register(s), element-for-element.
1158
1159 The pseudo-code pattern for twin-predicated operations is as
1160 follows:
1161
    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1175
1176 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1177 and vector-vector, and predicated variants of all of those.
1178 Zeroing is not presently included (TODO). As such, when compared
1179 to RVV, the twin-predicated variants of C.MV and FMV cover
1180 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1181 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1182
1183 Note that:
1184
1185 * elwidth (SIMD) is not covered in the pseudo-code above
1186 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1187 not covered
1188 * zero predication is also not shown (TODO).
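The twin-predication pattern can be exercised as a runnable sketch.
The flat register list and boolean isvec flags are illustrative
assumptions, and bounds checks have been added to the predicate-skip
loops (the pseudo-code above omits them for brevity):

```python
def twin_pred_mv(rd, rs, rd_isvec, rs_isvec, pd, ps, reg, VL):
    """Twin-predicated MV: src and dest each skip independently to
    their own next active element, which is what yields VSPLAT,
    VEXTRACT, gather and scatter from a single opcode."""
    i = j = 0
    while i < VL and j < VL:
        if rs_isvec:
            while i < VL and not (ps >> i) & 1: i += 1
        if rd_isvec:
            while j < VL and not (pd >> j) & 1: j += 1
        if i >= VL or j >= VL:
            break                    # no active element left
        reg[rd + j] = reg[rs + i]
        if rs_isvec: i += 1
        if rd_isvec: j += 1
        else: break                  # scalar dest: single copy

reg = [0] * 32
reg[4] = 99                          # scalar source
twin_pred_mv(8, 4, True, False, pd=0b1011, ps=~0, reg=reg, VL=4)
print(reg[8:12])  # [99, 99, 0, 99] -> sparse VSPLAT
```

Swapping the roles (vector src, scalar dest, single predicate bit on
the source) turns the same function into VEXTRACT.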
1189
1190 ### C.MV Instruction <a name="c_mv"></a>
1191
There is no MV instruction in RV; however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).
1195
1196 If either the source or the destination register are marked as vectors
1197 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1198 move operation. The actual instruction's format does not change:
1199
1200 [[!table data="""
1201 15 12 | 11 7 | 6 2 | 1 0 |
1202 funct4 | rd | rs | op |
1203 4 | 5 | 5 | 2 |
1204 C.MV | dest | src | C0 |
1205 """]]
1206
1207 A simplified version of the pseudocode for this operation is as follows:
1208
    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1222
1223 There are several different instructions from RVV that are covered by
1224 this one opcode:
1225
1226 [[!table data="""
1227 src | dest | predication | op |
1228 scalar | vector | none | VSPLAT |
1229 scalar | vector | destination | sparse VSPLAT |
1230 scalar | vector | 1-bit dest | VINSERT |
1231 vector | scalar | 1-bit? src | VEXTRACT |
1232 vector | vector | none | VCOPY |
1233 vector | vector | src | Vector Gather |
1234 vector | vector | dest | Vector Scatter |
1235 vector | vector | src & dest | Gather/Scatter |
1236 vector | vector | src == dest | sparse VCOPY |
1237 """]]
1238
1239 Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
1240 operations with inversion on the src and dest predication for one of the
1241 two C.MV operations.
1242
Note that in the instance where the Compressed Extension is not
implemented, MV may be used, but that is a pseudo-operation mapping to
addi rd, rs, 0. Note that the behaviour is **different** from C.MV
because with addi the predication mask to use is taken **only** from
rd and is applied against all elements: rd[i] = rs[i].
1248
1249 ### FMV, FNEG and FABS Instructions
1250
These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is
implicitly and automatically converted to a (vectorised)
floating-point type-conversion operation of the appropriate size,
covering the source and destination register bitwidths.
1257
1258 (Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1259
### FCVT Instructions
1261
1262 These are again identical in form to C.MV, except that they cover
1263 floating-point to integer and integer to floating-point. When element
1264 width in each vector is set to default, the instructions behave exactly
1265 as they are defined for standard RV (scalar) operations, except vectorised
1266 in exactly the same fashion as outlined in C.MV.
1267
1268 However when the source or destination element width is not set to default,
1269 the opcode's explicit element widths are *over-ridden* to new definitions,
1270 and the opcode's element width is taken as indicative of the SIMD width
1271 (if applicable i.e. if packed SIMD is requested) instead.
1272
For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a single-precision floating-point number
in rd. If however the source rs1 is set to be a vector, where elwidth
is set to default/2 and "packed SIMD" is enabled, then the first 32
bits of rs1 are converted to a floating-point number to be stored in
rd's first element, and the higher 32 bits *also* converted to
floating-point and stored in the second. The 32-bit size comes from
the fact that FCVT.S.L's integer width is 64 bit, and with elwidth on
rs1 set to divide that by two it means that rs1's element width is to
be taken as 32.
1282
1283 Similar rules apply to the destination register.
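The arithmetic of the elwidth override can be sketched in Python
(function name and int-based register model are illustrative
assumptions; real hardware would of course operate on the register
bits directly):

```python
import struct

def fcvt_s_l_packed(reg64):
    """FCVT.S.L with rs1 elwidth = default/2 and packed SIMD enabled:
    the 64-bit source register is treated as two 32-bit signed
    integers, each converted to floating-point (two result elements)."""
    halves = [(reg64 >> sh) & 0xFFFFFFFF for sh in (0, 32)]
    # reinterpret each half as a signed 32-bit integer, then convert
    return tuple(float(struct.unpack('<i', struct.pack('<I', h))[0])
                 for h in halves)

print(fcvt_s_l_packed((7 << 32) | 3))           # (3.0, 7.0)
print(fcvt_s_l_packed((0xFFFFFFFF << 32) | 5))  # (5.0, -1.0)
```

The second call shows that each 32-bit half is sign-extended
independently before conversion.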
1284
1285 ## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>
1286
1287 An earlier draft of SV modified the behaviour of LOAD/STORE (modified
1288 the interpretation of the instruction fields). This
1289 actually undermined the fundamental principle of SV, namely that there
1290 be no modifications to the scalar behaviour (except where absolutely
1291 necessary), in order to simplify an implementor's task if considering
1292 converting a pre-existing scalar design to support parallelism.
1293
1294 So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
1295 do not change in SV, however just as with C.MV it is important to note
1296 that dual-predication is possible.
1297
1298 In vectorised architectures there are usually at least two different modes
1299 for LOAD/STORE:
1300
1301 * Read (or write for STORE) from sequential locations, where one
1302 register specifies the address, and the one address is incremented
1303 by a fixed amount. This is usually known as "Unit Stride" mode.
1304 * Read (or write) from multiple indirected addresses, where the
1305 vector elements each specify separate and distinct addresses.
1306
1307 To support these different addressing modes, the CSR Register "isvector"
1308 bit is used. So, for a LOAD, when the src register is set to
1309 scalar, the LOADs are sequentially incremented by the src register
1310 element width, and when the src register is set to "vector", the
1311 elements are treated as indirection addresses. Simplified
1312 pseudo-code would look like this:
1313
    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
1333
1334 Notes:
1335
1336 * For simplicity, zeroing and elwidth are not included in the above:
1337 the key focus here is the decision-making for srcbase; a vectorised
1338 rs means "use sequentially-numbered registers as the indirection
1339 addresses", and a scalar rs means "offset" (unit stride) mode.
1340 * The test towards the end for whether both source and destination are
1341 scalar is what makes the above pseudo-code provide the "standard" RV
1342 Base behaviour for LD operations.
1343 * The offset in bytes (XLEN/8) changes depending on whether the
1344 operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
1345 (8 bytes), and also on whether the element width is over-ridden
1346 (see special element width section).
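The srcbase decision above can be condensed into a small executable model. This is purely illustrative, not normative: the register-file dict, the `XLEN` constant and the function name are inventions of the sketch.

```python
XLEN = 64

def ld_srcbase(ireg, rsv, element, rs_is_vector):
    # returns the memory base address for one element of a LD
    if rs_is_vector:
        # multi-indirection: each source element holds its own address
        return ireg[rsv + element]
    # unit stride: a single scalar base, stepped by the operation width
    return ireg[rsv] + element * (XLEN // 8)

ireg = {5: 0x1000, 6: 0x2000}
assert ld_srcbase(ireg, 5, 1, True) == 0x2000    # address taken from "x6"
assert ld_srcbase(ireg, 5, 1, False) == 0x1008   # base in "x5" plus 8 bytes
```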
1347
1348 ## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>
1349
1350 C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
1351 where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
1352 It is therefore possible to use predicated C.LWSP to efficiently
1353 pop registers off the stack (by predicating x2 as the source), cherry-picking
1354 which registers to load into (by predicating the destination). Likewise
1355 for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.
1356
1357 The two modes ("unit stride" and multi-indirection) are still supported,
1358 as with standard LD/ST. Essentially, the only difference is that the
1359 use of x2 is hard-coded into the instruction.
1360
1361 **Note**: it is still possible to redirect x2 to an alternative target
1362 register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
1363 general-purpose LOAD/STORE operations.
1364
1365 ## Compressed LOAD / STORE Instructions
1366
1367 Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE:
1368 the same rules and the same pseudo-code apply as for
1369 non-compressed LOAD/STORE. Again: setting vector mode (as opposed to scalar)
1370 on the src for LOAD, or on the dest for STORE, switches from "Unit Stride"
1371 mode to "Multi-indirection" mode.
1372
1373 # Element bitwidth polymorphism <a name="elwidth"></a>
1374
1375 Element bitwidth is best covered as its own special section, as it
1376 is quite involved and applies uniformly across-the-board. SV restricts
1377 bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.
1378
1379 The effect of setting an element bitwidth is to re-cast each entry
1380 in the register table, and for all memory operations involving
1381 load/stores of certain specific sizes, to a completely different width.
1382 Thus, in C-style terms, on an RV64 architecture, each register
1383 effectively now looks like this:
1384
1385 typedef union {
1386 uint8_t b[8];
1387 uint16_t s[4];
1388 uint32_t i[2];
1389 uint64_t l[1];
1390 } reg_t;
1391
1392 // integer table: assume maximum SV 7-bit regfile size
1393 reg_t int_regfile[128];
1394
1395 where the CSR Register table entry (not the instruction alone) determines
1396 which of those union entries is to be used on each operation, and the
1397 VL element offset in the hardware-loop specifies the index into each array.
1398
1399 However, a naive interpretation of the data structure above masks the
1400 fact that when VL is set greater than 8 with a bitwidth of 8, for example,
1401 accessing one specific register "spills over" to the following entries of
1402 the register file in a sequential fashion. A much more accurate way
1403 to reflect this is therefore:
1404
1405 typedef union {
1406 uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
1407 uint8_t b[0]; // array of type uint8_t
1408 uint16_t s[0];
1409 uint32_t i[0];
1410 uint64_t l[0];
1411 uint128_t d[0];
1412 } reg_t;
1413
1414 reg_t int_regfile[128];
1415
1416 Here, when accessing any individual regfile[n].b entry, it is permitted
1417 (in C) to arbitrarily over-run the *declared* length of the array (zero),
1418 and thus "overspill" into consecutive register file entries, in a fashion
1419 that is completely transparent to a greatly-simplified software / pseudo-code
1420 representation.
1421 It is however critical to note that it is the responsibility of
1422 the implementor to ensure that, towards the end of the register file,
1423 an exception is thrown if any attempt is ever made to access beyond
1424 the "real" register bytes.
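The "overspill" behaviour, and the end-of-regfile exception, can be modelled with a flat byte array standing in for the register file. This is a sketch only: the accessor names and the little-endian layout are assumptions of the model, not requirements of the specification.

```python
REGS, XLEN_BYTES = 128, 8
regfile = bytearray(REGS * XLEN_BYTES)   # 128 64-bit registers, flat bytes

def set_element(reg, elwidth_bytes, offset, value):
    # elements pack contiguously from the start of `reg`,
    # transparently overspilling into following registers
    addr = reg * XLEN_BYTES + offset * elwidth_bytes
    if addr + elwidth_bytes > len(regfile):
        raise IndexError("access beyond the real register file bytes")
    regfile[addr:addr + elwidth_bytes] = value.to_bytes(elwidth_bytes, 'little')

def get_element(reg, elwidth_bytes, offset):
    addr = reg * XLEN_BYTES + offset * elwidth_bytes
    if addr + elwidth_bytes > len(regfile):
        raise IndexError("access beyond the real register file bytes")
    return int.from_bytes(regfile[addr:addr + elwidth_bytes], 'little')

# 8-bit element 9 of x10 lands in byte 1 of x11 (overspill)
set_element(10, 1, 9, 0xAB)
assert get_element(11, 1, 1) == 0xAB
```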
1425
1426 Now we may modify the pseudo-code of an operation where all element
1427 bitwidths have been set to the same size, where this pseudo-code is
1428 otherwise identical to its "non"-polymorphic versions (above):
1429
1430 function op_add(rd, rs1, rs2) # add not VADD!
1431 ...
1432 ...
1433  for (i = 0; i < VL; i++)
1434 ...
1435 ...
1436 // TODO, calculate if over-run occurs, for each elwidth
1437 if (elwidth == 8) {
1438    int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
1439     int_regfile[rs2].b[irs2];
1440 } else if elwidth == 16 {
1441    int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
1442     int_regfile[rs2].s[irs2];
1443 } else if elwidth == 32 {
1444    int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
1445     int_regfile[rs2].i[irs2];
1446 } else { // elwidth == 64
1447    int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
1448     int_regfile[rs2].l[irs2];
1449 }
1450 ...
1451 ...
1452
1453 So here we can see clearly: for 8-bit entries, rd, rs1 and rs2 (and the
1454 registers that follow on sequentially from each) are "type-cast"
1455 to 8-bit; for 16-bit entries likewise, and so on.
1456
1457 However that only covers the case where the element widths are the same.
1458 Where the element widths are different, the following algorithm applies:
1459
1460 * Analyse the bitwidth of all source operands and work out the
1461 maximum. Record this as "maxsrcbitwidth"
1462 * If any given source operand requires sign-extension or zero-extension
1463 (ldb, div, rem, mul, sll, srl, sra etc.), then instead of the mandatory
1464 32-bit sign-extension / zero-extension (or whatever is specified in the
1465 standard RV specification), **change** that to sign/zero-extending from
1466 the respective individual source operand's bitwidth (from the CSR table)
1467 out to "maxsrcbitwidth", as previously calculated.
1468 * Following separate and distinct (optional) sign/zero-extension of all
1469 source operands as specifically required for that operation, carry out the
1470 operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
1471 this may be a "null" (copy) operation, and that with FCVT, the changes
1472 to the source and destination bitwidths may also turn FCVT effectively
1473 into a copy).
1474 * If the destination operand requires sign-extension or zero-extension,
1475 instead of a mandatory fixed size (typically 32-bit for arithmetic,
1476 e.g. for subw, and otherwise various: 8-bit for sb, 16-bit for sh
1477 etc.), overload the RV specification with the bitwidth from the
1478 destination register's elwidth entry.
1479 * Finally, store the (optionally) sign/zero-extended value into its
1480 destination: memory for sb/sw etc., or an offset section of the register
1481 file for an arithmetic operation.
1482
1483 In this way, polymorphic bitwidths are achieved without requiring a
1484 massive 64-way permutation of calculations **per opcode**, for example
1485 (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
1486 rd bitwidths). The pseudo-code is therefore as follows:
1487
1488 typedef union {
1489 uint8_t b;
1490 uint16_t s;
1491 uint32_t i;
1492 uint64_t l;
1493 } el_reg_t;
1494
1495 bw(elwidth):
1496 if elwidth == 0:
1497 return xlen
1498 if elwidth == 1:
1499 return xlen / 2
1500 if elwidth == 2:
1501 return xlen * 2
1502 // elwidth == 3:
1503 return 8
1504
1505 get_max_elwidth(rs1, rs2):
1506 return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
1507 bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1508
1509 get_polymorphed_reg(reg, bitwidth, offset):
1510 el_reg_t res;
1511 res.l = 0; // TODO: going to need sign-extending / zero-extending
1512 if bitwidth == 8:
1513     res.b = int_regfile[reg].b[offset]
1514 elif bitwidth == 16:
1515     res.s = int_regfile[reg].s[offset]
1516 elif bitwidth == 32:
1517     res.i = int_regfile[reg].i[offset]
1518 elif bitwidth == 64:
1519     res.l = int_regfile[reg].l[offset]
1520 return res
1521
1522 set_polymorphed_reg(reg, bitwidth, offset, val):
1523 if (!int_csr[reg].isvec):
1524 # sign/zero-extend depending on opcode requirements, from
1525 # the reg's bitwidth out to the full bitwidth of the regfile
1526 val = sign_or_zero_extend(val, bitwidth, xlen)
1527 int_regfile[reg].l[0] = val
1528 elif bitwidth == 8:
1529 int_regfile[reg].b[offset] = val
1530 elif bitwidth == 16:
1531 int_regfile[reg].s[offset] = val
1532 elif bitwidth == 32:
1533 int_regfile[reg].i[offset] = val
1534 elif bitwidth == 64:
1535 int_regfile[reg].l[offset] = val
1536
1537 maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
1538 destwid = bw(int_csr[rd].elwidth) # destination element width
1539  for (i = 0; i < VL; i++)
1540 if (predval & 1<<i) # predication uses intregs
1541 // TODO, calculate if over-run occurs, for each elwidth
1542 src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
1543 // TODO, sign/zero-extend src1 and src2 as operation requires
1544 if (op_requires_sign_extend_src1)
1545 src1 = sign_extend(src1, maxsrcwid)
1546 src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
1547 result = src1 + src2 # actual add here
1548 // TODO, sign/zero-extend result, as operation requires
1549 if (op_requires_sign_extend_dest)
1550 result = sign_extend(result, maxsrcwid)
1551 set_polymorphed_reg(rd, destwid, ird, result)
1552 if (!int_csr[rd].isvec) break
1553 if (int_csr[rd].isvec)  { id += 1; }
1554 if (int_csr[rs1].isvec) { irs1 += 1; }
1555 if (int_csr[rs2].isvec) { irs2 += 1; }
1556
1557 Whilst specific sign-extension and zero-extension pseudocode call
1558 details are left out, due to each operation being different, the above
1559 should make clear that:
1560
1561 * the source operands are extended out to the maximum bitwidth of all
1562 source operands
1563 * the operation takes place at that maximum source bitwidth (the
1564 destination bitwidth is not involved at this point, at all)
1565 * the result is extended (or potentially even, truncated) before being
1566 stored in the destination. i.e. truncation (if required) to the
1567 destination width occurs **after** the operation **not** before.
1568 * when the destination is not marked as "vectorised", the **full**
1569 (standard, scalar) register file entry is taken up, i.e. the
1570 element is either sign-extended or zero-extended to cover the
1571 full register bitwidth (XLEN) if it is not already XLEN bits long.
1572
1573 Implementors are entirely free to optimise the above, particularly
1574 if it is specifically known that any given operation will complete
1575 accurately in less bits, as long as the results produced are
1576 directly equivalent and equal, for all inputs and all outputs,
1577 to those produced by the above algorithm.
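As a sanity check, the three steps (extend sources to the maximum source width, operate at that width, fit the result to the destination width last) can be modelled in a few lines. This is an illustrative sketch of the unsigned (zero-extending) case only; the function names are invented.

```python
def mask(v, bits):
    return v & ((1 << bits) - 1)

def poly_add(src1, w1, src2, w2, destw):
    maxw = max(w1, w2)
    a = mask(src1, w1)             # sources zero-extended to maxw
    b = mask(src2, w2)
    result = mask(a + b, maxw)     # operate at max *source* width
    return mask(result, destw)     # truncate/extend to dest width last

# 8-bit + 16-bit sources: the operation happens at 16 bits
assert poly_add(0xFF, 8, 0x0101, 16, 16) == 0x0200
# same inputs, 8-bit destination: truncation happens *after* the add
assert poly_add(0xFF, 8, 0x0101, 16, 8) == 0x00
```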
1578
1579 ## Polymorphic floating-point operation exceptions and error-handling
1580
1581 For floating-point operations, conversion takes place without
1582 raising any kind of exception. Exactly as specified in the standard
1583 RV specification, NaN (or the appropriate value) is stored if the result
1584 is beyond the range of the destination, and, just as
1585 with scalar operations, the floating-point flag is raised (FCSR).
1586 And, again just as with scalar operations, it is software's
1587 responsibility to check this flag.
1588 Given that the FCSR flags are "accrued", the fact that multiple element
1589 operations could have occurred is not a problem.
1590
1591 Note that it is perfectly legitimate for floating-point bitwidths of
1592 only 8 to be specified. However, whilst it is possible to apply IEEE 754
1593 principles, no actual standard yet exists. Implementors wishing to
1594 provide hardware-level 8-bit support rather than throw a trap to emulate
1595 in software should contact the author of this specification before
1596 proceeding.
1597
1598 ## Polymorphic shift operators
1599
1600 A special note is needed for changing the element width of left and right
1601 shift operators, particularly right-shift. Even for standard RV base,
1602 in order for correct results to be returned, the second operand RS2 must
1603 be truncated to be within the range of RS1's bitwidth. spike's implementation
1604 of sll for example is as follows:
1605
1606 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1607
1608 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1609 range 0..31 so that RS1 will only be left-shifted by the amount that
1610 is possible to fit into a 32-bit register. Whilst this appears not
1611 to matter for hardware, it matters greatly in software implementations,
1612 and it also matters where an RV64 system is set to "RV32" mode, such
1613 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1614 each.
1615
1616 For SV, where each operand's element bitwidth may be over-ridden, the
1617 rule about determining the operation's bitwidth *still applies*, being
1618 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1619 **also applies to the truncation of RS2**. In other words, *after*
1620 determining the maximum bitwidth, RS2's range must **also be truncated**
1621 to ensure a correct answer. Example:
1622
1623 * RS1 is over-ridden to a 16-bit width
1624 * RS2 is over-ridden to an 8-bit width
1625 * RD is over-ridden to a 64-bit width
1626 * the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
1627 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1628
1629 Pseudocode (in spike) for this example would therefore be:
1630
1631 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1632
1633 This example illustrates that considerable care needs to be
1634 taken to ensure that left and right shift operations are implemented
1635 correctly. The key points are that:
1636
1637 * The operation bitwidth is determined by the maximum bitwidth
1638 of the *source registers*, **not** the destination register bitwidth
1639 * The result is then sign-extended (or truncated) as appropriate.
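These two rules can be captured in a minimal model of a polymorphic left shift. This is illustrative only (spike's actual implementation operates at XLEN); the function name and argument order are inventions of the sketch.

```python
def poly_sll(rs1, rs1w, rs2, rs2w, rdw):
    opw = max(rs1w, rs2w)                 # operation width from the sources
    shamt = rs2 & (opw - 1)               # RS2 truncated to 0..opw-1
    result = (rs1 & ((1 << opw) - 1)) << shamt
    return result & ((1 << rdw) - 1)      # fitted to the destination width

# RS1 16-bit, RS2 8-bit, RD 64-bit: operation width is max(8, 16) == 16,
# so a shift amount of 17 is truncated to 17 & 15 == 1
assert poly_sll(0x0003, 16, 17, 8, 64) == 0x0006
```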
1640
1641 ## Polymorphic MULH/MULHU/MULHSU
1642
1643 MULH is designed to take the top half MSBs of a multiply that
1644 does not fit within the range of the source operands, such that
1645 smaller width operations may produce a full double-width multiply
1646 in two cycles. The issue is: SV allows the source operands to
1647 have variable bitwidth.
1648
1649 Here again special attention has to be paid to the rules regarding
1650 bitwidth, which, again, are that the operation is performed at
1651 the maximum bitwidth of the **source** registers. Therefore:
1652
1653 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1654 be shifted down by 8 bits
1655 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1656 be shifted down by 16 bits (top 8 bits being zero)
1657 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1658 be shifted down by 16 bits
1659 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1660 be shifted down by 32 bits
1661 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1662 be shifted down by 32 bits
1663
1664 So again, just as with shift-left and shift-right, the result
1665 is shifted down by the maximum of the two source register bitwidths.
1666 And, exactly again, truncation or sign-extension is performed on the
1667 result. If sign-extension is to be carried out, it is performed
1668 from the same maximum of the two source register bitwidths out
1669 to the result element's bitwidth.
1670
1671 If truncation occurs, i.e. the top MSBs of the result are lost,
1672 this is "Officially Not Our Problem", i.e. it is assumed that the
1673 programmer actually desires the result to be truncated. i.e. if the
1674 programmer wanted all of the bits, they would have set the destination
1675 elwidth to accommodate them.
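The shift-down-by-maximum-source-width rule can be sketched for the unsigned case (MULHU) as follows. This is an illustrative model only, not normative pseudo-code; names are invented.

```python
def poly_mulhu(rs1, rs1w, rs2, rs2w, rdw):
    maxw = max(rs1w, rs2w)                # maximum source width
    product = (rs1 & ((1 << rs1w) - 1)) * (rs2 & ((1 << rs2w) - 1))
    high = product >> maxw                # shift down by the max source width
    return high & ((1 << rdw) - 1)        # fit to the destination elwidth

# 8x8: 0xFF * 0xFF == 0xFE01; the top half (>>8) is 0xFE
assert poly_mulhu(0xFF, 8, 0xFF, 8, 8) == 0xFE
# 16x8: the 24-bit product is shifted down by 16
assert poly_mulhu(0xFFFF, 16, 0xFF, 8, 16) == 0xFE
```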
1676
1677 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1678
1679 Polymorphic element widths in vectorised form means that the data
1680 being loaded (or stored) across multiple registers needs to be treated
1681 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1682 the source register's element width is **independent** from the destination's.
1683
1684 This makes for a slightly more complex algorithm when using indirection
1685 on the "addressed" register (source for LOAD and destination for STORE),
1686 particularly given that the LOAD/STORE instruction provides important
1687 information about the width of the data to be reinterpreted.
1688
1689 Let's illustrate the "load" part, where the pseudo-code for elwidth=default
1690 was as follows, and i is the loop from 0 to VL-1:
1691
1692 srcbase = ireg[rs+i];
1693 return mem[srcbase + imm]; // returns XLEN bits
1694
1695 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1696 chunks are taken from the source memory location addressed by the current
1697 indexed source address register, and only when a full 32-bits-worth
1698 are taken will the index be moved on to the next contiguous source
1699 address register:
1700
1701 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1702 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1703 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1704 offs = i % elsperblock; // modulo
1705 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1706
1707 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1708 and 128 for LQ.
1709
1710 The principle is basically exactly the same as if the srcbase were pointing
1711 at the memory of the *register* file: memory is re-interpreted as containing
1712 groups of elwidth-wide discrete elements.
1713
1714 When storing the result from a load, it's important to respect the fact
1715 that the destination register has its *own separate element width*. Thus,
1716 when each element is loaded (at the source element width), any sign-extension
1717 or zero-extension (or truncation) needs to be done to the *destination*
1718 bitwidth. Also, the storing follows exactly the same algorithm as
1719 above: in fact it is just the set\_polymorphed\_reg pseudocode,
1720 completely unchanged.
1721
1722 One issue remains: when the source element width is **greater** than
1723 the width of the operation, it is obvious that a single LB for example
1724 cannot possibly obtain 16-bit-wide data. This condition may be detected
1725 where, when using integer divide, elsperblock (the width of the LOAD
1726 divided by the bitwidth of the element) is zero.
1727
1728 The issue is "fixed" by ensuring that elsperblock is a minimum of 1:
1729
1730 elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1731
1732 The elements, if the element bitwidth is larger than the LD operation's
1733 size, will then be sign/zero-extended to the full LD operation size, as
1734 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1735 being passed on to the second phase.
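The block/offset computation, with elsperblock clamped to a minimum of 1, condenses into a tiny helper. This is a sketch (the function name is invented) purely to make the arithmetic checkable.

```python
def chunk(i, op_bits, el_bits):
    # returns (source-register step, element offset within that block)
    elsperblock = max(1, op_bits // el_bits)
    return i // elsperblock, i % elsperblock

assert chunk(0, 32, 16) == (0, 0)   # LW, 16-bit elements: two per address
assert chunk(3, 32, 16) == (1, 1)   # element 3 comes via the *next* address
assert chunk(2, 8, 16) == (2, 0)    # LB with 16-bit elements: clamped to 1
```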
1736
1737 As LOAD/STORE may be twin-predicated, it is important to note that
1738 the rules on twin predication still apply, except where in previous
1739 pseudo-code (elwidth=default for both source and target) it was
1740 the *registers* that the predication was applied to, it is now the
1741 **elements** that the predication is applied to.
1742
1743 Thus the full pseudocode for all LD operations may be written out
1744 as follows:
1745
1746 function LBU(rd, rs):
1747 load_elwidthed(rd, rs, 8, true)
1748 function LB(rd, rs):
1749 load_elwidthed(rd, rs, 8, false)
1750 function LH(rd, rs):
1751 load_elwidthed(rd, rs, 16, false)
1752 ...
1753 ...
1754 function LQ(rd, rs):
1755 load_elwidthed(rd, rs, 128, false)
1756
1757 # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
1758 function load_memory(rs, imm, i, opwidth):
1759 elwidth = int_csr[rs].elwidth
1760 bitwidth = bw(elwidth);
1761 elsperblock = max(1, opwidth / bitwidth)
1762 srcbase = ireg[rs+i/(elsperblock)];
1763 offs = i % elsperblock;
1764 return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
1765
1766 function load_elwidthed(rd, rs, opwidth, unsigned):
1767     destwid = bw(int_csr[rd].elwidth); srcwid = bw(int_csr[rs].elwidth)
1768     rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
1769     rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
1770     ps = get_pred_val(FALSE, rs); # predication on src
1771     pd = get_pred_val(FALSE, rd); # ... AND on dest
1772     for (int i = 0, int j = 0; i < VL && j < VL;):
1773         if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
1774         if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
1775         val = load_memory(rs, imm, i, opwidth)
1776         if unsigned:
1777             val = zero_extend(val, min(opwidth, srcwid))
1778         else:
1779             val = sign_extend(val, min(opwidth, srcwid))
1780         set_polymorphed_reg(rd, destwid, j, val)
1781         if (int_csr[rs].isvec) i++;
1782         if (int_csr[rd].isvec) j++; else break;
1783
1784 Note:
1785
1786 * when comparing against for example the twin-predicated c.mv
1787 pseudo-code, the pattern of independent incrementing of rd and rs
1788 is preserved unchanged.
1789 * just as with the c.mv pseudocode, zeroing is not included and must be
1790 taken into account (TODO).
1791 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1792 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1793 VSCATTER characteristics.
1794 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1795 a destination that is not vectorised (marked as scalar) will
1796 result in the element being fully sign-extended or zero-extended
1797 out to the full register file bitwidth (XLEN). When the source
1798 is also marked as scalar, this is how the compatibility with
1799 standard RV LOAD/STORE is preserved by this algorithm.
1800
1801 ### Example Tables showing LOAD elements
1802
1803 This section contains examples of vectorised LOAD operations, showing
1804 how the two stage process works (three if zero/sign-extension is included).
1805
1806
1807 #### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1808
1809 This is:
1810
1811 * a 64-bit load, with an offset of zero
1812 * with a source-address elwidth of 16-bit
1813 * into a destination-register with an elwidth of 32-bit
1814 * where VL=7
1815 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1816 * RV64, where XLEN=64 is assumed.
1817
1818 First, the memory table: because the
1819 element width is 16 and the operation is LD (64), the 64 bits
1820 loaded from memory are subdivided into groups of **four** elements.
1821 And, with VL being 7 (deliberately, to illustrate that this is reasonable
1822 and possible), the first four are sourced from the offset addresses pointed
1823 to by x5, and the next three from the offset addresses pointed to by
1824 the next contiguous register, x6:
1825
1826 [[!table data="""
1827 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1828 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1829 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1830 """]]
1831
1832 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1833 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1834
1835 [[!table data="""
1836 byte 3 | byte 2 | byte 1 | byte 0 |
1837 0x0 | 0x0 | elem0 ||
1838 0x0 | 0x0 | elem1 ||
1839 0x0 | 0x0 | elem2 ||
1840 0x0 | 0x0 | elem3 ||
1841 0x0 | 0x0 | elem4 ||
1842 0x0 | 0x0 | elem5 ||
1843 0x0 | 0x0 | elem6 ||
1844 """]]
1846
1847 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1848 byte-addressable "memory". That "memory" happens to cover registers
1849 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1850
1851 [[!table data="""
1852 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1853 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1854 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1855 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1856 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1857 """]]
1858
1859 Thus we have data that is loaded from the **addresses** pointed to by
1860 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1861 x8 through to half of x11.
1862 The end result is that elements 0 and 1 end up in x8, with element 1 being
1863 shifted up 32 bits, and so on, until finally element 6 is in the
1864 LSBs of x11.
1865
1866 Note that whilst the memory addressing table is shown left-to-right byte order,
1867 the registers are shown in right-to-left (MSB) order. This does **not**
1868 imply that bit or byte-reversal is carried out: it's just easier to visualise
1869 memory as being contiguous bytes, and emphasises that registers are not
1870 really actually "memory" as such.
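The final element-to-register placement in the example above can be verified with a one-line mapping. This is a worked check under the example's assumptions (32-bit destination elwidth, XLEN=64, destination group starting at x8); the function name is invented.

```python
def dest_slot(elem, dest_elwidth=32, xlen=64, base_reg=8):
    # which destination register, and which bit offset within it,
    # element `elem` occupies when packed contiguously
    per_reg = xlen // dest_elwidth
    return base_reg + elem // per_reg, (elem % per_reg) * dest_elwidth

assert dest_slot(0) == (8, 0)    # elem 0: LSBs of x8
assert dest_slot(1) == (8, 32)   # elem 1: shifted up 32 bits, same register
assert dest_slot(6) == (11, 0)   # elem 6 (last, VL=7): LSBs of x11
```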
1871
1872 ## Why SV bitwidth specification is restricted to 4 entries
1873
1874 The four entries for SV element bitwidths allow only three over-rides:
1875
1876 * 8 bit
1877 * 16 bit
1878 * 32 bit
1879
1880 This would seem inadequate: surely it would be better to have 3 bits or
1881 more, and to allow 64, 128 and some other options besides. The answer is
1882 that it gets too complex, that no RV128 implementation yet exists, and that
1883 RV64's default is 64 bit, so the 4 major element widths are covered anyway.
1884
1885 There is an absolutely crucial aspect of SV here that explicitly
1886 needs spelling out: whether the "vectorised" bit is set in
1887 the register's CSR entry.
1888
1889 If "vectorised" is clear (not set), this indicates that the operation
1890 is "scalar". Under these circumstances, on a destination (RD),
1891 sign-extension and zero-extension, whilst changed to match the
1892 override bitwidth (if set), will overwrite the **full** register entry
1893 (64-bit if RV64).
1894
1895 When vectorised is *set*, this indicates that the operation now treats
1896 **elements** as if they were independent registers, so regardless of
1897 the length, any parts of a given actual register that are not involved
1898 in the operation are **NOT** modified, but are **PRESERVED**.
1899
1900 For example:
1901
1902 * when the vector bit is clear and elwidth set to 16 on the destination
1903 register, operations are truncated to 16 bit and then sign or zero
1904 extended to the *FULL* XLEN register width.
1905 * when the vector bit is set, elwidth is 16 and VL=1 (or other value where
1906 groups of elwidth sized elements do not fill an entire XLEN register),
1907 the "top" bits of the destination register do *NOT* get modified, zero'd
1908 or otherwise overwritten.
1909
1910 SIMD micro-architectures may implement this by using predication on
1911 any elements in a given actual register that are beyond the end of
1912 the multi-element operation.
1913
1914 Other microarchitectures may choose to provide byte-level write-enable
1915 lines on the register file, such that each 64 bit register in an RV64
1916 system requires 8 WE lines. Scalar RV64 operations would require
1917 activation of all 8 lines, whereas SV elwidth-based operations would
1918 activate the required subset of those byte-level write lines.
1919
1920 Example:
1921
1922 * rs1, rs2 and rd are all set to 8-bit
1923 * VL is set to 3
1924 * RV64 architecture is set (UXL=64)
1925 * add operation is carried out
1926 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1927 concatenated with similar add operations on bits 15..8 and 7..0
1928 * bits 24 through 63 **remain as they originally were**.
1929
1930 Example SIMD micro-architectural implementation:
1931
1932 * SIMD architecture works out the nearest round number of elements
1933 that would fit into a full RV64 register (in this case: 8)
1934 * SIMD architecture creates a hidden predicate, binary 0b00000111
1935 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1936 * SIMD architecture goes ahead with the add operation as if it
1937 was a full 8-wide batch of 8 adds
1938 * SIMD architecture passes top 5 elements through the adders
1939 (which are "disabled" due to zero-bit predication)
1940 * SIMD architecture gets the top 5 8-bit elements back unmodified
1941 and stores them in rd.
1942
1943 This requires a read of rd; however, such a read is required anyway in
1944 order to support non-zeroing mode.
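The hidden-predicate trick can be modelled per-lane in a few lines. This is an illustrative sketch of the SIMD example above (VL=3, 8-bit elwidth in a 64-bit register); the names are invented.

```python
VL, LANES = 3, 8                        # 8-bit elwidth in a 64-bit register

hidden_pred = (1 << VL) - 1             # 0b00000111: bottom VL lanes enabled
assert hidden_pred == 0b00000111

def simd_add_bytes(rd_old, rs1, rs2, pred):
    # per-lane 8-bit add; disabled lanes pass rd's old bytes through
    return [(a + b) & 0xFF if pred & (1 << i) else d
            for i, (d, a, b) in enumerate(zip(rd_old, rs1, rs2))]

result = simd_add_bytes([0x99] * LANES, [1, 2, 3, 4, 5, 6, 7, 8],
                        [10] * LANES, hidden_pred)
assert result[:3] == [11, 12, 13]       # lanes 0..2 computed
assert result[3:] == [0x99] * 5         # lanes 3..7 preserved, not zeroed
```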
1945
1946 ## Polymorphic floating-point
1947
1948 Standard scalar RV integer operations base the register width on XLEN,
1949 which may be changed (UXL in USTATUS, and the corresponding MXL and
1950 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1951 arithmetic operations are therefore restricted to an active XLEN bits,
1952 with sign or zero extension to pad out the upper bits when XLEN has
1953 been dynamically set to less than the actual register size.
1954
1955 For scalar floating-point, the active (used / changed) bits are
1956 specified exclusively by the operation: ADD.S specifies an active
1957 32-bits, with the upper bits of the source registers needing to
1958 be all 1s ("NaN-boxed"), and the destination upper bits being
1959 *set* to all 1s (including on LOAD/STOREs).
1960
1961 Where elwidth is set to default (on any source or the destination)
1962 it is obvious that this NaN-boxing behaviour can and should be
1963 preserved. When elwidth is non-default things are less obvious,
1964 so need to be thought through. Here is a normal (scalar) sequence,
1965 assuming an RV64 which supports Quad (128-bit) FLEN:
1966
1967 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1968 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1969 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1970 top 64 MSBs ignored.
1971
1972 Therefore it makes sense to mirror this behaviour when, for example,
1973 elwidth is set to 32. Assume elwidth set to 32 on all source and
1974 destination registers:
1975
1976 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1977 floating-point numbers.
1978 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1979 in bits 0-31 and the second in bits 32-63.
1980 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1981
1982 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1983 of the registers either during the FLD **or** the ADD.D. The reason
1984 is that, effectively, the top 64 MSBs actually represent a completely
1985 independent 64-bit register, so overwriting it is not only gratuitous
1986 but may actually be harmful for a future extension to SV which may
1987 have a way to directly access those top 64 bits.
1988
1989 The decision is therefore **not** to touch the upper parts of floating-point
1990 registers wherever elwidth is set to non-default values, including
1991 when "isvec" is false in a given register's CSR entry. Only when the
1992 elwidth is set to default **and** isvec is false will the standard
1993 RV behaviour be followed, namely that the upper bits be modified.
1994
1995 Ultimately if elwidth is default and isvec false on *all* source
1996 and destination registers, a SimpleV instruction defaults completely
1997 to standard RV scalar behaviour (this holds true for **all** operations,
1998 right across the board).
1999
2000 The nice thing here is that ADD.S, ADD.D and ADD.Q when elwidth are
2001 non-default values are effectively all the same: they all still perform
2002 multiple ADD operations, just at different widths. A future extension
2003 to SimpleV may actually allow ADD.S to access the upper bits of the
2004 register, effectively breaking down a 128-bit register into a bank
2005 of 4 independently-accessible 32-bit registers.
2006
2007 In the meantime, although when e.g. setting VL to 8 it would technically
2008 make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
2009 using ADD.Q may be an easy way to signal to the microarchitecture that
2010 it is to receive a higher VL value. On a superscalar OoO architecture
2011 there may be absolutely no difference; however, simpler SIMD-style
2012 microarchitectures may not necessarily have the infrastructure in
2013 place to know the difference, such that when VL=8 and an ADD.D instruction
2014 is issued, it completes in 2 cycles (or more), where an ADD.Q issued
2015 instead on such simpler microarchitectures
2016 would complete in one.
2017
## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.
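
A behavioural sketch of these rules in Python may help (illustrative
only: `zext` and `poly_add` are made-up helper names, not part of the
specification):

```python
def zext(val, width):
    # "Zero-extension" of an unsigned value is just masking to 'width' bits.
    return val & ((1 << width) - 1)

def poly_add(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    opw = max(rs1_w, rs2_w)               # operation width
    a = zext(rs1_val, rs1_w)              # RS1 zero-extended
    b = zext(rs2_val, rs2_w)              # RS2 zero-extended
    result = (a + b) & ((1 << opw) - 1)   # add @ max(rs1, rs2) bits
    # zero-extend to rd if rd > max(rs1, rs2), otherwise truncate
    return result & ((1 << rd_w) - 1)

# 8-bit 0xFF plus 16-bit 0x01: the op runs at 16 bits, carry preserved
print(poly_add(0xFF, 8, 0x01, 16, 16))    # 256
# both sources 8-bit: the op runs at 8 bits, so the carry is lost
print(poly_add(0xFF, 8, 0x01, 8, 16))     # 0
```

Note that a wider destination alone cannot rescue a carry lost at the
operation width: widening has to happen on the sources.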

### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.

Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W".
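
By contrast with the zero-extending add, the sign-extending behaviour
can be sketched as follows (again illustrative Python with made-up
helper names, not normative):

```python
def sext(val, from_w, to_w):
    # Interpret the low from_w bits as signed, then widen to to_w bits.
    val &= (1 << from_w) - 1
    if val & (1 << (from_w - 1)):          # sign bit set: value is negative
        val -= 1 << from_w
    return val & ((1 << to_w) - 1)

def poly_addw(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    opw = max(rs1_w, rs2_w)
    a = sext(rs1_val, rs1_w, opw)          # sources sign-extended to opw
    b = sext(rs2_val, rs2_w, opw)
    result = (a + b) & ((1 << opw) - 1)    # add @ max(rs1, rs2) bits
    if rd_w > opw:
        return sext(result, opw, rd_w)     # sign-extend up to rd
    return result & ((1 << rd_w) - 1)      # otherwise truncate

# 8-bit -1 plus 16-bit 0: the narrower operand is sign-extended first
print(hex(poly_addw(0xFF, 8, 0x0000, 16, 16)))  # 0xffff
```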

### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits: sign-extend to rd if rd > 32, otherwise truncate.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate
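
A similar illustrative sketch for addiw (the `sext` helper is repeated
so that the fragment stands alone; the names are made up, not part of
the specification):

```python
def sext(val, from_w, to_w):
    # Treat the low from_w bits as signed; represent at to_w bits.
    val &= (1 << from_w) - 1
    if val & (1 << (from_w - 1)):
        val -= 1 << from_w
    return val & ((1 << to_w) - 1)

def poly_addiw(rs1_val, rs1_w, imm12, rd_w):
    opw = max(rs1_w, 12)
    a = rs1_val & ((1 << rs1_w) - 1)        # RS1 @ rs1 bits
    b = sext(imm12, 12, opw)                # immed sign-extended to opw
    result = (a + b) & ((1 << opw) - 1)     # add @ max(rs1, 12) bits
    if rd_w > opw:
        return sext(result, opw, rd_w)      # sign-extend up to rd
    return result & ((1 << rd_w) - 1)       # otherwise truncate

# adding immediate -1 (0xFFF as a 12-bit signed value) to 16-bit 5
print(poly_addiw(5, 16, 0xFFF, 16))  # 4
```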

# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate, i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.
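
The visible difference between the two modes can be demonstrated with a
small Python model of the destination-predicated loop (a sketch only:
register-file, CSR and element-width details are elided, and the
function name is illustrative):

```python
def pred_vec_add(vl, src1, src2, predval, zeroing):
    # Model of a destination-predicated vector-vector add.
    # Elements whose predicate bit is clear are either zeroed
    # (zeroing=True) or left untouched (zeroing=False).
    dest = [None] * vl          # None = "not written"
    for i in range(vl):
        if predval & (1 << i):
            dest[i] = src1[i] + src2[i]
        elif zeroing:
            dest[i] = 0
    return dest

# predicate 0b101: elements 0 and 2 are active
print(pred_vec_add(3, [1, 2, 3], [10, 20, 30], 0b101, False))  # [11, None, 33]
print(pred_vec_add(3, [1, 2, 3], [10, 20, 30], 0b101, True))   # [11, 0, 33]
```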

## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, a clear predicate bit indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
NAND of the source and destination predicates: the result is that
where either the source predicate OR the destination predicate is set to 0,
a zero element will ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.

Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.
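
The pseudo-code above can be reduced to a small executable Python model
(a sketch under stated simplifications: both operands are treated as
vectors, the scalar-destination early exit is omitted, element widths
are ignored, and the function name is illustrative):

```python
def twin_pred_mv(vl, src, ps, zerosrc, pd, zerodst):
    # Twin-predicated element-wise MV of 'src' into a fresh destination.
    # Without zeroing, masked-out elements are skipped (compacting);
    # with zeroing, they produce (source) or receive (dest) a zero.
    dest = [None] * vl              # None = "not written"
    i = j = 0
    while i < vl and j < vl:
        if not zerosrc:             # skip masked-out source elements
            while i < vl and not (ps & (1 << i)):
                i += 1
        if not zerodst:             # skip masked-out dest elements
            while j < vl and not (pd & (1 << j)):
                j += 1
        if i >= vl or j >= vl:
            break
        if pd & (1 << j):
            dest[j] = src[i] if (ps & (1 << i)) else 0
        elif zerodst:
            dest[j] = 0
        i += 1
        j += 1
    return dest

# source zeroing: a clear source bit injects a zero rather than skipping
print(twin_pred_mv(4, [9, 8, 7, 6], 0b1011, True, 0b1111, False))
# [9, 8, 0, 6]
```

With zeroing off on both sides the same call compacts instead, which is
exactly the skip-and-increment behaviour of the two inner `while` loops.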

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15    | 14:12 | 11:10 | 9:8   | 7    | 6:0     |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx  xxxxxxxxxxxxxxxx   | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |

VL/MAXVL/SubVL Block:

| 31-30 | 29:28 | 27:22  | 21:17 - 16 |
| ----- | ----- | ------ | ---------- |
| 0     | SubVL | VLdest | VLEN vlt   |
| 1     | SubVL | VLdest | VLEN       |

Note: this format is very similar to that used in [[sv_prefix_proposal]].

If vlt is 0, VLEN is a 5-bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1, and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VLIW
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2, and so on.
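
The offset-by-one immediate encoding, combined with the MIN(VL, MAXVL)
truncation, can be sketched as follows (illustrative Python; the
function name is made up):

```python
def decode_vl_block_imm(vlen_field, maxvl):
    # vlt=0 case: VLEN is a 5-bit immediate, offset by one
    # (0b00000 => VL=1, 0b00001 => VL=2, ...), then truncated
    # to MIN(VL, MAXVL).
    vl = (vlen_field & 0b11111) + 1
    return min(vl, maxvl)

print(decode_vl_block_imm(0b00000, 8))  # 1
print(decode_vl_block_imm(0b00111, 8))  # 8
print(decode_vl_block_imm(0b11111, 8))  # 8  (truncated to MAXVL)
```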

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.

CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the predicate block format is the full 16-bit format
  (1) or the compact, less expressive 8-bit format (0). In the 8-bit
  format, pplen is multiplied by 2.
* 8-bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16-bit (1) or 8-bit
  (0). In the 8-bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 15 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: the
  total number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* In any RVC or 32-bit opcode, any registers within the VLIW-prefixed
  format *MUST* have the RegCam and PredCam entries applied to the
  operation (and the Vectorisation loop activated).
* P48 and P64 opcodes do **not** take their register or predication
  context from the VLIW Block tables: they do however have VL or SUBVL
  applied (unless VLtyp or svlen are set).
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions within a VLIW block.
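
The prefix fields in the notes above can be unpacked as follows (a
Python sketch: the bit positions come from the prefix table, while the
function name, the returned dictionary and the choice to double *both*
counts via bit 7 are assumptions for illustration):

```python
def decode_vliw_prefix(prefix16):
    # Field layout from the prefix table:
    # | 15    | 14:12 | 11:10 | 9:8   | 7    | 6:0     |
    # | vlset | 16xil | pplen | rplen | mode | 1111111 |
    assert (prefix16 & 0x7F) == 0b1111111, "not a VLIW prefix"
    mode  = (prefix16 >> 7) & 1          # 1 = 16-bit blocks, 0 = 8-bit blocks
    rplen = (prefix16 >> 8) & 0b11       # number of RegCam entries
    pplen = (prefix16 >> 10) & 0b11      # number of PredCam entries
    il    = (prefix16 >> 12) & 0b111     # total bits = 80 + 16*il
    vlset = (prefix16 >> 15) & 1         # VL Block present?
    if mode == 0:                        # compact 8-bit formats: counts x2
        rplen *= 2
        pplen *= 2
    return dict(vlset=vlset, il=il, pplen=pplen, rplen=rplen,
                mode=mode, total_bits=80 + 16 * il)

# vlset=1, il=2 (112 bits total), pplen=1, rplen=1, 16-bit block mode
d = decode_vliw_prefix(0b1_010_01_01_1_1111111)
print(d["total_bits"], d["pplen"], d["rplen"])  # 112 1 1
```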

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires 3, even 4, 32-bit
opcodes: the CSR itself, a LI, and the setting up of the value into the
RS register of the CSR, which, again, requires a LI / LUI to get the
32-bit data into the CSR. To get 64-bit data into the register in order
to put it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4 PredCAM entries (or 4 RegCAM
entries), that is potentially 6 to 8 32-bit instructions, just to
establish the Vector State!

Not only that: even CSRRW on VL and MAXVL requires 64 bits (even more bits
if VL needs to be set to greater than 32). Bear in mind that in SV, both
MAXVL and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16-bit block formats is not
needed, more space is saved by using the 8-bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.

Bear in mind the warning in an earlier section that use of VLtyp or svlen
in a P48 or P64 opcode within a VLIW Group will result in corruption (use)
of the STATE CSR, as the STATE CSR is shared with SVPrefix. To avoid this
situation, the STATE CSR may be copied into a temp register and restored
afterwards.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub-extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.

The srcoffs and destoffs indices in the STATE CSR may be similarly
regarded as another sub-execution context, giving in effect two sets of
nested sub-levels of the RISC-V Program Counter (actually, three including
SUBVL and ssvoffs).

In addition, as the xepcvliw CSRs are relative to the beginning of the
VLIW block, branches MUST be restricted to within (relative to) the block,
i.e. addressing is now restricted to the start (and very short) length
of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump, normal branch and a normal function call may only be taken
by letting the VLIW group end, returning to "normal" standard RV mode,
and then using standard RVC, 32-bit or P48/64-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to implement only SVprefix and not the VLIW instruction
format option. UNIX Platforms **MUST** raise an illegal-instruction
exception on seeing a VLIW opcode, so that traps may emulate the format.

It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX Platforms
*MUST* raise an illegal-instruction exception on implementations that do
not support VL or SUBVL.

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However, going
below the mandatory limits set in the RV standard will result in
non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option", it is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the recommendation
is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.

---

Info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

---

> And what about instructions like JALR?

Answer: they're not vectorised, so not a problem.

---

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove the RegCam and PredCam CSRs, and just use SVprefix
and the VLIW format.

---

Could the 8-bit Register VLIW format use regnum<<1 instead, only
accessing regs 0 to 64?

---

TODO: evaluate strncpy and strlen.
<https://groups.google.com/forum/m/#!msg/comp.arch/bGBeaNjAKvc/_vbqyxTUAQAJ>

strncpy:

    mv a3, a0                # Copy dst
    loop:
        setvli x0, a2, vint8 # Vectors of bytes.
        vlbff.v v1, (a1)     # Get src bytes
        vseq.vi v0, v1, 0    # Flag zero bytes
        vmfirst a4, v0       # Zero found?
        vmsif.v v0, v0       # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t # Write out bytes
        bgez a4, exit        # Done
        csrr t1, vl          # Get number of bytes fetched
        add a1, a1, t1       # Bump src pointer
        sub a2, a2, t1       # Decrement count.
        add a3, a3, t1       # Bump dst pointer
        bnez a2, loop        # Anymore?
    exit:
        ret

strlen:

    mv a3, a0                # Save start
    loop:
        setvli a1, x0, vint8 # byte vec, x0 (Zero reg) => use max hardware len
        vldbff.v v1, (a3)    # Get bytes
        csrr a1, vl          # Get bytes actually read e.g. if fault
        vseq.vi v0, v1, 0    # Set v0[i] where v1[i] = 0
        add a3, a3, a1       # Bump pointer
        vmfirst a2, v0       # Find first set bit in mask, returns -1 if none
        bltz a2, loop        # Not found?
        add a0, a0, a1       # Sum start + bump
        add a3, a3, a2       # Add index of zero byte
        sub a0, a3, a0       # Subtract start address+bump
        ret