1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
5 * Last edited: 21 jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, without adding any new opcodes, thus allowing
28 the implementor to focus on adding hardware where it is needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size (by extending the effectiveness of RV opcodes, RVC in particular) and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited for implementation in RV32E, and also in situations
where the implementor wishes to focus on certain aspects of SV without
investing unnecessary time and resources in silicon, whilst still
conforming strictly to the API. A good candidate for punting to software
would be, for example, the polymorphic element-width capability.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths overridden on any src or dest register.
83 * A "Vector Length" CSR is set, indicating the span of any future
84 "parallel" operations.
85 * If any operation (a **scalar** standard RV opcode) uses a register
86 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
87 is activated, of length VL, that effectively issues **multiple**
88 identical instructions using contiguous sequentially-incrementing
89 register numbers, based on the "tags".
90 * **Whether they be executed sequentially or in parallel or a
91 mixture of both or punted to software-emulation in a trap handler
92 is entirely up to the implementor**.
93
94 In this way an entire scalar algorithm may be vectorised with
95 the minimum of modification to the hardware and to compiler toolchains.
96
97 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
98 on hidden context that augments *scalar* RISCV instructions.
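To illustrate the basic operation, here is a minimal, non-normative Python model (all names invented for illustration) of the macro-unrolling loop: a scalar ADD whose registers are tagged is re-issued VL times with contiguously incrementing register numbers, while untagged operands stay scalar.

```python
# Hypothetical sketch of SV's hidden-context "macro-unrolling loop".
# Names (is_tagged, execute_add) are invented; this is not normative.

VL = 4  # Vector Length CSR, set beforehand

def is_tagged(regnum, tags):
    """True if the register was marked 'vectorised' by a prefix."""
    return regnum in tags

def execute_add(rd, rs1, rs2, regfile, tags):
    if not (is_tagged(rd, tags) or is_tagged(rs1, tags) or is_tagged(rs2, tags)):
        regfile[rd] = regfile[rs1] + regfile[rs2]   # plain scalar behaviour
        return
    for i in range(VL):   # hardware loop: the PC does not advance until done
        # a register number only steps if that register was tagged as a vector
        d  = rd  + i if is_tagged(rd, tags)  else rd
        s1 = rs1 + i if is_tagged(rs1, tags) else rs1
        s2 = rs2 + i if is_tagged(rs2, tags) else rs2
        regfile[d] = regfile[s1] + regfile[s2]

regs = list(range(32))
execute_add(1, 8, 16, regs, tags={1, 8, 16})   # vector-vector add, VL=4
```

Note that mixing tagged and untagged operands gives vector-scalar operation for free: an untagged source simply repeats on every iteration.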
99
100 # CSRs <a name="csrs"></a>
101
102 * An optional "reshaping" CSR key-value table which remaps from a 1D
103 linear shape to 2D or 3D, including full transposition.
104
105 There are five additional CSRs, available in any privilege level:
106
107 * MVL (the Maximum Vector Length)
108 * VL (which has different characteristics from standard CSRs)
109 * SUBVL (effectively a kind of SIMD)
110 * STATE (containing copies of MVL, VL and SUBVL as well as context information)
111 * PCVLIW (the current operation being executed within a VLIW Group)
112
113 For User Mode there are the following CSRs:
114
115 * uePCVLIW (a copy of the sub-execution Program Counter, that is relative
116 to the start of the current VLIW Group, set on a trap).
117 * ueSTATE (useful for saving and restoring during context switch,
118 and for providing fast transitions)
119
120 There are also two additional CSRs for Supervisor-Mode:
121
122 * sePCVLIW
123 * seSTATE
124
125 And likewise for M-Mode:
126
127 * mePCVLIW
128 * meSTATE
129
130 The u/m/s CSRs are treated and handled exactly like their (x)epc equivalents. On entry to a privilege level, the contents of its (x)eSTATE and (x)ePCVLIW CSRs are copied into STATE and PCVLIW respectively, and on exit from a priv level the STATE and PCVLIW CSRs are copied to the exited priv level's corresponding CSRs.
131
132 Thus for example, a User Mode trap will end up swapping STATE and ueSTATE (on both entry and exit), allowing User Mode traps to have their own Vectorisation Context set up, separated from and unaffected by normal user applications.
133
134 Likewise, Supervisor Mode may perform context-switches, safe in the knowledge that its Vectorisation State is unaffected by User Mode.
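The entry/exit behaviour can be modelled as a simple swap. This is an illustrative sketch only (the CSR names follow the lists above; the swap mechanics are inferred from the description):

```python
# Illustrative model of the STATE <-> (x)eSTATE swap on privilege-level
# entry and exit.  Values are arbitrary placeholders.

csr = {"STATE": 0x111, "ueSTATE": 0x222, "seSTATE": 0x333, "meSTATE": 0x444}

def trap_entry(priv):   # priv in {"ue", "se", "me"}
    key = priv + "STATE"
    # the trap handler's saved context becomes active; the interrupted
    # context is preserved in the (x)eSTATE CSR
    csr[key], csr["STATE"] = csr["STATE"], csr[key]

def trap_exit(priv):
    key = priv + "STATE"
    csr["STATE"], csr[key] = csr[key], csr["STATE"]   # swap back on exit
```

After a matched entry/exit pair the interrupted context is restored exactly, which is what allows trap handlers to keep their own Vectorisation Context.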
135
136 The access pattern for these groups of CSRs in each mode follows the
137 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
138
* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing the M-Mode CSRs is transparently
  identical to changing the S-Mode CSRs. Accessing and changing the
  U-Mode CSRs is permitted.
* In U-Mode, accessing and changing the S-Mode and U-Mode CSRs
  is prohibited.
146
147 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
148 M-Mode MVL, the M-Mode STATE and so on that influences the processor
149 behaviour. Likewise for S-Mode, and likewise for U-Mode.
150
151 This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
152 up, for context-switching to take place, and, on return back to the higher
153 privileged mode, the CSRs of that mode will be exactly as they were.
154 Thus, it becomes possible for example to set up CSRs suited best to aiding
155 and assisting low-latency fast context-switching *once and only once*
156 (for example at boot time), without the need for re-initialising the
157 CSRs needed to do so.
158
159 Another interesting side effect of separate S Mode CSRs is that Vectorised
160 saving of the entire register file to the stack is a single instruction
161 (accidental provision of LOAD-MULTI semantics). If the SVPrefix P64-LD-type format is used, LOAD-MULTI may even be done with a single standalone 64 bit opcode (P64 may set up both VL and MVL from an immediate field). It can even be predicated,
162 which opens up some very interesting possibilities.
163
164 The (x)EPCVLIW CSRs must be treated exactly like their corresponding (x)epc
165 equivalents. See VLIW section for details.
166
167 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
168
169 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
170 is variable length and may be dynamically set. MVL is
171 however limited to the regfile bitwidth XLEN (1-32 for RV32,
172 1-64 for RV64 and so on).
173
174 The reason for setting this limit is so that predication registers, when
175 marked as such, may fit into a single register as opposed to fanning out
176 over several registers. This keeps the hardware implementation a little simpler.
177
178 The other important factor to note is that the actual MVL is internally
179 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
180 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
raise an exception. This is expressed more clearly in the "pseudocode"
182 section, where there are subtle differences between CSRRW and CSRRWI.
183
184 ## Vector Length (VL) <a name="vl" />
185
186 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
187 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
188
189 VL = rd = MIN(vlen, MVL)
190
191 where 1 <= MVL <= XLEN
192
193 However just like MVL it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.
195
196 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
197 to switch the entire bank of registers using a single instruction (see
198 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
199 is down to the fact that predication bits fit into a single register of
200 length XLEN bits.
201
202 The second and most important change is that, within the limits set by
203 MVL, the value passed in **must** be set in VL (and in the
204 destination register).
205
206 This has implication for the microarchitecture, as VL is required to be
207 set (limits from MVL notwithstanding) to the actual value
208 requested. RVV has the option to set VL to an arbitrary value that suits
209 the conditions and the micro-architecture: SV does *not* permit this.
210
211 The reason is so that if SV is to be used for a context-switch or as a
212 substitute for LOAD/STORE-Multiple, the operation can be done with only
213 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
214 single LD/ST operation). If VL does *not* get set to the register file
215 length when VSETVL is called, then a software-loop would be needed.
216 To avoid this need, VL *must* be set to exactly what is requested
217 (limits notwithstanding).
218
219 Therefore, in turn, unlike RVV, implementors *must* provide
220 pseudo-parallelism (using sequential loops in hardware) if actual
221 hardware-parallelism in the ALUs is not deployed. A hybrid is also
222 permitted (as used in Broadcom's VideoCore-IV) however this must be
223 *entirely* transparent to the ISA.
224
225 The third change is that VSETVL is implemented as a CSR, where the
226 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
227 the *new* value in the destination register, **not** the old value.
228 Where context-load/save is to be implemented in the usual fashion
229 by using a single CSRRW instruction to obtain the old value, the
230 *secondary* CSR must be used (STATE). This CSR by contrast behaves
231 exactly as standard CSRs, and contains more than just VL.
232
233 One interesting side-effect of using CSRRWI to set VL is that this
234 may be done with a single instruction, useful particularly for a
235 context-load/save. There are however limitations: CSRWI's immediate
236 is limited to 0-31 (representing VL=1-32).
237
Note that when VL is set to 1, vector operations cease (though not
subvector operations: disabling those requires setting SUBVL=1): the
hardware loop is reduced to a single element, i.e. to scalar operation.
This is in effect the default, normal operating mode. However it is
important to appreciate that this does **not** result in the Register
table or SUBVL being disabled. Only when the Register table is empty
(P48/64 prefix fields notwithstanding) would SV have no effect.
247
248 ## SUBVL - Sub Vector Length
249
This is a "group by quantity" that effectively asks each iteration of the hardware loop to load SUBVL elements of width elwidth at a time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1 operation issued, SUBVL operations are issued.

Another way to view SUBVL is that each element in the VL-length vector is now SUBVL times elwidth bits in length and comprises SUBVL discrete sub-operations: in effect, an inner SUBVL for-loop within a VL for-loop, with the sub-element index incremented in the innermost loop. This is best illustrated in the (simplified) pseudocode example, later.
260
The primary use case for SUBVL is 3D FP Vectors. A Vector of 3D coordinates X,Y,Z, for example, may be loaded, multiplied, then stored, per VL element iteration, rather than having to set VL three times larger.
262
263 Legal values are 1, 2, 3 and 4 (and the STATE CSR must hold the 2 bit values 0b00 thru 0b11 to represent them).
264
265 Setting this CSR to 0 must raise an exception. Setting it to a value
266 greater than 4 likewise.
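A minimal sketch of that VL x SUBVL nesting (illustrative only: predication and element-width handling omitted, register numbers invented):

```python
# Simplified, non-normative model of the VL x SUBVL nested hardware loop:
# each VL iteration processes one whole SUBVL group ("sub-vector").
# Predication, which applies per group rather than per sub-element, is
# omitted here.

VL = 4      # number of elements (groups)
SUBVL = 3   # sub-elements per group, e.g. X,Y,Z of a 3D coordinate

def subvl_vector_add(rd, rs1, rs2, regfile):
    for i in range(VL):            # outer hardware loop
        for j in range(SUBVL):     # inner sub-vector loop
            offs = i * SUBVL + j   # contiguous register numbering
            regfile[rd + offs] = regfile[rs1 + offs] + regfile[rs2 + offs]

regs = [0] * 64
regs[16:16+12] = [1] * 12   # four 3D points at x16..x27
regs[32:32+12] = [2] * 12   # four 3D points at x32..x43
subvl_vector_add(4, 16, 32, regs)   # results land in x4..x15
```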
267
268 The main effect of SUBVL is that predication bits are applied per **group**,
269 rather than by individual element.
270
271 This saves a not insignificant number of instructions when handling 3D
272 vectors, as otherwise a much longer predicate mask would have to be set
273 up with regularly-repeated bit patterns.
274
275 See SUBVL Pseudocode illustration for details.
276
277 ## STATE
278
279 This is a standard CSR that contains sufficient information for a
280 full context save/restore. It contains (and permits setting of):
281
282 * MVL
283 * VL
284 * the destination element offset of the current parallel instruction
285 being executed
286 * and, for twin-predication, the source element offset as well.
287 * SUBVL
288 * the subvector destination element offset of the current parallel instruction
289 being executed
290 * and, for twin-predication, the subvector source element offset as well.
291
292 Interestingly STATE may hypothetically also be modified to make the
293 immediately-following instruction to skip a certain number of elements,
294 by playing with destoffs and srcoffs
295 (and the subvector offsets as well)
296
297 Setting destoffs and srcoffs is realistically intended for saving state
298 so that exceptions (page faults in particular) may be serviced and the
299 hardware-loop that was being executed at the time of the trap, from
300 user-mode (or Supervisor-mode), may be returned to and continued from exactly
301 where it left off. The reason why this works is because setting
302 User-Mode STATE will not change (not be used) in M-Mode or S-Mode
303 (and is entirely why M-Mode and S-Mode have their own STATE CSRs, meSTATE and seSTATE).
304
305 The format of the STATE CSR is as follows:
306
| (30..29) | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | -------- | ------- | ------- |
| dsvoffs  | ssvoffs  | subvl    | destoffs | srcoffs  | vl      | maxvl   |
310
311 When setting this CSR, the following characteristics will be enforced:
312
313 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
314 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values, so no truncation is needed
316 * **srcoffs** will be truncated to be within the range 0 to VL-1
317 * **destoffs** will be truncated to be within the range 0 to VL-1
318 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
319 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
320
321 NOTE: if the following instruction is not a twin predicated instruction, and destoffs or dsvoffs has been set to non-zero, subsequent execution behaviour is undefined. **USE WITH CARE**.
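Under the layout and minus-one encoding described above, packing and unpacking STATE might look like this (a non-normative sketch; the truncation and legality checks listed above are omitted):

```python
# Non-normative sketch of STATE CSR field packing, following the bit
# layout table above.  VL, MVL and SUBVL are stored offset-by-one.

def pack_state(mvl, vl, subvl, srcoffs, destoffs, ssvoffs, dsvoffs):
    return ((mvl - 1)         |      # bits  5..0 : maxvl - 1
            (vl - 1)    << 6  |      # bits 11..6 : vl - 1
            srcoffs     << 12 |      # bits 17..12
            destoffs    << 18 |      # bits 23..18
            (subvl - 1) << 24 |      # bits 26..24: subvl - 1
            ssvoffs     << 27 |      # bits 28..27
            dsvoffs     << 29)       # bits 30..29

def unpack_state(state):
    return ((state       & 0x3f) + 1,   # MVL
            (state >> 6  & 0x3f) + 1,   # VL
            (state >> 24 & 0x7) + 1,    # SUBVL
            state >> 12 & 0x3f,         # srcoffs
            state >> 18 & 0x3f,         # destoffs
            state >> 27 & 0x3,          # ssvoffs
            state >> 29 & 0x3)          # dsvoffs
```

The minus-one storage is what lets MVL=64 (on RV64) fit a 6-bit field, at the cost of making VL=0 and MVL=0 unrepresentable, hence the exceptions on attempts to set them to zero.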
322
323 ### Hardware rules for when to increment STATE offsets
324
325 The offsets inside STATE are like the indices in a loop, except in hardware. They are also partially (conceptually) similar to a "sub-execution Program Counter". As such, and to allow proper context switching and to define correct exception behaviour, the following rules must be observed:
326
327 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
328 * Each instruction that contains a "tagged" register shall start execution at the *current* value of srcoffs (and destoffs in the case of twin predication)
329 * Unpredicated bits (in nonzeroing mode) shall cause the element operation to skip, incrementing the srcoffs (or destoffs)
330 * On execution of an element operation, Exceptions shall **NOT** cause srcoffs or destoffs to increment.
331 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs = VL-1 after the last element is executed), both srcoffs and destoffs shall be reset to zero.
332
333 This latter is why srcoffs and destoffs may be stored as values from 0 to XLEN-1 in the STATE CSR, because as loop indices they refer to elements. srcoffs and destoffs never need to be set to VL: their maximum operating values are limited to 0 to VL-1.
334
335 The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
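These rules can be modelled as follows (illustrative only: names invented, single predicated destination, no twin predication or SUBVL):

```python
# Illustrative model of the STATE offset rules: execution resumes from
# the saved offset, masked-out elements skip but still step the offset,
# a trapping element does NOT step it, and completion of the full loop
# resets it to zero.

class State:
    def __init__(self, vl):
        self.VL = vl
        self.destoffs = 0

def exec_vector_op(state, op, predicate, traps=lambda i: False):
    i = state.destoffs                  # resume from the saved offset
    while i < state.VL:
        if predicate & (1 << i):
            if traps(i):
                state.destoffs = i      # save offset; do NOT increment
                raise RuntimeError("trap at element %d" % i)
            op(i)
        i += 1                          # skipped elements still step
    state.destoffs = 0                  # full loop done: reset to zero

# resume after a simulated page fault at element 2:
st, done, faulted = State(4), [], []
def fault_once(i):
    if i == 2 and not faulted:
        faulted.append(i)
        return True
    return False
try:
    exec_vector_op(st, done.append, 0b1111, fault_once)
except RuntimeError:
    pass                                # "service" the fault here
exec_vector_op(st, done.append, 0b1111, fault_once)
```

After the fault is serviced, the second call picks up at element 2 exactly where the first left off, which is the whole point of storing the offsets in STATE.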
336
337 ## MVL and VL Pseudocode
338
339 The pseudo-code for get and set of VL and MVL use the following internal
340 functions as follows:
341
    set_mvl_csr(value, rd):
        regs[rd] = STATE.MVL
        STATE.MVL = MIN(value, STATE.MVL)

    get_mvl_csr(rd):
        regs[rd] = STATE.MVL
348
349 set_vl_csr(value, rd):
350 STATE.VL = MIN(value, STATE.MVL)
351 regs[rd] = STATE.VL # yes returning the new value NOT the old CSR
352 return STATE.VL
353
354 get_vl_csr(rd):
355 regs[rd] = STATE.VL
356 return STATE.VL
357
358 Note that where setting MVL behaves as a normal CSR (returns the old
359 value), unlike standard CSR behaviour, setting VL will return the **new**
360 value of VL **not** the old one.
361
362 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
363 maximise the effectiveness, an immediate of 0 is used to set VL=1,
364 an immediate of 1 is used to set VL=2 and so on:
365
366 CSRRWI_Set_MVL(value):
367 set_mvl_csr(value+1, x0)
368
369 CSRRWI_Set_VL(value):
370 set_vl_csr(value+1, x0)
371
372 However for CSRRW the following pseudocode is used for MVL and VL,
373 where setting the value to zero will cause an exception to be raised.
374 The reason is that if VL or MVL are set to zero, the STATE CSR is
375 not capable of storing that value.
376
377 CSRRW_Set_MVL(rs1, rd):
378 value = regs[rs1]
379 if value == 0 or value > XLEN:
380 raise Exception
381 set_mvl_csr(value, rd)
382
383 CSRRW_Set_VL(rs1, rd):
384 value = regs[rs1]
385 if value == 0 or value > XLEN:
386 raise Exception
387 set_vl_csr(value, rd)
388
389 In this way, when CSRRW is utilised with a loop variable, the value
390 that goes into VL (and into the destination register) may be used
391 in an instruction-minimal fashion:
392
393 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
394 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
395 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
396 j zerotest # in case loop counter a0 already 0
397 loop:
398 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
399 ld a3, a1 # load 4 registers a3-6 from x
400 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
401 ld a7, a2 # load 4 registers a7-10 from y
402 add a1, a1, t1 # increment pointer to x by vl*8
403 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
404 sub a0, a0, t0 # n -= vl (t0)
405 st a7, a2 # store 4 registers a7-10 to y
406 add a2, a2, t1 # increment pointer to y by vl*8
407 zerotest:
408 bnez a0, loop # repeat if n != 0
409
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
413
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)             # rd receives the old STATE value
        set_mvl_csr(value[11:6]+1, x0)
        set_vl_csr(value[5:0]+1, x0)
        STATE.destoffs = value[23:18]
        STATE.srcoffs = value[17:12]
421
422 get_state_csr(rd):
423 regs[rd] = (STATE.MVL-1) | (STATE.VL-1)<<6 | (STATE.srcoffs)<<12 |
424 (STATE.destoffs)<<18
425 return regs[rd]
426
427 In both cases, whilst CSR read of VL and MVL return the exact values
428 of VL and MVL respectively, reading and writing the STATE CSR returns
429 those values **minus one**. This is absolutely critical to implement
430 if the STATE CSR is to be used for fast context-switching.
431
432 ## VL, MVL and SUBVL instruction aliases
433
434 This table contains pseudo-assembly instruction aliases. Note the subtraction of 1 from the CSRRWI pseudo variants, to compensate for the reduced range of the 5 bit immediate.
435
436 | alias | CSR |
437 | - | - |
438 | SETVL rd, rs | CSRRW VL, rd, rs |
439 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
440 | GETVL rd | CSRRW VL, rd, x0 |
441 | SETMVL rd, rs | CSRRW MVL, rd, rs |
442 | SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
443 | GETMVL rd | CSRRW MVL, rd, x0 |
444
445 Note: CSRRC and other bitsetting may still be used, they are however not particularly useful (very obscure).
446
447 ## Register key-value (CAM) table <a name="regcsrtable" />
448
449 *NOTE: in prior versions of SV, this table used to be writable and
450 accessible via CSRs. It is now stored in the VLIW instruction format,
451 and entries may be overridden temporarily by the SVPrefix P48/64 format*
452
453 The purpose of the Register table is three-fold:
454
455 * To mark integer and floating-point registers as requiring "redirection"
456 if it is ever used as a source or destination in any given operation.
457 This involves a level of indirection through a 5-to-7-bit lookup table,
458 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
459 access up to **128** registers.
460 * To indicate whether, after redirection through the lookup table, the
461 register is a vector (or remains a scalar).
462 * To over-ride the implicit or explicit bitwidth that the operation would
463 normally give the register.
464
Note: clearly, if an RVC operation uses a 3-bit register specifier (x8-x15) and the Register table only contains entries that refer to registers outside that range (x1-x7 or x16-x31), such operations will *never* activate the VL hardware loop!

If however the (16 bit) Register table does contain such an entry (x8-x15, or x2 in the case of LWSP), that src or dest reg may be redirected anywhere to the *full* 128 register range. Thus, RVC becomes far more powerful and has many more opportunities to reduce code size than in Standard RV32/RV64 executables.
468
469 16 bit format:
470
471 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
472 | ------ | | - | - | - | ------ | ------- |
473 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
474 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
475 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
476 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
477
478 8 bit format:
479
480 | RegCAM | | 7 | (6..5) | (4..0) |
481 | ------ | | - | ------ | ------- |
482 | 0 | | i/f | vew0 | regnum |
483
484 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
485 to integer registers; 0 indicates that it is relevant to floating-point
486 registers.
487
488 The 8 bit format is used for a much more compact expression. "isvec"
489 is implicit and, similar to [[sv-prefix-proposal]], the target vector
490 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
491 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
492 optionally set "scalar" mode.
493
494 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
495 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
496 get the actual (7 bit) register number to use, there is not enough space
497 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
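Decoding an 8-bit Register-table entry per the layout above might be sketched as follows (field names follow the table; this is illustrative, not normative):

```python
# Sketch of decoding the compact 8-bit Register-table entry format.
# "isvec" is implicit, and the target vector register is regnum<<2.

def decode_regcam_8bit(entry):
    regnum = entry & 0x1f          # bits 4..0: register named in the opcode
    vew    = (entry >> 5) & 0x3    # bits 6..5: element-width override
    is_int = (entry >> 7) & 0x1    # bit 7: 1 = integer, 0 = FP regfile
    return {
        "is_int": bool(is_int),
        "vew": vew,
        "key": regnum,             # the 5-bit register key
        "isvec": True,             # implicit in the 8-bit format
        "regidx": regnum << 2,     # implicit target in the 128-reg file
    }

# e.g. an entry tagging integer register x5:
e = decode_regcam_8bit(0b1_00_00101)
assert e["regidx"] == 20           # x5 redirects to x20
```

The `regnum << 2` shift means the 8-bit format can only target every fourth register; the 16-bit format must be used when an arbitrary 7-bit regidx is needed.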
498
499 vew has the following meanings, indicating that the instruction's
500 operand size is "over-ridden" in a polymorphic fashion:
501
502 | vew | bitwidth |
503 | --- | ------------------- |
504 | 00 | default (XLEN/FLEN) |
505 | 01 | 8 bit |
506 | 10 | 16 bit |
507 | 11 | 32 bit |
508
509 As the above table is a CAM (key-value store) it may be appropriate
510 (faster, implementation-wise) to expand it as follows:
511
512 struct vectorised fp_vec[32], int_vec[32];
513
514 for (i = 0; i < 16; i++) // 16 CSRs?
515 tb = int_vec if CSRvec[i].type == 0 else fp_vec
516 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
517 tb[idx].elwidth = CSRvec[i].elwidth
518 tb[idx].regidx = CSRvec[i].regidx // indirection
519 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
520 tb[idx].packed = CSRvec[i].packed // SIMD or not
521
522 ## Predication Table <a name="predication_csr_table"></a>
523
524 *NOTE: in prior versions of SV, this table used to be writable and
525 accessible via CSRs. It is now stored in the VLIW instruction format,
526 and entries may be overridden by the SVPrefix format*
527
528 The Predication Table is a key-value store indicating whether, if a
529 given destination register (integer or floating-point) is referred to
530 in an instruction, it is to be predicated. Like the Register table, it
531 is an indirect lookup that allows the RV opcodes to not need modification.
532
533 It is particularly important to note
534 that the *actual* register used can be *different* from the one that is
535 in the instruction, due to the redirection through the lookup table.
536
537 * regidx is the register that in combination with the
538 i/f flag, if that integer or floating-point register is referred to
539 in a (standard RV) instruction
540 results in the lookup table being referenced to find the predication
541 mask to use for this operation.
542 * predidx is the
543 *actual* (full, 7 bit) register to be used for the predication mask.
* inv indicates that the predication mask bits are to be inverted
  prior to use *without* actually modifying the contents of the
  register from which those bits originated.
547 * zeroing is either 1 or 0, and if set to 1, the operation must
548 place zeros in any element position where the predication mask is
549 set to zero. If zeroing is set to 0, unpredicated elements *must*
550 be left alone. Some microarchitectures may choose to interpret
551 this as skipping the operation entirely. Others which wish to
552 stick more closely to a SIMD architecture may choose instead to
553 interpret unpredicated elements as an internal "copy element"
554 operation (which would be necessary in SIMD microarchitectures
555 that perform register-renaming)
556
557 16 bit format:
558
559 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
560 | ----- | - | - | - | - | ------- | ------- |
561 | 0 | predkey | zero0 | inv0 | i/f | regidx | rsrvd |
562 | 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
563 | ... | predkey | ..... | .... | i/f | ....... | ....... |
564 | 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
565
566
567 8 bit format:
568
569 | PrCSR | 7 | 6 | 5 | (4..0) |
570 | ----- | - | - | - | ------- |
571 | 0 | zero0 | inv0 | i/f | regnum |
572
The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
described above.
578
579 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
580 it will be faster to turn the table around (maintain topologically
581 equivalent state):
582
583 struct pred {
584 bool zero;
585 bool inv;
586 bool enabled;
587 int predidx; // redirection: actual int register to use
588 }
589
590 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
591 struct pred int_pred_reg[32]; // 64 in future (bank=1)
592
593 for (i = 0; i < 16; i++)
594 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
595 idx = CSRpred[i].regidx
596 tb[idx].zero = CSRpred[i].zero
597 tb[idx].inv = CSRpred[i].inv
598 tb[idx].predidx = CSRpred[i].predidx
599 tb[idx].enabled = true
600
601 So when an operation is to be predicated, it is the internal state that
602 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
603 pseudo-code for operations is given, where p is the explicit (direct)
604 reference to the predication register to be used:
605
606 for (int i=0; i<vl; ++i)
607 if ([!]preg[p][i])
608 (d ? vreg[rd][i] : sreg[rd]) =
609 iop(s1 ? vreg[rs1][i] : sreg[rs1],
610 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
611
612 This instead becomes an *indirect* reference using the *internal* state
613 table generated from the Predication CSR key-value store, which is used
614 as follows.
615
616 if type(iop) == INT:
617 preg = int_pred_reg[rd]
618 else:
619 preg = fp_pred_reg[rd]
620
    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) != INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
629
630 Note:
631
632 * d, s1 and s2 are booleans indicating whether destination,
633 source1 and source2 are vector or scalar
634 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
635 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
636 register-level redirection (from the Register table) if they are
637 vectors.
638
639 If written as a function, obtaining the predication mask (and whether
640 zeroing takes place) may be done as follows:
641
    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate // invert ALL bits
        return predicate, tb[reg].zero
654
655 Note here, critically, that **only** if the register is marked
656 in its **register** table entry as being "active" does the testing
657 proceed further to check if the **predicate** table entry is
658 also active.
659
Note also that this is in direct contrast to branch operations
for the storage of comparisons: in those specific circumstances
the requirement for there to be an active *register* entry
is removed.
664
665 ## REMAP CSR <a name="remap" />
666
667 (Note: both the REMAP and SHAPE sections are best read after the
668 rest of the document has been read)
669
670 There is one 32-bit CSR which may be used to indicate which registers,
671 if used in any operation, must be "reshaped" (re-mapped) from a linear
672 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
673 access to elements within a register.
674
675 The 32-bit REMAP CSR may reshape up to 3 registers:
676
677 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
678 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
679 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
680
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value), and are consequently 7 bits wide.
A value of zero would refer to x0; as reshaping x0 is pointless, zero is
instead used to indicate "disabled".
shape0-2 each select one of the three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
687
It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
692
693 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
694
695 (Note: both the REMAP and SHAPE sections are best read after the
696 rest of the document has been read)
697
698 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
699 which have the same format. When each SHAPE CSR is set entirely to zeros,
700 remapping is disabled: the register's elements are a linear (1D) vector.
701
702 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
703 | ------- | -- | ------- | -- | ------- | -- | ------- |
704 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
705
706 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
707 is added to the element index during the loop calculation.
708
709 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
710 that the array dimensionality for that dimension is 1. A value of xdimsz=2
711 would indicate that in the first dimension there are 3 elements in the
712 array. The format of the array is therefore as follows:
713
714 array[xdim+1][ydim+1][zdim+1]
715
716 However whilst illustrative of the dimensionality, that does not take the
717 "permute" setting into account. "permute" may be any one of six values
718 (0-5, with values of 6 and 7 being reserved, and not legal). The table
719 below shows how the permutation dimensionality order works:
720
721 | permute | order | array format |
722 | ------- | ----- | ------------------------ |
723 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
724 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
725 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
726 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
727 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
728 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
729
730 In other words, the "permute" option changes the order in which
731 nested for-loops over the array would be done. The algorithm below
732 shows this more clearly, and may be executed as a python program:
733
    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]   # starting indices
    order = [1, 0, 2]  # experiment with different permutations, here
    offs = 0           # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if idxs[order[i]] != lims[order[i]]:
                break
            print()
            idxs[order[i]] = 0
753
Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; where REMAP indicates to do so, the element index
is instead run through the above algorithm to work out the **actual**
element index. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:
762
    function op_add(rd, rs1, rs2)  # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
775
776 By changing remappings, 2D matrices may be transposed "in-place" for one
777 operation, followed by setting a different permutation order without
778 having to move the values in the registers to or from memory. Also,
779 the reason for having REMAP separate from the three SHAPE CSRs is so
780 that in a chain of matrix multiplications and additions, for example,
781 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
782 changed to target different registers.
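The in-place transposition can be seen by packaging the remap algorithm above as a function (the function name and signature here are illustrative, not part of the specification). A 2x3 row-major matrix read with order (1, 0, 2) walks column-first, i.e. is accessed transposed without moving any data:

```python
# The SHAPE/REMAP index algorithm as a reusable function (names are
# illustrative). Returns the remapped element index for each loop step.
def remap_indices(xdim, ydim, zdim=1, order=(0, 1, 2), offs=0):
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

print(remap_indices(2, 3))                   # linear: [0, 1, 2, 3, 4, 5]
print(remap_indices(2, 3, order=(1, 0, 2)))  # transposed: [0, 2, 4, 1, 3, 5]
```

Switching the permute order between two matrix operations therefore re-reads the same six registers in transposed order, with no data movement at all.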
783
784 Note that:
785
786 * Over-running the register file clearly has to be detected and
787 an illegal instruction exception thrown
788 * When non-default elwidths are set, the exact same algorithm still
789 applies (i.e. it offsets elements *within* registers rather than
790 entire registers).
791 * If permute option 000 is utilised, the actual order of the
792 reindexing does not change!
793 * If two or more dimensions are set to zero, the actual order does not change!
794 * The above algorithm is pseudo-code **only**. Actual implementations
795 will need to take into account the fact that the element for-looping
796 must be **re-entrant**, due to the possibility of exceptions occurring.
797 See MSTATE CSR, which records the current element index.
798 * Twin-predicated operations require **two** separate and distinct
799 element offsets. The above pseudo-code algorithm will be applied
800 separately and independently to each, should each of the two
801 operands be remapped. *This even includes C.LDSP* and other operations
802 in that category, where in that case it will be the **offset** that is
803 remapped (see Compressed Stack LOAD/STORE section).
804 * Offset is especially useful, on its own, for accessing elements
805 within the middle of a register. Without offsets, it is necessary
806 to either use a predicated MV, skipping the first elements, or
807 performing a LOAD/STORE cycle to memory.
808 With offsets, the data does not have to be moved.
809 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
810 less than MVL is **perfectly legal**, albeit very obscure. It permits
811 entries to be regularly presented to operands **more than once**, thus
812 allowing the same underlying registers to act as an accumulator of
813 multiple vector or matrix operations, for example.
814
815 Clearly here some considerable care needs to be taken as the remapping
816 could hypothetically create arithmetic operations that target the
817 exact same underlying registers, resulting in data corruption due to
818 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
819 register-renaming will have an easier time dealing with this than
820 DSP-style SIMD micro-architectures.
821
822 # Instruction Execution Order
823
824 Simple-V behaves as if it is a hardware-level "macro expansion system",
825 substituting and expanding a single instruction into multiple sequential
826 instructions with contiguous and sequentially-incrementing registers.
827 As such, it does **not** modify - or specify - the behaviour and semantics of
828 the execution order: that may be deduced from the **existing** RV
829 specification in each and every case.
830
831 So for example if a particular micro-architecture permits out-of-order
832 execution, and it is augmented with Simple-V, then wherever instructions
833 may be out-of-order then so may the "post-expansion" SV ones.
834
835 If on the other hand there are memory guarantees which specifically
836 prevent and prohibit certain instructions from being re-ordered
837 (such as the Atomicity Axiom, or FENCE constraints), then clearly
838 those constraints **MUST** also be obeyed "post-expansion".
839
It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.
846
847 # Instructions <a name="instructions" />
848
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). With the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever gained
a MV.X as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.
859
860 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
861 equivalents, so are left out of Simple-V. VSELECT could be included if
862 there existed a MV.X instruction in RV (MV.X is a hypothetical
863 non-immediate variant of MV that would allow another register to
864 specify which register was to be copied). Note that if any of these three
865 instructions are added to any given RV extension, their functionality
866 will be inherently parallelised.
867
868 With some exceptions, where it does not make sense or is simply too
869 challenging, all RV-Base instructions are parallelised:
870
* CSR instructions are the fundamental core basis of SV, so are left
as scalar. Whilst a case could be made for fast-polling of
a CSR into multiple registers, or for being able to copy multiple
contiguously addressed CSRs into contiguous registers, and so on,
extreme care would need to be taken if these were parallelised.
Additionally, CSR reads are done
using x0, and it is *really* inadvisable to tag x0.
877 * LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
878 left as scalar.
879 * LR/SC could hypothetically be parallelised however their purpose is
880 single (complex) atomic memory operations where the LR must be followed
881 up by a matching SC. A sequence of parallel LR instructions followed
882 by a sequence of parallel SC instructions therefore is guaranteed to
883 not be useful. Not least: the guarantees of a Multi-LR/SC
884 would be impossible to provide if emulated in a trap.
885 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
886 paralleliseable anyway.
887
888 All other operations using registers are automatically parallelised.
889 This includes AMOMAX, AMOSWAP and so on, where particular care and
890 attention must be paid.
891
Example pseudo-code for an integer ADD operation (including scalar
operations) is shown below. Floating-point operations use the FP CSRs instead.
894
    function op_add(rd, rs1, rs2)  # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
909
910 Note that for simplicity there is quite a lot missing from the above
911 pseudo-code: element widths, zeroing on predication, dimensional
912 reshaping and offsets and so on. However it demonstrates the basic
913 principle. Augmentations that produce the full pseudo-code are covered in
914 other sections.
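The hardware loop above can be modelled executably. The following is a minimal sketch only (register-table redirection, zeroing and elwidth omitted; `isvec` is a hypothetical per-register flag standing in for the CSR table):

```python
# Minimal executable sketch of the op_add hardware loop.
def sv_add(ireg, rd, rs1, rs2, isvec, VL, predval=~0):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not isvec[rd]:
                break                 # scalar destination: one result only
        if isvec[rd]:  id   += 1
        if isvec[rs1]: irs1 += 1
        if isvec[rs2]: irs2 += 1      # scalar operands are re-used (broadcast)

regs = [0] * 32
regs[8:12] = [1, 2, 3, 4]             # rs1: 4-element vector at x8
regs[16] = 10                         # rs2: scalar at x16
isvec = {0: True, 8: True, 16: False}
sv_add(regs, 0, 8, 16, isvec, VL=4)
print(regs[0:4])  # [11, 12, 13, 14]
```

The scalar rs2 is never incremented, so a single ADD opcode performs a vector-scalar broadcast add, exactly as described above.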
915
916 ## SUBVL Pseudocode
917
Adding in support for SUBVL is a matter of adding an extra inner for-loop, where register src and dest are still incremented inside the inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by (i * SUBVL + s), predicate bits are indexed by i only.
921
    function op_add(rd, rs1, rs2)  # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             # actual add is here (at last)
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector)  { id += 1; }
          if (int_vec[rs1].isvector)  { irs1 += 1; }
          if (int_vec[rs2].isvector)  { irs2 += 1; }
          if (id == VL or irs1 == VL or irs2 == VL) {
            # end VL hardware loop
            xSTATE.srcoffs = 0; # reset
            xSTATE.ssvoffs = 0; # reset
            return;
          }
945
946
NOTE: the pseudocode is greatly simplified: zeroing, proper predicate handling, elwidth handling and so on are all left out.
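The indexing relationship can be demonstrated on its own. This sketch (function name illustrative, zeroing again left out) lists which (element, sub-element) pairs execute and the flat register-element index each uses:

```python
# SUBVL traversal: flat element index is (i * SUBVL + s), while the
# predicate bit is looked up by i alone.
def subvl_element_order(VL, SUBVL, predval):
    executed = []
    for i in range(VL):
        for s in range(SUBVL):
            if predval & (1 << i):    # one predicate bit covers SUBVL elements
                executed.append((i, s, i * SUBVL + s))
    return executed

print(subvl_element_order(3, 2, 0b101))
# [(0, 0, 0), (0, 1, 1), (2, 0, 4), (2, 1, 5)]
```

A single cleared predicate bit therefore masks out an entire SUBVL group (both sub-elements of i=1 above), not an individual sub-element.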
948
949 ## Instruction Format
950
951 It is critical to appreciate that there are
952 **no operations added to SV, at all**.
953
954 Instead, by using CSRs to tag registers as an indication of "changed
955 behaviour", SV *overloads* pre-existing branch operations into predicated
956 variants, and implicitly overloads arithmetic operations, MV, FCVT, and
957 LOAD/STORE depending on CSR configurations for bitwidth and predication.
958 **Everything** becomes parallelised. *This includes Compressed
959 instructions* as well as any future instructions and Custom Extensions.
960
Note: using CSR tags to change the behaviour of instructions is nothing new,
including in RISC-V. UXL, SXL and MXL change the behaviour so that XLEN=32/64/128.
FRM changes the behaviour of the floating-point unit, to alter the rounding
mode. Other architectures change the LOAD/STORE byte-order from big-endian
to little-endian on a per-instruction basis. SV is just a little more...
comprehensive in its effect on instructions.
967
968 ## Branch Instructions
969
970 ### Standard Branch <a name="standard_branch"></a>
971
972 Branch operations use standard RV opcodes that are reinterpreted to
973 be "predicate variants" in the instance where either of the two src
974 registers are marked as vectors (active=1, vector=1).
975
976 Note that the predication register to use (if one is enabled) is taken from
977 the *first* src register, and that this is used, just as with predicated
978 arithmetic operations, to mask whether the comparison operations take
979 place or not. The target (destination) predication register
980 to use (if one is enabled) is taken from the *second* src register.
981
982 If either of src1 or src2 are scalars (whether by there being no
983 CSR register entry or whether by the CSR entry specifically marking
984 the register as "scalar") the comparison goes ahead as vector-scalar
985 or scalar-vector.
986
In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
992
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
1003
Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.
1007
1008 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
1009 for predicated compare operations of function "cmp":
1010
    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);
1015
1016 With associated predication, vector-length adjustments and so on,
1017 and temporarily ignoring bitwidth (which makes the comparisons more
1018 complex), this becomes:
1019
    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch
1057
1058 Notes:
1059
1060 * Predicated SIMD comparisons would break src1 and src2 further down
1061 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
1062 Reordering") setting Vector-Length times (number of SIMD elements) bits
1063 in Predicate Register rd, as opposed to just Vector-Length bits.
1064 * The execution of "parallelised" instructions **must** be implemented
1065 as "re-entrant" (to use a term from software). If an exception (trap)
1066 occurs during the middle of a vectorised
1067 Branch (now a SV predicated compare) operation, the partial results
1068 of any comparisons must be written out to the destination
1069 register before the trap is permitted to begin. If however there
1070 is no predicate, the **entire** set of comparisons must be **restarted**,
1071 with the offset loop indices set back to zero. This is because
1072 there is no place to store the temporary result during the handling
1073 of traps.
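The branch-taken decision itself can be condensed into a short executable model. This is a simplified sketch (the rd-storage and zeroing details above are collapsed; the function name and signature are illustrative): the branch is taken iff every test not masked out by the predicate succeeds.

```python
# Predicated vector compare: taken iff the result mask equals the
# predicate mask, i.e. all unmasked element tests pass.
def sv_branch(vals1, vals2, ps, cmp=lambda a, b: a == b):
    VL = len(vals1)
    result = 0
    for i in range(VL):
        if (ps >> i) & 1 and cmp(vals1[i], vals2[i]):
            result |= 1 << i          # bits are only ever *set* here
    return result == (ps & ((1 << VL) - 1))

print(sv_branch([1, 2, 3], [1, 2, 3], ps=0b111))  # True: all tests pass
print(sv_branch([1, 2, 3], [1, 9, 3], ps=0b101))  # True: failing test masked
print(sv_branch([1, 2, 3], [1, 9, 3], ps=0b111))  # False: element 1 fails
```

The middle case illustrates the key property: a failing comparison that is predicated out cannot prevent the branch from being taken.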
1074
1075 TODO: predication now taken from src2. also branch goes ahead
1076 if all compares are successful.
1077
1078 Note also that where normally, predication requires that there must
1079 also be a CSR register entry for the register being used in order
1080 for the **predication** CSR register entry to also be active,
1081 for branches this is **not** the case. src2 does **not** have
1082 to have its CSR register entry marked as active in order for
1083 predication on src2 to be active.
1084
1085 Also note: SV Branch operations are **not** twin-predicated
1086 (see Twin Predication section). This would require three
1087 element offsets: one to track src1, one to track src2 and a third
1088 to track where to store the accumulation of the results. Given
1089 that the element offsets need to be exposed via CSRs so that
1090 the parallel hardware looping may be made re-entrant on traps
1091 and exceptions, the decision was made not to make SV Branches
1092 twin-predicated.
1093
1094 ### Floating-point Comparisons
1095
There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.
1100
1101 In RV (scalar) Base, a branch on a floating-point compare is
1102 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
1103 This does extend to SV, as long as x1 (in the example sequence given)
1104 is vectorised. When that is the case, x1..x(1+VL-1) will also be
1105 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
1106 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
1107 so on. Consequently, unlike integer-branch, FP Compare needs no
1108 modification in its behaviour.
1109
1110 In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
1111 and whilst in ordinary branch code this is fine because the standard
1112 RVF compare can always be followed up with an integer BEQ or a BNE (or
1113 a compressed comparison to zero or non-zero), in predication terms that
1114 becomes more of an impact. To deal with this, SV's predication has
1115 had "invert" added to it.
1116
1117 Also: note that FP Compare may be predicated, using the destination
1118 integer register (rd) to determine the predicate. FP Compare is **not**
1119 a twin-predication operation, as, again, just as with SV Branches,
1120 there are three registers involved: FP src1, FP src2 and INT rd.
1121
1122 ### Compressed Branch Instruction
1123
Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a10 is equivalent to beqz a10,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.
1130
The specific required use of x0 is, with a little thought, quite logical,
albeit counterintuitive at first. Clearly it is **not** recommended to redirect
x0 with a CSR register entry, however as a means to opaquely obtain
a predication target it is the only sensible option that does not involve
additional special CSRs (or, worse, additional special opcodes).
1136
1137 Note also that, just as with standard branches, the 2nd source
1138 (in this case x0 rather than src2) does **not** have to have its CSR
1139 register table marked as "active" in order for predication to work.
1140
1141 ## Vectorised Dual-operand instructions
1142
1143 There is a series of 2-operand instructions involving copying (and
1144 sometimes alteration):
1145
1146 * C.MV
1147 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1148 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1149 * LOAD(-FP) and STORE(-FP)
1150
1151 All of these operations follow the same two-operand pattern, so it is
1152 *both* the source *and* destination predication masks that are taken into
1153 account. This is different from
1154 the three-operand arithmetic instructions, where the predication mask
1155 is taken from the *destination* register, and applied uniformly to the
1156 elements of the source register(s), element-for-element.
1157
1158 The pseudo-code pattern for twin-predicated operations is as
1159 follows:
1160
    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1174
1175 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1176 and vector-vector, and predicated variants of all of those.
1177 Zeroing is not presently included (TODO). As such, when compared
1178 to RVV, the twin-predicated variants of C.MV and FMV cover
1179 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1180 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1181
1182 Note that:
1183
1184 * elwidth (SIMD) is not covered in the pseudo-code above
1185 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1186 not covered
1187 * zero predication is also not shown (TODO).
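The twin-predication pattern can be run directly. This is a sketch under the assumption that both registers are vectors and that enough predicate bits remain set for the skip-loops to terminate (zeroing and elwidth omitted; names are illustrative):

```python
# Twin-predicated element copy: src and dest each skip their own
# masked-out elements independently.
def twin_pred_mv(reg, rd, rs, rd_isvec, rs_isvec, pd, ps, VL):
    i = j = 0
    while i < VL and j < VL:
        if rs_isvec:
            while not (ps >> i) & 1: i += 1   # skip masked-out src elements
        if rd_isvec:
            while not (pd >> j) & 1: j += 1   # skip masked-out dest elements
        reg[rd + j] = reg[rs + i]
        if rs_isvec: i += 1
        if rd_isvec: j += 1
        else: break                           # scalar dest: single copy

reg = list(range(16))
# src x0..x3 predicated 0b1010, dest x8..x11 predicated 0b0011:
# elements 1 and 3 of the source land in elements 0 and 1 of the dest
twin_pred_mv(reg, 8, 0, True, True, pd=0b0011, ps=0b1010, VL=4)
print(reg[8:10])  # [1, 3]
```

The example is effectively a compacting gather: sparse source elements are packed into contiguous destination elements, with no dedicated gather opcode.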
1188
1189 ### C.MV Instruction <a name="c_mv"></a>
1190
There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).
1194
1195 If either the source or the destination register are marked as vectors
1196 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1197 move operation. The actual instruction's format does not change:
1198
1199 [[!table data="""
1200 15 12 | 11 7 | 6 2 | 1 0 |
1201 funct4 | rd | rs | op |
1202 4 | 5 | 5 | 2 |
1203 C.MV | dest | src | C0 |
1204 """]]
1205
1206 A simplified version of the pseudocode for this operation is as follows:
1207
    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break
1221
1222 There are several different instructions from RVV that are covered by
1223 this one opcode:
1224
1225 [[!table data="""
1226 src | dest | predication | op |
1227 scalar | vector | none | VSPLAT |
1228 scalar | vector | destination | sparse VSPLAT |
1229 scalar | vector | 1-bit dest | VINSERT |
1230 vector | scalar | 1-bit? src | VEXTRACT |
1231 vector | vector | none | VCOPY |
1232 vector | vector | src | Vector Gather |
1233 vector | vector | dest | Vector Scatter |
1234 vector | vector | src & dest | Gather/Scatter |
1235 vector | vector | src == dest | sparse VCOPY |
1236 """]]
1237
1238 Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
1239 operations with inversion on the src and dest predication for one of the
1240 two C.MV operations.
1241
Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].
1247
1248 ### FMV, FNEG and FABS Instructions
1249
These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type-conversion
operation of the appropriate size covering the source and destination
register bitwidths.
1256
1257 (Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1258
### FCVT Instructions
1260
1261 These are again identical in form to C.MV, except that they cover
1262 floating-point to integer and integer to floating-point. When element
1263 width in each vector is set to default, the instructions behave exactly
1264 as they are defined for standard RV (scalar) operations, except vectorised
1265 in exactly the same fashion as outlined in C.MV.
1266
1267 However when the source or destination element width is not set to default,
1268 the opcode's explicit element widths are *over-ridden* to new definitions,
1269 and the opcode's element width is taken as indicative of the SIMD width
1270 (if applicable i.e. if packed SIMD is requested) instead.
1271
1272 For example FCVT.S.L would normally be used to convert a 64-bit
1273 integer in register rs1 to a 64-bit floating-point number in rd.
1274 If however the source rs1 is set to be a vector, where elwidth is set to
1275 default/2 and "packed SIMD" is enabled, then the first 32 bits of
1276 rs1 are converted to a floating-point number to be stored in rd's
1277 first element and the higher 32-bits *also* converted to floating-point
1278 and stored in the second. The 32 bit size comes from the fact that
1279 FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
1280 divide that by two it means that rs1 element width is to be taken as 32.
1281
1282 Similar rules apply to the destination register.
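The FCVT.S.L example can be illustrated numerically. This is a hypothetical sketch of the packed-SIMD interpretation only (register values chosen arbitrarily): one 64-bit register holds two 32-bit integer elements, each independently converted to floating-point.

```python
# FCVT.S.L with rs1 elwidth = default/2 and packed SIMD: the 64-bit
# source register is split into two 32-bit integer elements.
reg_rs1 = (7 << 32) | 3                  # elements: 3 (low half), 7 (high half)
lo = reg_rs1 & 0xFFFFFFFF
hi = (reg_rs1 >> 32) & 0xFFFFFFFF
rd_elements = [float(lo), float(hi)]     # two independent conversions
print(rd_elements)  # [3.0, 7.0]
```

With elwidth left at default, the same opcode would instead perform a single 64-bit integer to floating-point conversion, exactly as in scalar RV.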
1283
1284 ## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>
1285
1286 An earlier draft of SV modified the behaviour of LOAD/STORE (modified
1287 the interpretation of the instruction fields). This
1288 actually undermined the fundamental principle of SV, namely that there
1289 be no modifications to the scalar behaviour (except where absolutely
1290 necessary), in order to simplify an implementor's task if considering
1291 converting a pre-existing scalar design to support parallelism.
1292
1293 So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
1294 do not change in SV, however just as with C.MV it is important to note
1295 that dual-predication is possible.
1296
1297 In vectorised architectures there are usually at least two different modes
1298 for LOAD/STORE:
1299
1300 * Read (or write for STORE) from sequential locations, where one
1301 register specifies the address, and the one address is incremented
1302 by a fixed amount. This is usually known as "Unit Stride" mode.
1303 * Read (or write) from multiple indirected addresses, where the
1304 vector elements each specify separate and distinct addresses.
1305
1306 To support these different addressing modes, the CSR Register "isvector"
1307 bit is used. So, for a LOAD, when the src register is set to
1308 scalar, the LOADs are sequentially incremented by the src register
1309 element width, and when the src register is set to "vector", the
1310 elements are treated as indirection addresses. Simplified
1311 pseudo-code would look like this:
1312
    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;
1332
1333 Notes:
1334
1335 * For simplicity, zeroing and elwidth is not included in the above:
1336 the key focus here is the decision-making for srcbase; vectorised
1337 rs means use sequentially-numbered registers as the indirection
1338 address, and scalar rs is "offset" mode.
1339 * The test towards the end for whether both source and destination are
1340 scalar is what makes the above pseudo-code provide the "standard" RV
1341 Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
(8 bytes), and also whether the element width is over-ridden
(see special element width section).
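The srcbase decision can be exercised with a runnable sketch (predication and elwidth omitted; `mem` is modelled as a dict of byte-address to value, and all names are illustrative):

```python
# Unit-stride vs element-indirect LOAD, following the op_ld decision.
def sv_ld(mem, ireg, rd, rs, rs_isvec, rd_isvec, VL, imm_offs=0, width=8):
    for i in range(VL):
        if rs_isvec:
            srcbase = ireg[rs + i]           # each element is an address
        else:
            srcbase = ireg[rs] + i * width   # one base, unit stride
        ireg[rd + i] = mem[srcbase + imm_offs]
        if not rs_isvec and not rd_isvec:
            break                            # scalar-scalar: standard LD

mem = {100: 5, 108: 6, 116: 7, 200: 9}
ireg = [0] * 32
ireg[4] = 100                       # scalar base address in x4
sv_ld(mem, ireg, 0, 4, False, True, VL=3)
print(ireg[0:3])                    # unit stride: [5, 6, 7]
ireg[8:11] = [200, 100, 116]        # x8..x10 each hold an address
sv_ld(mem, ireg, 12, 8, True, True, VL=3)
print(ireg[12:15])                  # indirect: [9, 5, 7]
```

The same opcode thus serves as both a strided vector load and a gather, selected purely by the src register's vector tag.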

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, each register
effectively now looks like this:

    typedef union {
        uint8_t b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that, when VL is set greater than 8 (for example) and the bitwidth
is 8, accessing one specific register "spills over" into the following
entries of the register file in a sequential fashion. So a much more
accurate way to reflect this would be:

    typedef union {
        uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if any attempt to access beyond the "real" register
bytes is ever made.
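The "overspill" can be modelled directly in Python by backing the register file with one flat byte array, so that element indices past the end of one register transparently continue into the next (a sketch only; register numbers and values are illustrative):

```python
import struct

XLEN_BYTES = 8                          # RV64
regfile = bytearray(128 * XLEN_BYTES)   # 128-entry SV integer regfile
FMT = {1: "<B", 2: "<H", 4: "<I", 8: "<Q"}  # little-endian widths

def get_elem(reg, width_bytes, offset):
    """Element `offset` of width `width_bytes`, counted from `reg`;
    large offsets run on into the following registers, exactly as the
    union-of-zero-length-arrays view above."""
    return struct.unpack_from(FMT[width_bytes], regfile,
                              reg * XLEN_BYTES + offset * width_bytes)[0]

def set_elem(reg, width_bytes, offset, val):
    struct.pack_into(FMT[width_bytes], regfile,
                     reg * XLEN_BYTES + offset * width_bytes, val)

# elwidth=8, VL=12 starting at x5: elements 8..11 land in x6
for i in range(12):
    set_elem(5, 1, i, i + 1)
```

Element 8 of x5 and element 0 of x6 are the same byte, which is precisely the transparency the union view is intended to convey.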

Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non" polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
      ...
      ...

Here it can be seen clearly that, for 8-bit entries, rd, rs1 and rs2 (and
the registers following sequentially on from each) are "type-cast"
to 8-bit; for 16-bit entries likewise, and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
(ldb, div, rem, mul, sll, srl, sra etc.), instead of the mandatory 32-bit
sign-extension / zero-extension (or whatever is specified in the standard
RV specification), **change** that to sign-extending from the respective
individual source operand's bitwidth (from the CSR table) out to
"maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
source operands as specifically required for that operation, carry out the
operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
this may be a "null" (copy) operation, and that with FCVT, the changes
to the source and destination bitwidths may also turn FCVT effectively
into a copy).
* If the destination operand requires sign-extension or zero-extension,
instead of a mandatory fixed size (typically 32-bit for arithmetic,
for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
etc.), overload the RV specification with the bitwidth from the
destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
destination: memory for sb/sw etc., or an offset section of the register
file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = bw(int_csr[rd].elwidth) # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_csr[rd].isvec) break
        if (int_csr[rd].isvec)  { ird += 1; }
        if (int_csr[rs1].isvec) { irs1 += 1; }
        if (int_csr[rs2].isvec) { irs2 += 1; }
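A runnable condensation of the above (Python; predication, zeroing, sign-extension and the CSR table are elided, and the register file is again a flat byte array — a sketch under those assumptions, not the normative algorithm):

```python
import struct

XLEN_BYTES = 8
regfile = bytearray(32 * XLEN_BYTES)
FMT = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}

def get_polymorphed_reg(reg, bitwidth, offset):
    base = reg * XLEN_BYTES + offset * bitwidth // 8
    return struct.unpack_from(FMT[bitwidth], regfile, base)[0]

def set_polymorphed_reg(reg, bitwidth, offset, val):
    base = reg * XLEN_BYTES + offset * bitwidth // 8
    struct.pack_into(FMT[bitwidth], regfile, base,
                     val & ((1 << bitwidth) - 1))

def poly_add(rd, rs1, rs2, rd_w, rs1_w, rs2_w, VL):
    """Sources are read at their own widths (Python ints make the
    zero-extension to maxsrcwid implicit); the add runs at the maximum
    *source* width; the result is truncated to the destination width
    on store."""
    maxsrcwid = max(rs1_w, rs2_w)
    for i in range(VL):
        src1 = get_polymorphed_reg(rs1, rs1_w, i)
        src2 = get_polymorphed_reg(rs2, rs2_w, i)
        result = (src1 + src2) & ((1 << maxsrcwid) - 1)
        set_polymorphed_reg(rd, rd_w, i, result)

# x2 and x3 hold four 16-bit elements each; destination x8 is 32-bit
for i in range(4):
    set_polymorphed_reg(2, 16, i, 0x8000 + i)
    set_polymorphed_reg(3, 16, i, 1)
poly_add(8, 2, 3, 32, 16, 16, 4)
```

With four 32-bit results, the destination overspills from x8 into x9: elements 2 and 3 land in x9, exactly as the flat-regfile model predicts.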

Whilst specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:

* the source operands are extended out to the maximum bitwidth of all
source operands
* the operation takes place at that maximum source bitwidth (the
destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
stored in the destination. i.e. truncation (if required) to the
destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
(standard, scalar) register file entry is taken up, i.e. the
element is either sign-extended or zero-extended to cover the
full register bitwidth (XLEN) if it is not already XLEN bits long.

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in less bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NAN (or appropriate) is stored if the result
is beyond the range of the destination, and, again, exactly as
with scalar operations in the standard RV specification, the
floating-point flag is raised (FCSR). And, again, just as
with scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However, whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit - max(8,16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that:

* The operation bitwidth is determined by the maximum bitwidth
of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
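A minimal sketch of the rule (Python; zero-extension is assumed throughout for brevity, and the widths match the example above):

```python
def poly_sll(rs1_val, rs2_val, rs1_w, rs2_w, rd_w):
    """Polymorphic shift-left: operate at the maximum *source* width,
    truncating RS2 to that width's shift range first, then truncate
    (or extend) to the destination width."""
    opwidth = max(rs1_w, rs2_w)
    shift = rs2_val & (opwidth - 1)           # RS2 & (16-1) in the example
    result = (rs1_val << shift) & ((1 << opwidth) - 1)
    return result & ((1 << rd_w) - 1)

# RS1 16-bit, RS2 8-bit, RD 64-bit: the op runs at 16 bits, so a
# shift amount of 20 is truncated to 20 & 15 = 4
out = poly_sll(0x0001, 20, 16, 8, 64)
```

Note that bits shifted past the 16-bit operation width are lost even though RD is 64-bit wide, which is exactly why the truncation of RS2 must use the source width, not the destination width.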

## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half MSBs of a multiply that
does not fit within the range of the source operands, such that
smaller width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.
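For the unsigned case the rule reduces to one line, sketched here in Python (sign/zero-extension of the result to the destination elwidth is elided):

```python
def poly_mulhu(a, b, a_w, b_w):
    """Unsigned MULH at polymorphic widths: the full product is
    shifted down by the maximum of the two source element widths."""
    return (a * b) >> max(a_w, b_w)
```

The assertions below correspond to the first three bullet points: an 8x8 multiply shifted down by 8, a 16x8 multiply shifted down by 16, and a 16x16 multiply shifted down by 16.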

## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, and i is the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.
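The two-level addressing (which source address register, then which elwidth-wide chunk within the data it points at) can be sketched as follows; addresses here are byte-granular, a hypothetical simplification of the pointer re-cast in the pseudo-code above:

```python
def load_elem_addr(ireg, rs, imm, i, opwidth, elwidth):
    """Byte address of element i for an elwidth-overridden LOAD of
    `opwidth` bits (8 for LB up to 128 for LQ).  elsperblock uses the
    max(1, ...) fix so that over-wide elements still consume one
    address register each."""
    elsperblock = max(1, opwidth // elwidth)
    srcbase = ireg[rs + i // elsperblock]    # which address register
    offs = i % elsperblock                   # which chunk within it
    return srcbase + imm + offs * elwidth // 8

ireg = {5: 0x1000, 6: 0x2000}
# LW (32-bit) with source elwidth=8: four elements per source address
addrs = [load_elem_addr(ireg, 5, 0, i, 32, 8) for i in range(6)]
```

The first four addresses come from the data pointed to by x5 and the next two from x6; with elwidth (16) wider than the operation (LB, 8), `max(1, ...)` forces one address register per element.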

As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth) # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Note:

* when comparing against for example the twin-predicated c.mv
pseudo-code, the pattern of independent incrementing of rd and rs
is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
taken into account (TODO).
* that due to the use of a twin-predication algorithm, LOAD/STORE also
take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
VSCATTER characteristics.
* that due to the use of the same set\_polymorphed\_reg pseudocode,
a destination that is not vectorised (marked as scalar) will
result in the element being fully sign-extended or zero-extended
out to the full register file bitwidth (XLEN). When the source
is also marked as scalar, this is how the compatibility with
standard RV LOAD/STORE is preserved by this algorithm.
### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).


#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table: due to the
element width being 16 and the operation being LD (64), the 64 bits
loaded from memory are subdivided into groups of **four** elements.
And, with VL being 7 (deliberately, to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:

[[!table data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 were also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the
**registers** x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right byte
order, the registers are shown in right-to-left (MSB) order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and it emphasises that registers are not
really actually "memory" as such.
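The whole example can be reproduced in Python (the register file is a flat byte array as in earlier sketches; the 16-bit memory contents are invented purely for illustration):

```python
import struct

XLEN_BYTES = 8
regfile = bytearray(32 * XLEN_BYTES)
mem = {}

struct.pack_into("<Q", regfile, 5 * 8, 0x1000)   # x5: first address
struct.pack_into("<Q", regfile, 6 * 8, 0x2000)   # x6: second address
for i in range(4):                               # invented 16-bit data
    mem[0x1000 + 2 * i] = 0x1110 * (i + 1)       # elems 0..3
    mem[0x2000 + 2 * i] = 0x1110 * (i + 5)       # elems 4..6 (+ spare)

VL = 7
for i in range(VL):
    # four 16-bit elements per 64-bit LD block: x5 first, then x6
    srcaddr = struct.unpack_from("<Q", regfile, (5 + i // 4) * 8)[0]
    elem = mem[srcaddr + 2 * (i % 4)]
    # zero-extend 16 -> 32 and store at destination elwidth 32, x8 up
    struct.pack_into("<I", regfile, 8 * XLEN_BYTES + 4 * i, elem)
```

Element 0 lands in the low half of x8, element 1 in its top half, and element 6 in the low 32 bits of x11, whose top 32 bits are never written.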

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more, allowing 64, 128 and some other options besides. The answer here
is that it gets too complex, no RV128 implementation yet exists, and
RV64's default is 64 bit, so the 4 major element widths are covered anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the Register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
then sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will erase the **full** register entry
(64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth set to 16 on the destination
register, operations are truncated to 16 bit and then sign or zero
extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or other value where
groups of elwidth sized elements do not fill an entire XLEN register),
the "top" bits of the destination register do *NOT* get modified, zero'd
or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64 bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth based operations would
activate the required subset of those byte-level write lines.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.

Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111,
i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
(which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
and stores them in rd.

This requires a read of rd, however this is required anyway in order
to support non-zeroing mode.
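A sketch of the SIMD implementation above (Python; the hidden predicate and byte-level write-enables are modelled by masking, and the register values are illustrative):

```python
def predicated_simd_add(rd_val, rs1_val, rs2_val, elwidth, VL, xlen=64):
    """Per-lane add at elwidth bits.  Only the first VL lanes have
    their "write-enable" active, so the rest of rd is preserved
    (non-zeroing mode).  Note the read of rd_val, as in the text."""
    mask = (1 << elwidth) - 1
    result = rd_val
    for i in range(min(VL, xlen // elwidth)):
        a = (rs1_val >> (i * elwidth)) & mask
        b = (rs2_val >> (i * elwidth)) & mask
        lane = (a + b) & mask
        result &= ~(mask << (i * elwidth))   # byte-level write-enable
        result |= lane << (i * elwidth)
    return result

# elwidth=8, VL=3 on RV64: bits 24..63 of rd remain exactly as they were
rd = 0xDEADBEEFCAFEF00D
out = predicated_simd_add(rd, 0x0101010101010101,
                              0x0202020202020202, 8, 3)
```

The first assertion below is the crucial property: everything above bit 23 of rd is untouched.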

## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32-bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
top 64 MSBs ignored.

Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the adds
in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q at
non-default elwidth values are effectively all the same: they all still
perform multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in
place to know the difference, such that when VL=8 and an ADD.D instruction
is issued, it completes in 2 cycles (or more) rather than one, where
if an ADD.Q had been issued instead on such simpler microarchitectures
it would complete in one.

## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.

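As a cross-check of the walk-through above, the polymorphic add can be
modelled executably. This is purely an illustrative sketch, not part of
the specification: the helper names (`zext`, `poly_add`) are invented
here, and register-file indexing and VL looping are omitted.

```python
def zext(value, width):
    # zero-extension (and truncation) is simply masking to the target width
    return value & ((1 << width) - 1)

def poly_add(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    opw = max(rs1_w, rs2_w)
    # both source operands are zero-extended to the operation width
    a = zext(rs1_val, opw)
    b = zext(rs2_val, opw)
    result = a + b
    # zero-extend to rd if rd > max(rs1, rs2), otherwise truncate
    return zext(result, rd_w)

# 8-bit 0xFF is treated as 255 (not -1): widening to a 16-bit rd gives 0x0100
assert poly_add(0xFF, 8, 0x01, 16, 16) == 0x0100
# truncation when rd is no wider than the operation width
assert poly_add(0xFF, 8, 0x01, 8, 8) == 0x00
```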
### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits: truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extension occurs. Only where
the bitwidths of rs1 and rs2 differ will the lesser-width operand be
sign-extended.

Effectively, however, both rs1 and rs2 are being sign-extended (or
truncated), where for add they are both zero-extended. This holds true
for all arithmetic operations ending with "W".

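The sign-extending behaviour described above can likewise be modelled
executably. Again this is an illustrative sketch only (`sext` and
`poly_addw` are invented names, not part of the specification):

```python
def sext(value, width):
    # interpret the low `width` bits as a two's-complement signed value
    value &= (1 << width) - 1
    if value & (1 << (width - 1)):
        value -= 1 << width
    return value

def poly_addw(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    # both source operands are sign-extended (Python's unbounded ints
    # stand in for extension to max(rs1_w, rs2_w) bits)
    result = sext(rs1_val, rs1_w) + sext(rs2_val, rs2_w)
    # sign-extend to rd if rd > max(rs1, rs2), otherwise truncate;
    # the result is returned as an unsigned rd_w-bit pattern
    return result & ((1 << rd_w) - 1)

# 8-bit 0xFF is -1, so adding 16-bit 0x0001 gives 0
# (polymorphic add, by contrast, would give 0x0100)
assert poly_addw(0xFF, 8, 0x0001, 16, 16) == 0x0000
```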
### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ xlen bits: sign-extend the 32-bit result to xlen.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate

# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, in order to save power by avoiding a register read on elements
that are passed through en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate, i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            # skip ahead to the next predicated-on element
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.

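The effect of the zeroing flag can be illustrated with a small executable
model. This is a sketch only (invented names, plain Python lists standing
in for contiguous register elements, variable element widths left out):

```python
def predicated_add(vl, src1, src2, dest, predval, zeroing):
    # dest is modified in place, mirroring writes to the destination vector
    for i in range(vl):
        if predval & (1 << i):
            dest[i] = src1[i] + src2[i]  # actual add (or other op) here
        elif zeroing:
            dest[i] = 0                  # predicated-out: zero the element
        # without zeroing, predicated-out elements are left untouched

d = [99, 99, 99, 99]
predicated_add(4, [1, 2, 3, 4], [10, 20, 30, 40], d, predval=0b0101, zeroing=False)
assert d == [11, 99, 33, 99]  # untouched where the predicate bit is clear

d = [99, 99, 99, 99]
predicated_add(4, [1, 2, 3, 4], [10, 20, 30, 40], d, predval=0b0101, zeroing=True)
assert d == [11, 0, 33, 0]    # zeroed where the predicate bit is clear
```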
## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, then if a predicate bit is clear a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
NOR operation of the source and destination predicates: the result is that
where either the source predicate OR the destination predicate is set to 0,
a zero element will ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.

Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.

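For further clarity, here is an executable (Python) model of the twin
zero-predicated MV above, restricted to the case where both source and
destination are vectors; the names are invented for illustration, and
bounds checks have been added to the skip loops:

```python
def twin_pred_mv(vl, src, dest, ps, pd, zerosrc, zerodst):
    # both src and dest are assumed to be vectors; dest is modified in place
    i = j = 0
    while i < vl and j < vl:
        if not zerosrc:  # skip predicated-out source elements
            while i < vl and not (ps & (1 << i)):
                i += 1
        if not zerodst:  # skip predicated-out destination elements
            while j < vl and not (pd & (1 << j)):
                j += 1
        if i >= vl or j >= vl:
            break
        if pd & (1 << j):
            # source zeroing substitutes a zero data element
            dest[j] = src[i] if (ps & (1 << i)) else 0
        elif zerodst:
            dest[j] = 0
        i += 1
        j += 1
    return dest

# destination zeroing: predicated-out destination elements are set to zero
assert twin_pred_mv(4, [1, 2, 3, 4], [9, 9, 9, 9],
                    ps=0b1111, pd=0b0101,
                    zerosrc=False, zerodst=True) == [1, 0, 3, 0]
# source zeroing: a zero data element is passed through where ps is clear
assert twin_pred_mv(4, [5, 6, 7, 8], [9, 9, 9, 9],
                    ps=0b0101, pd=0b1111,
                    zerosrc=True, zerodst=False) == [5, 0, 7, 0]
```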
# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2 | base | number of bits |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix | |

VL/MAXVL/SubVL Block:

| 31-30 | 29:28 | 27:22 | 21:17 - 16 |
| ----- | ----- | ------ | ---------- |
| 0 | SubVL | VLdest | VLEN vlt |
| 1 | SubVL | VLdest | VLEN |

Note: this format is very similar to that used in [[sv_prefix_proposal]].

If vlt is 0, VLEN is a 5 bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1 and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VLIW
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2 and so on.

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.

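The VL-setting semantics above can be summarised in a short executable
sketch (illustrative only: `regfile` is a plain array standing in for the
scalar register file, and the offset-by-one immediate encoding is shown
explicitly):

```python
def vl_block_set(regfile, maxvl, vlen_field, vlt, vldest):
    # vlt=0: VLEN is a 5-bit immediate, offset by one (0b00000 => VL=1)
    # vlt=1: VL is read from the scalar register indexed by vlen_field
    vl = (vlen_field + 1) if vlt == 0 else regfile[vlen_field]
    vl = min(vl, maxvl)   # VL is truncated to MIN(VL, MAXVL)
    if vldest != 0:       # VLdest=0: VL is still set, but not stored
        regfile[vldest] = vl
    return vl

regs = [0] * 32
regs[5] = 200
# VL taken from x5, truncated to MAXVL, stored in x3
assert vl_block_set(regs, maxvl=64, vlen_field=5, vlt=1, vldest=3) == 64
assert regs[3] == 64
# immediate 0b00000 represents VL=1; VLdest=0 means no regfile store
assert vl_block_set(regs, maxvl=64, vlen_field=0b00000, vlt=0, vldest=0) == 1
```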
CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the prefix block format is the full 16 bit format
  (1) or the compact, less expressive format (0). In the 8 bit format,
  pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 15 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
  RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions within a VLIW block.

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that 64-bit CSRRW requires 3, even 4 32-bit opcodes: the
CSR itself, a LI, and the setting up of the value into the RS register
of the CSR, which, again, requires a LI / LUI to get the 32 bit
data into the CSR. To get 64-bit data into the register in order to put
it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially 6 to 8 32-bit instructions, just to
establish the Vector State!

Not only that: even CSRRW on VL and MAXVL requires 64 bits (even more bits if
VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub-extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.

The srcoffs and destoffs indices in the STATE CSR may be similarly
regarded as another sub-execution context, giving in effect two sets of
nested sub-levels of the RISC-V Program Counter (actually, three,
including SUBVL and ssvoffs).

In addition, as the xepcvliw CSRs are relative to the beginning of the
VLIW block, branches MUST be restricted to within (relative to) the
block, i.e. addressing is now relative to the start of the (very short)
block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump, normal branch and a normal function call may only be
taken by letting the VLIW group end, returning to "normal" standard RV
mode, and then using standard RVC, 32-bit or P48/64-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to only implement SVprefix and not the VLIW instruction
format option. UNIX Platforms **MUST** raise an illegal instruction
exception on seeing a VLIW opcode, so that traps may emulate the format.

It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX
Platforms *MUST* raise an illegal instruction exception on
implementations that do not support VL or SUBVL.

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
specifically an "option", it is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have RV32G.
The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the
recommendation is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.

---

info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

answer: they're not vectorised, so not a problem

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth == default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove RegCam and PredCam CSRs, just use SVprefix and
VLIW format.

---

Could the 8 bit Register VLIW format use regnum<<1 instead, only
accessing regs 0 to 64?