1 # Simple-V (Parallelism Extension Proposal) Specification
2
* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
* Last edited: 21 Jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, allowing
28 the implementor to focus on adding hardware where it is needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
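
The macro-unrolling behaviour may be modelled in software as follows. This is an illustrative sketch only: the function name, the register-file list and the tagging dictionary are assumptions of the model (not part of the specification), and all three operands are assumed to be tagged as vectors.

```python
# Minimal software model of the SV "macro-unrolling" loop (illustrative
# sketch; not the normative pseudocode given later in this document).

def sv_add(regfile, rd, rs1, rs2, vl, tagged):
    """Model a scalar ADD that, on tagged registers, becomes a loop."""
    if not (tagged.get(rd) or tagged.get(rs1) or tagged.get(rs2)):
        regfile[rd] = regfile[rs1] + regfile[rs2]  # ordinary scalar ADD
        return
    # a register is tagged: issue VL element operations, with contiguously
    # incrementing register numbers, before the PC may move on
    for i in range(vl):
        regfile[rd + i] = regfile[rs1 + i] + regfile[rs2 + i]

regs = list(range(32))  # toy register file: x[i] holds the value i
sv_add(regs, 10, 20, 25, 4, {10: True, 20: True, 25: True})
print(regs[10:14])
```

The single ADD behaves as four ADDs over registers 10-13, 20-23 and 25-28; with an empty tag dictionary it degenerates to the ordinary scalar instruction.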
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
50 where the implementor wishes to focus on certain aspects of SV, without
investing unnecessary time and resources into the silicon, whilst also conforming
52 strictly with the API. A good area to punt to software would be the
53 polymorphic element width capability for example.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
* Standard RV instructions are "prefixed" either with a 48-bit format
(single-instruction option) or a variable-length VLIW-like prefix
(multi or "grouped" option) that indicates which registers are
"tagged" as "vectorised". Predicates can also be added.
81 * A "Vector Length" CSR is set, indicating the span of any future
82 "parallel" operations.
83 * If any operation (a **scalar** standard RV opcode)
84 uses a register that has been so "marked"
85 ("tagged"),
86 a hardware "macro-unrolling loop" is activated, of length
87 VL, that effectively issues **multiple** identical instructions
88 using contiguous sequentially-incrementing register numbers, based on the "tags".
89 * **Whether they be executed sequentially or in parallel or a
90 mixture of both or punted to software-emulation in a trap handler
91 is entirely up to the implementor**.
92
93 In this way an entire scalar algorithm may be vectorised with
94 the minimum of modification to the hardware and to compiler toolchains.
95
To reiterate: **There are *no* new opcodes**. The scheme works *entirely* on hidden context that augments *scalar* RISC-V instructions.
97
98 # CSRs <a name="csrs"></a>
99
100 * An optional "reshaping" CSR key-value table which remaps from a 1D
101 linear shape to 2D or 3D, including full transposition.
102
There are also five additional User-mode CSRs:
104
105 * uMVL (the Maximum Vector Length)
106 * uVL (which has different characteristics from standard CSRs)
107 * uSUBVL (effectively a kind of SIMD)
108 * uEPCVLIW (a copy of the sub-execution Program Counter, that is relative to the start of the current VLIW Group, set on a trap).
109 * uSTATE (useful for saving and restoring during context switch,
110 and for providing fast transitions)
111
112 There are also five additional CSRs for Supervisor-Mode:
113
114 * SMVL
115 * SVL
116 * SSUBVL
117 * SEPCVLIW
118 * SSTATE
119
120 And likewise for M-Mode:
121
122 * MMVL
123 * MVL
124 * MSUBVL
125 * MEPCVLIW
126 * MSTATE
127
128 Both Supervisor and M-Mode have their own CSR registers, independent of the other privilege levels, in order to make it easier to use Vectorisation in each level without affecting other privilege levels.
129
130 The access pattern for these groups of CSRs in each mode follows the
131 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
132
133 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
134 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
135 identical
136 to changing the S-Mode CSRs. Accessing and changing the U-Mode
137 CSRs is permitted.
138 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
139 is prohibited.
140
141 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
142 M-Mode MVL, the M-Mode STATE and so on that influences the processor
143 behaviour. Likewise for S-Mode, and likewise for U-Mode.
144
145 This has the interesting benefit of allowing M-Mode (or S-Mode)
146 to be set up, for context-switching to take place, and, on return
147 back to the higher privileged mode, the CSRs of that mode will be
148 exactly as they were. Thus, it becomes possible for example to
149 set up CSRs suited best to aiding and assisting low-latency fast
150 context-switching *once and only once* (for example at boot time), without the need for
151 re-initialising the CSRs needed to do so.
152
153 Another interesting side effect of separate S Mode CSRs is that Vectorised saving of the entire register file to the stack is a single instruction (accidental provision of LOAD-MULTI semantics). It can even be predicated, which opens up some very interesting possibilities.
154
155 The xEPCVLIW CSRs must be treated exactly like their corresponding xepc equivalents. See VLIW section for details.
156
157 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
158
159 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
160 is variable length and may be dynamically set. MVL is
161 however limited to the regfile bitwidth XLEN (1-32 for RV32,
162 1-64 for RV64 and so on).
163
164 The reason for setting this limit is so that predication registers, when
165 marked as such, may fit into a single register as opposed to fanning out
166 over several registers. This keeps the implementation a little simpler.
167
168 The other important factor to note is that the actual MVL is **offset
169 by one**, so that it can fit into only 6 bits (for RV64) and still cover
170 a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually
171 means that MVL==1. When setting the MVL CSR to 3, this actually means
172 that MVL==4, and so on. This is expressed more clearly in the "pseudocode"
173 section, where there are subtle differences between CSRRW and CSRRWI.
174
175 ## Vector Length (VL) <a name="vl" />
176
177 VSETVL is slightly different from RVV. Like RVV, VL is set to be within
178 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
179
180 VL = rd = MIN(vlen, MVL)
181
182 where 1 <= MVL <= XLEN
183
However, just like MVL, it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.
186
187 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
188 to switch the entire bank of registers using a single instruction (see
189 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
190 is down to the fact that predication bits fit into a single register of
191 length XLEN bits.
192
The second change is that when VSETVL is requested to be stored
into x0, it is *ignored* silently (VSETVL x0, x5).
195
196 The third and most important change is that, within the limits set by
197 MVL, the value passed in **must** be set in VL (and in the
198 destination register).
199
200 This has implication for the microarchitecture, as VL is required to be
201 set (limits from MVL notwithstanding) to the actual value
202 requested. RVV has the option to set VL to an arbitrary value that suits
203 the conditions and the micro-architecture: SV does *not* permit this.
204
205 The reason is so that if SV is to be used for a context-switch or as a
206 substitute for LOAD/STORE-Multiple, the operation can be done with only
207 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
208 single LD/ST operation). If VL does *not* get set to the register file
209 length when VSETVL is called, then a software-loop would be needed.
210 To avoid this need, VL *must* be set to exactly what is requested
211 (limits notwithstanding).
212
213 Therefore, in turn, unlike RVV, implementors *must* provide
214 pseudo-parallelism (using sequential loops in hardware) if actual
215 hardware-parallelism in the ALUs is not deployed. A hybrid is also
216 permitted (as used in Broadcom's VideoCore-IV) however this must be
217 *entirely* transparent to the ISA.
218
219 The fourth change is that VSETVL is implemented as a CSR, where the
220 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
221 the *new* value in the destination register, **not** the old value.
222 Where context-load/save is to be implemented in the usual fashion
223 by using a single CSRRW instruction to obtain the old value, the
224 *secondary* CSR must be used (SVSTATE). This CSR behaves
225 exactly as standard CSRs, and contains more than just VL.
226
227 One interesting side-effect of using CSRRWI to set VL is that this
228 may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).
231
232 Note that when VL is set to 1, all parallel operations cease: the
233 hardware loop is reduced to a single element: scalar operations.
234
235 ## SUBVL - Sub Vector Length
236
237 This is a "group by quantity" that effectively divides VL into groups of elements of length SUBVL. VL itself must therefore be set in advance to a multiple of SUBVL.
238
239 Legal values are 1, 2, 3 and 4, and the STATE CSR must hold the 2 bit values 0b00 thru 0b11.
240
241 Setting this CSR to 0 must raise an exception. Setting it to a value greater than 4 likewise.
242
243 The main effect of SUBVL is that predication bits are applied per **group**,
244 rather than by individual element.
245
246 This saves a not insignificant number of instructions when handling 3D vectors, as otherwise a much longer predicate mask would have to be set up with regularly-repeated bit patterns.
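
The per-group application of predicate bits may be sketched as follows (the flat element list and the function itself are illustrative assumptions, not normative pseudocode):

```python
# Sketch of per-group predication under SUBVL: one predicate bit covers
# an entire SUBVL group of elements (illustrative only).

def subvl_predicate_apply(vl, subvl, predmask):
    """Return, per element, whether that element is enabled."""
    assert vl % subvl == 0, "VL must be a multiple of SUBVL"
    enabled = []
    for group in range(vl // subvl):
        bit = (predmask >> group) & 1        # one bit per *group*...
        enabled.extend([bool(bit)] * subvl)  # ...applied to every element
    return enabled

# VL=6 seen as two 3-element vectors (SUBVL=3); predicate 0b10
# enables the second vector only
print(subvl_predicate_apply(6, 3, 0b10))
```

Only two predicate bits are needed for six elements, rather than the regularly-repeated 6-bit pattern a per-element mask would require.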
247
248 ## STATE
249
250 This is a standard CSR that contains sufficient information for a
251 full context save/restore. It contains (and permits setting of)
252 MVL, VL, SUBVL,
253 the destination element offset of the current parallel
254 instruction being executed, and, for twin-predication, the source
255 element offset as well. Interestingly it may hypothetically
256 also be used to make the immediately-following instruction to skip a
257 certain number of elements, however the recommended method to do
258 this is predication or using the offset mode of the REMAP CSRs.
259
260 Setting destoffs and srcoffs is realistically intended for saving state
261 so that exceptions (page faults in particular) may be serviced and the
262 hardware-loop that was being executed at the time of the trap, from
263 user-mode (or Supervisor-mode), may be returned to and continued from
264 where it left off. The reason why this works is because setting
265 User-Mode STATE will not change (not be used) in M-Mode or S-Mode
266 (and is entirely why M-Mode and S-Mode have their own STATE CSRs).
267
268 The format of the STATE CSR is as follows:
269
270 | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
271 | -------- | -------- | -------- | -------- | ------- | ------- |
272 | rsvd | subvl | destoffs | srcoffs | vl | maxvl |
273
274 When setting this CSR, the following characteristics will be enforced:
275
276 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
277 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 values; however, if VL is not a multiple of SUBVL, an exception will be raised.
279 * **srcoffs** will be truncated to be within the range 0 to VL-1
280 * **destoffs** will be truncated to be within the range 0 to VL-1
281
282 ## MVL and VL Pseudocode
283
284 The pseudo-code for get and set of VL and MVL are as follows:
285
    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes, returning the new value, NOT the old CSR
        return VL

    get_vl_csr(rd):
        regs[rd] = VL
        return VL
301
302 Note that where setting MVL behaves as a normal CSR, unlike standard CSR
303 behaviour, setting VL will return the **new** value of VL **not** the old
304 one.
305
306 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
307 maximise the effectiveness, an immediate of 0 is used to set VL=1,
308 an immediate of 1 is used to set VL=2 and so on:
309
    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)
315
316 However for CSRRW the following pseudocode is used for MVL and VL,
317 where setting the value to zero will cause an exception to be raised.
318 The reason is that if VL or MVL are set to zero, the STATE CSR is
319 not capable of returning that value.
320
    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_vl_csr(value, rd)
332
333 In this way, when CSRRW is utilised with a loop variable, the value
334 that goes into VL (and into the destination register) may be used
335 in an instruction-minimal fashion:
336
    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3          # sets MVL == **4** (not 3)
    j zerotest             # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0       # vl = t0 = min(mvl, a0)
    ld a3, a1              # load up to 4 registers a3-a6 from x
    slli t1, t0, 3         # t1 = vl * 8 (in bytes)
    ld a7, a2              # load up to 4 registers a7-a10 from y
    add a1, a1, t1         # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7  # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0         # n -= vl (t0)
    st a7, a2              # store up to 4 registers a7-a10 to y
    add a2, a2, t1         # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop          # repeat if n != 0
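
The strip-mining behaviour of the loop above may be modelled in software as follows (a sketch: the vectorised loads, stores and fmadd are collapsed into plain element arithmetic, and MVL=4 mirrors the CSRRWI above):

```python
# Software model of the strip-mined DAXPY loop above (illustrative:
# register allocation and the vector LD/ST behaviour are abstracted away).

def daxpy(n, a, x, y, mvl=4):
    """y[i] += a * x[i], processed in chunks of VL = min(MVL, remaining)."""
    i = 0
    while n > 0:
        vl = min(mvl, n)           # CSRRW VL, t0, a0
        for j in range(vl):        # the vectorised fmadd over VL elements
            y[i + j] += a * x[i + j]
        i += vl                    # the pointer increments (add a1/a2)
        n -= vl                    # sub a0, a0, t0
    return y

print(daxpy(10, 2.0, [1.0] * 10, [3.0] * 10))
```

Note that because VL is guaranteed to be set to min(MVL, n), the final (partial) chunk needs no special-case code: the tail is handled by the same loop body.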
353
354 With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
357
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd) # returns the *old* state in rd
        set_mvl_csr(value[5:0]+1, x0)
        set_vl_csr(value[11:6]+1, x0)
        destoffs = value[23:18]
        srcoffs = value[17:12]

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18
        return regs[rd]
370
371 In both cases, whilst CSR read of VL and MVL return the exact values
372 of VL and MVL respectively, reading and writing the STATE CSR returns
373 those values **minus one**. This is absolutely critical to implement
374 if the STATE CSR is to be used for fast context-switching.
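
The minus-one convention may be illustrated with a pack/unpack round-trip (a sketch following the STATE bit layout given above; the SUBVL and reserved fields are omitted for brevity):

```python
# Round-trip of the STATE CSR encoding (sketch; subvl/rsvd omitted).

def pack_state(mvl, vl, srcoffs, destoffs):
    # MVL and VL are stored minus one, so that 1..64 fits in 6 bits
    return (mvl - 1) | (vl - 1) << 6 | srcoffs << 12 | destoffs << 18

def unpack_state(value):
    mvl      = (value & 0x3f) + 1
    vl       = ((value >> 6) & 0x3f) + 1
    srcoffs  = (value >> 12) & 0x3f
    destoffs = (value >> 18) & 0x3f
    return mvl, vl, srcoffs, destoffs

assert unpack_state(pack_state(64, 16, 3, 5)) == (64, 16, 3, 5)
assert pack_state(1, 1, 0, 0) == 0   # MVL=VL=1 encodes as all zeros
```

The round-trip demonstrates why the offset is necessary: without it, MVL=64 on RV64 would not fit in 6 bits, and the all-zeros STATE value would denote the unrepresentable MVL=VL=0.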
375
376 ## Register key-value (CAM) table <a name="regcsrtable" />
377
378 *NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VLIW instruction format, and entries may be overridden by the SVPrefix format*
379
380 The purpose of the Register table is four-fold:
381
* To mark integer and floating-point registers as requiring "redirection"
if they are ever used as a source or destination in any given operation.
This involves a level of indirection through a 5-to-7-bit lookup table,
such that **unmodified** operands with 5 bits (3 for Compressed) may
access up to **128** registers.
387 * To indicate whether, after redirection through the lookup table, the
388 register is a vector (or remains a scalar).
389 * To over-ride the implicit or explicit bitwidth that the operation would
390 normally give the register.
391
392 16 bit format:
393
394 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
395 | ------ | | - | - | - | ------ | ------- |
396 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
397 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
398 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
399 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
400
401 8 bit format:
402
403 | RegCAM | | 7 | (6..5) | (4..0) |
404 | ------ | | - | ------ | ------- |
405 | 0 | | i/f | vew0 | regnum |
406
407 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
408 to integer registers; 0 indicates that it is relevant to floating-point
409 registers.
410
The 8 bit format is used for a much more compact expression. "isvec" is implicit and, similar to [[sv_prefix_proposal]], the target vector is "regnum<<2", implicitly. Contrast this with the 16-bit format, where the target vector is *explicitly* named in bits 8 to 14, and bit 15 may optionally set "scalar" mode.
412
Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc., and thus the "vector" mode need only shift the (6 bit) regnum by 1 to get the actual (7 bit) register number to use, there is not enough space in the 8 bit format, so "regnum<<2" is required.
414
415 vew has the following meanings, indicating that the instruction's
416 operand size is "over-ridden" in a polymorphic fashion:
417
418 | vew | bitwidth |
419 | --- | ------------------- |
420 | 00 | default (XLEN/FLEN) |
421 | 01 | 8 bit |
422 | 10 | 16 bit |
423 | 11 | 32 bit |
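
A decode of the 8-bit table entry, including the vew field, might look like the following sketch (field positions are from the tables above, the implicit "regnum<<2" rule from the preceding paragraphs; XLEN=64 and the function itself are assumptions):

```python
# Sketch decoder for the 8-bit RegCAM format (illustrative only).

VEW_BITWIDTH = {0b00: None, 0b01: 8, 0b10: 16, 0b11: 32}  # None = XLEN/FLEN

def decode_regcam8(entry, xlen=64):
    regnum = entry & 0x1f          # bits 4..0
    vew    = (entry >> 5) & 0x3    # bits 6..5: element width override
    is_int = (entry >> 7) & 1      # bit 7: 1 = integer, 0 = floating-point
    return {
        "is_int":  bool(is_int),
        "elwidth": VEW_BITWIDTH[vew] or xlen,
        "regkey":  regnum,
        "regidx":  regnum << 2,    # implicit target in the 8-bit format
        "isvec":   True,           # also implicit in the 8-bit format
    }

print(decode_regcam8(0b10100011))  # int, vew=8-bit, regnum=3
```

Note how, unlike the 16-bit format, neither "isvec" nor the target register appear in the entry: both are implied.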
424
425 As the above table is a CAM (key-value store) it may be appropriate
426 (faster, implementation-wise) to expand it as follows:
427
    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 table entries
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey            // INT/FP src/dst reg in opcode
        tb[idx].elwidth = CSRvec[i].elwidth
        tb[idx].regidx = CSRvec[i].regidx     // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed = CSRvec[i].packed     // SIMD or not
437
438
439
440 ## Predication Table <a name="predication_csr_table"></a>
441
442 *NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VLIW instruction format, and entries may be overridden by the SVPrefix format*
443
444 The Predication Table is a key-value store indicating whether, if a given
445 destination register (integer or floating-point) is referred to in an
446 instruction, it is to be predicated. Like the Register table, it is an indirect lookup that allows the RV opcodes to not need modification.
447
448 It is particularly important to note
449 that the *actual* register used can be *different* from the one that is
450 in the instruction, due to the redirection through the lookup table.
451
452 * regidx is the register that in combination with the
453 i/f flag, if that integer or floating-point register is referred to
454 in a (standard RV) instruction
455 results in the lookup table being referenced to find the predication
456 mask to use for this operation.
457 * predidx is the
458 *actual* (full, 7 bit) register to be used for the predication mask.
459 * inv indicates that the predication mask bits are to be inverted
460 prior to use *without* actually modifying the contents of the
register from which those bits originated.
462 * zeroing is either 1 or 0, and if set to 1, the operation must
463 place zeros in any element position where the predication mask is
464 set to zero. If zeroing is set to 0, unpredicated elements *must*
465 be left alone. Some microarchitectures may choose to interpret
466 this as skipping the operation entirely. Others which wish to
467 stick more closely to a SIMD architecture may choose instead to
468 interpret unpredicated elements as an internal "copy element"
469 operation (which would be necessary in SIMD microarchitectures
470 that perform register-renaming)
471
472 16 bit format:
473
| PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
| ----- | -------- | ------ | ----- | --- | ------- | ---- |
| 0 | predkey | zero0 | inv0 | i/f | regidx | rsvd |
| 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
| ... | predkey | ..... | .... | i/f | ....... | .... |
| 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
480
481
482 8 bit format:
483
484 | PrCSR | 7 | 6 | 5 | (4..0) |
485 | ----- | - | - | - | ------- |
486 | 0 | zero0 | inv0 | i/f | regnum |
487
The 8 bit format is a compact and less expressive variant of the full 16 bit format. Using the 8 bit format is very different: the predicate register to use is implicit, and numbering begins implicitly from x9. The regnum is still used to "activate" predication, in the same fashion as described above.
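
A decode of the 8-bit predication entry might be sketched as follows (field positions from the table above; the mapping of table position onto x9, x10, ... is an assumption of this sketch):

```python
# Sketch decoder for the 8-bit predication format (illustrative only).

def decode_pred8(entry, entry_index):
    regnum = entry & 0x1f        # bits 4..0: the register that "activates"
    is_int = (entry >> 5) & 1    # bit 5: i/f selector
    inv    = (entry >> 6) & 1    # bit 6: invert the mask bits
    zero   = (entry >> 7) & 1    # bit 7: zeroing
    # the predicate register itself is implicit, numbered upward from x9;
    # "x9 + table position" is an assumption made for this sketch
    predidx = 9 + entry_index
    return {"regnum": regnum, "is_int": bool(is_int),
            "inv": bool(inv), "zero": bool(zero), "predidx": predidx}

print(decode_pred8(0b11100101, 0))
```

The trade-off is clear: half the table space, at the cost of no control over which register actually supplies the predicate mask.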
489
490 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
491 it will be faster to turn the table around (maintain topologically
492 equivalent state):
493
    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero = CSRpred[i].zero
        tb[idx].inv = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true
511
512 So when an operation is to be predicated, it is the internal state that
513 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
514 pseudo-code for operations is given, where p is the explicit (direct)
515 reference to the predication register to be used:
516
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
522
523 This instead becomes an *indirect* reference using the *internal* state
524 table generated from the Predication CSR key-value store, which is used
525 as follows.
526
    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
540
541 Note:
542
543 * d, s1 and s2 are booleans indicating whether destination,
544 source1 and source2 are vector or scalar
545 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
546 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
547 register-level redirection (from the Register table) if they are
548 vectors.
549
550 If written as a function, obtaining the predication mask (and whether
551 zeroing takes place) may be done as follows:
552
    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False        // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False        // all enabled; no zeroing
        predidx = tb[reg].predidx     // redirection occurs HERE
        predicate = intreg[predidx]   // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate    // invert ALL bits
        return predicate, tb[reg].zero
565
566 Note here, critically, that **only** if the register is marked
567 in its **register** table entry as being "active" does the testing
568 proceed further to check if the **predicate** table entry is
569 also active.
570
571 Note also that this is in direct contrast to branch operations
for the storage of comparisons: in these specific circumstances
573 the requirement for there to be an active *register* entry
574 is removed.
575
576 ## REMAP CSR <a name="remap" />
577
578 (Note: both the REMAP and SHAPE sections are best read after the
579 rest of the document has been read)
580
581 There is one 32-bit CSR which may be used to indicate which registers,
582 if used in any operation, must be "reshaped" (re-mapped) from a linear
583 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
584 access to elements within a register.
585
586 The 32-bit REMAP CSR may reshape up to 3 registers:
587
588 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
589 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
590 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
591
592 regidx0-2 refer not to the Register CSR CAM entry but to the underlying
593 *real* register (see regidx, the value) and consequently is 7-bits wide.
When set to zero (referring to x0), the entry is "disabled": reshaping
x0 would clearly be pointless, so the zero value is free to serve this purpose.
596 shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
597 Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
598
It is anticipated that these specialist CSRs will not be used very often.
600 Unlike the CSR Register and Predication tables, the REMAP CSRs use
601 the full 7-bit regidx so that they can be set once and left alone,
602 whilst the CSR Register entries pointing to them are disabled, instead.
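
A decode of the REMAP CSR fields might be sketched as follows (bit positions from the table above; the function itself is illustrative):

```python
# Sketch decoder for the 32-bit REMAP CSR (illustrative only).

def decode_remap(csr):
    # regidx0-2 at bits 6..0, 14..8, 22..16 (7 bits each, the *real* register)
    regidx = [(csr >> sh) & 0x7f for sh in (0, 8, 16)]
    # shape0-2 at bits 25..24, 27..26, 29..28 (selects SHAPE0-2; 0x3 reserved)
    shape  = [(csr >> sh) & 0x3 for sh in (24, 26, 28)]
    # regidx == 0 (x0) means that remap entry is disabled
    return [(r, s) for r, s in zip(regidx, shape) if r != 0]

# remap real register x10 through SHAPE0 and x20 through SHAPE1:
print(decode_remap((1 << 26) | (20 << 8) | 10))
```
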
603
604 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
605
606 (Note: both the REMAP and SHAPE sections are best read after the
607 rest of the document has been read)
608
609 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
610 which have the same format. When each SHAPE CSR is set entirely to zeros,
611 remapping is disabled: the register's elements are a linear (1D) vector.
612
613 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
614 | ------- | -- | ------- | -- | ------- | -- | ------- |
615 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
616
617 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
618 is added to the element index during the loop calculation.
619
620 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
621 that the array dimensionality for that dimension is 1. A value of xdimsz=2
622 would indicate that in the first dimension there are 3 elements in the
623 array. The format of the array is therefore as follows:
624
625 array[xdim+1][ydim+1][zdim+1]
626
627 However whilst illustrative of the dimensionality, that does not take the
628 "permute" setting into account. "permute" may be any one of six values
629 (0-5, with values of 6 and 7 being reserved, and not legal). The table
630 below shows how the permutation dimensionality order works:
631
632 | permute | order | array format |
633 | ------- | ----- | ------------------------ |
634 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
635 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
636 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
637 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
638 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
639 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
640
641 In other words, the "permute" option changes the order in which
642 nested for-loops over the array would be done. The algorithm below
643 shows this more clearly, and may be executed as a python program:
644
    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0]   # starting indices
    order = [1,0,2]  # experiment with different permutations, here
    offs = 0         # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0
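
The same algorithm, wrapped as a function, makes individual permutations straightforward to check (a sketch following the code above; the function name is illustrative):

```python
# The loop above, as a function returning the remapped element indices.

def remap_indices(xdim, ydim, zdim, order, offs=0):
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

# 2D transpose of a 3x4 matrix (zdim=1): y varies fastest (order 1,0,2)
print(remap_indices(3, 4, 1, [1, 0, 2]))
```

For the 3x4 case this yields the indices read column-by-column (0, 3, 6, 9, 1, 4, ...), i.e. an in-place transposition of the element access order, with no data movement.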
664
665 Here, it is assumed that this algorithm be run within all pseudo-code
666 throughout this document where a (parallelism) for-loop would normally
667 run from 0 to VL-1 to refer to contiguous register
668 elements; instead, where REMAP indicates to do so, the element index
669 is run through the above algorithm to work out the **actual** element
670 index, instead. Given that there are three possible SHAPE entries, up to
671 three separate registers in any given operation may be simultaneously
672 remapped:
673
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }
685
686 By changing remappings, 2D matrices may be transposed "in-place" for one
687 operation, followed by setting a different permutation order without
688 having to move the values in the registers to or from memory. Also,
689 the reason for having REMAP separate from the three SHAPE CSRs is so
690 that in a chain of matrix multiplications and additions, for example,
691 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
692 changed to target different registers.

Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  to either use a predicated MV, skipping the first elements, or
  perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

Clearly here some considerable care needs to be taken as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *All* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all operations,
with the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions, whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
  left as scalar.
* LR/SC could hypothetically be parallelised however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions therefore is guaranteed to
  not be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  paralleliseable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSRs.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.
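The hardware loop can be modelled directly in software. The following is a minimal sketch (not normative: the CSR register table is reduced to a plain set of vector-tagged register numbers, and predication to an integer bitmask) showing how a single scalar ADD opcode expands into VL element operations:

```python
# Minimal software model of the SV hardware loop for an integer ADD.
VL = 4
ireg = list(range(64))              # flat integer register file
vectorised = {3, 10, 20}            # registers tagged as vectors

def op_add(rd, rs1, rs2, predval):
    idx = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):      # predication uses intregs
            ireg[rd + idx] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if rd not in vectorised:
                break               # scalar destination: one result only
        if rd in vectorised: idx += 1
        if rs1 in vectorised: irs1 += 1
        if rs2 in vectorised: irs2 += 1

op_add(3, 10, 20, predval=0b1111)   # x3..x6 = x10..x13 + x20..x23
```

With all three registers tagged and a full predicate, the one ADD writes four contiguous results starting at the tagged destination.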

## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV,
FCVT, and LOAD/STORE depending on CSR configurations for bitwidth
and predication. **Everything** becomes parallelised. *This includes
Compressed instructions* as well as any future instructions and Custom
Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers are marked as vectors (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src registers
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch
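The flow above can also be modelled as a short, runnable sketch. This is illustrative only, not normative: `cmp` is fixed to equality, the predicate-register plumbing is replaced by plain function arguments, `init` stands in for the pre-existing `preg[rd]` value, and under zeroing the active elements are still compared (the masked-out ones are forced to zero):

```python
# Sketch of the SV predicated-compare loop: produces a predicate
# "result" and a branch-taken decision. s1/s2 flag whether each
# source is vectorised.
def sv_branch_cmp(reg, src1, src2, s1, s2, ps, VL, zeroing, init=0):
    result = 0 if zeroing else init  # init models preg[rd] if it exists
    for i in range(VL):
        bit = 1 << i
        if zeroing and not (ps & bit):
            result &= ~bit           # predicated-out: bit forced to zero
        elif ps & bit:
            a = reg[src1 + i] if s1 else reg[src1]
            b = reg[src2 + i] if s2 else reg[src2]
            if a == b:
                result |= bit
            else:
                result &= ~bit
    # the branch goes ahead only if *all* active tests succeeded
    return result, result == ps

regs = [0, 5, 6, 7, 5, 6, 9]         # x1..x3 = 5,6,7; x4..x6 = 5,6,9
res, taken = sv_branch_cmp(regs, 1, 4, True, True,
                           ps=0b111, VL=3, zeroing=True)
```

Here two of the three element compares succeed, so the predicate result is partially set but the branch itself is not taken.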
Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering") setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

TODO: predication now taken from src2. also branch goes ahead
if all compares are successful.

Note also that where normally predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

Floating-point branch operations do not exist: there are only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms that
becomes more of an impact. To deal with this, SV's predication has
had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a10 is equivalent to beqz a10,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
although initially counterintuitive. Clearly it is **not** recommended to
redirect x0 with a CSR register entry, however as a means to opaquely obtain
a predication target it is the only sensible option that does not involve
additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).
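The twin-predicated pattern can be exercised in software. The sketch below is illustrative only (the CSR "isvec" tags become plain boolean arguments, and over-run checks on the skip loops are omitted); the VSPLAT case from the list above falls out naturally when the source is scalar and the destination is a vector:

```python
# Sketch of the twin-predicated two-operand pattern (a C.MV-style copy).
# ps/pd are the source and destination predicate masks.
def twin_pred_mv(reg, rd, rs, srcvec, destvec, ps, pd, VL):
    i = j = 0
    while i < VL and j < VL:
        if srcvec:
            while not (ps >> i) & 1:
                i += 1              # skip masked-out source elements
        if destvec:
            while not (pd >> j) & 1:
                j += 1              # skip masked-out dest elements
        reg[rd + j] = reg[rs + i]
        if srcvec:
            i += 1
        if destvec:
            j += 1
        else:
            break                   # scalar destination: done

# VSPLAT: scalar src, vector dest -> reg[2] copied to reg[8..11]
reg = list(range(16))
twin_pred_mv(reg, 8, 2, srcvec=False, destvec=True,
             ps=0b1111, pd=0b1111, VL=4)
```

Flipping `srcvec`/`destvec` gives the other cases in the table that follows: vector source with scalar destination behaves as VEXTRACT, and vector-to-vector with sparse masks behaves as gather/scatter.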

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register are marked as vectors
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table data="""
15 12 | 11 7 | 6 2 | 1 0 |
funct4 | rd | rs | op |
4 | 5 | 5 | 2 |
C.MV | dest | src | C0 |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src | dest | predication | op |
scalar | vector | none | VSPLAT |
scalar | vector | destination | sparse VSPLAT |
scalar | vector | 1-bit dest | VINSERT |
vector | scalar | 1-bit? src | VEXTRACT |
vector | vector | none | VCOPY |
vector | vector | src | Vector Gather |
vector | vector | dest | Vector Scatter |
vector | vector | src & dest | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY |
"""]]

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type
conversion operation of the appropriate size covering the source and
destination register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 64-bit floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element and the higher 32-bits *also* converted to floating-point
and stored in the second. The 32 bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1 element width is to be taken as 32.

Similar rules apply to the destination register.
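The FCVT.S.L example can be sketched numerically. This is a hedged illustration (register packing shown as plain Python integer arithmetic, not hardware behaviour): the 64-bit source register holds two 32-bit integer elements, and each is converted to a single-precision element of the destination.

```python
import struct

# Sketch of the FCVT.S.L packed-SIMD example: rs1's 64 bits hold two
# 32-bit integer elements (elwidth = default/2); each is converted to
# a single-precision float in successive elements of rd.
rs1 = (7 << 32) | 3                     # packed 32-bit integers: 3 and 7
lo = rs1 & 0xFFFFFFFF                   # first element (low 32 bits)
hi = rs1 >> 32                          # second element (high 32 bits)

# round-trip through a 32-bit float encoding to emphasise that the
# results are single-precision elements
rd_elems = [struct.unpack('<f', struct.pack('<f', float(x)))[0]
            for x in (lo, hi)]
```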

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth is not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).
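The srcbase decision at the heart of the loop can be isolated into a few lines. This is a sketch under stated assumptions (XLEN fixed at 64, register names purely illustrative), contrasting the two addressing modes:

```python
# Sketch of srcbase selection in the SV LD loop: a scalar rs register
# gives unit-stride addressing, a vectorised rs gives per-element
# indirection (each element of rs holds a distinct address).
XLEN = 64

def srcbase(ireg, rsv, i, rs_is_vector):
    if rs_is_vector:
        return ireg[rsv + i]            # indirect: element i is an address
    return ireg[rsv] + i * (XLEN // 8)  # unit stride: fixed byte increment

ireg = [0] * 8
ireg[5] = 0x1000                        # scalar base address in "x5"
addrs_unit = [srcbase(ireg, 5, i, False) for i in range(4)]

ireg[5:8] = [0x2000, 0x80, 0x3000]      # "x5".."x7" hold distinct addresses
addrs_ind = [srcbase(ireg, 5, i, True) for i in range(3)]
```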

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where when accessing any individual regfile[n].b entry it is permitted
(in c) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt to access beyond the "real" register
bytes is ever made.
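The overspill model is easy to reproduce in software by treating the entire register file as one flat byte array. The sketch below is illustrative (names and the absence of the end-of-regfile exception check are simplifying assumptions):

```python
import struct

# Sketch of the "overspill" model: the whole integer register file as
# one flat byte array, so an elwidth-8 vector that starts in register
# n simply runs on into register n+1's bytes.
NREGS, XLEN_BYTES = 128, 8
regfile = bytearray(NREGS * XLEN_BYTES)

def set_elem(reg, elwidth_bytes, offset, val):
    # compute the flat byte address of element "offset" of "reg"
    addr = reg * XLEN_BYTES + offset * elwidth_bytes
    fmt = {1: '<B', 2: '<H', 4: '<I', 8: '<Q'}[elwidth_bytes]
    struct.pack_into(fmt, regfile, addr, val)

# twelve 8-bit elements written "into" register 5: elements 8..11
# transparently spill into register 6
for i in range(12):
    set_elem(5, 1, i, i + 1)
```

A real implementation must additionally raise an exception when the flat address runs past the final register's bytes, as noted above.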

Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non" polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
  sign-extension / zero-extension or whatever is specified in the standard
  RV specification, **change** that to sign-extending from the respective
  individual source operand's bitwidth from the CSR table out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sw
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sw etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:
1359
    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = bw(int_csr[rd].elwidth)     # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd ].isvector)  { ird += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Whilst the specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
makes the following clear:

* the source operands are extended out to the maximum bitwidth of all
source operands
* the operation takes place at that maximum source bitwidth (the
destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even, truncated) before being
stored in the destination. i.e. truncation (if required) to the
destination width occurs **after** the operation **not** before.
* when the destination is not marked as "vectorised", the **full**
(standard, scalar) register file entry is taken up, i.e. the
element is either sign-extended or zero-extended to cover the
full register bitwidth (XLEN) if it is not already XLEN bits long.

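The scalar-versus-vectorised destination rule in the last bullet can be
sketched in Python. The register-file model and byte layout below are
illustrative assumptions only, not part of the specification:

```python
XLEN = 64

# Model the integer register file as a flat little-endian bytearray:
# 32 registers x 8 bytes. Purely an illustrative model.
regfile = bytearray(32 * 8)

def set_polymorphed_reg(reg, bitwidth, offset, val, isvec, signed=False):
    nbytes = bitwidth // 8
    if not isvec:
        # scalar destination: sign/zero-extend out to XLEN and
        # overwrite the *entire* register entry
        if signed and val & (1 << (bitwidth - 1)):
            val |= ((1 << XLEN) - 1) ^ ((1 << bitwidth) - 1)
        regfile[reg*8:reg*8+8] = val.to_bytes(8, 'little')
    else:
        # vectorised destination: only the addressed element is written;
        # all other bytes of the underlying register are preserved
        base = reg*8 + offset*nbytes
        regfile[base:base+nbytes] = val.to_bytes(nbytes, 'little')
```

With 16-bit elements, writing element 3 of x8 touches only bytes 6-7 of
that register, whereas a scalar 16-bit signed write sign-extends across
the whole 64-bit entry.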
Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in fewer bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or the appropriate value) is stored if the result
is beyond the range of the destination, and, just as with scalar
operations, the floating-point flag is raised (FCSR). Also just as
with scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's
implementation of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key points are:

* The operation bitwidth is determined by the maximum bitwidth
of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.

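A minimal Python sketch of the rule, using the 16-bit RS1 / 8-bit RS2
example above (the function name and register values are illustrative
assumptions):

```python
def poly_sll(rs1_val, rs2_val, rs1_bw, rs2_bw):
    # operation width is the maximum of the *source* element widths
    opwidth = max(rs1_bw, rs2_bw)
    mask = (1 << opwidth) - 1
    # RS2 must be truncated to the shift range of the operation width
    shamt = rs2_val & (opwidth - 1)
    return ((rs1_val & mask) << shamt) & mask

# RS1 elwidth=16, RS2 elwidth=8: op at 16 bits, RS2 masked with (16-1)
assert poly_sll(0x00F0, 0x21, 16, 8) == 0x01E0  # shift amount 0x21 & 15 = 1
assert poly_sll(0x8000, 0x01, 16, 8) == 0x0000  # MSB shifts out at 16 bits
```

Note that masking with `opwidth - 1` assumes power-of-two element widths,
which all SV elwidths are.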
## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half (MSBs) of a multiply whose
result does not fit within the range of the source operands, such that
smaller-width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. Had the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.

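The shift-down rule can be sketched as follows (unsigned variant,
MULHU; the helper name and values are illustrative assumptions):

```python
def poly_mulhu(a, b, bw_a, bw_b):
    # the multiply is performed at the maximum source bitwidth, and the
    # "high half" is obtained by shifting down by that same width
    opwidth = max(bw_a, bw_b)
    a &= (1 << bw_a) - 1
    b &= (1 << bw_b) - 1
    return (a * b) >> opwidth

# 8-bit x 8-bit: 16-bit product, shifted down by 8
assert poly_mulhu(0xFF, 0xFF, 8, 8) == 0xFE      # 0xFE01 >> 8
# 16-bit x 8-bit: 24-bit product, shifted down by 16 (top 8 bits zero)
assert poly_mulhu(0xFFFF, 0xFF, 16, 8) == 0xFE   # 0xFEFF01 >> 16
```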
## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, with i being the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by clamping elsperblock to a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

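The element-addressing rule, including the clamp, can be sketched in
Python (the function name is an illustrative assumption):

```python
def element_addr(i, opwidth, elwidth_bits):
    # elsperblock: how many elwidth-wide elements each indexed address
    # register's memory block supplies, clamped to a minimum of 1
    elsperblock = max(1, opwidth // elwidth_bits)
    return i // elsperblock, i % elsperblock  # (address-reg index, offset)

# LW (32-bit) with 16-bit source elements: two elements per address register
assert [element_addr(i, 32, 16) for i in range(4)] == \
       [(0, 0), (0, 1), (1, 0), (1, 1)]
# LB (8-bit) with 16-bit elements: elsperblock would be 0, clamps to 1
assert [element_addr(i, 8, 16) for i in range(2)] == [(0, 0), (1, 0)]
```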
As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth) # destination element width
        srcwid = bw(int_csr[rs].elwidth)  # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, srcwid))
            else:
                val = sign_extend(val, min(opwidth, srcwid))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Note:

* when comparing against for example the twin-predicated c.mv
pseudo-code, the pattern of independent incrementing of rd and rs
is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
taken into account (TODO).
* due to the use of a twin-predication algorithm, LOAD/STORE also
takes on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
VSCATTER characteristics.
* due to the use of the same set\_polymorphed\_reg pseudocode,
a destination that is not vectorised (marked as scalar) will
result in the element being fully sign-extended or zero-extended
out to the full register file bitwidth (XLEN). When the source
is also marked as scalar, this is how the compatibility with
standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two stage process works (three if zero/sign-extension is included).

#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table. Due to the element width being 16 and the
operation being LD (64-bit), the 64 bits loaded from memory are
subdivided into groups of **four** elements.
And, with VL being 7 (deliberately to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:
[[!table data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
With VL=7 there are only seven elements:

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 was also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right byte
order, the registers are shown in right-to-left (MSB) order. This does **not**
imply that bit or byte-reversal is carried out: it's just easier to visualise
memory as being contiguous bytes, and emphasises that registers are not
really actually "memory" as such.

## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths allow only three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more, and to allow 64, 128 and other options besides. The answer is that
it gets too complex, that no RV128 implementation yet exists, and that
RV64's default elwidth is 64-bit anyway, so the four major element widths
are covered regardless.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out: whether the "vectorised" bit is set in
the register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
then sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will erase the **full** register entry
(64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth is set to 16 on the destination
register, operations are truncated to 16 bit and then sign or zero
extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or any other value
where groups of elwidth-sized elements do not fill an entire XLEN
register), the "top" bits of the destination register do *NOT* get
modified, zero'd or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64-bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth-based operations would
activate the required subset of those byte-level write lines.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.

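This example can be sketched in Python (the register values below are
assumptions, purely for illustration):

```python
def sv_add_8bit(rd_old, rs1, rs2, vl):
    # three independent 8-bit adds into the low bytes of rd;
    # all bytes beyond VL elements are preserved untouched
    rd = rd_old
    for i in range(vl):
        a = (rs1 >> (8 * i)) & 0xFF
        b = (rs2 >> (8 * i)) & 0xFF
        s = (a + b) & 0xFF                   # each element wraps at 8 bits
        rd = (rd & ~(0xFF << (8 * i))) | (s << (8 * i))
    return rd & ((1 << 64) - 1)

# bits 0-23 become the three 8-bit sums; bits 24-63 keep their old value
assert sv_add_8bit(0xAAAAAAAAAAAAAAAA, 0x010203, 0x010101, 3) \
       == 0xAAAAAAAAAA020304
```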
Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111
i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
(which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 elements back unmodified
and stores them in rd.

This requires a read on rd, however this is required anyway in order
to support non-zeroing mode.

## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32-bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
top 64 MSBs ignored.

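For reference, NaN-boxing itself can be sketched as follows, here for
the common case of a 32-bit single boxed inside a 64-bit FP register
(the helper names are illustrative assumptions; the 128-bit FLEN case
above is exactly analogous, with the top 64 bits set to 1s):

```python
import struct

def nanbox32(f):
    # encode a Python float as IEEE 754 single, then set all upper
    # 32 bits of the 64-bit FP register to 1s ("NaN-boxing")
    bits = struct.unpack('<I', struct.pack('<f', f))[0]
    return 0xFFFFFFFF00000000 | bits

def is_nanboxed32(reg):
    # a valid single-precision value in a wider register must have
    # all upper bits set, otherwise it is treated as a canonical NaN
    return (reg >> 32) == 0xFFFFFFFF

assert is_nanboxed32(nanbox32(1.5))
assert nanbox32(0.0) == 0xFFFFFFFF00000000
```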
Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the adds
in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q with elwidth at
non-default values are effectively all the same: they all still perform
multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in place to know
the difference, such that when VL=8 and an ADD.D instruction is issued,
it completes in 2 cycles (or more) rather than one, where an ADD.Q
issued instead on such simpler microarchitectures would complete in one.

## Specific instruction walk-throughs

This section covers walk-throughs of the above-outlined procedure
for converting standard RISC-V scalar arithmetic operations to
polymorphic widths, to ensure that it is correct.

### add

Standard Scalar RV32/RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits

Polymorphic variant:

* RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.

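The polymorphic add can be sketched directly from the bullets above
(the function name and register values are illustrative assumptions):

```python
def poly_add(rs1_val, rs2_val, rs1_bw, rs2_bw, rd_bw):
    opwidth = max(rs1_bw, rs2_bw)
    a = rs1_val & ((1 << rs1_bw) - 1)        # zero-extend to opwidth
    b = rs2_val & ((1 << rs2_bw) - 1)
    result = (a + b) & ((1 << opwidth) - 1)  # add at max source width
    return result & ((1 << rd_bw) - 1)       # truncate/zero-extend to rd

# 8-bit + 16-bit sources into an 8-bit destination: the add happens at
# 16 bits (0xF0 + 0x0120 = 0x0210), then the result truncates to 8 bits
assert poly_add(0xF0, 0x0120, 8, 16, 8) == 0x10
```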
### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.

Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W".

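The sign-extending counterpart for addw can be sketched similarly
(names and values are illustrative assumptions):

```python
def sext(val, frm, to):
    # sign-extend a frm-bit value out to to bits (as an unsigned pattern)
    val &= (1 << frm) - 1
    if val & (1 << (frm - 1)):
        val |= ((1 << to) - 1) ^ ((1 << frm) - 1)
    return val

def poly_addw(rs1_val, rs2_val, rs1_bw, rs2_bw, rd_bw):
    opwidth = max(rs1_bw, rs2_bw)
    a = sext(rs1_val, rs1_bw, opwidth)
    b = sext(rs2_val, rs2_bw, opwidth)
    result = (a + b) & ((1 << opwidth) - 1)
    if rd_bw > opwidth:
        return sext(result, opwidth, rd_bw)  # sign-extend out to rd
    return result & ((1 << rd_bw) - 1)       # otherwise truncate

# 8-bit -1 plus 16-bit +1 into a 32-bit destination: -1 sign-extends
# to 0xFFFF, the 16-bit add wraps to 0, sign-extended to 32 bits is 0
assert poly_addw(0xFF, 0x0001, 8, 16, 32) == 0
```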
### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ xlen bits: sign-extend the 32-bit result to xlen.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extended to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate

# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs, entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate. i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
                i++
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.

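A minimal Python sketch contrasting zeroing with non-zeroing
single-predication (the element values are assumptions, purely for
illustration):

```python
def pred_add(dest, src1, src2, pred, vl, zeroing):
    # element-wise add under a destination predicate; masked-out
    # elements are either zeroed or left completely untouched
    out = list(dest)
    for i in range(vl):
        if pred & (1 << i):
            out[i] = src1[i] + src2[i]
        elif zeroing:
            out[i] = 0
    return out

d = [9, 9, 9, 9]
assert pred_add(d, [1, 2, 3, 4], [10, 20, 30, 40], 0b0101, 4,
                zeroing=False) == [11, 9, 33, 9]
assert pred_add(d, [1, 2, 3, 4], [10, 20, 30, 40], 0b0101, 4,
                zeroing=True) == [11, 0, 33, 0]
```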
## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, a clear predicate bit indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: wherever either the source
predicate OR the destination predicate is set to 0,
a zero element will ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.

2045 Here is pseudo-code for a twin zero-predicated operation:
2046
    function op_mv(rd, rs) # MV, not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;

Note that, in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.
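
For reference, a directly executable model of the twin zero-predicated MV
may help in checking the interaction of the two predicates. It is a
simplification, not normative: the CSR redirection is omitted, the register
file is a plain Python list, and `isvec_rs`/`isvec_rd` stand in for the
`int_csr[..].isvec` lookups:

```python
# Executable sketch of twin zero-predicated MV (illustrative only).
def twin_pred_mv(ireg, rd, rs, ps, pd, zerosrc, zerodst,
                 isvec_rs, isvec_rd, VL):
    i = j = 0
    while i < VL and j < VL:
        # skip masked-out elements only when zeroing is NOT set
        # (bounds checks added here for safety; not in the pseudo-code)
        if isvec_rs and not zerosrc:
            while i < VL and not (ps & (1 << i)): i += 1
        if isvec_rd and not zerodst:
            while j < VL and not (pd & (1 << j)): j += 1
        if i >= VL or j >= VL:
            break
        if pd & (1 << j):
            # source zeroing: a clear source bit passes a zero through
            ireg[rd + j] = ireg[rs + i] if (ps & (1 << i)) else 0
        elif zerodst:
            ireg[rd + j] = 0
        if isvec_rs: i += 1
        if isvec_rd:
            j += 1
        elif pd & (1 << j):
            break  # scalar destination: stop once it has been written

ireg = [0] * 8 + [10, 20, 30, 40]
twin_pred_mv(ireg, rd=0, rs=8, ps=0b1010, pd=0b1111,
             zerosrc=True, zerodst=False, isvec_rs=True, isvec_rd=True, VL=4)
# ireg[0:4] is now [0, 20, 0, 40]: clear source bits passed zeroes through
```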

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implication is that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

As this is somewhat unsatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring a LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
| - | ----- | ----- | ----- | --- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.
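
As an illustration of the field layout in the table above (not normative
code; the function and dictionary key names are invented here for clarity),
the 16-bit prefix can be unpacked as follows:

```python
# Illustrative decoder for the 16-bit VLIW prefix fields shown above.
def decode_vliw_prefix(insn16):
    assert insn16 & 0x7f == 0x7f, "not a VLIW-prefixed instruction"
    return {
        'vlset': (insn16 >> 15) & 0x1,  # bit 15: VL Block present
        '16xil': (insn16 >> 12) & 0x7,  # bits 14:12: length is 80+16*IL bits
        'pplen': (insn16 >> 10) & 0x3,  # bits 11:10: PredCam entry count
        'rplen': (insn16 >> 8) & 0x3,   # bits 9:8: RegCam entry count
        'mode': (insn16 >> 7) & 0x1,    # bit 7: 16-bit (1) or 8-bit (0) blocks
    }

fields = decode_vliw_prefix(0b1_001_01_10_1_1111111)
# vlset=1, IL=1 (96 bits total), pplen=1, rplen=2, mode=1
```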

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2 | base | number of bits |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix | |

VL/MAXVL/SubVL Block:

| 31:30 | 29:28 | 27:22 | 21:16 |
| ----- | ----- | ------ | ---------------------- |
| 0 | SubVL | VLdest | VLEN (21:17), vlt (16) |
| 1 | SubVL | VLdest | VLEN (21:16) |

If vlt is 0, VLEN is a 5 bit immediate value. If vlt is 1, it specifies
the scalar register from which VL is set by this VLIW instruction
group. VL, whether set from the register or the immediate, is then
modified (truncated) to be MIN(VL, MAXVL), and the result stored in the
scalar register specified in VLdest. If VLdest is zero, no store in the
regfile occurs (however VL is still set).
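
The VL-setting rule just described can be sketched as follows (regfile and
CSR handling are simplified, and the function name is illustrative, not
normative):

```python
# Sketch of the VL Block semantics: VL is truncated to MAXVL and the
# result is (optionally) copied into the scalar register VLdest.
def set_vl_from_block(regs, vlt, VLEN, VLdest, MAXVL):
    requested = regs[VLEN] if vlt else VLEN  # register or 5-bit immediate
    VL = min(requested, MAXVL)               # truncate to MAXVL
    if VLdest != 0:                          # x0: VL still set, no regfile write
        regs[VLdest] = VL
    return VL

regs = [0] * 32
regs[5] = 100                                # requested VL held in x5
VL = set_vl_from_block(regs, vlt=1, VLEN=5, VLdest=3, MAXVL=64)
# VL == 64 (truncated to MAXVL), and regs[3] == 64
```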

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN, which is 6 bits in length, and the same value is stored in the
scalar register VLdest (if that register is nonzero).

This option will typically be used not so much for loops as
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST.

CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the predicate block format is the full 16 bit format
  (1) or the compact, less expressive, 8 bit format (0). In the 8 bit
  format, pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
  RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
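
The notes above can be combined into a sketch that computes the overall
layout of a VLIW group from the prefix fields. This is an interpretation
of the rules as stated, for illustration only (the function name and the
exact packing of the opcode area are assumptions):

```python
# Illustrative layout calculation from the VLIW prefix fields: total
# length is 80 + 16*IL bits; in the compact 8-bit (mode=0) block format,
# pplen and rplen entry counts are multiplied by 2.
def vliw_layout(vlset, il, pplen, rplen, mode):
    total_bits = 80 + 16 * il
    vl_block_bits = 16 if vlset else 0
    n_pred = pplen * (1 if mode else 2)  # number of PredCam entries
    n_reg = rplen * (1 if mode else 2)   # number of RegCam entries
    entry_bits = 16 if mode else 8
    opcode_bits = (total_bits - 16 - vl_block_bits
                   - (n_pred + n_reg) * entry_bits)
    return total_bits, n_pred, n_reg, opcode_bits

total, n_pred, n_reg, op_bits = vliw_layout(vlset=1, il=2, pplen=1,
                                            rplen=1, mode=1)
# 112 bits total: 16 prefix + 16 VL Block + 32 of cam entries + 48 for opcodes
```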

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four, 32-bit
opcodes: the CSRRW itself, plus a LI / LUI pair to set up the value in
the rs register of the CSRRW. To get 64-bit data into the register in
order to put it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that is potentially six to eight 32-bit instructions, just to
establish the Vector State!

Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more bits if
VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAXVL/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes,
  with no need for byte or bit-alignment?
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context switched and saved / restored in traps.

The VStart indices in the STATE CSR may be similarly regarded as another
sub-execution context, giving in effect two sets of nested sub-levels
of the RISC-V Program Counter.

In addition, as the xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within the block, i.e. addressing
is now restricted to the (very short) length of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump and a normal function call may only be taken by letting
the VLIW group end, returning to "normal" standard RV mode, using RVC,
32 bit or P48-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option", it is worth noting.

## RV32G

Normally, in standard RV32, it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here, however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bit given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), the recommendation is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.
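
The recommended write behaviour above can be sketched as follows, assuming
16-bit elements packed LSB-first into a wider register (the function name
and packing order are assumptions for illustration, not normative):

```python
# Illustrative write of packed sub-width elements into a wider register.
# Integer rule: sign-extend the used element closest to the MSB into the
# unused space.  FP rule: fill the unused space with all ones.
def pack_elements(elements, elwidth, regwidth, is_fp):
    used = len(elements) * elwidth
    mask = (1 << elwidth) - 1
    value = 0
    for n, el in enumerate(elements):
        value |= (el & mask) << (n * elwidth)
    unused = regwidth - used
    if unused:
        if is_fp:
            value |= ((1 << unused) - 1) << used    # all ones
        elif elements[-1] >> (elwidth - 1):         # sign bit of top element
            value |= ((1 << unused) - 1) << used    # sign-extend
    return value

# 3 16-bit elements in a 64-bit integer register, top element negative:
v = pack_elements([1, 2, 0x8000], elwidth=16, regwidth=64, is_fp=False)
# bits 63:48 are all ones (sign extension of 0x8000)
```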

---

Info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

Answer: they are not vectorised, so not a problem.

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth == default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove the RegCam and PredCam CSRs, and just use SVprefix
and the VLIW format.