1 # Simple-V (Parallelism Extension Proposal) Specification
2
* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 21 Jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
Simple-V is a uniform parallelism API for RISC-V hardware that has several
unplanned side-effects, including code-size reduction, expansion of
HINT space, and more. The reason for creating it is to provide a
manageable way to turn a pre-existing design into a parallel one in a
step-by-step, incremental fashion, without adding any new opcodes, thus
allowing the implementor to focus on adding hardware where it is needed
and necessary. The primary target is mobile-class 3D GPUs and VPUs, with
secondary goals of reducing executable size and context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited to RV32E and to situations where the implementor
wishes to focus on certain aspects of SV without investing unnecessary
time and resources in silicon, whilst still conforming strictly with the
API. A good area to punt to software would be, for example, the
polymorphic element width capability.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
* Standard RV instructions are "prefixed" (extended) through a 48-bit
  format (single-instruction option) or a variable-length VLIW-like
  prefix (multi or "grouped" option).
* The prefix(es) indicate which registers are "tagged" as "vectorised".
  Predicates can also be added.
* A "Vector Length" CSR is set, indicating the span of any future
  "parallel" operations.
* If any operation (a **scalar** standard RV opcode) uses a register
  that has been so "marked" ("tagged"), a hardware "macro-unrolling
  loop" is activated, of length VL, that effectively issues **multiple**
  identical instructions using contiguous, sequentially-incrementing
  register numbers, based on the "tags".
* **Whether they be executed sequentially or in parallel or a
  mixture of both or punted to software-emulation in a trap handler
  is entirely up to the implementor**.
93
94 In this way an entire scalar algorithm may be vectorised with
95 the minimum of modification to the hardware and to compiler toolchains.
96
To reiterate: **There are *no* new opcodes**. The scheme works *entirely* on hidden context that augments *scalar* RISC-V instructions.
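
As a purely illustrative sketch (the register file modelled as a Python list and the "tags" as a dictionary, both invented for this example), the implicit macro-unrolling loop for a tagged ADD might behave as follows:

```python
# Illustrative model only: regfile is a plain list, "tags" marks which
# register numbers have been tagged as vectors. One scalar ADD expands
# into up to VL element operations on contiguous register numbers.
def sv_add(regfile, rd, rs1, rs2, tags, VL):
    for i in range(VL):
        # a register number only increments if it is tagged as a vector
        d  = rd  + i if tags.get(rd)  else rd
        s1 = rs1 + i if tags.get(rs1) else rs1
        s2 = rs2 + i if tags.get(rs2) else rs2
        regfile[d] = regfile[s1] + regfile[s2]
        if not tags.get(rd):
            break  # scalar destination: one element and the loop ends
    return regfile
```

With rd, rs1 and rs2 all tagged and VL=4, this issues four adds into registers rd through rd+3, exactly as the hardware loop described above would.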
98
99 # CSRs <a name="csrs"></a>
100
There is an optional "reshaping" CSR key-value table which remaps from
a 1D linear shape to 2D or 3D, including full transposition.
103
There are also five additional User-mode CSRs:
105
106 * uMVL (the Maximum Vector Length)
107 * uVL (which has different characteristics from standard CSRs)
108 * uSUBVL (effectively a kind of SIMD)
109 * uEPCVLIW (a copy of the sub-execution Program Counter, that is relative to the start of the current VLIW Group, set on a trap).
110 * uSTATE (useful for saving and restoring during context switch,
111 and for providing fast transitions)
112
113 There are also five additional CSRs for Supervisor-Mode:
114
115 * SMVL
116 * SVL
117 * SSUBVL
118 * SEPCVLIW
119 * SSTATE
120
121 And likewise for M-Mode:
122
123 * MMVL
124 * MVL
125 * MSUBVL
126 * MEPCVLIW
127 * MSTATE
128
129 Both Supervisor and M-Mode have their own CSR registers, independent of the other privilege levels, in order to make it easier to use Vectorisation in each level without affecting other privilege levels.
130
131 The access pattern for these groups of CSRs in each mode follows the
132 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
133
134 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
135 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
136 identical
137 to changing the S-Mode CSRs. Accessing and changing the U-Mode
138 CSRs is permitted.
* In U-Mode, accessing and changing of the M-Mode and S-Mode CSRs
  is prohibited.
141
142 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
143 M-Mode MVL, the M-Mode STATE and so on that influences the processor
144 behaviour. Likewise for S-Mode, and likewise for U-Mode.
145
146 This has the interesting benefit of allowing M-Mode (or S-Mode)
147 to be set up, for context-switching to take place, and, on return
148 back to the higher privileged mode, the CSRs of that mode will be
149 exactly as they were. Thus, it becomes possible for example to
150 set up CSRs suited best to aiding and assisting low-latency fast
151 context-switching *once and only once* (for example at boot time), without the need for
152 re-initialising the CSRs needed to do so.
153
154 Another interesting side effect of separate S Mode CSRs is that Vectorised saving of the entire register file to the stack is a single instruction (accidental provision of LOAD-MULTI semantics). It can even be predicated, which opens up some very interesting possibilities.
155
156 The xEPCVLIW CSRs must be treated exactly like their corresponding xepc equivalents. See VLIW section for details.
157
158 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
159
160 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
161 is variable length and may be dynamically set. MVL is
162 however limited to the regfile bitwidth XLEN (1-32 for RV32,
163 1-64 for RV64 and so on).
164
165 The reason for setting this limit is so that predication registers, when
166 marked as such, may fit into a single register as opposed to fanning out
167 over several registers. This keeps the implementation a little simpler.
168
The other important factor to note is that the actual MVL is internally
stored **offset by one**, so that it can fit into only 6 bits (for RV64)
and still cover a range up to XLEN bits. Attempts to set MVL to zero
will raise an exception. This is expressed more clearly in the
"pseudocode" section, where there are subtle differences between CSRRW
and CSRRWI.
173
174 ## Vector Length (VL) <a name="vl" />
175
176 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
177 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
178
179 VL = rd = MIN(vlen, MVL)
180
181 where 1 <= MVL <= XLEN
182
However, just like MVL, it is important to note that the range for VL
has subtle design implications, covered in the "CSR pseudocode" section.
185
186 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
187 to switch the entire bank of registers using a single instruction (see
188 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
189 is down to the fact that predication bits fit into a single register of
190 length XLEN bits.
191
The second change is that, when the destination register of VSETVL is
x0 (VSETVL x0, x5), the write to the register is silently *ignored*.
194
195 The third and most important change is that, within the limits set by
196 MVL, the value passed in **must** be set in VL (and in the
197 destination register).
198
199 This has implication for the microarchitecture, as VL is required to be
200 set (limits from MVL notwithstanding) to the actual value
201 requested. RVV has the option to set VL to an arbitrary value that suits
202 the conditions and the micro-architecture: SV does *not* permit this.
203
204 The reason is so that if SV is to be used for a context-switch or as a
205 substitute for LOAD/STORE-Multiple, the operation can be done with only
206 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
207 single LD/ST operation). If VL does *not* get set to the register file
208 length when VSETVL is called, then a software-loop would be needed.
209 To avoid this need, VL *must* be set to exactly what is requested
210 (limits notwithstanding).
211
212 Therefore, in turn, unlike RVV, implementors *must* provide
213 pseudo-parallelism (using sequential loops in hardware) if actual
214 hardware-parallelism in the ALUs is not deployed. A hybrid is also
215 permitted (as used in Broadcom's VideoCore-IV) however this must be
216 *entirely* transparent to the ISA.
217
218 The fourth change is that VSETVL is implemented as a CSR, where the
219 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
220 the *new* value in the destination register, **not** the old value.
221 Where context-load/save is to be implemented in the usual fashion
222 by using a single CSRRW instruction to obtain the old value, the
223 *secondary* CSR must be used (SVSTATE). This CSR behaves
224 exactly as standard CSRs, and contains more than just VL.
225
One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).
230
231 Note that when VL is set to 1, all parallel operations cease: the
232 hardware loop is reduced to a single element: scalar operations.
233
234 ## SUBVL - Sub Vector Length
235
236 This is a "group by quantity" that effectively divides VL into groups of elements of length SUBVL. VL itself must therefore be set in advance to a multiple of SUBVL.
237
Legal values are 1, 2, 3 and 4, and the STATE CSR must hold the corresponding 2-bit values 0b00 through 0b11.
239
240 Setting this CSR to 0 must raise an exception. Setting it to a value greater than 4 likewise.
241
242 The main effect of SUBVL is that predication bits are applied per **group**,
243 rather than by individual element.
244
245 This saves a not insignificant number of instructions when handling 3D vectors, as otherwise a much longer predicate mask would have to be set up with regularly-repeated bit patterns.
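
A sketch of that effect (helper name invented; the predicate modelled as an integer bitmask, under the assumption that predicate bit n covers group n):

```python
# Expand a per-group predicate into a per-element mask: with SUBVL set,
# one predicate bit covers a whole group of SUBVL elements.
def group_pred_mask(pred, VL, SUBVL):
    assert 1 <= SUBVL <= 4 and VL % SUBVL == 0
    mask = []
    for group in range(VL // SUBVL):
        bit = (pred >> group) & 1
        mask.extend([bit] * SUBVL)
    return mask
```

For a 3D vector workload with VL=6 and SUBVL=3, only two predicate bits are needed instead of six.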
246
247 ## STATE
248
249 This is a standard CSR that contains sufficient information for a
250 full context save/restore. It contains (and permits setting of)
251 MVL, VL, SUBVL,
252 the destination element offset of the current parallel
253 instruction being executed, and, for twin-predication, the source
element offset as well. Interestingly, it may hypothetically also be
used to make the immediately-following instruction skip a certain
number of elements.
257
Setting destoffs and srcoffs is realistically intended for saving state
so that exceptions (page faults in particular) may be serviced and the
hardware-loop that was being executed at the time of the trap, from
user-mode (or Supervisor-mode), may be returned to and continued from
exactly where it left off. The reason why this works is that User-Mode
STATE is neither changed nor used in M-Mode or S-Mode (which is entirely
why M-Mode and S-Mode have their own STATE CSRs).
265
266 The format of the STATE CSR is as follows:
267
268 | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
269 | -------- | -------- | -------- | -------- | ------- | ------- |
270 | rsvd | subvl | destoffs | srcoffs | vl | maxvl |
271
272 When setting this CSR, the following characteristics will be enforced:
273
274 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
275 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values; if VL is not a multiple of SUBVL, an exception will be raised.
277 * **srcoffs** will be truncated to be within the range 0 to VL-1
278 * **destoffs** will be truncated to be within the range 0 to VL-1
279
280 ## MVL and VL Pseudocode
281
The pseudo-code for get and set of VL and MVL uses the following
internal functions:

    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes returning the new value NOT the old CSR
        return VL

    get_vl_csr(rd):
        regs[rd] = VL
        return VL
299
Note that whereas setting MVL behaves as a normal CSR (returning the
old value), setting VL, unlike standard CSR behaviour, will return the
**new** value of VL, **not** the old one.
303
304 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
305 maximise the effectiveness, an immediate of 0 is used to set VL=1,
306 an immediate of 1 is used to set VL=2 and so on:
307
308 CSRRWI_Set_MVL(value):
309 set_mvl_csr(value+1, x0)
310
311 CSRRWI_Set_VL(value):
312 set_vl_csr(value+1, x0)
313
314 However for CSRRW the following pseudocode is used for MVL and VL,
315 where setting the value to zero will cause an exception to be raised.
316 The reason is that if VL or MVL are set to zero, the STATE CSR is
317 not capable of returning that value.
318
319 CSRRW_Set_MVL(rs1, rd):
320 value = regs[rs1]
321 if value == 0 or value > XLEN:
322 raise Exception
323 set_mvl_csr(value, rd)
324
325 CSRRW_Set_VL(rs1, rd):
326 value = regs[rs1]
327 if value == 0 or value > XLEN:
328 raise Exception
329 set_vl_csr(value, rd)
330
331 In this way, when CSRRW is utilised with a loop variable, the value
332 that goes into VL (and into the destination register) may be used
333 in an instruction-minimal fashion:
334
335 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
336 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
337 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
338 j zerotest # in case loop counter a0 already 0
339 loop:
340 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
341 ld a3, a1 # load 4 registers a3-6 from x
342 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
343 ld a7, a2 # load 4 registers a7-10 from y
344 add a1, a1, t1 # increment pointer to x by vl*8
345 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
346 sub a0, a0, t0 # n -= vl (t0)
347 st a7, a2 # store 4 registers a7-10 to y
348 add a2, a2, t1 # increment pointer to y by vl*8
349 zerotest:
350 bnez a0, loop # repeat if n != 0
351
With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
355
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        set_mvl_csr(value[5:0]+1, x0)
        set_vl_csr(value[11:6]+1, x0)
        srcoffs = value[17:12]
        destoffs = value[23:18]
363
364 get_state_csr(rd):
365 regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
366 (destoffs)<<18
367 return regs[rd]
368
In both cases, whilst CSR reads of VL and MVL return the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.
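
The minus-one packing may be sketched as a runnable round-trip (field layout taken from the STATE table above; SUBVL omitted here for brevity; function names invented):

```python
# Pack/unpack the STATE CSR fields: MVL and VL are stored minus one so
# that values 1..64 fit in 6 bits; the two offsets are stored as-is.
def pack_state(MVL, VL, srcoffs, destoffs):
    return (MVL - 1) | (VL - 1) << 6 | srcoffs << 12 | destoffs << 18

def unpack_state(state):
    MVL      = (state         & 0x3f) + 1
    VL       = ((state >> 6)  & 0x3f) + 1
    srcoffs  = (state >> 12) & 0x3f
    destoffs = (state >> 18) & 0x3f
    return MVL, VL, srcoffs, destoffs
```

Note that MVL=1, VL=1 with zero offsets packs to the all-zeros CSR value, which is why VL=0 and MVL=0 cannot be represented and must raise an exception.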
373
374 ## Register key-value (CAM) table <a name="regcsrtable" />
375
376 *NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VLIW instruction format, and entries may be overridden by the SVPrefix format*
377
The purpose of the Register table is four-fold:

* To mark integer and floating-point registers as requiring
  "redirection" if they are ever used as a source or destination in any
  given operation. This involves a level of indirection through a
  5-to-7-bit lookup table, such that **unmodified** 5-bit (3-bit for
  Compressed) operands may access up to **128** registers.
385 * To indicate whether, after redirection through the lookup table, the
386 register is a vector (or remains a scalar).
387 * To over-ride the implicit or explicit bitwidth that the operation would
388 normally give the register.
389
390 16 bit format:
391
392 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
393 | ------ | | - | - | - | ------ | ------- |
394 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
395 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
396 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
397 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
398
399 8 bit format:
400
401 | RegCAM | | 7 | (6..5) | (4..0) |
402 | ------ | | - | ------ | ------- |
403 | 0 | | i/f | vew0 | regnum |
404
405 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
406 to integer registers; 0 indicates that it is relevant to floating-point
407 registers.
408
The 8 bit format is used for a much more compact expression. "isvec" is implicit and, similar to [[sv-prefix-proposal]], the target vector is "regnum<<2", implicitly. Contrast this with the 16-bit format, where the target vector is *explicitly* named in bits 8 to 14, and bit 15 may optionally set "scalar" mode.

Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc., and thus the "vector" mode need only shift the (6 bit) regnum by 1 to get the actual (7 bit) register number to use, there is not enough space in the 8 bit format, so "regnum<<2" is required.
412
413 vew has the following meanings, indicating that the instruction's
414 operand size is "over-ridden" in a polymorphic fashion:
415
416 | vew | bitwidth |
417 | --- | ------------------- |
418 | 00 | default (XLEN/FLEN) |
419 | 01 | 8 bit |
420 | 10 | 16 bit |
421 | 11 | 32 bit |
422
423 As the above table is a CAM (key-value store) it may be appropriate
424 (faster, implementation-wise) to expand it as follows:
425
426 struct vectorised fp_vec[32], int_vec[32];
427
428 for (i = 0; i < 16; i++) // 16 CSRs?
429 tb = int_vec if CSRvec[i].type == 0 else fp_vec
430 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
431 tb[idx].elwidth = CSRvec[i].elwidth
432 tb[idx].regidx = CSRvec[i].regidx // indirection
433 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
434 tb[idx].packed = CSRvec[i].packed // SIMD or not
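
To illustrate the effect of the vew over-ride, a small sketch (function name invented, and assuming that narrower elements pack contiguously into XLEN-wide registers, per the polymorphic element width capability mentioned earlier):

```python
# How many elements fit in one register for a given vew encoding.
# vew=0b00 selects the default width (XLEN/FLEN), i.e. one element.
def elements_per_reg(xlen, vew):
    width = {0b00: xlen, 0b01: 8, 0b10: 16, 0b11: 32}[vew]
    return xlen // width
```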
435
436
437
438 ## Predication Table <a name="predication_csr_table"></a>
439
440 *NOTE: in prior versions of SV, this table used to be writable and accessible via CSRs. It is now stored in the VLIW instruction format, and entries may be overridden by the SVPrefix format*
441
442 The Predication Table is a key-value store indicating whether, if a given
443 destination register (integer or floating-point) is referred to in an
444 instruction, it is to be predicated. Like the Register table, it is an indirect lookup that allows the RV opcodes to not need modification.
445
446 It is particularly important to note
447 that the *actual* register used can be *different* from the one that is
448 in the instruction, due to the redirection through the lookup table.
449
* regidx is the register which, in combination with the i/f flag,
  activates predication: if that integer or floating-point register is
  referred to in a (standard RV) instruction, the lookup table is
  referenced to find the predication mask to use for this operation.
* predidx is the *actual* (full, 7 bit) register to be used for the
  predication mask.
* inv indicates that the predication mask bits are to be inverted
  prior to use, *without* actually modifying the contents of the
  register from which those bits originated.
460 * zeroing is either 1 or 0, and if set to 1, the operation must
461 place zeros in any element position where the predication mask is
462 set to zero. If zeroing is set to 0, unpredicated elements *must*
463 be left alone. Some microarchitectures may choose to interpret
464 this as skipping the operation entirely. Others which wish to
465 stick more closely to a SIMD architecture may choose instead to
466 interpret unpredicated elements as an internal "copy element"
467 operation (which would be necessary in SIMD microarchitectures
  that perform register-renaming).
469
470 16 bit format:
471
472 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
473 | ----- | - | - | - | - | ------- | ------- |
| 0 | predkey | zero0 | inv0 | i/f | regidx | rsvd |
475 | 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
476 | ... | predkey | ..... | .... | i/f | ....... | ....... |
477 | 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
478
479
480 8 bit format:
481
482 | PrCSR | 7 | 6 | 5 | (4..0) |
483 | ----- | - | - | - | ------- |
484 | 0 | zero0 | inv0 | i/f | regnum |
485
The 8 bit format is a compact and less expressive variant of the full 16 bit format. Using the 8 bit format is very different: the predicate register to use is implicit, and numbering begins implicitly from x9. The regnum is still used to "activate" predication, in the same fashion as described above.
487
488 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
489 it will be faster to turn the table around (maintain topologically
490 equivalent state):
491
492 struct pred {
493 bool zero;
494 bool inv;
495 bool enabled;
496 int predidx; // redirection: actual int register to use
497 }
498
499 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
500 struct pred int_pred_reg[32]; // 64 in future (bank=1)
501
502 for (i = 0; i < 16; i++)
503 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
504 idx = CSRpred[i].regidx
505 tb[idx].zero = CSRpred[i].zero
506 tb[idx].inv = CSRpred[i].inv
507 tb[idx].predidx = CSRpred[i].predidx
508 tb[idx].enabled = true
509
510 So when an operation is to be predicated, it is the internal state that
511 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
512 pseudo-code for operations is given, where p is the explicit (direct)
513 reference to the predication register to be used:
514
515 for (int i=0; i<vl; ++i)
516 if ([!]preg[p][i])
517 (d ? vreg[rd][i] : sreg[rd]) =
518 iop(s1 ? vreg[rs1][i] : sreg[rs1],
519 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
520
521 This instead becomes an *indirect* reference using the *internal* state
522 table generated from the Predication CSR key-value store, which is used
523 as follows.
524
    predicate, zeroing = get_pred_val(type(iop) != INT, rd)
    for (int i=0; i<vl; ++i)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
               iop(s1 ? regfile[rs1+i] : regfile[rs1],
                   s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
538
539 Note:
540
* d, s1 and s2 are booleans indicating whether destination,
  source1 and source2 are vector or scalar
* key-value CSR-redirection of rd, rs1 and rs2 has NOT been included
  above, for clarity. rd, rs1 and rs2 must ALSO all go through
  register-level redirection (from the Register table) if they are
  vectors.
547
548 If written as a function, obtaining the predication mask (and whether
549 zeroing takes place) may be done as follows:
550
    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate // invert ALL bits
        return predicate, tb[reg].zero
563
564 Note here, critically, that **only** if the register is marked
565 in its **register** table entry as being "active" does the testing
566 proceed further to check if the **predicate** table entry is
567 also active.
568
Note also that this is in direct contrast to branch operations
for the storage of comparisons: in those specific circumstances
the requirement for there to be an active *register* entry
is removed.
573
574 ## REMAP CSR <a name="remap" />
575
576 (Note: both the REMAP and SHAPE sections are best read after the
577 rest of the document has been read)
578
579 There is one 32-bit CSR which may be used to indicate which registers,
580 if used in any operation, must be "reshaped" (re-mapped) from a linear
581 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
582 access to elements within a register.
583
584 The 32-bit REMAP CSR may reshape up to 3 registers:
585
586 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
587 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
588 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
589
regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits
wide. Since reshaping x0 would be pointless, a regidx value of zero
(referring to x0) is used to indicate "disabled". shape0-2 each select
one of the three SHAPE CSRs; the value 0x3 is reserved. Bits 7, 15, 23,
30 and 31 are also reserved, and must be set to zero.
596
It is anticipated that these specialist CSRs will not be used very
often. Unlike the CSR Register and Predication tables, the REMAP CSRs
use the full 7-bit regidx, so that they can be set once and left alone
whilst the CSR Register entries pointing to them are disabled instead.
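
A decode sketch (function name invented) following the bit layout above:

```python
# Split the 32-bit REMAP CSR into three (shape, regidx) pairs.
# regidx fields sit at bits 0, 8 and 16; shape fields at 24, 26 and 28.
# A regidx of 0 (x0) means that entry is disabled.
def decode_remap(remap):
    entries = []
    for n in range(3):
        regidx = (remap >> (8 * n)) & 0x7f
        shape  = (remap >> (24 + 2 * n)) & 0x3
        entries.append((shape, regidx))
    return entries
```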
601
602 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
603
604 (Note: both the REMAP and SHAPE sections are best read after the
605 rest of the document has been read)
606
607 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
608 which have the same format. When each SHAPE CSR is set entirely to zeros,
609 remapping is disabled: the register's elements are a linear (1D) vector.
610
611 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
612 | ------- | -- | ------- | -- | ------- | -- | ------- |
613 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
614
615 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
616 is added to the element index during the loop calculation.
617
618 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
619 that the array dimensionality for that dimension is 1. A value of xdimsz=2
620 would indicate that in the first dimension there are 3 elements in the
621 array. The format of the array is therefore as follows:
622
623 array[xdim+1][ydim+1][zdim+1]
624
625 However whilst illustrative of the dimensionality, that does not take the
626 "permute" setting into account. "permute" may be any one of six values
627 (0-5, with values of 6 and 7 being reserved, and not legal). The table
628 below shows how the permutation dimensionality order works:
629
630 | permute | order | array format |
631 | ------- | ----- | ------------------------ |
632 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
633 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
634 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
635 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
636 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
637 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
638
639 In other words, the "permute" option changes the order in which
640 nested for-loops over the array would be done. The algorithm below
641 shows this more clearly, and may be executed as a python program:
642
    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0] # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0 # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if idxs[order[i]] != lims[order[i]]:
                break
            print()
            idxs[order[i]] = 0
662
Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would
normally run from 0 to VL-1 to refer to contiguous register elements;
where REMAP indicates to do so, the element index is instead run through
the above algorithm to work out the **actual** element index. Given that
there are three possible SHAPE entries, up to three separate registers
in any given operation may be simultaneously remapped:
671
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd].isvector) break;
            if (int_vec[rd].isvector)  { id += 1; }
            if (int_vec[rs1].isvector) { irs1 += 1; }
            if (int_vec[rs2].isvector) { irs2 += 1; }
683
684 By changing remappings, 2D matrices may be transposed "in-place" for one
685 operation, followed by setting a different permutation order without
686 having to move the values in the registers to or from memory. Also,
687 the reason for having REMAP separate from the three SHAPE CSRs is so
688 that in a chain of matrix multiplications and additions, for example,
689 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
690 changed to target different registers.
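
To see the in-place transposition concretely, the algorithm above may be wrapped as a function (name invented for this sketch) and applied to a 2x3 matrix:

```python
# Return the remapped element index sequence for the given dimensions,
# loop order and offset, per the SHAPE remapping algorithm above.
def remap_indices(xdim, ydim, zdim, order, offs=0):
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out
```

For a 2x3x1 shape, order=[0,1,2] yields the identity sequence 0,1,2,3,4,5, whilst order=[1,0,2] walks the same elements column-first as 0,2,4,1,3,5: a transpose, with no data movement.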

Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  to either use a predicated MV, skipping the first elements, or
  perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

Clearly, some considerable care needs to be taken here, as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be executed out-of-order, so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). With the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions are the fundamental core basis of SV, so are left
  as scalar. Whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  extreme care would need to be taken if these were parallelised.
  Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
  left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations, where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions therefore is guaranteed to
  not be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  parallelisable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSRs in the same way.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.
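
The hardware-loop above can also be modelled directly in software. The
following is a behavioural sketch in Python: the register file, CSR tag
table and predicate value are plain data, and names such as `Tag` are
inventions of this sketch, not part of the specification.

```python
from collections import namedtuple

# CSR tag for one register: is it a vector, and where is it redirected to?
Tag = namedtuple("Tag", "isvector regidx")

def op_add(ireg, tags, VL, predval, rd, rs1, rs2):
    # capture vector flags, then redirect each operand via its tag
    dv, s1v, s2v = tags[rd].isvector, tags[rs1].isvector, tags[rs2].isvector
    rd, rs1, rs2 = tags[rd].regidx, tags[rs1].regidx, tags[rs2].regidx
    id_ = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):             # predication uses intregs
            ireg[rd + id_] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not dv:
                break                      # scalar destination: one result
        if dv:  id_  += 1                  # element offsets advance only
        if s1v: irs1 += 1                  # for vectorised operands
        if s2v: irs2 += 1

# vector-vector add, VL=4, with element 2 masked out by the predicate
tags = {r: Tag(False, r) for r in range(32)}
tags[1], tags[2], tags[3] = Tag(True, 16), Tag(True, 20), Tag(True, 24)
ireg = [0] * 32
ireg[20:24] = [1, 2, 3, 4]
ireg[24:28] = [10, 20, 30, 40]
op_add(ireg, tags, VL=4, predval=0b1011, rd=1, rs1=2, rs2=3)
print(ireg[16:20])  # [11, 22, 0, 44] - element 2 untouched
```

Note that, as in the pseudo-code, a cleared predicate bit skips the write
but the element offsets still advance.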

## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV,
FCVT, and LOAD/STORE depending on CSR configurations for bitwidth
and predication. **Everything** becomes parallelised. *This includes
Compressed instructions* as well as any future instructions and Custom
Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, even within RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 is scalar (whether by there being no
CSR register entry or by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead, but if and only if *all* tests succeed
(i.e. excluding those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by reversing
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

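The above can be exercised as a runnable model. This Python sketch uses
plain data structures in place of the register file and predicate
registers, and models the "rd may not exist" case as `rd=None`; it
follows the pseudo-code's structure but is not normative.

```python
# Behavioural model of the SV predicated branch-compare loop. Returns
# whether the branch is taken: true iff all active comparisons succeed.
def sv_branch_cmp(reg, VL, ps, rd, preg, zeroing, cmp, src1, s1, src2, s2):
    # rd is the optional destination predicate register (None if absent)
    result = 0 if (rd is None or zeroing) else preg[rd]
    for i in range(VL):
        if zeroing and not (ps & (1 << i)):
            result &= ~(1 << i)            # zeroing clears masked-out bits
        elif ps & (1 << i):
            a = reg[src1 + i] if s1 else reg[src1]
            b = reg[src2 + i] if s2 else reg[src2]
            if cmp(a, b):
                result |= 1 << i
            else:
                result &= ~(1 << i)
    if rd is not None:
        preg[rd] = result                  # store in destination
    return result == ps                    # branch iff all active tests pass

# vector-scalar BEQ: elements in regs 4..6 compared against scalar reg 8
reg = {4: 6, 5: 6, 6: 6, 8: 6}
taken = sv_branch_cmp(reg, VL=3, ps=0b111, rd=None, preg={}, zeroing=False,
                      cmp=lambda a, b: a == b, src1=4, s1=True, src2=8, s2=False)
print(taken)  # True: every active comparison succeeded
```

If any single active comparison fails, `result` no longer equals `ps`
and the branch is not taken.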
Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering"), setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now an SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

TODO: predication now taken from src2. also branch goes ahead
if all compares are successful.

Note also that, where normally predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

Floating-point branch operations do not exist; there is only compare.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms that
becomes more of an impact. To deal with this, SV's predication has
had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beq a0,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
although counterintuitive at first. Clearly it is **not** recommended to
redirect x0 with a CSR register entry, however as a means to opaquely obtain
a predication target it is the only sensible option that does not involve
additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).
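
A runnable Python sketch of the pattern shows how two of the RVV
equivalences (VSPLAT and VEXTRACT) fall out of the same loop. Like the
pseudo-code above, this sketch assumes the predicates contain enough set
bits for the loop to terminate.

```python
# Twin-predicated register copy: the SCALAR_OPERATION_ON of the pattern
# above specialised to identity. "isvec" maps a register to its vector
# flag; ps/pd are the source and destination predicate bitmasks.
def twin_pred_mv(reg, isvec, VL, ps, pd, rd, rs):
    i = j = 0
    while i < VL and j < VL:
        if isvec[rs]:
            while not (ps & (1 << i)): i += 1  # skip masked src elements
        if isvec[rd]:
            while not (pd & (1 << j)): j += 1  # skip masked dest elements
        reg[rd + j] = reg[rs + i]
        if isvec[rs]: i += 1
        if isvec[rd]:
            j += 1
        else:
            break                              # scalar dest: single copy

# VSPLAT: scalar src (reg 0), vector dest (regs 1..4), no masking
reg = [7, 0, 0, 0, 0]
twin_pred_mv(reg, {0: False, 1: True}, VL=4, ps=0b1111, pd=0b1111, rd=1, rs=0)
print(reg)  # [7, 7, 7, 7, 7]

# VEXTRACT: vector src (regs 0..3), 1-bit src predicate, scalar dest (reg 4)
reg2 = [10, 20, 30, 40, 0]
twin_pred_mv(reg2, {0: True, 4: False}, VL=4, ps=0b0100, pd=0b1, rd=4, rs=0)
print(reg2[4])  # 30
```

The other table entries (gather, scatter, sparse copy) are simply
different choices of vector flags and predicate masks over this one loop.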

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register is marked as a vector,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table data="""
15 12 | 11 7 | 6 2 | 1 0 |
funct4 | rd | rs | op |
4 | 5 | 5 | 2 |
C.MV | dest | src | C0 |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src | dest | predication | op |
scalar | vector | none | VSPLAT |
scalar | vector | destination | sparse VSPLAT |
scalar | vector | 1-bit dest | VINSERT |
vector | scalar | 1-bit? src | VEXTRACT |
vector | vector | none | VCOPY |
vector | vector | src | Vector Gather |
vector | vector | dest | Vector Scatter |
vector | vector | src & dest | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY |
"""]]

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
operation of the appropriate size covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 64-bit floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element and the higher 32-bits *also* converted to floating-point
and stored in the second. The 32 bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1's element width is to be taken as 32.

Similar rules apply to the destination register.
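
The numeric effect of this FCVT.S.L example can be sketched as follows.
This models the element arithmetic only, not the opcode's encoding, and
`fcvt_s_l_packed32` is a name invented here for illustration:

```python
# FCVT.S.L with rs1 elwidth = default/2 (32-bit) and packed SIMD on RV64:
# the 64-bit source register is treated as two independent 32-bit signed
# integers, each converted to floating-point.
def fcvt_s_l_packed32(rs1_value):
    def s32(x):  # interpret a 32-bit lane as a signed integer
        return x - (1 << 32) if x & (1 << 31) else x
    lo = rs1_value & 0xFFFFFFFF          # first element (low half)
    hi = (rs1_value >> 32) & 0xFFFFFFFF  # second element (high half)
    return [float(s32(lo)), float(s32(hi))]

print(fcvt_s_l_packed32((5 << 32) | 3))  # [3.0, 5.0]
```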

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + j * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth is not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).
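
The same loop can be exercised as a runnable model, with memory as a
simple byte-addressed dictionary. This sketch drops the rdv/rsv
redirection and elwidth handling for brevity, and `width` stands in for
XLEN/8:

```python
# Behavioural model of op_ld: unit-stride when the source register is
# scalar, indirection when it is a vector.
def op_ld(ireg, mem, isvec, VL, ps, pd, rd, rs, imm_offs, width=8):
    i = j = 0
    while i < VL and j < VL:
        if isvec[rs]:
            while not (ps & (1 << i)): i += 1
        if isvec[rd]:
            while not (pd & (1 << j)): j += 1
        if isvec[rs]:
            srcbase = ireg[rs + i]           # indirect (multi) mode
        else:
            srcbase = ireg[rs] + j * width   # unit-stride mode
        ireg[rd + j] = mem[srcbase + imm_offs]
        if not isvec[rs] and not isvec[rd]:
            break                            # scalar-scalar LD
        if isvec[rs]: i += 1
        if isvec[rd]: j += 1

# unit-stride LD: base address in reg 1, three 64-bit loads into regs 2..4
ireg = [0, 100, 0, 0, 0]
mem = {100: 11, 108: 22, 116: 33}
op_ld(ireg, mem, {1: False, 2: True}, VL=3, ps=0b111, pd=0b111,
      rd=2, rs=1, imm_offs=0)
print(ireg[2:5])  # [11, 22, 33]
```

Swapping `isvec[rs]` to true (and placing addresses in regs 1..3) turns
the same call into the multi-indirection mode.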

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t  actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt is ever made to access beyond the
"real" register bytes.
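
The "overspill" can be demonstrated with a flat byte-array model of the
register file (a software sketch, assuming RV64 and little-endian element
layout; `write_elem` is a helper invented here):

```python
# The integer register file as one flat byte array: an element access at
# a given elwidth is just a byte offset, free to cross register
# boundaries exactly as the zero-length-array union above describes.
XLEN_BYTES = 8                            # RV64
regfile = bytearray(32 * XLEN_BYTES)      # 32 x 64-bit registers

def write_elem(reg, elwidth_bytes, offset, val):
    # element "offset" of a vector whose first element lives in "reg"
    base = reg * XLEN_BYTES + offset * elwidth_bytes
    regfile[base:base + elwidth_bytes] = val.to_bytes(elwidth_bytes, "little")

# VL=12, 8-bit elements, starting at x5: elements 8..11 spill into x6
for k in range(12):
    write_elem(5, 1, k, k + 1)
print(regfile[6 * XLEN_BYTES])  # 9: element 8 landed in the first byte of x6
```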

Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non"-polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
  maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
  (lb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
  sign-extension / zero-extension or whatever is specified in the standard
  RV specification, **change** that to sign-extending from the respective
  individual source operand's bitwidth from the CSR table out to
  "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
  destination: memory for sb/sh etc., or an offset section of the register
  file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:
1357
1358 typedef union {
1359 uint8_t b;
1360 uint16_t s;
1361 uint32_t i;
1362 uint64_t l;
1363 } el_reg_t;
1364
1365 bw(elwidth):
1366 if elwidth == 0:
1367 return xlen
1368 if elwidth == 1:
1369 return xlen / 2
1370 if elwidth == 2:
1371 return xlen * 2
1372 // elwidth == 3:
1373 return 8
1374
1375 get_max_elwidth(rs1, rs2):
1376 return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
1377 bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1378
1379 get_polymorphed_reg(reg, bitwidth, offset):
1380     el_reg_t res;
1381     res.l = 0; // TODO: going to need sign-extending / zero-extending
1382     if bitwidth == 8:
1383         res.b = int_regfile[reg].b[offset]
1384     elif bitwidth == 16:
1385         res.s = int_regfile[reg].s[offset]
1386     elif bitwidth == 32:
1387         res.i = int_regfile[reg].i[offset]
1388     elif bitwidth == 64:
1389         res.l = int_regfile[reg].l[offset]
1390     return res
1391
1392 set_polymorphed_reg(reg, bitwidth, offset, val):
1393     if (!int_csr[reg].isvec):
1394         # sign/zero-extend depending on opcode requirements, from
1395         # the reg's bitwidth out to the full bitwidth of the regfile
1396         val = sign_or_zero_extend(val, bitwidth, xlen)
1397         int_regfile[reg].l[0] = val
1398     elif bitwidth == 8:
1399         int_regfile[reg].b[offset] = val
1400     elif bitwidth == 16:
1401         int_regfile[reg].s[offset] = val
1402     elif bitwidth == 32:
1403         int_regfile[reg].i[offset] = val
1404     elif bitwidth == 64:
1405         int_regfile[reg].l[offset] = val
1406
1407 maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
1408 destwid = bw(int_csr[rd].elwidth) # destination element width
1409 for (i = 0; i < VL; i++)
1410     if (predval & 1<<i) # predication uses intregs
1411         // TODO, calculate if over-run occurs, for each elwidth
1412         src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
1413         // TODO, sign/zero-extend src1 and src2 as operation requires
1414         if (op_requires_sign_extend_src1)
1415             src1 = sign_extend(src1, maxsrcwid)
1416         src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
1417         result = src1 + src2 # actual add here
1418         // TODO, sign/zero-extend result, as operation requires
1419         if (op_requires_sign_extend_dest)
1420             result = sign_extend(result, maxsrcwid)
1421         set_polymorphed_reg(rd, destwid, ird, result)
1422         if (!int_vec[rd].isvector) break
1423     if (int_vec[rd ].isvector)  { ird += 1; }
1424     if (int_vec[rs1].isvector)  { irs1 += 1; }
1425     if (int_vec[rs2].isvector)  { irs2 += 1; }
1426
1427 Whilst the specific sign-extension and zero-extension call details
1428 are left out, as each operation differs, the above should make
1429 clear that:
1430
1431 * the source operands are extended out to the maximum bitwidth of all
1432 source operands
1433 * the operation takes place at that maximum source bitwidth (the
1434 destination bitwidth is not involved at this point, at all)
1435 * the result is extended (or potentially even, truncated) before being
1436 stored in the destination. i.e. truncation (if required) to the
1437 destination width occurs **after** the operation **not** before.
1438 * when the destination is not marked as "vectorised", the **full**
1439 (standard, scalar) register file entry is taken up, i.e. the
1440 element is either sign-extended or zero-extended to cover the
1441 full register bitwidth (XLEN) if it is not already XLEN bits long.
1442
1443 Implementors are entirely free to optimise the above, particularly
1444 if it is specifically known that any given operation will complete
1445 accurately in less bits, as long as the results produced are
1446 directly equivalent and equal, for all inputs and all outputs,
1447 to those produced by the above algorithm.
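
The widen-operate-narrow sequence above can be sketched in plain Python. This is an illustrative sketch only: `polymorphic_add` and `mask` are invented names, and the CSR elwidth table is replaced by explicit width parameters.

```python
def mask(bits):
    return (1 << bits) - 1

def polymorphic_add(src1, src2, rs1_wid, rs2_wid, rd_wid):
    opwid = max(rs1_wid, rs2_wid)    # operate at the maximum source width
    a = src1 & mask(rs1_wid)         # zero-extend rs1 (add zero-extends)
    b = src2 & mask(rs2_wid)         # zero-extend rs2
    result = (a + b) & mask(opwid)   # the add itself, at opwid bits
    return result & mask(rd_wid)     # truncate/extend to destination width
```

With an 8-bit rs1, a 16-bit rs2 and an 8-bit rd, the add happens at 16 bits and truncation to 8 bits occurs only *after* the operation, exactly as the bullet points require.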
1448
1449 ## Polymorphic floating-point operation exceptions and error-handling
1450
1451 For floating-point operations, conversion takes place without
1452 raising any kind of exception. Exactly as specified in the standard
1453 RV specification, NaN (or the appropriate value) is stored if the result
1454 is beyond the range of the destination, and, exactly as with standard
1455 scalar RV operations, the relevant floating-point flag in FCSR is
1456 raised. And, again just as with scalar operations, it is software's
1457 responsibility to check this flag.
1458 Given that the FCSR flags are "accrued", the fact that multiple element
1459 operations could have occurred is not a problem.
1460
1461 Note that it is perfectly legitimate for floating-point bitwidths of
1462 only 8 to be specified. However whilst it is possible to apply IEEE 754
1463 principles, no actual standard yet exists. Implementors wishing to
1464 provide hardware-level 8-bit support rather than throw a trap to emulate
1465 in software should contact the author of this specification before
1466 proceeding.
1467
1468 ## Polymorphic shift operators
1469
1470 A special note is needed for changing the element width of left and right
1471 shift operators, particularly right-shift. Even for standard RV base,
1472 in order for correct results to be returned, the second operand RS2 must
1473 be truncated to be within the range of RS1's bitwidth. spike's implementation
1474 of sll for example is as follows:
1475
1476 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1477
1478 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1479 range 0..31 so that RS1 will only be left-shifted by the amount that
1480 is possible to fit into a 32-bit register. Whilst this appears not
1481 to matter for hardware, it matters greatly in software implementations,
1482 and it also matters where an RV64 system is set to "RV32" mode, such
1483 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1484 each.
1485
1486 For SV, where each operand's element bitwidth may be over-ridden, the
1487 rule about determining the operation's bitwidth *still applies*, being
1488 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1489 **also applies to the truncation of RS2**. In other words, *after*
1490 determining the maximum bitwidth, RS2's range must **also be truncated**
1491 to ensure a correct answer. Example:
1492
1493 * RS1 is over-ridden to a 16-bit width
1494 * RS2 is over-ridden to an 8-bit width
1495 * RD is over-ridden to a 64-bit width
1496 * the maximum bitwidth is thus determined to be 16-bit - max(8,16)
1497 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1498
1499 Pseudocode (in spike) for this example would therefore be:
1500
1501 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1502
1503 This example illustrates that considerable care therefore needs to be
1504 taken to ensure that left and right shift operations are implemented
1505 correctly. The key is that
1506
1507 * The operation bitwidth is determined by the maximum bitwidth
1508 of the *source registers*, **not** the destination register bitwidth
1509 * The result is then sign-extended (or truncated) as appropriate.
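
The worked example above can be expressed as a Python sketch (`sv_sll` is an invented name; it follows the spike-style pseudocode, with sign-extension omitted for brevity):

```python
def sv_sll(rs1, rs2, rs1_wid, rs2_wid, rd_wid):
    opwid = max(rs1_wid, rs2_wid)          # operation bitwidth: max of sources
    shamt = rs2 & (opwid - 1)              # truncate RS2 to range 0..opwid-1
    shifted = (rs1 & ((1 << opwid) - 1)) << shamt
    return shifted & ((1 << rd_wid) - 1)   # truncate to destination width
```

For the example in the text (16-bit RS1, 8-bit RS2, 64-bit RD), a shift amount of 20 is truncated to 20 & 15 = 4, and a shift amount of 16 is truncated to 0.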
1510
1511 ## Polymorphic MULH/MULHU/MULHSU
1512
1513 MULH is designed to take the top half MSBs of a multiply that
1514 does not fit within the range of the source operands, such that
1515 smaller width operations may produce a full double-width multiply
1516 in two cycles. The issue is: SV allows the source operands to
1517 have variable bitwidth.
1518
1519 Here again special attention has to be paid to the rules regarding
1520 bitwidth, which, again, are that the operation is performed at
1521 the maximum bitwidth of the **source** registers. Therefore:
1522
1523 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1524 be shifted down by 8 bits
1525 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1526 be shifted down by 16 bits (top 8 bits being zero)
1527 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1528 be shifted down by 16 bits
1529 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1530 be shifted down by 32 bits
1531 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1532 be shifted down by 32 bits
1533
1534 So again, just as with shift-left and shift-right, the result
1535 is shifted down by the maximum of the two source register bitwidths.
1536 And, exactly again, truncation or sign-extension is performed on the
1537 result. If sign-extension is to be carried out, it is performed
1538 from the same maximum of the two source register bitwidths out
1539 to the result element's bitwidth.
1540
1541 If truncation occurs, i.e. the top MSBs of the result are lost,
1542 this is "Officially Not Our Problem", i.e. it is assumed that the
1543 programmer actually desires the result to be truncated. If the
1544 programmer had wanted all of the bits, they would have set the destination
1545 elwidth to accommodate them.
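
The shift-down-by-max-source-width rule can be sketched as follows (`sv_mulhu` is an invented name, unsigned variant only, for illustration):

```python
def sv_mulhu(rs1, rs2, rs1_wid, rs2_wid, rd_wid):
    opwid = max(rs1_wid, rs2_wid)        # maximum source bitwidth
    a = rs1 & ((1 << rs1_wid) - 1)
    b = rs2 & ((1 << rs2_wid) - 1)
    product = a * b                      # full double-width product
    high = product >> opwid              # shift down by max source width
    return high & ((1 << rd_wid) - 1)    # truncate to destination width
```

An 8-bit x 8-bit multiply of 0xFF x 0xFF gives 0xFE01, shifted down by 8 to 0xFE; a 16-bit x 8-bit multiply is shifted down by 16, matching the bullet list above.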
1546
1547 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1548
1549 Polymorphic element widths in vectorised form means that the data
1550 being loaded (or stored) across multiple registers needs to be treated
1551 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1552 the source register's element width is **independent** from the destination's.
1553
1554 This makes for a slightly more complex algorithm when using indirection
1555 on the "addressed" register (source for LOAD and destination for STORE),
1556 particularly given that the LOAD/STORE instruction provides important
1557 information about the width of the data to be reinterpreted.
1558
1559 Let's illustrate the "load" part, where the pseudo-code for elwidth=default
1560 was as follows (where i is the loop index, from 0 to VL-1):
1561
1562 srcbase = ireg[rs+i];
1563 return mem[srcbase + imm]; // returns XLEN bits
1564
1565 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1566 chunks are taken from the source memory location addressed by the current
1567 indexed source address register, and only when a full 32-bits-worth
1568 are taken will the index be moved on to the next contiguous source
1569 address register:
1570
1571 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1572 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1573 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1574 offs = i % elsperblock; // modulo
1575 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1576
1577 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1578 and 128 for LQ.
1579
1580 The principle is basically exactly the same as if the srcbase were pointing
1581 at the memory of the *register* file: memory is re-interpreted as containing
1582 groups of elwidth-wide discrete elements.
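
The source-address selection can be sketched in Python (`elem_addr_parts` is an invented helper; it returns byte offsets rather than the typed-pointer re-cast of the pseudocode, and already applies the minimum-of-1 rule on elsperblock discussed below):

```python
def elem_addr_parts(i, op_bitwidth, el_bitwidth):
    """Return (offset from rs of the address register to use,
    byte offset within that block) for element index i."""
    elsperblock = max(1, op_bitwidth // el_bitwidth)  # at least 1 el/block
    reg_off = i // elsperblock              # which contiguous address register
    byte_off = (i % elsperblock) * (el_bitwidth // 8)
    return reg_off, byte_off
```

For LD (64-bit) with 16-bit elements, elements 0-3 come from the address in the first register and element 5 sits 2 bytes into the second register's block, matching the example tables further down.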
1583
1584 When storing the result from a load, it's important to respect the fact
1585 that the destination register has its *own separate element width*. Thus,
1586 when each element is loaded (at the source element width), any sign-extension
1587 or zero-extension (or truncation) needs to be done to the *destination*
1588 bitwidth. Also, the storing has the exact same analogous algorithm as
1589 above, where in fact it is just the set\_polymorphed\_reg pseudocode
1590 (completely unchanged) used above.
1591
1592 One issue remains: when the source element width is **greater** than
1593 the width of the operation, it is obvious that a single LB for example
1594 cannot possibly obtain 16-bit-wide data. This condition may be detected
1595 where, when using integer divide, elsperblock (the width of the LOAD
1596 divided by the bitwidth of the element) is zero.
1597
1598 The issue is "fixed" by ensuring that elsperblock is a minimum of 1:
1599
1600 elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1601
1602 The elements, if the element bitwidth is larger than the LD operation's
1603 size, will then be sign/zero-extended to the full LD operation size, as
1604 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1605 being passed on to the second phase.
1606
1607 As LOAD/STORE may be twin-predicated, it is important to note that
1608 the rules on twin predication still apply, except where in previous
1609 pseudo-code (elwidth=default for both source and target) it was
1610 the *registers* that the predication was applied to, it is now the
1611 **elements** that the predication is applied to.
1612
1613 Thus the full pseudocode for all LD operations may be written out
1614 as follows:
1615
1616 function LBU(rd, rs):
1617     load_elwidthed(rd, rs, 8, true)
1618 function LB(rd, rs):
1619     load_elwidthed(rd, rs, 8, false)
1620 function LH(rd, rs):
1621     load_elwidthed(rd, rs, 16, false)
1622 ...
1623 ...
1624 function LQ(rd, rs):
1625     load_elwidthed(rd, rs, 128, false)
1626
1627 # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
1628 function load_memory(rs, imm, i, opwidth):
1629     elwidth = int_csr[rs].elwidth
1630     bitwidth = bw(elwidth);
1631     elsperblock = max(1, opwidth / bitwidth)
1632     srcbase = ireg[rs+i/(elsperblock)];
1633     offs = i % elsperblock;
1634     return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
1635
1636 function load_elwidthed(rd, rs, opwidth, unsigned):
1637     destwid = bw(int_csr[rd].elwidth) # destination element width
1638     rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
1639     rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
1640     ps = get_pred_val(FALSE, rs); # predication on src
1641     pd = get_pred_val(FALSE, rd); # ... AND on dest
1642     for (int i = 0, int j = 0; i < VL && j < VL;):
1643         if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
1644         if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
1645         val = load_memory(rs, imm, i, opwidth)
1646         srcwid = bw(int_csr[rs].elwidth) # source element width
1647         if unsigned:
1648             val = zero_extend(val, min(opwidth, srcwid))
1649         else:
1650             val = sign_extend(val, min(opwidth, srcwid))
1651         set_polymorphed_reg(rd, destwid, j, val)
1652         if (int_csr[rs].isvec) i++;
1653         if (int_csr[rd].isvec) j++; else break;
1653
1654 Note:
1655
1656 * when comparing against for example the twin-predicated c.mv
1657 pseudo-code, the pattern of independent incrementing of rd and rs
1658 is preserved unchanged.
1659 * just as with the c.mv pseudocode, zeroing is not included and must be
1660 taken into account (TODO).
1661 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1662 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1663 VSCATTER characteristics.
1664 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1665 a destination that is not vectorised (marked as scalar) will
1666 result in the element being fully sign-extended or zero-extended
1667 out to the full register file bitwidth (XLEN). When the source
1668 is also marked as scalar, this is how the compatibility with
1669 standard RV LOAD/STORE is preserved by this algorithm.
1670
1671 ### Example Tables showing LOAD elements
1672
1673 This section contains examples of vectorised LOAD operations, showing
1674 how the two stage process works (three if zero/sign-extension is included).
1675
1676
1677 #### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1678
1679 This is:
1680
1681 * a 64-bit load, with an offset of zero
1682 * with a source-address elwidth of 16-bit
1683 * into a destination-register with an elwidth of 32-bit
1684 * where VL=7
1685 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1686 * RV64, where XLEN=64 is assumed.
1687
1688 First, the memory table: because the element width is 16 and the
1689 operation is LD (64-bit), the 64 bits loaded from memory are
1690 subdivided into groups of **four** elements.
1691 And, with VL being 7 (deliberately to illustrate that this is reasonable
1692 and possible), the first four are sourced from the offset addresses pointed
1693 to by x5, and the next three from the offset addresses pointed to by
1694 the next contiguous register, x6:
1695
1696 [[!table data="""
1697 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1698 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1699 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1700 """]]
1701
1702 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1703 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1704
1705 [[!table data="""
1706 byte 3 | byte 2 | byte 1 | byte 0 |
1707 0x0 | 0x0 | elem0 ||
1708 0x0 | 0x0 | elem1 ||
1709 0x0 | 0x0 | elem2 ||
1710 0x0 | 0x0 | elem3 ||
1711 0x0 | 0x0 | elem4 ||
1712 0x0 | 0x0 | elem5 ||
1713 0x0 | 0x0 | elem6 ||
1715 """]]
1716
1717 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1718 byte-addressable "memory". That "memory" happens to cover registers
1719 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1720
1721 [[!table data="""
1722 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1723 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1724 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1725 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1726 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1727 """]]
1728
1729 Thus we have data that is loaded from the **addresses** pointed to by
1730 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1731 x8 through to half of x11.
1732 The end result is that elements 0 and 1 end up in x8, with element 1 being
1733 shifted up 32 bits, and so on, until finally element 6 is in the
1734 LSBs of x11.
1735
1736 Note that whilst the memory addressing table is shown in left-to-right byte
1737 order, the registers are shown in right-to-left (MSB-first) order. This does **not**
1738 imply that bit or byte-reversal is carried out: it's just easier to visualise
1739 memory as being contiguous bytes, and emphasises that registers are not
1740 really actually "memory" as such.
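
The two-stage pack into the destination "memory" can be sketched as follows (`pack_elements` is an invented name; unlike real hardware, which would preserve the untouched top half of x11, this sketch starts each register at zero):

```python
def pack_elements(elems, src_wid, dst_wid, xlen=64):
    """Zero-extend each element from src_wid to dst_wid, then pack
    contiguously (LSB-first) into xlen-wide registers, treating the
    register file as if it were byte-addressable memory."""
    per_reg = xlen // dst_wid            # destination elements per register
    regs = []
    for i, e in enumerate(elems):
        e &= (1 << src_wid) - 1          # source-width element
        slot = i % per_reg
        if slot == 0:
            regs.append(0)               # start a new register
        regs[-1] |= (e & ((1 << dst_wid) - 1)) << (slot * dst_wid)
    return regs
```

With seven 16-bit elements zero-extended to 32-bit and packed into 64-bit registers, elements 0 and 1 share the first register and element 6 occupies only the low half of the fourth, mirroring the table above.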
1741
1742 ## Why SV bitwidth specification is restricted to 4 entries
1743
1744 The four entries for SV element bitwidths allow only three over-rides:
1745
1746 * 8 bit
1747 * 16 bit
1748 * 32 bit
1749
1750 This would seem inadequate: surely it would be better to have 3 bits or more, allowing 64, 128 and other options besides. The answer is that this gets too complex, and that no RV128 implementation yet exists; given that RV64's default elwidth is 64 bit, the 4 major element widths are covered anyway.
1751
1752 There is an absolutely crucial aspect of SV here that explicitly
1753 needs spelling out, and it's whether the "vectorised" bit is set in
1754 the Register's CSR entry.
1755
1756 If "vectorised" is clear (not set), this indicates that the operation
1757 is "scalar". Under these circumstances, when applied to a destination (RD),
1758 sign-extension and zero-extension, whilst changed to match the
1759 override bitwidth (if set), will overwrite the **full** register entry
1760 (64-bit if RV64).
1761
1762 When vectorised is *set*, this indicates that the operation now treats
1763 **elements** as if they were independent registers, so regardless of
1764 the length, any parts of a given actual register that are not involved
1765 in the operation are **NOT** modified, but are **PRESERVED**.
1766
1767 For example:
1768
1769 * when the vector bit is clear and elwidth set to 16 on the destination register, operations are truncated to 16 bit and then sign or zero extended to the *FULL* XLEN register width.
1770 * when the vector bit is set, elwidth is 16 and VL=1 (or other value where groups of elwidth sized elements do not fill an entire XLEN register), the "top" bits of the destination register do *NOT* get modified, zero'd or otherwise overwritten.
1771
1772 SIMD micro-architectures may implement this by using predication on
1773 any elements in a given actual register that are beyond the end of
1774 a multi-element operation.
1775
1776 Other microarchitectures may choose to provide byte-level write-enable lines on the register file, such that each 64 bit register in an RV64 system requires 8 WE lines. Scalar RV64 operations would require activation of all 8 lines, where SV elwidth based operations would activate the required subset of those byte-level write lines.
1777
1778 Example:
1779
1780 * rs1, rs2 and rd are all set to 8-bit
1781 * VL is set to 3
1782 * RV64 architecture is set (UXL=64)
1783 * add operation is carried out
1784 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1785 concatenated with similar add operations on bits 15..8 and 7..0
1786 * bits 24 through 63 **remain as they originally were**.
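
The example can be sketched as a masked element-wise update (`masked_elwise_add` is an invented name; a byte-write-enable register file would achieve the same effect in hardware):

```python
def masked_elwise_add(rd_old, rs1, rs2, elwid, vl, xlen=64):
    """Element-wise add at elwid bits for vl elements; every other bit
    of the destination register is preserved (vectorised, non-default
    elwidth), rather than zeroed or sign-extended."""
    emask = (1 << elwid) - 1
    rd = rd_old
    for i in range(vl):
        a = (rs1 >> (i * elwid)) & emask
        b = (rs2 >> (i * elwid)) & emask
        s = (a + b) & emask
        rd &= ~(emask << (i * elwid))    # clear only this element's bits
        rd |= s << (i * elwid)
    return rd
```

With VL=3 and 8-bit elements on RV64, only bits 0-23 of the destination change; bits 24-63 come back exactly as they were.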
1787
1788 Example SIMD micro-architectural implementation:
1789
1790 * SIMD architecture works out the nearest round number of elements
1791 that would fit into a full RV64 register (in this case: 8)
1792 * SIMD architecture creates a hidden predicate, binary 0b00000111
1793 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1794 * SIMD architecture goes ahead with the add operation as if it
1795 was a full 8-wide batch of 8 adds
1796 * SIMD architecture passes top 5 elements through the adders
1797 (which are "disabled" due to zero-bit predication)
1798 * SIMD architecture gets the top 5 elements back unmodified
1799 and stores them in rd.
1800
1801 This requires a read on rd, however this is required anyway in order
1802 to support non-zeroing mode.
1803
1804 ## Polymorphic floating-point
1805
1806 Standard scalar RV integer operations base the register width on XLEN,
1807 which may be changed (UXL in USTATUS, and the corresponding MXL and
1808 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1809 arithmetic operations are therefore restricted to an active XLEN bits,
1810 with sign or zero extension to pad out the upper bits when XLEN has
1811 been dynamically set to less than the actual register size.
1812
1813 For scalar floating-point, the active (used / changed) bits are
1814 specified exclusively by the operation: ADD.S specifies an active
1815 32-bits, with the upper bits of the source registers needing to
1816 be all 1s ("NaN-boxed"), and the destination upper bits being
1817 *set* to all 1s (including on LOAD/STOREs).
1818
1819 Where elwidth is set to default (on any source or the destination)
1820 it is obvious that this NaN-boxing behaviour can and should be
1821 preserved. When elwidth is non-default things are less obvious,
1822 so need to be thought through. Here is a normal (scalar) sequence,
1823 assuming an RV64 which supports Quad (128-bit) FLEN:
1824
1825 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1826 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1827 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1828 top 64 MSBs ignored.
1829
1830 Therefore it makes sense to mirror this behaviour when, for example,
1831 elwidth is set to 32. Assume elwidth set to 32 on all source and
1832 destination registers:
1833
1834 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1835 floating-point numbers.
1836 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1837 in bits 0-31 and the second in bits 32-63.
1838 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1839
1840 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1841 of the registers either during the FLD **or** the ADD.D. The reason
1842 is that, effectively, the top 64 MSBs actually represent a completely
1843 independent 64-bit register, so overwriting it is not only gratuitous
1844 but may actually be harmful for a future extension to SV which may
1845 have a way to directly access those top 64 bits.
1846
1847 The decision is therefore **not** to touch the upper parts of floating-point
1848 registers wherever elwidth is set to non-default values, including
1849 when "isvec" is false in a given register's CSR entry. Only when the
1850 elwidth is set to default **and** isvec is false will the standard
1851 RV behaviour be followed, namely that the upper bits be modified.
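
The write-back decision can be sketched as follows (`fp_writeback` is an invented name; `flen` is the full FP register width and `val_wid` the active operation width):

```python
def fp_writeback(old_reg, value, val_wid, flen, elwidth_default, isvec):
    """FP destination write: NaN-box (set to all-1s) the upper bits only
    when elwidth is default AND the register is scalar (standard RV
    behaviour); in every other case the upper bits are preserved."""
    vmask = (1 << val_wid) - 1
    if elwidth_default and not isvec:
        upper = ((1 << flen) - 1) & ~vmask   # all 1s above the value
        return upper | (value & vmask)
    return (old_reg & ~vmask) | (value & vmask)
```

A 32-bit value written scalar/default into a 64-bit register gets all-1s above it; with a non-default elwidth the pre-existing upper bits survive.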
1852
1853 Ultimately if elwidth is default and isvec false on *all* source
1854 and destination registers, a SimpleV instruction defaults completely
1855 to standard RV scalar behaviour (this holds true for **all** operations,
1856 right across the board).
1857
1858 The nice thing here is that ADD.S, ADD.D and ADD.Q when elwidth are
1859 non-default values are effectively all the same: they all still perform
1860 multiple ADD operations, just at different widths. A future extension
1861 to SimpleV may actually allow ADD.S to access the upper bits of the
1862 register, effectively breaking down a 128-bit register into a bank
1863 of 4 independently-accessible 32-bit registers.
1864
1865 In the meantime, although when e.g. setting VL to 8 it would technically
1866 make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
1867 using ADD.Q may be an easy way to signal to the microarchitecture that
1868 it is to receive a higher VL value. On a superscalar OoO architecture
1869 there may be absolutely no difference; however, simpler SIMD-style
1870 microarchitectures may not have the infrastructure in place to know
1871 the difference, such that when VL=8 an ADD.D instruction completes
1872 in 2 cycles (or more), where an ADD.Q issued instead on such simpler
1873 microarchitectures would complete in one.
1875
1876 ## Specific instruction walk-throughs
1877
1878 This section covers walk-throughs of the above-outlined procedure
1879 for converting standard RISC-V scalar arithmetic operations to
1880 polymorphic widths, to ensure that it is correct.
1881
1882 ### add
1883
1884 Standard Scalar RV32/RV64 (xlen):
1885
1886 * RS1 @ xlen bits
1887 * RS2 @ xlen bits
1888 * add @ xlen bits
1889 * RD @ xlen bits
1890
1891 Polymorphic variant:
1892
1893 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
1894 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
1895 * add @ max(rs1, rs2) bits
1896 * RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
1897
1898 Note here that polymorphic add zero-extends its source operands,
1899 where addw sign-extends.
1900
1901 ### addw
1902
1903 The RV Specification specifically states that "W" variants of arithmetic
1904 operations always produce 32-bit signed values. In a polymorphic
1905 environment it is reasonable to assume that the signed aspect is
1906 preserved, where it is the length of the operands and the result
1907 that may be changed.
1908
1909 Standard Scalar RV64 (xlen):
1910
1911 * RS1 @ xlen bits
1912 * RS2 @ xlen bits
1913 * add @ xlen bits
1914 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
1915
1916 Polymorphic variant:
1917
1918 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
1919 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
1920 * add @ max(rs1, rs2) bits
1921 * RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
1922
1923 Note here that polymorphic addw sign-extends its source operands,
1924 where add zero-extends.
1925
1926 This requires a little more in-depth analysis. Where the bitwidth of
1927 rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
1928 only where the bitwidths of rs1 and rs2 differ that the
1929 lesser-width operand will be sign-extended.
1930
1931 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
1932 where for add they are both zero-extended. This holds true for all arithmetic
1933 operations ending with "W".
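
The add/addw contrast can be sketched in one function (invented names `poly_add` and `sext`; `signed_ext` selects addw-style sign-extension versus add-style zero-extension):

```python
def sext(v, frm, to):
    """Sign-extend v from frm bits out to to bits."""
    v &= (1 << frm) - 1
    if v & (1 << (frm - 1)):                 # top bit set: negative
        v |= ((1 << to) - 1) & ~((1 << frm) - 1)
    return v

def poly_add(rs1, rs2, w1, w2, wd, signed_ext):
    opwid = max(w1, w2)
    if signed_ext:                           # addw: sign-extend sources
        a, b = sext(rs1, w1, opwid), sext(rs2, w2, opwid)
    else:                                    # add: zero-extend sources
        a, b = rs1 & ((1 << w1) - 1), rs2 & ((1 << w2) - 1)
    res = (a + b) & ((1 << opwid) - 1)
    if wd > opwid:                           # extend result out to rd width
        return sext(res, opwid, wd) if signed_ext else res
    return res & ((1 << wd) - 1)             # otherwise truncate
```

With an 8-bit rs1 holding 0xFF and a 16-bit rs2 holding 1, polymorphic add treats 0xFF as 255 (result 0x100), while polymorphic addw treats it as -1 (result 0).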
1934
1935 ### addiw
1936
1937 Standard Scalar RV64I:
1938
1939 * RS1 @ xlen bits, truncated to 32-bit
1940 * immed @ 12 bits, sign-extended to 32-bit
1941 * add @ 32 bits
1942 * RD @ xlen bits, sign-extend the 32-bit result to xlen.
1943
1944 Polymorphic variant:
1945
1946 * RS1 @ rs1 bits
1947 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
1948 * add @ max(rs1, 12) bits
1949 * RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
1950
1951 # Predication Element Zeroing
1952
1953 The introduction of zeroing on traditional vector predication is usually
1954 intended as an optimisation for lane-based microarchitectures with register
1955 renaming to be able to save power by avoiding a register read on elements
1956 that are passed through en-masse through the ALU. Simpler microarchitectures
1957 do not have this issue: they simply do not pass the element through to
1958 the ALU at all, and therefore do not store it back in the destination.
1959 More complex non-lane-based micro-architectures can, when zeroing is
1960 not set, use the predication bits to simply avoid sending element-based
1961 operations to the ALUs, entirely: thus, over the long term, potentially
1962 keeping all ALUs 100% occupied even when elements are predicated out.
1963
1964 SimpleV's design principle is not based on or influenced by
1965 microarchitectural design factors: it is a hardware-level API.
1966 Therefore, looking purely at whether zeroing is *useful* or not,
1967 (whether fewer instructions are needed for certain scenarios),
1968 given that a case can be made for zeroing *and* non-zeroing, the
1969 decision was taken to add support for both.
1970
1971 ## Single-predication (based on destination register)
1972
1973 Zeroing on predication for arithmetic operations is taken from
1974 the destination register's predicate. i.e. the predication *and*
1975 zeroing settings to be applied to the whole operation come from the
1976 CSR Predication table entry for the destination register.
1977 Thus when zeroing is set on predication of a destination element,
1978 if the predication bit is clear, then the destination element is *set*
1979 to zero (twin-predication is slightly different, and will be covered
1980 next).
1981
1982 Thus the pseudo-code loop for a predicated arithmetic operation
1983 is modified to as follows:
1984
1985 for (i = 0; i < VL; i++)
1986     if not zeroing: # an optimisation
1987         while (!(predval & 1<<i) && i < VL)
1988             if (int_vec[rd ].isvector)  { ird += 1; }
1989             if (int_vec[rs1].isvector)  { irs1 += 1; }
1990             if (int_vec[rs2].isvector)  { irs2 += 1; }
1991         if i == VL:
1992             break
1993     if (predval & 1<<i)
1994         src1 = ....
1995         src2 = ...
1997         result = src1 + src2 # actual add (or other op) here
1998         set_polymorphed_reg(rd, destwid, ird, result)
1999         if (!int_vec[rd].isvector) break
2000     else if zeroing:
2001         result = 0
2002         set_polymorphed_reg(rd, destwid, ird, result)
2003     if (int_vec[rd ].isvector)  { ird += 1; }
2004     else if (predval & 1<<i) break;
2005     if (int_vec[rs1].isvector)  { irs1 += 1; }
2006     if (int_vec[rs2].isvector)  { irs2 += 1; }
2007
2008 The optimisation to skip elements entirely is only possible for certain
2009 micro-architectures when zeroing is not set. However for lane-based
2010 micro-architectures this optimisation may not be practical, as it
2011 implies that elements end up in different "lanes". Under these
2012 circumstances it is perfectly fine to simply have the lanes
2013 "inactive" for predicated elements, even though it results in
2014 less than 100% ALU utilisation.
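
The destination-side behaviour reduces to a simple per-element rule, sketched here (`predicated_op` is an invented name operating on a plain list of pre-computed results):

```python
def predicated_op(rd_elems, results, predval, zeroing):
    """Single-predicated element update: where the predicate bit is set
    the result is stored; where it is clear, zeroing stores 0, while
    non-zeroing leaves the old destination element untouched."""
    out = list(rd_elems)
    for i, r in enumerate(results):
        if predval & (1 << i):
            out[i] = r
        elif zeroing:
            out[i] = 0
    return out
```

With predicate 0b0101, zeroing writes zeros into the masked-out elements, whereas non-zeroing preserves their previous contents.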
2015
2016 ## Twin-predication (based on source and destination register)
2017
2018 Twin-predication is not that much different, except that
2019 the source is independently zero-predicated from the destination.
2020 This means that the source may be zero-predicated *or* the
2021 destination zero-predicated *or both*, or neither.
2022
2023 When, with twin-predication, zeroing is set on the source and not
2024 the destination, a clear source predicate bit indicates that a zero
2025 data element is passed through the operation (the exception being:
2026 if the source data element is to be treated as an address - a LOAD -
2027 then the data returned *from* the LOAD is zero, rather than looking up an
2028 *address* of zero).
2029
2030 When zeroing is set on the destination and not the source, then just
2031 as with single-predicated operations, a zero is stored into the destination
2032 element (or target memory address for a STORE).
2033
2034 Zeroing on both source and destination effectively results in a bitwise
2035 AND operation of the source and destination predicates: data is stored
2036 only where both are set, so where either the source predicate OR the
2037 destination predicate is 0, a zero element will ultimately end up in
2038 the destination register.
2038
2039 However: this may not necessarily be the case for all operations;
2040 implementors, particularly of custom instructions, clearly need to
2041 think through the implications in each and every case.
2042
2043 Here is pseudo-code for a twin zero-predicated operation:
2044
2045 function op_mv(rd, rs) # MV not VMV!
2046     rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
2047     rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
2048     ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
2049     pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
2050     for (int i = 0, int j = 0; i < VL && j < VL):
2051         if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
2052         if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
2053         if ((pd & 1<<j))
2054             if ((ps & 1<<i))
2055                 sourcedata = ireg[rs+i];
2056             else
2057                 sourcedata = 0
2058             ireg[rd+j] <= sourcedata
2059         else if (zerodst)
2060             ireg[rd+j] <= 0
2061         if (int_csr[rs].isvec)
2062             i++;
2063         if (int_csr[rd].isvec)
2064             j++;
2065         else
2066             if ((pd & 1<<j))
2067                 break;
2068
Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.
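
The loop above can be cross-checked with an executable model. This is
a sketch only: the CSR redirection step is elided (both operands are
assumed to be already-redirected vector registers), `ps` and `pd` are
plain integer bitmasks, and the bounds guards on the skip loops are an
addition for safety, not part of the pseudo-code.

```python
# Executable sketch of the twin zero-predicated MV loop above.
# Assumptions (not in the pseudo-code): rs and rd are already
# redirected register indices, both tagged as vectors; ps and pd
# are integer predicate bitmasks.

def twin_pred_mv(ireg, rd, rs, VL, ps, pd, zerosrc, zerodst):
    i = j = 0
    while i < VL and j < VL:
        # skip masked-out source/dest elements only when zeroing is OFF
        if not zerosrc:
            while not (ps & (1 << i)):
                i += 1
                if i >= VL:
                    return
        if not zerodst:
            while not (pd & (1 << j)):
                j += 1
                if j >= VL:
                    return
        if pd & (1 << j):
            # source zeroing passes a zero *through* the operation
            ireg[rd + j] = ireg[rs + i] if (ps & (1 << i)) else 0
        elif zerodst:
            ireg[rd + j] = 0
        i += 1
        j += 1
```

With zeroing on the source only (`zerosrc=True`), a clear source-predicate
bit places a zero in the corresponding destination element instead of
skipping it.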

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15    | 14:12 | 11:10 | 9:8   | 7    | 6:0     |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx    | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |

VL/MAXVL/SubVL Block:

| 31:30 | 29:28 | 27:22  | 21:17              | 16  |
| ----- | ----- | ------ | ------------------ | --- |
| 0     | SubVL | VLdest | VLEN               | vlt |
| 1     | SubVL | VLdest | VLEN (spans 21:16) |     |

If vlt is 0, VLEN is a 5-bit immediate value, offset by one (i.e. a bit
sequence of 0b00000 represents VL=1, and so on). If vlt is 1, it specifies
the scalar register from which VL is set by this VLIW instruction
group. VL, whether set from the register or the immediate, is then
modified (truncated) to be MIN(VL, MAXVL), and the result stored in the
scalar register specified in VLdest. If VLdest is zero, no store in the
regfile occurs (however VL is still set).

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value is stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL = VL = 1, a value of 0b000001 will
set MAXVL = VL = 2, and so on.
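
The rules above can be summarised in a short executable sketch.
`regfile` and the argument names here are illustrative stand-ins, not
part of the specification, and field extraction from the instruction
stream is elided.

```python
# Sketch of the VL Block semantics: vlt selects immediate vs register,
# immediates are offset by one, VL is truncated to MAXVL, and bit 15
# sets MAXVL and VL together from a 6-bit immediate.

def vl_block(regfile, bit15, vlt, vlen_field, vldest, MAXVL):
    if bit15:
        MAXVL = VL = vlen_field + 1       # 0b000000 => MAXVL = VL = 1
    else:
        if vlt:
            VL = regfile[vlen_field]      # VL taken from a scalar register
        else:
            VL = vlen_field + 1           # 5-bit immediate, offset by one
        VL = min(VL, MAXVL)               # truncate to MIN(VL, MAXVL)
    if vldest != 0:
        regfile[vldest] = VL              # VLdest == 0: no regfile store
    return VL, MAXVL
```

Note the truncation: a register-sourced VL larger than MAXVL silently
becomes MAXVL, which is what makes the construct safe for loop tails.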

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.

CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the predicate block format is the full 16 bit format
  (1) or the compact less expressive format (0). In the 8 bit format,
  pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
  RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
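
The bullet points above can be condensed into a decoding sketch. This
illustrates the bit layout only; the function name and the returned
tuple are not part of the specification.

```python
# Sketch: unpack the 16-bit VLIW prefix (vlset | 16xil | pplen |
# rplen | mode | 1111111) and derive the entry counts and total
# instruction length in bits.

def decode_vliw_prefix(prefix16):
    assert (prefix16 & 0b1111111) == 0b1111111   # major opcode check
    mode  = (prefix16 >> 7)  & 0b1     # 1 = 16-bit blocks, 0 = 8-bit compact
    rplen = (prefix16 >> 8)  & 0b11
    pplen = (prefix16 >> 10) & 0b11
    il    = (prefix16 >> 12) & 0b111   # total bits = 80 + 16*il, il != 0b111
    vlset = (prefix16 >> 15) & 0b1     # 1 = VL Block follows the prefix
    # in the 8-bit compact formats, twice as many entries fit
    n_reg  = rplen if mode else rplen * 2
    n_pred = pplen if mode else pplen * 2
    return vlset, 80 + 16 * il, n_pred, n_reg
```

A prefix with mode=0 thus describes up to 6 RegCam and 6 PredCam
entries; with mode=1, up to 3 of each, in the fuller 16-bit entry format.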

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires 3, even 4 32-bit opcodes:
the CSR instruction itself, plus a LI / LUI sequence to set up the 32-bit
value in the RS register of the CSRRW. To get 64-bit data into the register
in order to put it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially six to eight 32-bit instructions, just to
establish the Vector State!

Not only that: even a CSRRW on VL and MAXVL requires 64 bits (even more
bits if VL needs to be set to greater than 32). Bear in mind that in SV,
both MAXVL and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAXVL/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub-extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context switched and saved / restored in traps.

The VStart indices in the STATE CSR may be similarly regarded as another
sub-execution context, giving in effect two sets of nested sub-levels
of the RISC-V Program Counter.

In addition, as the xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within the block, i.e. addressing
is now restricted to the (very short) length of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump and a normal function call may only be taken by letting
the VLIW group end, returning to "normal" standard RV mode, using RVC,
32 bit or P48-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option" it is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however, it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), the recommendation
is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.
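
A bit-level sketch of the two recommendations, for the example of
16-bit elements in a 64-bit register with the top slot unused (the
function names and the pure-Python modelling are illustrative only,
not part of the recommendation):

```python
# Model the recommended write behaviour for unused element slots.
# elements are listed lowest slot first; elwidth/regwidth in bits.

def write_int_grouped(elements, elwidth=16, regwidth=64):
    used = len(elements) * elwidth
    val = 0
    for idx, e in enumerate(elements):
        val |= (e & ((1 << elwidth) - 1)) << (idx * elwidth)
    # sign-extend the used element closest to the MSB across the
    # unused space
    top = elements[-1] & ((1 << elwidth) - 1)
    if top >> (elwidth - 1):
        val |= ((1 << (regwidth - used)) - 1) << used
    return val

def write_fp_grouped(elements, elwidth=16, regwidth=64):
    # unused elements are treated as-if set to all ones on write
    # (matching the convention for small FP values in large registers)
    used = len(elements) * elwidth
    val = ((1 << (regwidth - used)) - 1) << used
    for idx, e in enumerate(elements):
        val |= (e & ((1 << elwidth) - 1)) << (idx * elwidth)
    return val
```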

---

Info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

Answer: they're not vectorised, so not a problem.

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of the info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove the RegCam and PredCam CSRs; just use SVprefix
and the VLIW format.