# Simple-V (Parallelism Extension Proposal) Specification

* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 21 Jun 2019
* Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]

With thanks to:

* Allen Baum
* Bruce Hoult
* comp.arch
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* Terje Mathisen
* The RISC-V Founders, without whom this all would not be possible.

[[!toc ]]

# Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has several
unplanned side-effects including code-size reduction, expansion of
HINT space and more. The reason for
creating it is to provide a manageable way to turn a pre-existing design
into a parallel one, in a step-by-step incremental fashion, allowing
the implementor to focus on adding hardware where it is needed and necessary.
The primary target is mobile-class 3D GPUs and VPUs, with secondary
goals being to reduce executable size and reduce context-switch latency.

Critically: **No new instructions are added**. The parallelism (if any
is implemented) is implicitly added by tagging *standard* scalar registers
for redirection. When such a tagged register is used in any instruction,
it indicates that the PC shall **not** be incremented; instead a loop
is activated where *multiple* instructions are issued to the pipeline
(as determined by a length CSR), with contiguously incrementing register
numbers starting from the tagged register. When the last "element"
has been reached, only then is the PC permitted to move on. Thus
Simple-V effectively sits (slots) *in between* the instruction decode phase
and the ALU(s).

The barrier to entry with SV is therefore very low. The minimum
compliant implementation is software-emulation (traps), requiring
only the CSRs and CSR tables, and that an exception be thrown if an
instruction's registers are detected to have been tagged. The looping
that would otherwise be done in hardware is thus carried out in software,
instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited for implementation in RV32E and also in situations
where the implementor wishes to focus on certain aspects of SV, without
sinking unnecessary time and resources into the silicon, whilst also
conforming strictly with the API. A good area to punt to software would
be the polymorphic element width capability, for example.

Hardware Parallelism, if any, is therefore added at the implementor's
discretion to turn what would otherwise be a sequential loop into a
parallel one.

To emphasise that clearly: Simple-V (SV) is *not*:

* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension

SV does **not** tell implementors how or even if they should implement
parallelism: it is a hardware "API" (Application Programming Interface)
that, if implemented, presents a uniform and consistent way to *express*
parallelism, at the same time leaving the choice of if, how, how much,
when and whether to parallelise operations **entirely to the implementor**.

# Basic Operation

The principle of SV is as follows:

* CSRs indicating which registers are "tagged" as "vectorised"
  (potentially parallel, depending on the microarchitecture)
  must be set up
* A "Vector Length" CSR is set, indicating the span of any future
  "parallel" operations.
* A **scalar** operation, just after the decode phase and before the
  execution phase, checks the CSR register tables to see if any of
  its registers have been marked as "vectorised"
* If so, a hardware "macro-unrolling loop" is activated, of length
  VL, that effectively issues **multiple** identical instructions
  using contiguous sequentially-incrementing registers.
  **Whether they be executed sequentially or in parallel or a
  mixture of both or punted to software-emulation in a trap handler
  is entirely up to the implementor**.

In this way an entire scalar algorithm may be vectorised with
the minimum of modification to the hardware and to compiler toolchains.
There are **no** new opcodes.
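
The principle may be illustrated with a short runnable sketch. This is
a minimal conceptual model only: the flat register file, the `isvec`
tag table and the helper names are assumptions for illustration, not
part of the specification:

    # Minimal model of the SV "macro-unrolling loop" described above,
    # assuming a flat integer register file, per-register "vectorised"
    # tags and a vector length VL.  Illustrative only.
    VL = 4
    regs = list(range(32))      # stand-in integer register file
    isvec = [False] * 32        # CSR-controlled "vectorised" tags

    def sv_add(rd, rs1, rs2):
        id = irs1 = irs2 = 0
        for i in range(VL):     # the hardware macro-unrolling loop
            regs[rd + id] = regs[rs1 + irs1] + regs[rs2 + irs2]
            if not isvec[rd]:   # scalar destination: plain scalar add
                break
            id += 1
            if isvec[rs1]: irs1 += 1
            if isvec[rs2]: irs2 += 1

    isvec[3] = isvec[12] = isvec[20] = True
    sv_add(3, 12, 20)           # one ADD opcode issues VL element adds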

# CSRs <a name="csrs"></a>

For U-Mode there are two CSR key-value stores needed to create lookup
tables which are used at the register decode phase.

* A register CSR key-value table (typically 8 32-bit CSRs of two 16-bit
  entries each)
* A predication CSR key-value table (again, 8 32-bit CSRs of two 16-bit
  entries each)
* Small M-Mode and S-Mode register and predication CSR key-value tables
  (2 32-bit CSRs of two 16-bit entries each).
* An optional "reshaping" CSR key-value table which remaps from a 1D
  linear shape to 2D or 3D, including full transposition.

There are also three additional CSRs for User-Mode:

* MVL (the Maximum Vector Length)
* VL (which has different characteristics from standard CSRs)
* STATE (useful for saving and restoring during context switch,
  and for providing fast transitions)

There are also three additional CSRs for Supervisor-Mode:

* SMVL
* SVL
* SSTATE

And likewise for M-Mode:

* MMVL
* MVL
* MSTATE

Both Supervisor and M-Mode have their own (small) CSR register and
predication tables of only 4 entries each.

The access pattern for these groups of CSRs in each mode follows the
same pattern as for other CSRs that have M-Mode and S-Mode "mirrors":

* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is identical
  to changing the S-Mode CSRs. Accessing and changing the U-Mode
  CSRs is permitted.
* In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
  is prohibited.

In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
M-Mode MVL, the M-Mode STATE and so on that influence the processor
behaviour. Likewise for S-Mode, and likewise for U-Mode.

This has the interesting benefit of allowing M-Mode (or S-Mode)
to be set up, for context-switching to take place, and, on return
back to the higher privileged mode, the CSRs of that mode will be
exactly as they were. Thus, it becomes possible for example to
set up CSRs suited best to aiding and assisting low-latency fast
context-switching *once and only once*, without the need for
re-initialising the CSRs needed to do so.

## MAXVECTORLENGTH (MVL) <a name="mvl" />

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
is variable length and may be dynamically set. MVL is
however limited to the regfile bitwidth XLEN (1-32 for RV32,
1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register as opposed to fanning out
over several registers. This keeps the implementation a little simpler.

The other important factor to note is that the actual MVL is **offset
by one**, so that it can fit into only 6 bits (for RV64) and still cover
a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually
means that MVL==1. When setting the MVL CSR to 3, this actually means
that MVL==4, and so on. This is expressed more clearly in the "pseudocode"
section, where there are subtle differences between CSRRW and CSRRWI.

## Vector Length (VL) <a name="vl" />

VSETVL is slightly different from RVV. Like RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)

    VL = rd = MIN(vlen, MVL)

where 1 <= MVL <= XLEN

However just like MVL it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.

The fixed (specific) setting of VL allows vector LOAD/STORE to be used
to switch the entire bank of registers using a single instruction (see
Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
is down to the fact that predication bits fit into a single register of
length XLEN bits.

The second change is that when VSETVL is requested to be stored
into x0, it is *ignored* silently (VSETVL x0, x5).

The third and most important change is that, within the limits set by
MVL, the value passed in **must** be set in VL (and in the
destination register).

This has implications for the microarchitecture, as VL is required to be
set (limits from MVL notwithstanding) to the actual value
requested. RVV has the option to set VL to an arbitrary value that suits
the conditions and the micro-architecture: SV does *not* permit this.

The reason is so that if SV is to be used for a context-switch or as a
substitute for LOAD/STORE-Multiple, the operation can be done with only
2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
single LD/ST operation). If VL does *not* get set to the register file
length when VSETVL is called, then a software-loop would be needed.
To avoid this need, VL *must* be set to exactly what is requested
(limits notwithstanding).

Therefore, in turn, unlike RVV, implementors *must* provide
pseudo-parallelism (using sequential loops in hardware) if actual
hardware-parallelism in the ALUs is not deployed. A hybrid is also
permitted (as used in Broadcom's VideoCore-IV) however this must be
*entirely* transparent to the ISA.

The fourth change is that VSETVL is implemented as a CSR, where the
behaviour of CSRRW (and CSRRWI) must be changed to specifically store
the *new* value in the destination register, **not** the old value.
Where context-load/save is to be implemented in the usual fashion
by using a single CSRRW instruction to obtain the old value, the
*secondary* CSR must be used (SVSTATE). This CSR behaves
exactly as standard CSRs, and contains more than just VL.

One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).

Note that when VL is set to 1, all parallel operations cease: the
hardware loop is reduced to a single element: scalar operations.

## STATE

This is a standard CSR that contains sufficient information for a
full context save/restore. It contains (and permits setting of)
MVL, VL, the destination element offset of the current parallel
instruction being executed, and, for twin-predication, the source
element offset as well. Interestingly it may hypothetically
also be used to make the immediately-following instruction skip a
certain number of elements, however the recommended method to do
this is predication or using the offset mode of the REMAP CSRs.

Setting destoffs and srcoffs is realistically intended for saving state
so that exceptions (page faults in particular) may be serviced and the
hardware-loop that was being executed at the time of the trap, from
user-mode (or Supervisor-mode), may be returned to and continued from
where it left off. The reason why this works is because setting
User-Mode STATE will not change (not be used) in M-Mode or S-Mode
(and is entirely why M-Mode and S-Mode have their own STATE CSRs).

The format of the STATE CSR is as follows:

| (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | ------- | ------- |
| rsvd     | rsvd     | destoffs | srcoffs  | vl      | maxvl   |

When setting this CSR, the following characteristics will be enforced:

* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
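
As a concrete illustration of the offset-by-one storage, here is a
sketch of packing and unpacking the STATE fields. The bit layout is
taken from the table above; the helper names are assumptions for the
sketch:

    # Pack/unpack sketch for the STATE CSR layout shown above.
    # MVL and VL are stored minus one; the offsets are stored as-is.
    def pack_state(MVL, VL, srcoffs, destoffs):
        return (MVL - 1) | (VL - 1) << 6 | srcoffs << 12 | destoffs << 18

    def unpack_state(state):
        MVL      = (state & 0x3f) + 1
        VL       = ((state >> 6) & 0x3f) + 1
        srcoffs  = (state >> 12) & 0x3f
        destoffs = (state >> 18) & 0x3f
        return MVL, VL, srcoffs, destoffs

    assert unpack_state(pack_state(4, 4, 0, 2)) == (4, 4, 0, 2)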

## MVL and VL Pseudocode

The pseudo-code for get and set of VL and MVL are as follows:

    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes returning the new value NOT the old CSR
        return VL

    get_vl_csr(rd):
        regs[rd] = VL
        return VL

Note that whilst setting MVL behaves as a normal CSR, setting VL,
unlike standard CSR behaviour, will return the **new** value of VL
**not** the old one.

For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
maximise the effectiveness, an immediate of 0 is used to set VL=1,
an immediate of 1 is used to set VL=2 and so on:

    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL,
where setting the value to zero will cause an exception to be raised.
The reason is that if VL or MVL are set to zero, the STATE CSR is
not capable of returning that value.

    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0:
            raise Exception
        set_vl_csr(value, rd)

In this way, when CSRRW is utilised with a loop variable, the value
that goes into VL (and into the destination register) may be used
in an instruction-minimal fashion:

    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3         # sets MVL == **4** (not 3)
    j zerotest            # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0      # vl = t0 = min(mvl, a0)
    ld a3, a1             # load 4 registers a3-6 from x
    slli t1, t0, 3        # t1 = vl * 8 (in bytes)
    ld a7, a2             # load 4 registers a7-10 from y
    add a1, a1, t1        # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0        # n -= vl (t0)
    st a7, a2             # store 4 registers a7-10 to y
    add a2, a2, t1        # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop         # repeat if n != 0

With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        MVL = set_mvl_csr(value[5:0]+1)
        VL = set_vl_csr(value[11:6]+1)
        destoffs = value[23:18]
        srcoffs = value[17:12]

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18
        return regs[rd]

In both cases, whilst CSR reads of VL and MVL return the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.

## Register CSR key-value (CAM) table <a name="regcsrtable" />

The purpose of the Register CSR table is three-fold:

* To mark integer and floating-point registers as requiring "redirection"
  if they are ever used as a source or destination in any given operation.
  This involves a level of indirection through a 5-to-7-bit lookup table,
  such that **unmodified** 5-bit (3-bit for Compressed) operands may
  access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the
  register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would
  normally give the register.

16 bit format:

| RegCAM | | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| ------ | | -       | -        | -   | ------ | ------ |
| 0      | | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1      | | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..     | | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15     | | isvec15 | regidx15 | i/f | vew15  | regkey |

8 bit format:

| RegCAM | | 7   | (6..5) | (4..0) |
| ------ | | -   | ------ | ------ |
| 0      | | i/f | vew0   | regnum |

i/f is set to "1" to indicate that the redirection/tag entry is to be applied
to integer registers; 0 indicates that it is relevant to floating-point
registers.

The 8 bit format is used for a much more compact expression. "isvec"
is implicit and, as in [[sv-prefix-proposal]], the target vector is
"regnum<<2", implicitly. Contrast this with the 16-bit format where the
target vector is *explicitly* named in bits 8 to 14, and bit 15 may
optionally set "scalar" mode.

vew has the following meanings, indicating that the instruction's
operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |
| 01  | 8 bit               |
| 10  | 16 bit              |
| 11  | 32 bit              |

As the above table is a CAM (key-value store) it may be appropriate
(faster, implementation-wise) to expand it as follows:

    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CSRs?
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
        tb[idx].elwidth  = CSRvec[i].elwidth
        tb[idx].regidx   = CSRvec[i].regidx   // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed   = CSRvec[i].packed   // SIMD or not

The actual size of the CSR Register table depends on the platform
and on whether other Extensions are present (RV64G, RV32E, etc.).
For details see "Subsets" section.
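
For clarity, a sketch of decoding one 16-bit entry according to the
table above (the function name is an assumption for illustration):

    # Decode one 16-bit Register CSR entry.  Bit layout from the table
    # above: regkey (4..0), vew (6..5), i/f (7), regidx (14..8),
    # isvec (15).
    def decode_regcam_entry(entry):
        regkey = entry & 0x1f
        vew    = (entry >> 5) & 0x3
        is_int = (entry >> 7) & 0x1   # 1 = integer, 0 = floating-point
        regidx = (entry >> 8) & 0x7f
        isvec  = (entry >> 15) & 0x1
        return regkey, vew, is_int, regidx, isvec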

## Predication CSR <a name="predication_csr_table"></a>

TODO: update CSR tables, now 7-bit for regidx

The Predication CSR is a key-value store indicating whether, if a given
destination register (integer or floating-point) is referred to in an
instruction, it is to be predicated. It is particularly important to note
that the *actual* register used can be *different* from the one that is
in the instruction, due to the redirection through the lookup table.

* regidx is the actual register that, in combination with the
  i/f flag, if that integer or floating-point register is referred to,
  results in the lookup table being referenced to find the predication
  mask to use on the operation in which that (regidx) register has
  been used
* predidx (in combination with the bank bit in the future) is the
  *actual* register to be used for the predication mask. Note:
  in effect predidx is actually a 6-bit register address, as the bank
  bit is the MSB (and is nominally set to zero for now).
* inv indicates that the predication mask bits are to be inverted
  prior to use *without* actually modifying the contents of the
  register itself.
* zeroing is either 1 or 0, and if set to 1, the operation must
  place zeros in any element position where the predication mask is
  set to zero. If zeroing is set to 0, unpredicated elements *must*
  be left alone. Some microarchitectures may choose to interpret
  this as skipping the operation entirely. Others which wish to
  stick more closely to a SIMD architecture may choose instead to
  interpret unpredicated elements as an internal "copy element"
  operation (which would be necessary in SIMD microarchitectures
  that perform register-renaming)

16 bit format:

| PrCSR | (15..11) | 10     | 9     | 8   | (7..1)  | 0    |
| ----- | -------- | ------ | ----- | --- | ------- | ---- |
| 0     | predkey  | zero0  | inv0  | i/f | regidx  | rsvd |
| 1     | predkey  | zero1  | inv1  | i/f | regidx  | rsvd |
| ...   | predkey  | .....  | ....  | i/f | ....... | rsvd |
| 15    | predkey  | zero15 | inv15 | i/f | regidx  | rsvd |

8 bit format:

| PrCSR | 7     | 6    | 5   | (4..0) |
| ----- | -     | -    | -   | ------ |
| 0     | zero0 | inv0 | i/f | regnum |

The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9.
The regnum is still used to "activate" predication.

The 16 bit Predication CSR Table is a key-value store, so implementation-wise
it will be faster to turn the table around (maintain topologically
equivalent state):

    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero = CSRpred[i].zero
        tb[idx].inv = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true

So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

This instead becomes an *indirect* reference using the *internal* state
table generated from the Predication CSR key-value store, which is used
as follows.

    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0

Note:

* d, s1 and s2 are booleans indicating whether destination,
  source1 and source2 are vector or scalar
* key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
  above, for clarity. rd, rs1 and rs2 all must ALSO go through
  register-level redirection (from the Register CSR table) if they are
  vectors.

If written as a function, obtaining the predication mask (and whether
zeroing takes place) may be done as follows:

    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate // invert ALL bits
        return predicate, tb[reg].zero
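
A short runnable illustration of the inversion and zeroing flags (the
mask value here is invented purely for the example):

    # Predicate lookup with inversion, assuming a 4-bit predicate
    # value 0b0101 held in the (redirected) predicate register.
    predicate = 0b0101
    inv = True
    if inv:
        predicate = ~predicate          # elements 1 and 3 now active
    for i in range(4):
        if predicate & (1 << i):
            print("element", i, "is executed")
        else:
            print("element", i, "is skipped (or zeroed, if zeroing=1)")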

Note here, critically, that **only** if the register is marked
in its CSR **register** table entry as being "active" does the testing
proceed further to check if the CSR **predicate** table entry is
also active.

Note also that this is in direct contrast to branch operations
for the storage of comparisons: in these specific circumstances
the requirement for there to be an active CSR *register* entry
is removed.

## REMAP CSR <a name="remap" />

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There is one 32-bit CSR which may be used to indicate which registers,
if used in any operation, must be "reshaped" (re-mapped) from a linear
form to a 2D or 3D transposed form, or "offset" to permit arbitrary
access to elements within a register.

The 32-bit REMAP CSR may reshape up to 3 registers:

| 29..28 | 27..26 | 25..24 | 23 | 22..16  | 15 | 14..8   | 7  | 6..0    |
| ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
| shape2 | shape1 | shape0 | 0  | regidx2 | 0  | regidx1 | 0  | regidx0 |

regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and are consequently 7 bits wide.
When set to zero (referring to x0), clearly reshaping x0 is pointless,
so a zero value is used to indicate "disabled".
shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.

It is anticipated that these specialist CSRs will not be used very often.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.
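
A sketch of decoding the REMAP fields, following the bit layout in the
table above (the function name is an assumption for illustration):

    # Decode the REMAP CSR into three (regidx, shape) pairs.
    def decode_remap(remap):
        regidx0 = remap & 0x7f
        regidx1 = (remap >> 8) & 0x7f
        regidx2 = (remap >> 16) & 0x7f
        shape0  = (remap >> 24) & 0x3
        shape1  = (remap >> 26) & 0x3
        shape2  = (remap >> 28) & 0x3
        return (regidx0, shape0), (regidx1, shape1), (regidx2, shape2)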

## SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
which have the same format. When each SHAPE CSR is set entirely to zeros,
remapping is disabled: the register's elements are a linear (1D) vector.

| 26..24  | 23      | 22..16 | 15      | 14..8  | 7       | 6..0   |
| ------- | ------- | ------ | ------- | ------ | ------- | ------ |
| permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |

offs is a 3-bit field, spread out across bits 7, 15 and 23, which
is added to the element index during the loop calculation.

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
that the array dimensionality for that dimension is 1. A value of xdimsz=2
would indicate that in the first dimension there are 3 elements in the
array. The format of the array is therefore as follows:

    array[xdim+1][ydim+1][zdim+1]

However whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5, with values of 6 and 7 being reserved, and not legal). The table
below shows how the permutation dimensionality order works:

| permute | order | array format             |
| ------- | ----- | ------------------------ |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a python program:

    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0] # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0 # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0
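
With the settings shown (order=[1,0,2], so the y index increments
fastest), the program prints each x/y plane transposed, one row per
line, beginning:

    0 3 6 9
    1 4 7 10
    2 5 8 11

    12 15 18 21
    13 16 19 22
    14 17 20 23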

Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document where a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index, instead. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

By changing remappings, 2D matrices may be transposed "in-place" for one
operation, followed by setting a different permutation order without
having to move the values in the registers to or from memory. Also,
the reason for having REMAP separate from the three SHAPE CSRs is so
that in a chain of matrix multiplications and additions, for example,
the SHAPE CSRs need only be set up once; only the REMAP CSR need be
changed to target different registers.

Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  to either use a predicated MV, skipping the first elements, or
  perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

Clearly here some considerable care needs to be taken as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *All* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all RVV opcodes,
with the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions, whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
  left as scalar.
* LR/SC could hypothetically be parallelised however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions therefore is guaranteed to
  not be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  paralleliseable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations; floating-point uses the FP CSR tables):

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.

## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV,
FCVT, and LOAD/STORE depending on CSR configurations for bitwidth
and predication. **Everything** becomes parallelised. *This includes
Compressed instructions* as well as any future instructions and Custom
Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers are marked as vectors (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src registers
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) is set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                              s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering") setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

TODO: predication now taken from src2. also branch goes ahead
if all compares are successful.

Note also that where normally, predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine because the
standard RVF compare can always be followed up with an integer BEQ or a
BNE (or a compressed comparison to zero or non-zero), in predication
terms that has more of an impact. To deal with this, SV's predication
has had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz x10 is equivalent to beqz x10,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
though counterintuitive at first. Clearly it is **not** recommended to
redirect x0 with a CSR register entry, however as a means to opaquely
obtain a predication target it is the only sensible option that does not
involve additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register are marked as vectors
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table  data="""
15 12 | 11 7 | 6 2 | 1 0 |
funct4 | rd | rs | op |
4 | 5 | 5 | 2 |
C.MV | dest | src | C0 |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table  data="""
src | dest | predication | op |
scalar | vector | none | VSPLAT |
scalar | vector | destination | sparse VSPLAT |
scalar | vector | 1-bit dest | VINSERT |
vector | scalar | 1-bit? src | VEXTRACT |
vector | vector | none | VCOPY |
vector | vector | src | Vector Gather |
vector | vector | dest | Vector Scatter |
vector | vector | src & dest | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY |
"""]]
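
To make the table concrete, here is a minimal executable model of the
pseudo-code above, demonstrating the VSPLAT case (scalar source, vector
destination). The register-file model and names are assumptions for the
sketch (no redirection, elwidth or zeroing):

    # Executable model of twin-predicated C.MV, reduced to essentials.
    VL = 4
    regs = [0] * 32
    isvec = [False] * 32

    def c_mv(rd, rs, ps=~0, pd=~0):
        i = j = 0
        while i < VL and j < VL:
            if isvec[rs]:
                while not (ps & (1 << i)): i += 1
            if isvec[rd]:
                while not (pd & (1 << j)): j += 1
            regs[rd + j] = regs[rs + i]
            if isvec[rs]: i += 1
            if isvec[rd]: j += 1
            else: break

    regs[5] = 99          # scalar source
    isvec[10] = True      # vector destination
    c_mv(10, 5)           # VSPLAT: x10..x13 all become 99
    assert regs[10:14] == [99, 99, 99, 99]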

Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same twin-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
operation of the appropriate size covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a single-precision floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element and the higher 32-bits *also* converted to floating-point
and stored in the second. The 32 bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1 element width is to be taken as 32.

Similar rules apply to the destination register.

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth is not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).
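
A small executable model of the srcbase decision (memory contents and
names are invented for the sketch):

    # Unit-stride vs indirect addressing, reduced to essentials.
    VL = 4
    XLEN = 64
    regs = [0] * 32
    isvec = [False] * 32
    mem = {addr: addr * 10 for addr in range(0, 4096, 8)}  # fake memory

    def op_ld(rd, rs, imm_offs=0):
        for i in range(VL):
            if isvec[rs]:
                srcbase = regs[rs + i]              # indirect (multi) mode
            else:
                srcbase = regs[rs] + i * XLEN // 8  # unit-stride mode
            regs[rd + i] = mem[srcbase + imm_offs]
            if not isvec[rd]:
                break

    isvec[10] = True
    regs[5] = 64            # scalar base address
    op_ld(10, 5)            # loads mem[64], mem[72], mem[80], mem[88]
    assert regs[10:14] == [640, 720, 800, 880]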

## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.
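
An illustrative sketch of the stack-pop pattern, using the same informal
CSR-entry shorthand as the earlier VL loop example (the field values are
shorthand for the Register and Predication CSR table set-up described
above, not real instructions; register choices are arbitrary):

    # tag a0 as a vector: C.LWSP a0, 0(x2) becomes a multi-register pop
    CSRvect = {type: I, key: a0, val: a0, elwidth: dflt}
    # predicate the destination on the mask in t0: only registers
    # whose mask bit is set are actually popped
    CSRpred = {type: I, key: a0, predidx: t0, inv: 0, zero: 0}
    CSRRWI VL, 7           # VL = 8 (offset by one)
    c.lwsp a0, 0(x2)       # restores the cherry-picked subset of a0-a7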

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.
1229 # Element bitwidth polymorphism <a name="elwidth"></a>
1230
1231 Element bitwidth is best covered as its own special section, as it
1232 is quite involved and applies uniformly across-the-board. SV restricts
1233 bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.
1234
1235 The effect of setting an element bitwidth is to re-cast each entry
1236 in the register table, and for all memory operations involving
1237 load/stores of certain specific sizes, to a completely different width.
1238 Thus In c-style terms, on an RV64 architecture, effectively each register
1239 now looks like this:
1240
1241 typedef union {
1242 uint8_t b[8];
1243 uint16_t s[4];
1244 uint32_t i[2];
1245 uint64_t l[1];
1246 } reg_t;
1247
1248 // integer table: assume maximum SV 7-bit regfile size
1249 reg_t int_regfile[128];
1250
1251 where the CSR Register table entry (not the instruction alone) determines
1252 which of those union entries is to be used on each operation, and the
1253 VL element offset in the hardware-loop specifies the index into each array.
1254
1255 However a naive interpretation of the data structure above masks the
1256 fact that setting VL greater than 8, for example, when the bitwidth is 8,
1257 accessing one specific register "spills over" to the following parts of
1258 the register file in a sequential fashion. So a much more accurate way
1259 to reflect this would be:
1260
1261 typedef union {
1262 uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
1263 uint8_t b[0]; // array of type uint8_t
1264 uint16_t s[0];
1265 uint32_t i[0];
1266 uint64_t l[0];
1267 uint128_t d[0];
1268 } reg_t;
1269
1270 reg_t int_regfile[128];
1271
where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries, in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if any attempt to access beyond the "real" register
bytes is made.
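
To make the overspill concrete, here is a minimal compilable C sketch
(relying on the GCC/Clang zero-length-array extension; RV64 assumed,
values hypothetical): with elwidth=16, element offset 4 of "register 5"
lands transparently in the bottom 16 bits of register 6.

    #include <stdint.h>
    #include <stdio.h>
    #include <assert.h>

    typedef union {
        uint8_t  actual_bytes[8]; // 8 for RV64
        uint8_t  b[0];
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
    } reg_t;

    reg_t int_regfile[128];

    int main(void) {
        int_regfile[5].s[4] = 0x1234;  // elwidth=16, element offset 4
        // ... is byte offset 5*8 + 4*2 = 48, i.e. the start of x6:
        assert(int_regfile[6].s[0] == 0x1234);
        printf("0x%x\n", int_regfile[6].s[0]);
        return 0;
    }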
1281
1282 Now we may modify pseudo-code an operation where all element bitwidths have
1283 been set to the same size, where this pseudo-code is otherwise identical
1284 to its "non" polymorphic versions (above):
1285
    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...
1308
So here we can see clearly: for 8-bit entries, rd, rs1 and rs2 (and the
registers sequentially following on from each of them) are "type-cast"
to 8-bit; likewise for 16-bit entries, and so on.
1312
1313 However that only covers the case where the element widths are the same.
1314 Where the element widths are different, the following algorithm applies:
1315
1316 * Analyse the bitwidth of all source operands and work out the
1317 maximum. Record this as "maxsrcbitwidth"
1318 * If any given source operand requires sign-extension or zero-extension
1319 (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
1320 sign-extension / zero-extension or whatever is specified in the standard
1321 RV specification, **change** that to sign-extending from the respective
1322 individual source operand's bitwidth from the CSR table out to
1323 "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
  source operands as specifically required for that operation, carry out the
  operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
  this may be a "null" (copy) operation, and that with FCVT, the changes
  to the source and destination bitwidths may also turn FCVT effectively
  into a copy).
* If the destination operand requires sign-extension or zero-extension,
  instead of a mandatory fixed size (typically 32-bit for arithmetic,
  for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
  etc.), overload the RV specification with the bitwidth from the
  destination register's elwidth entry.
1335 * Finally, store the (optionally) sign/zero-extended value into its
1336 destination: memory for sb/sw etc., or an offset section of the register
1337 file for an arithmetic operation.
1338
1339 In this way, polymorphic bitwidths are achieved without requiring a
1340 massive 64-way permutation of calculations **per opcode**, for example
1341 (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
1342 rd bitwidths). The pseudo-code is therefore as follows:
1343
1344 typedef union {
1345 uint8_t b;
1346 uint16_t s;
1347 uint32_t i;
1348 uint64_t l;
1349 } el_reg_t;
1350
1351 bw(elwidth):
1352 if elwidth == 0:
1353 return xlen
1354 if elwidth == 1:
1355 return xlen / 2
1356 if elwidth == 2:
1357 return xlen * 2
1358 // elwidth == 3:
1359 return 8
1360
1361 get_max_elwidth(rs1, rs2):
1362 return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
1363 bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1364
    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res
1377
1378 set_polymorphed_reg(reg, bitwidth, offset, val):
1379 if (!int_csr[reg].isvec):
1380 # sign/zero-extend depending on opcode requirements, from
1381 # the reg's bitwidth out to the full bitwidth of the regfile
1382 val = sign_or_zero_extend(val, bitwidth, xlen)
1383 int_regfile[reg].l[0] = val
1384 elif bitwidth == 8:
1385 int_regfile[reg].b[offset] = val
1386 elif bitwidth == 16:
1387 int_regfile[reg].s[offset] = val
1388 elif bitwidth == 32:
1389 int_regfile[reg].i[offset] = val
1390 elif bitwidth == 64:
1391 int_regfile[reg].l[offset] = val
1392
    maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = bw(int_csr[rd].elwidth)     # destination element width
1395  for (i = 0; i < VL; i++)
1396 if (predval & 1<<i) # predication uses intregs
1397 // TODO, calculate if over-run occurs, for each elwidth
1398 src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
1399 // TODO, sign/zero-extend src1 and src2 as operation requires
1400 if (op_requires_sign_extend_src1)
1401 src1 = sign_extend(src1, maxsrcwid)
1402 src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
1403 result = src1 + src2 # actual add here
1404 // TODO, sign/zero-extend result, as operation requires
1405 if (op_requires_sign_extend_dest)
1406 result = sign_extend(result, maxsrcwid)
1407 set_polymorphed_reg(rd, destwid, ird, result)
1408 if (!int_vec[rd].isvector) break
1409 if (int_vec[rd ].isvector)  { id += 1; }
1410 if (int_vec[rs1].isvector)  { irs1 += 1; }
1411 if (int_vec[rs2].isvector)  { irs2 += 1; }
1412
Whilst the specific sign-extension and zero-extension call details are
left out (each operation being different), the above should make it
clear that:
1416
1417 * the source operands are extended out to the maximum bitwidth of all
1418 source operands
1419 * the operation takes place at that maximum source bitwidth (the
1420 destination bitwidth is not involved at this point, at all)
1421 * the result is extended (or potentially even, truncated) before being
1422 stored in the destination. i.e. truncation (if required) to the
1423 destination width occurs **after** the operation **not** before.
1424 * when the destination is not marked as "vectorised", the **full**
1425 (standard, scalar) register file entry is taken up, i.e. the
1426 element is either sign-extended or zero-extended to cover the
1427 full register bitwidth (XLEN) if it is not already XLEN bits long.
1428
1429 Implementors are entirely free to optimise the above, particularly
1430 if it is specifically known that any given operation will complete
1431 accurately in less bits, as long as the results produced are
1432 directly equivalent and equal, for all inputs and all outputs,
1433 to those produced by the above algorithm.
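
As a minimal worked example of the above rules (one element; rs1
elwidth=8, rs2 elwidth=16, rd elwidth=8; a zero-extending add; all
values hypothetical), a C sketch:

    #include <stdint.h>
    #include <stdio.h>

    // maxsrcbitwidth = max(8, 16) = 16: operate at 16 bits, then
    // truncate to the destination elwidth (8) *after* the add.
    int main(void) {
        uint8_t  rs1_el = 0xFE;     // 8-bit source element
        uint16_t rs2_el = 0x0105;   // 16-bit source element
        uint16_t result = (uint16_t)rs1_el + rs2_el; // 0x0203
        uint8_t  rd_el  = (uint8_t)result;           // 0x03
        printf("result=0x%04x rd=0x%02x\n", result, rd_el);
        return 0;
    }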
1434
1435 ## Polymorphic floating-point operation exceptions and error-handling
1436
For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or the appropriate value) is stored if the result
is beyond the range of the destination, and, again exactly as with
scalar operations, the floating-point flags in FCSR are raised. And,
again, just as with scalar operations, it is software's responsibility
to check those flags.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.
1446
1447 Note that it is perfectly legitimate for floating-point bitwidths of
1448 only 8 to be specified. However whilst it is possible to apply IEEE 754
1449 principles, no actual standard yet exists. Implementors wishing to
1450 provide hardware-level 8-bit support rather than throw a trap to emulate
1451 in software should contact the author of this specification before
1452 proceeding.
1453
1454 ## Polymorphic shift operators
1455
1456 A special note is needed for changing the element width of left and right
1457 shift operators, particularly right-shift. Even for standard RV base,
1458 in order for correct results to be returned, the second operand RS2 must
1459 be truncated to be within the range of RS1's bitwidth. spike's implementation
1460 of sll for example is as follows:
1461
1462 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1463
1464 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1465 range 0..31 so that RS1 will only be left-shifted by the amount that
1466 is possible to fit into a 32-bit register. Whilst this appears not
1467 to matter for hardware, it matters greatly in software implementations,
1468 and it also matters where an RV64 system is set to "RV32" mode, such
1469 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1470 each.
1471
1472 For SV, where each operand's element bitwidth may be over-ridden, the
1473 rule about determining the operation's bitwidth *still applies*, being
1474 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1475 **also applies to the truncation of RS2**. In other words, *after*
1476 determining the maximum bitwidth, RS2's range must **also be truncated**
1477 to ensure a correct answer. Example:
1478
1479 * RS1 is over-ridden to a 16-bit width
1480 * RS2 is over-ridden to an 8-bit width
1481 * RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1484
1485 Pseudocode (in spike) for this example would therefore be:
1486
1487 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1488
This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key points are that:

* The operation bitwidth is determined by the maximum bitwidth
  of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate,
  as the C sketch below illustrates.
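
The worked example above, expressed as a minimal C sketch (values are
hypothetical; this is illustrative, not normative):

    #include <stdint.h>
    #include <stdio.h>

    // RS1 elwidth=16, RS2 elwidth=8, RD elwidth=64: the operation runs
    // at max(16, 8) = 16 bits, RS2 is truncated to the range 0..15,
    // and the result is then sign-extended out to the 64-bit RD.
    int main(void) {
        uint16_t rs1 = 0x8001;
        uint8_t  rs2 = 17;      // out of range for a 16-bit shift
        uint16_t shifted = (uint16_t)(rs1 << (rs2 & (16 - 1))); // by 1
        int64_t  rd = (int64_t)(int16_t)shifted; // sign-extend 16->64
        printf("shifted=0x%04x rd=%lld\n", shifted, (long long)rd);
        return 0;
    }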
1496
1497 ## Polymorphic MULH/MULHU/MULHSU
1498
1499 MULH is designed to take the top half MSBs of a multiply that
1500 does not fit within the range of the source operands, such that
1501 smaller width operations may produce a full double-width multiply
1502 in two cycles. The issue is: SV allows the source operands to
1503 have variable bitwidth.
1504
1505 Here again special attention has to be paid to the rules regarding
1506 bitwidth, which, again, are that the operation is performed at
1507 the maximum bitwidth of the **source** registers. Therefore:
1508
1509 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1510 be shifted down by 8 bits
1511 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1512 be shifted down by 16 bits (top 8 bits being zero)
1513 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1514 be shifted down by 16 bits
1515 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1516 be shifted down by 32 bits
1517 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1518 be shifted down by 32 bits
1519
1520 So again, just as with shift-left and shift-right, the result
1521 is shifted down by the maximum of the two source register bitwidths.
1522 And, exactly again, truncation or sign-extension is performed on the
1523 result. If sign-extension is to be carried out, it is performed
1524 from the same maximum of the two source register bitwidths out
1525 to the result element's bitwidth.
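
As a minimal illustrative C sketch of the first case in the list above
(an 8-bit x 8-bit signed MULH; the helper name is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    // signed 8-bit x 8-bit MULH: the full product is 16 bits wide,
    // and the result is shifted down by the maximum source bitwidth (8).
    int8_t mulh_8x8(int8_t a, int8_t b) {
        int16_t full = (int16_t)a * (int16_t)b; // full 16-bit product
        return (int8_t)(full >> 8);             // top half = MULH
    }

    int main(void) {
        printf("%d\n", mulh_8x8(-128, 2)); // -256 = 0xff00: prints -1
        return 0;
    }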
1526
1527 If truncation occurs, i.e. the top MSBs of the result are lost,
1528 this is "Officially Not Our Problem", i.e. it is assumed that the
1529 programmer actually desires the result to be truncated. i.e. if the
1530 programmer wanted all of the bits, they would have set the destination
1531 elwidth to accommodate them.
1532
1533 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1534
1535 Polymorphic element widths in vectorised form means that the data
1536 being loaded (or stored) across multiple registers needs to be treated
1537 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1538 the source register's element width is **independent** from the destination's.
1539
1540 This makes for a slightly more complex algorithm when using indirection
1541 on the "addressed" register (source for LOAD and destination for STORE),
1542 particularly given that the LOAD/STORE instruction provides important
1543 information about the width of the data to be reinterpreted.
1544
Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows (i being the loop index, from 0 to VL-1):
1547
1548 srcbase = ireg[rs+i];
1549 return mem[srcbase + imm]; // returns XLEN bits
1550
1551 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1552 chunks are taken from the source memory location addressed by the current
1553 indexed source address register, and only when a full 32-bits-worth
1554 are taken will the index be moved on to the next contiguous source
1555 address register:
1556
1557 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1558 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1559 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1560 offs = i % elsperblock; // modulo
1561 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1562
1563 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1564 and 128 for LQ.
1565
1566 The principle is basically exactly the same as if the srcbase were pointing
1567 at the memory of the *register* file: memory is re-interpreted as containing
1568 groups of elwidth-wide discrete elements.
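
A hedged C sketch of this address decomposition for a LW (operation
width 32; the function name is hypothetical, and byte offsets are used
rather than the re-cast pointers of the pseudo-code above):

    #include <stdio.h>

    // For element index i of a LW, work out which source address
    // register supplies the base, and the byte offset within the
    // 32-bit block that it points at.
    void lw_elwidth_addr(int i, int bitwidth, int *regoffs, int *byteoffs) {
        int elsperblock = 32 / bitwidth;   // 1 if 32, 2 if 16, 4 if 8
        *regoffs  = i / elsperblock;       // use ireg[rs + regoffs]
        *byteoffs = (i % elsperblock) * (bitwidth / 8);
    }

    int main(void) {
        // elwidth=8: elements 0-3 from @ireg[rs], 4-7 from @ireg[rs+1]
        for (int i = 0; i < 8; i++) {
            int r, o;
            lw_elwidth_addr(i, 8, &r, &o);
            printf("element %d: ireg[rs+%d], byte offset %d\n", i, r, o);
        }
        return 0;
    }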
1569
1570 When storing the result from a load, it's important to respect the fact
1571 that the destination register has its *own separate element width*. Thus,
1572 when each element is loaded (at the source element width), any sign-extension
1573 or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing follows the exact same algorithm as
above: in fact it is simply the (completely unchanged)
set\_polymorphed\_reg pseudocode from earlier that is used.
1577
1578 One issue remains: when the source element width is **greater** than
1579 the width of the operation, it is obvious that a single LB for example
1580 cannot possibly obtain 16-bit-wide data. This condition may be detected
1581 where, when using integer divide, elsperblock (the width of the LOAD
1582 divided by the bitwidth of the element) is zero.
1583
The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1587
The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LBU instead of LB, LWU instead of LW), before
being passed on to the second phase.
1592
1593 As LOAD/STORE may be twin-predicated, it is important to note that
1594 the rules on twin predication still apply, except where in previous
1595 pseudo-code (elwidth=default for both source and target) it was
1596 the *registers* that the predication was applied to, it is now the
1597 **elements** that the predication is applied to.
1598
1599 Thus the full pseudocode for all LD operations may be written out
1600 as follows:
1601
1602 function LBU(rd, rs):
1603 load_elwidthed(rd, rs, 8, true)
1604 function LB(rd, rs):
1605 load_elwidthed(rd, rs, 8, false)
1606 function LH(rd, rs):
1607 load_elwidthed(rd, rs, 16, false)
1608 ...
1609 ...
1610 function LQ(rd, rs):
1611 load_elwidthed(rd, rs, 128, false)
1612
    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth) # destination element width
        srcwid  = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, srcwid))
            else:
                val = sign_extend(val, min(opwidth, srcwid))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;
1639
1640 Note:
1641
1642 * when comparing against for example the twin-predicated c.mv
1643 pseudo-code, the pattern of independent incrementing of rd and rs
1644 is preserved unchanged.
1645 * just as with the c.mv pseudocode, zeroing is not included and must be
1646 taken into account (TODO).
* due to the use of a twin-predication algorithm, LOAD/STORE also
  take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
  VSCATTER characteristics.
* due to the use of the same set\_polymorphed\_reg pseudocode,
  a destination that is not vectorised (marked as scalar) will
  result in the element being fully sign-extended or zero-extended
  out to the full register file bitwidth (XLEN). When the source
  is also marked as scalar, this is how the compatibility with
  standard RV LOAD/STORE is preserved by this algorithm.
1656
1657 ### Example Tables showing LOAD elements
1658
1659 This section contains examples of vectorised LOAD operations, showing
1660 how the two stage process works (three if zero/sign-extension is included).
1661
1662
#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1664
1665 This is:
1666
1667 * a 64-bit load, with an offset of zero
1668 * with a source-address elwidth of 16-bit
1669 * into a destination-register with an elwidth of 32-bit
1670 * where VL=7
1671 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1672 * RV64, where XLEN=64 is assumed.
1673
First, the memory table: because the element width is 16 and the
operation is LD (64), the 64 bits loaded from memory are subdivided
into groups of **four** elements. And, with VL being 7 (deliberately,
to illustrate that this is both reasonable and possible), the first
four are sourced from the offset addresses pointed to by x5, and the
next three from the offset addresses pointed to by the next contiguous
register, x6:
1681
1682 [[!table data="""
1683 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1684 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1685 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1686 """]]
1687
1688 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1689 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1690
[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]
1702
1703 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1704 byte-addressable "memory". That "memory" happens to cover registers
1705 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1706
1707 [[!table data="""
1708 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1709 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1710 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1711 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1712 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1713 """]]
1714
1715 Thus we have data that is loaded from the **addresses** pointed to by
1716 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1717 x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8 (with element 1
shifted up by 32 bits), and so on, until finally element 6 ends up in the
LSBs of x11.
1721
1722 Note that whilst the memory addressing table is shown left-to-right byte order,
1723 the registers are shown in right-to-left (MSB) order. This does **not**
1724 imply that bit or byte-reversal is carried out: it's just easier to visualise
1725 memory as being contiguous bytes, and emphasises that registers are not
1726 really actually "memory" as such.
1727
1728 ## Why SV bitwidth specification is restricted to 4 entries
1729
The four entries for SV element bitwidths allow only three over-rides:
1731
1732 * default bitwidth for a given operation *divided* by two
1733 * default bitwidth for a given operation *multiplied* by two
1734 * 8-bit
1735
1736 At first glance this seems completely inadequate: for example, RV64
1737 cannot possibly operate on 16-bit operations, because 64 divided by
1738 2 is 32. However, the reader may have forgotten that it is possible,
1739 at run-time, to switch a 64-bit application into 32-bit mode, by
1740 setting UXL. Once switched, opcodes that formerly had 64-bit
1741 meanings now have 32-bit meanings, and in this way, "default/2"
1742 now reaches **16-bit** where previously it meant "32-bit".
1743
There is however an absolutely crucial aspect of SV here that explicitly
needs spelling out: whether the "vectorised" bit is set in
the register's CSR entry.
1747
If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, on a destination (RD),
sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will overwrite the **full** register entry
(64-bit if RV64).
1753
1754 When vectorised is *set*, this indicates that the operation now treats
1755 **elements** as if they were independent registers, so regardless of
1756 the length, any parts of a given actual register that are not involved
1757 in the operation are **NOT** modified, but are **PRESERVED**.
1758
SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.
1762
1763 Example:
1764
1765 * rs1, rs2 and rd are all set to 8-bit
1766 * VL is set to 3
1767 * RV64 architecture is set (UXL=64)
1768 * add operation is carried out
1769 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1770 concatenated with similar add operations on bits 15..8 and 7..0
1771 * bits 24 through 63 **remain as they originally were**.
1772
1773 Example SIMD micro-architectural implementation:
1774
1775 * SIMD architecture works out the nearest round number of elements
1776 that would fit into a full RV64 register (in this case: 8)
1777 * SIMD architecture creates a hidden predicate, binary 0b00000111
1778 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1779 * SIMD architecture goes ahead with the add operation as if it
1780 was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
  (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
  and stores them in rd.
1785
1786 This requires a read on rd, however this is required anyway in order
1787 to support non-zeroing mode.
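
A trivial C sketch of how such a hidden predicate could be derived
(purely illustrative; the helper name is hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    // Build the hidden SIMD predicate: the bottom VL bits set, the
    // remaining bits (up to the elements-per-register count) clear.
    uint32_t hidden_simd_mask(unsigned vl) {
        return (1u << vl) - 1;   // VL=3 -> 0b00000111
    }

    int main(void) {
        printf("0x%02x\n", hidden_simd_mask(3)); // prints 0x07
        return 0;
    }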
1788
1789 ## Polymorphic floating-point
1790
1791 Standard scalar RV integer operations base the register width on XLEN,
1792 which may be changed (UXL in USTATUS, and the corresponding MXL and
1793 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1794 arithmetic operations are therefore restricted to an active XLEN bits,
1795 with sign or zero extension to pad out the upper bits when XLEN has
1796 been dynamically set to less than the actual register size.
1797
1798 For scalar floating-point, the active (used / changed) bits are
1799 specified exclusively by the operation: ADD.S specifies an active
1800 32-bits, with the upper bits of the source registers needing to
1801 be all 1s ("NaN-boxed"), and the destination upper bits being
1802 *set* to all 1s (including on LOAD/STOREs).
1803
1804 Where elwidth is set to default (on any source or the destination)
1805 it is obvious that this NaN-boxing behaviour can and should be
1806 preserved. When elwidth is non-default things are less obvious,
1807 so need to be thought through. Here is a normal (scalar) sequence,
1808 assuming an RV64 which supports Quad (128-bit) FLEN:
1809
1810 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1811 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1812 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1813 top 64 MSBs ignored.
1814
1815 Therefore it makes sense to mirror this behaviour when, for example,
1816 elwidth is set to 32. Assume elwidth set to 32 on all source and
1817 destination registers:
1818
1819 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1820 floating-point numbers.
1821 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1822 in bits 0-31 and the second in bits 32-63.
1823 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1824
1825 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1826 of the registers either during the FLD **or** the ADD.D. The reason
1827 is that, effectively, the top 64 MSBs actually represent a completely
1828 independent 64-bit register, so overwriting it is not only gratuitous
1829 but may actually be harmful for a future extension to SV which may
1830 have a way to directly access those top 64 bits.
1831
The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.
1837
1838 Ultimately if elwidth is default and isvec false on *all* source
1839 and destination registers, a SimpleV instruction defaults completely
1840 to standard RV scalar behaviour (this holds true for **all** operations,
1841 right across the board).
1842
The nice thing here is that ADD.S, ADD.D and ADD.Q, when elwidth is set to
non-default values, are effectively all the same: they all still perform
multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.
1849
In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however simpler SIMD-style
microarchitectures may not have the infrastructure in place to tell
the difference, such that when VL=8 an ADD.D instruction completes in
2 cycles (or more) rather than the single cycle in which an ADD.Q
would complete on those same microarchitectures.
1860
1861 ## Specific instruction walk-throughs
1862
1863 This section covers walk-throughs of the above-outlined procedure
1864 for converting standard RISC-V scalar arithmetic operations to
1865 polymorphic widths, to ensure that it is correct.
1866
1867 ### add
1868
1869 Standard Scalar RV32/RV64 (xlen):
1870
1871 * RS1 @ xlen bits
1872 * RS2 @ xlen bits
1873 * add @ xlen bits
1874 * RD @ xlen bits
1875
1876 Polymorphic variant:
1877
1878 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
1879 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
1880 * add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic add zero-extends its source operands,
where addw sign-extends.
1885
1886 ### addw
1887
1888 The RV Specification specifically states that "W" variants of arithmetic
1889 operations always produce 32-bit signed values. In a polymorphic
1890 environment it is reasonable to assume that the signed aspect is
1891 preserved, where it is the length of the operands and the result
1892 that may be changed.
1893
1894 Standard Scalar RV64 (xlen):
1895
1896 * RS1 @ xlen bits
1897 * RS2 @ xlen bits
1898 * add @ xlen bits
1899 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
1900
1901 Polymorphic variant:
1902
1903 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
1904 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
1905 * add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate
1907
1908 Note here that polymorphic addw sign-extends its source operands,
1909 where add zero-extends.
1910
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extension will occur. It is
only where the bitwidths of rs1 and rs2 differ that the
lesser-width operand will be sign-extended.

Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
where for add they are both zero-extended. This holds true for all arithmetic
operations ending with "W", as the sketch below illustrates.
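
A single-element C sketch of the contrast (rs1 elwidth=8, rs2
elwidth=16; values hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    // add zero-extends its sources to max(8, 16) = 16 bits;
    // addw sign-extends them instead.
    int main(void) {
        uint8_t  rs1 = 0xFE;    // 8-bit source element (-2 if signed)
        uint16_t rs2 = 0x0001;  // 16-bit source element

        uint16_t add_result  = (uint16_t)rs1 + rs2;                 // 0x00FF
        int16_t  addw_result = (int16_t)(int8_t)rs1 + (int16_t)rs2; // -1

        printf("add: 0x%04x  addw: %d\n", add_result, addw_result);
        return 0;
    }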
1919
1920 ### addiw
1921
1922 Standard Scalar RV64I:
1923
* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ xlen bits: sign-extend the 32-bit result to xlen.
1928
1929 Polymorphic variant:
1930
1931 * RS1 @ rs1 bits
1932 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
1933 * add @ max(rs1, 12) bits
1934 * RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
1935
1936 # Predication Element Zeroing
1937
1938 The introduction of zeroing on traditional vector predication is usually
1939 intended as an optimisation for lane-based microarchitectures with register
1940 renaming to be able to save power by avoiding a register read on elements
1941 that are passed through en-masse through the ALU. Simpler microarchitectures
1942 do not have this issue: they simply do not pass the element through to
1943 the ALU at all, and therefore do not store it back in the destination.
1944 More complex non-lane-based micro-architectures can, when zeroing is
1945 not set, use the predication bits to simply avoid sending element-based
1946 operations to the ALUs, entirely: thus, over the long term, potentially
1947 keeping all ALUs 100% occupied even when elements are predicated out.
1948
SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(i.e. whether fewer instructions are needed in certain scenarios),
and given that a case can be made for zeroing *and* for non-zeroing,
the decision was taken to add support for both.
1955
1956 ## Single-predication (based on destination register)
1957
1958 Zeroing on predication for arithmetic operations is taken from
1959 the destination register's predicate. i.e. the predication *and*
1960 zeroing settings to be applied to the whole operation come from the
1961 CSR Predication table entry for the destination register.
1962 Thus when zeroing is set on predication of a destination element,
1963 if the predication bit is clear, then the destination element is *set*
1964 to zero (twin-predication is slightly different, and will be covered
1965 next).
1966
Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:
1969
    for (i = 0; i < VL; i++)
       if not zeroing: # an optimisation
          # skip (silently) elements where the predicate bit is clear
          while (!(predval & 1<<i) && i < VL)
             if (int_vec[rd ].isvector)  { id += 1; }
             if (int_vec[rs1].isvector)  { irs1 += 1; }
             if (int_vec[rs2].isvector)  { irs2 += 1; }
             i++
          if i == VL:
             break
       if (predval & 1<<i)
          src1 = ....
          src2 = ...
          result = src1 + src2 # actual add (or other op) here
          set_polymorphed_reg(rd, destwid, ird, result)
          if (!int_vec[rd].isvector) break
       else if zeroing:
          result = 0
          set_polymorphed_reg(rd, destwid, ird, result)
       if (int_vec[rd ].isvector)  { id += 1; }
       else if (predval & 1<<i) break;
       if (int_vec[rs1].isvector)  { irs1 += 1; }
       if (int_vec[rs2].isvector)  { irs2 += 1; }
1992
1993 The optimisation to skip elements entirely is only possible for certain
1994 micro-architectures when zeroing is not set. However for lane-based
1995 micro-architectures this optimisation may not be practical, as it
1996 implies that elements end up in different "lanes". Under these
1997 circumstances it is perfectly fine to simply have the lanes
1998 "inactive" for predicated elements, even though it results in
1999 less than 100% ALU utilisation.
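
For clarity, here is an illustrative C sketch of the zeroing semantics
alone (vector indexing reduced to plain arrays, all operands
vectorised; values hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint64_t rd[4] = {99, 99, 99, 99}; // pre-existing dest values
        uint64_t rs1[4] = {1, 2, 3, 4}, rs2[4] = {10, 20, 30, 40};
        uint8_t predval = 0x5;  // 0b0101: elements 0 and 2 active
        int zeroing = 1;

        for (int i = 0; i < 4; i++) {       // VL = 4
            if (predval & (1 << i))
                rd[i] = rs1[i] + rs2[i];
            else if (zeroing)
                rd[i] = 0;  // zeroing: predicated-out element set to 0
            // non-zeroing would leave rd[i] untouched (99)
        }
        for (int i = 0; i < 4; i++)
            printf("rd[%d] = %llu\n", i, (unsigned long long)rd[i]);
        return 0;
    }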
2000
2001 ## Twin-predication (based on source and destination register)
2002
Twin-predication is not that much different, except that
the source is zero-predicated independently of the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.
2007
When, with twin-predication, zeroing is set on the source and not
the destination, then if a source predicate bit is *clear* a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).
2014
2015 When zeroing is set on the destination and not the source, then just
2016 as with single-predicated operations, a zero is stored into the destination
2017 element (or target memory address for a STORE).
2018
Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates determining which elements
carry real data: wherever either the source predicate OR the destination
predicate bit is zero, a zero element will ultimately end up in the
destination register.
2023
2024 However: this may not necessarily be the case for all operations;
2025 implementors, particularly of custom instructions, clearly need to
2026 think through the implications in each and every case.
2027
2028 Here is pseudo-code for a twin zero-predicated operation:
2029
    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
      pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL):
        if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
        if ((pd & 1<<j))
           if ((ps & 1<<i))
              sourcedata = ireg[rs+i];
           else
              sourcedata = 0 # src zeroing: predicated-out element
           ireg[rd+j] <= sourcedata
        else if (zerodst)
           ireg[rd+j] <= 0
        if (int_csr[rs].isvec)
           i++;
        if (int_csr[rd].isvec)
           j++;
        else
           if ((pd & 1<<j))
              break;
2053
2054 Note that in the instance where the destination is a scalar, the hardware
2055 loop is ended the moment a value *or a zero* is placed into the destination
2056 register/element. Also note that, for clarity, variable element widths
2057 have been left out of the above.
2058
2059 # Exceptions
2060
2061 TODO: expand. Exceptions may occur at any time, in any given underlying
2062 scalar operation. This implies that context-switching (traps) may
2063 occur, and operation must be returned to where it left off. That in
2064 turn implies that the full state - including the current parallel
2065 element being processed - has to be saved and restored. This is
2066 what the **STATE** CSR is for.
2067
2068 The implications are that all underlying individual scalar operations
2069 "issued" by the parallelisation have to appear to be executed sequentially.
2070 The further implications are that if two or more individual element
2071 operations are underway, and one with an earlier index causes an exception,
2072 it may be necessary for the microarchitecture to **discard** or terminate
2073 operations with higher indices.
2074
This being somewhat unsatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.
2077
2078 # Hints
2079
2080 A "HINT" is an operation that has no effect on architectural state,
2081 where its use may, by agreed convention, give advance notification
2082 to the microarchitecture: branch prediction notification would be
2083 a good example. Usually HINTs are where rd=x0.
2084
2085 With Simple-V being capable of issuing *parallel* instructions where
2086 rd=x0, the space for possible HINTs is expanded considerably. VL
2087 could be used to indicate different hints. In addition, if predication
2088 is set, the predication register itself could hypothetically be passed
2089 in as a *parameter* to the HINT operation.
2090
No specific hints are yet defined in Simple-V.
2092
2093 # VLIW Format <a name="vliw-format"></a>
2094
2095 One issue with SV is the setup and teardown time of the CSRs. The cost
2096 of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
2097 therefore makes sense.
2098
2099 A suitable prefix, which fits the Expanded Instruction-Length encoding
2100 for "(80 + 16 times instruction_length)", as defined in Section 1.5
2101 of the RISC-V ISA, is as follows:
2102
2103 | 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
2104 | - | ----- | ----- | ----- | --- | ------- |
2105 | vlset | 16xil | pplen | rplen | mode | 1111111 |
2106
An optional VL Block, optional register entries, optional predicate
entries, and finally some 16/32/48-bit standard RV or SVPrefix opcodes
follow.
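
An illustrative (non-normative) C sketch of unpacking the prefix
fields from the table above:

    #include <stdint.h>
    #include <stdio.h>

    // | vlset | 16xil | pplen | rplen | mode | 1111111 |
    //    15     14:12   11:10    9:8     7      6:0
    int main(void) {
        uint16_t insn = 0x907F;              // example encoding
        if ((insn & 0x7F) != 0x7F) return 1; // bits 6:0 must be 1111111
        int mode  = (insn >> 7)  & 0x1;
        int rplen = (insn >> 8)  & 0x3;
        int pplen = (insn >> 10) & 0x3;
        int il    = (insn >> 12) & 0x7;
        int vlset = (insn >> 15) & 0x1;
        printf("vlset=%d il=%d total bits=%d mode=%d pplen=%d rplen=%d\n",
               vlset, il, 80 + 16 * il, mode, pplen, rplen);
        return 0;
    }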
2108
2109 The variable-length format from Section 1.5 of the RISC-V ISA:
2110
| base+4 ... base+2 | base | number of bits |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix | |
2115
2116 VL/MAXVL/SubVL Block:
2117
| 31:30 | 29:28 | 27:22 | 21:17 | 16 |
2119 | - | ----- | ------ | ------ | - |
2120 | 0 | SubVL | VLdest | VLEN | vlt |
2121 | 1 | SubVL | VLdest | VLEN ||
2122
If vlt is 0, VLEN is a 5-bit immediate value. If vlt is 1, it specifies
the scalar register from which VL is set by this VLIW instruction
group. VL, whether set from the register or the immediate, is then
modified (truncated) to be min(VL, MAXVL), and the result stored in the
scalar register specified in VLdest. If VLdest is zero, no store in the
regfile occurs.
2129
2130 This option will typically be used to start vectorised loops, where
2131 the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
2132 sequence (in compact form).
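
A sketch of the VL-setting semantics just described (function and
parameter names hypothetical):

    #include <stdint.h>

    // vlt=0: vlen_value is the 5-bit immediate; vlt=1: it has been read
    // from the scalar register named in VLEN.  Either way the new VL is
    // clamped to MAXVL and (if VLdest is nonzero) stored in the regfile.
    uint64_t set_vl_block(uint64_t vlen_value, uint64_t maxvl,
                          uint64_t *regs, int vldest) {
        uint64_t vl = vlen_value < maxvl ? vlen_value : maxvl; // min()
        if (vldest != 0)
            regs[vldest] = vl;
        return vl;
    }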
2133
2134 When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
2135 VLEN, which is 6 bits in length, and the same value stored in scalar
2136 register VLdest (if that register is nonzero).
2137
2138 This option will typically not be used so much for loops as it will be
2139 for one-off instructions such as saving the entire register file to the
2140 stack with a single one-off Vectorised LD/ST.
2141
2142 CSRs needed:
2143
2144 * mepcvliw
2145 * sepcvliw
2146 * uepcvliw
2147 * hepcvliw
2148
2149 Notes:
2150
* Bit 7 specifies if the predicate block format is the full 16 bit format
  (1) or the compact less expressive format (0). In the 8 bit format,
  pplen is multiplied by 2.
2154 * 8 bit format predicate numbering is implicit and begins from x9. Thus it is critical to put blocks in the correct order as required.
2155 * Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
2156 (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
2157 of entries are needed the last may be set to 0x00, indicating "unused".
2158 * Bit 15 specifies if the VL Block is present. If set to 1, the VL Block immediately follows the VLIW instruction Prefix
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1, otherwise 0 to 6) follow the (optional) VL Block.
2160 * Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1, otherwise 0 to 6) follow the (optional) RegCam entries
2161 * Bits 14 to 12 (IL) define the actual length of the instruction: total
2162 number of bits is 80 + 16 times IL. Standard RV32, RVC and also
2163 SVPrefix (P48-\*-Type) instructions fit into this space, after the
2164 (optional) VL / RegCam / PredCam entries
2165 * Anything - any registers - within the VLIW-prefixed format *MUST* have the
2166 RegCam and PredCam entries applied to it.
2167 * At the end of the VLIW Group, the RegCam and PredCam entries
2168 *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
2169 the values set by the last instruction (whether a CSRRW or the VL
2170 Block header).
2171 * Although an inefficient use of resources, it is fine to set the MAXVL, VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
2172
All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four 32-bit
opcodes: the CSRRW itself, plus a LI / LUI pair to set up the value in
the RS register of the CSRRW. To get 64-bit data into the register in
order to put it into the CSR(s), LOAD operations from memory are needed!
2179
Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially six to eight 32-bit instructions, just to
establish the Vector State!
2183
2184 Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more bits if
2185 VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
2186 and VL need to be set.
2187
By contrast, the VLIW prefix is only 16 bits, the VL/MAXVL/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16-bit block formats is not
needed, more space is saved by using the 8-bit formats.
2193
2194 In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
2195 a VLIW format makes a lot of sense.
2196
2197 Open Questions:
2198
2199 * Is it necessary to stick to the RISC-V 1.5 format? Why not go with
2200 using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
2201 limit to 256 bits (16 times 0-11).
2202 * Could a "hint" be used to set which operations are parallel and which
2203 are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would then be no need for byte or bit-alignment.
2207 * Could a hardware compression algorithm be deployed? Quite likely,
2208 because of the sub-execution context (sub-VLIW PC)
2209
## Limitations on instructions
2211
2212 To greatly simplify implementations, it is required to treat the VLIW
2213 group as a separate sub-program with its own separate PC. The sub-pc
2214 advances separately whilst the main PC remains pointing at the beginning
2215 of the VLIW instruction (not to be confused with how VL works, which
2216 is exactly the same principle, except it is VStart in the STATE CSR
2217 that increments).
2218
This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.
2223
The VStart indices in the STATE CSR may be similarly regarded as another
sub-execution context, giving in effect two sets of nested sub-levels
of the RISC-V Program Counter.
2227
In addition, as xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within (relative to) the block,
i.e. addressing is now restricted to the block's (very short) length.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.
2234
2235 A normal jump and a normal function call may only be taken by letting
2236 the VLIW end, returning to "normal" standard RV mode, using RVC, 32 bit
2237 or P48-*-type opcodes.
2238
2239 ## Links
2240
2241 * <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>
2242
2243 # Subsets of RV functionality
2244
2245 This section describes the differences when SV is implemented on top of
2246 different subsets of RV.
2247
2248 ## Common options
2249
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However, going
below the mandatory limits set in the RV standard will result in
non-compliance with the SV Specification.
2254
2255 ## RV32 / RV32F
2256
2257 When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
2258 maximum limit for predication is also restricted to 32 bits. Whilst not
2259 actually specifically an "option" it is worth noting.
2260
2261 ## RV32G
2262
Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing from standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.
2267
2268 In an earlier draft of SV, it was possible to specify an elwidth
2269 of double the standard register size: this had to be dropped,
2270 and may be reintroduced in future revisions.
2271
2272 ## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
2273
2274 When floating-point is not implemented, the size of the User Register and
2275 Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
2276 per table).
2277
2278 ## RV32E
2279
2280 In embedded scenarios the User Register and Predication CSRs may be
2281 dropped entirely, or optionally limited to 1 CSR, such that the combined
2282 number of entries from the M-Mode CSR Register table plus U-Mode
2283 CSR Register table is either 4 16-bit entries or (if the U-Mode is
2284 zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
2285 the Predication CSR tables.
2286
2287 RV32E is the most likely candidate for simply detecting that registers
2288 are marked as "vectorised", and generating an appropriate exception
2289 for the VL loop to be implemented in software.
2290
2291 ## RV128
2292
2293 RV128 has not been especially considered, here, however it has some
2294 extremely large possibilities: double the element width implies
2295 256-bit operands, spanning 2 128-bit registers each, and predication
2296 of total length 128 bit given that XLEN is now 128.
2297
2298 # Under consideration <a name="issues"></a>
2299
For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the recommendation is:
2302
2303 * For the unused elements in an integer register, the used element
2304 closest to the MSB is sign-extended on write and the unused elements
2305 are ignored on read.
2306 * The unused elements in a floating-point register are treated as-if
2307 they are set to all ones on write and are ignored on read, matching the
2308 existing standard for storing smaller FP values in larger registers.
2309
2310 ---
2311
2312 info register,
2313
2314 > One solution is to just not support LR/SC wider than a fixed
2315 > implementation-dependent size, which must be at least 
2316 >1 XLEN word, which can be read from a read-only CSR
2317 > that can also be used for info like the kind and width of 
2318 > hw parallelism supported (128-bit SIMD, minimal virtual 
2319 > parallelism, etc.) and other things (like maybe the number 
2320 > of registers supported). 
2321
2322 > That CSR would have to have a flag to make a read trap so
2323 > a hypervisor can simulate different values.
2324
---
2326
2327 > And what about instructions like JALR? 
2328
2329 answer: they're not vectorised, so not a problem
2330
---
2332
2333 * if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
2334 XLEN if elwidth==default
2335 * if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
2336 *32* if elwidth == default
2337
2338 ---
2339
2340 TODO: update elwidth to be default / 8 / 16 / 32
2341
2342 ---
2343
2344 TODO: document different lengths for INT / FP regfiles, and provide
2345 as part of info register. 00=32, 01=64, 10=128, 11=reserved.
2346
2347 ---
2348
2349 push/pop of vector config state:
2350 <https://groups.google.com/d/msg/comp.arch/bGBeaNjAKvc/z2d_cST7AgAJ>
2351
2352 when Bank in CFG is altered, shift the "addressing" of Reg and
2353 Pred CSRs to match. i.e. treat the Reg and Pred CSRs as a
2354 "mini stack".