# Simple-V (Parallelism Extension Proposal) Specification

* Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
* Status: DRAFTv0.6
* Last edited: 21 Jun 2019
* Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]

With thanks to:

* Allen Baum
* Bruce Hoult
* comp.arch
* Jacob Bachmeyer
* Guy Lemurieux
* Jacob Lifshay
* Terje Mathisen
* The RISC-V Founders, without whom this all would not be possible.

[[!toc ]]

# Summary and Background: Rationale

Simple-V is a uniform parallelism API for RISC-V hardware that has several
unplanned side-effects including code-size reduction, expansion of
HINT space and more. The reason for creating it is to provide a
manageable way to turn a pre-existing design into a parallel one,
in a step-by-step incremental fashion, without adding any new opcodes,
thus allowing the implementor to focus on adding hardware where it is
needed and necessary. The primary target is mobile-class 3D GPUs and
VPUs, with secondary goals being to reduce executable size and reduce
context-switch latency.

Critically: **No new instructions are added**. The parallelism (if any
is implemented) is implicitly added by tagging *standard* scalar registers
for redirection. When such a tagged register is used in any instruction,
it indicates that the PC shall **not** be incremented; instead a loop
is activated where *multiple* instructions are issued to the pipeline
(as determined by a length CSR), with contiguously incrementing register
numbers starting from the tagged register. When the last "element"
has been reached, only then is the PC permitted to move on. Thus
Simple-V effectively sits (slots) *in between* the instruction decode phase
and the ALU(s).

The barrier to entry with SV is therefore very low. The minimum
compliant implementation is software-emulation (traps), requiring
only the CSRs and CSR tables, and that an exception be thrown if an
instruction's registers are detected to have been tagged. The looping
that would otherwise be done in hardware is thus carried out in
software, instead. Whilst much slower, it is "compliant" with the SV
specification, and may be suited for implementation in RV32E and also
in situations where the implementor wishes to focus on certain aspects
of SV, without investing unnecessary time and resources into the
silicon, whilst also conforming strictly with the API. A good area to
punt to software would be the polymorphic element width capability,
for example.

Hardware Parallelism, if any, is therefore added at the implementor's
discretion to turn what would otherwise be a sequential loop into a
parallel one.

To emphasise that clearly: Simple-V (SV) is *not*:

* A SIMD system
* A SIMT system
* A Vectorisation Microarchitecture
* A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
* A supercomputer extension

SV does **not** tell implementors how or even if they should implement
parallelism: it is a hardware "API" (Application Programming Interface)
that, if implemented, presents a uniform and consistent way to *express*
parallelism, at the same time leaving the choice of if, how, how much,
when and whether to parallelise operations **entirely to the implementor**.

# Basic Operation

The principle of SV is as follows:

* Standard RV instructions are "prefixed" (extended) through a 48/64
  bit format (single instruction option) or a variable-length
  VLIW-like prefix (multi or "grouped" option).
* The prefix(es) indicate which registers are "tagged" as
  "vectorised". Predicates can also be added.
* A "Vector Length" CSR is set, indicating the span of any future
  "parallel" operations.
* If any operation (a **scalar** standard RV opcode) uses a register
  that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
  is activated, of length VL, that effectively issues **multiple**
  identical instructions using contiguous sequentially-incrementing
  register numbers, based on the "tags".
* **Whether they be executed sequentially or in parallel or a
  mixture of both or punted to software-emulation in a trap handler
  is entirely up to the implementor**.

In this way an entire scalar algorithm may be vectorised with
the minimum of modification to the hardware and to compiler toolchains.

To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
on hidden context that augments *scalar* RISC-V instructions.
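
The macro-unrolling behaviour described above can be sketched in software.
The following is a minimal illustrative model only, not part of the
specification: the names `regfile`, `tagged`, `VL` and `sv_add` are all
invented for this example.

```python
# Illustrative sketch of the SV "macro-unrolling loop" (invented names).
regfile = [0] * 32
tagged = set()        # registers marked "vectorised" by the prefix
VL = 4                # models the Vector Length CSR

def sv_add(rd, rs1, rs2):
    """A *scalar* ADD opcode: if any operand register is tagged,
    the hardware loop issues VL operations on contiguous registers."""
    if tagged & {rd, rs1, rs2}:
        for i in range(VL):       # macro-unrolled hardware loop
            regfile[rd + i] = regfile[rs1 + i] + regfile[rs2 + i]
    else:
        regfile[rd] = regfile[rs1] + regfile[rs2]

# mark x8 as a vector tag: a scalar ADD on x8 now covers x8..x11
tagged.add(8)
regfile[16:20] = [1, 2, 3, 4]
regfile[24:28] = [10, 20, 30, 40]
sv_add(8, 16, 24)
print(regfile[8:12])   # [11, 22, 33, 44]
```

Note that the *same* `sv_add` call, with no registers tagged, performs a
single scalar addition: the opcode itself never changes.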

# CSRs <a name="csrs"></a>

* An optional "reshaping" CSR key-value table which remaps from a 1D
  linear shape to 2D or 3D, including full transposition.

There are also five additional User-mode CSRs:

* uMVL (the Maximum Vector Length)
* uVL (which has different characteristics from standard CSRs)
* uSUBVL (effectively a kind of SIMD)
* uEPCVLIW (a copy of the sub-execution Program Counter, that is relative
  to the start of the current VLIW Group, set on a trap).
* uSTATE (useful for saving and restoring during context switch,
  and for providing fast transitions)

There are also five additional CSRs for Supervisor-Mode:

* SMVL
* SVL
* SSUBVL
* SEPCVLIW
* SSTATE

And likewise for M-Mode:

* MMVL
* MVL
* MSUBVL
* MEPCVLIW
* MSTATE

Both Supervisor and M-Mode have their own CSR registers, independent
of the other privilege levels, in order to make it easier to use
Vectorisation in each level without affecting other privilege levels.

The access pattern for these groups of CSRs in each mode follows the
same pattern as for other CSRs that have M-Mode and S-Mode "mirrors":

* In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
* In S-Mode, accessing and changing of the M-Mode CSRs is transparently
  identical to changing the S-Mode CSRs. Accessing and changing the
  U-Mode CSRs is permitted.
* In U-Mode, accessing and changing of the M-Mode and S-Mode CSRs
  is prohibited.

In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
M-Mode MVL, the M-Mode STATE and so on that influence the processor
behaviour. Likewise for S-Mode, and likewise for U-Mode.

This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
up, for context-switching to take place, and, on return back to the higher
privileged mode, the CSRs of that mode will be exactly as they were.
Thus, it becomes possible for example to set up CSRs suited best to aiding
and assisting low-latency fast context-switching *once and only once*
(for example at boot time), without the need for re-initialising the
CSRs needed to do so.

Another interesting side effect of separate S-Mode CSRs is that Vectorised
saving of the entire register file to the stack is a single instruction
(accidental provision of LOAD-MULTI semantics). It can even be predicated,
which opens up some very interesting possibilities.

The xEPCVLIW CSRs must be treated exactly like their corresponding xepc
equivalents. See VLIW section for details.

## MAXVECTORLENGTH (MVL) <a name="mvl" />

MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
is variable length and may be dynamically set. MVL is
however limited to the regfile bitwidth XLEN (1-32 for RV32,
1-64 for RV64 and so on).

The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register as opposed to fanning out
over several registers. This keeps the implementation a little simpler.

The other important factor to note is that the actual MVL is internally
stored **offset by one**, so that it can fit into only 6 bits (for RV64)
and still cover a range up to XLEN bits. Attempts to set MVL to zero will
raise an exception. This is expressed more clearly in the "pseudocode"
section, where there are subtle differences between CSRRW and CSRRWI.
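
The offset-by-one storage can be illustrated directly. The following is a
sketch only: `encode_mvl` and `decode_mvl` are invented names modelling a
6-bit field on RV64.

```python
XLEN = 64

def encode_mvl(mvl):
    # MVL in 1..XLEN is stored offset by one so it fits in 6 bits (RV64);
    # a requested value of zero raises an exception, per the spec text
    if mvl == 0 or mvl > XLEN:
        raise ValueError("illegal MVL")
    return mvl - 1                        # 1..64 -> 0..63

def decode_mvl(field):
    return field + 1

assert encode_mvl(1) == 0
assert encode_mvl(64) == 0b111111         # full range still fits in 6 bits
assert decode_mvl(encode_mvl(17)) == 17
```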

## Vector Length (VL) <a name="vl" />

VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)

    VL = rd = MIN(vlen, MVL)

where 1 <= MVL <= XLEN

However just like MVL it is important to note that the range for VL has
subtle design implications, covered in the "CSR pseudocode" section.

The fixed (specific) setting of VL allows vector LOAD/STORE to be used
to switch the entire bank of registers using a single instruction (see
Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
is down to the fact that predication bits fit into a single register of
length XLEN bits.

The second change is that when VSETVL is requested to be stored
into x0, it is *ignored* silently (VSETVL x0, x5).

The third and most important change is that, within the limits set by
MVL, the value passed in **must** be set in VL (and in the
destination register).

This has implications for the microarchitecture, as VL is required to be
set (limits from MVL notwithstanding) to the actual value
requested. RVV has the option to set VL to an arbitrary value that suits
the conditions and the micro-architecture: SV does *not* permit this.

The reason is so that if SV is to be used for a context-switch or as a
substitute for LOAD/STORE-Multiple, the operation can be done with only
2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
single LD/ST operation). If VL does *not* get set to the register file
length when VSETVL is called, then a software loop would be needed.
To avoid this need, VL *must* be set to exactly what is requested
(limits notwithstanding).

Therefore, in turn, unlike RVV, implementors *must* provide
pseudo-parallelism (using sequential loops in hardware) if actual
hardware-parallelism in the ALUs is not deployed. A hybrid is also
permitted (as used in Broadcom's VideoCore-IV) however this must be
*entirely* transparent to the ISA.

The fourth change is that VSETVL is implemented as a CSR, where the
behaviour of CSRRW (and CSRRWI) must be changed to specifically store
the *new* value in the destination register, **not** the old value.
Where context-load/save is to be implemented in the usual fashion
by using a single CSRRW instruction to obtain the old value, the
*secondary* CSR must be used (SVSTATE). This CSR behaves
exactly as standard CSRs, and contains more than just VL.

One interesting side-effect of using CSRRWI to set VL is that this
may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).

Note that when VL is set to 1, all parallel operations cease: the
hardware loop is reduced to a single element: scalar operations.
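
The VSETVL rules above (VL clamped to MVL, the *new* value written to rd,
an x0 destination silently ignored) can be modelled as follows. This is a
sketch under invented names (`vsetvl`, `regs`), not a definitive
implementation.

```python
MVL = 8
VL = 1
regs = [0] * 32

def vsetvl(rd, requested):
    """SV semantics: VL is set to exactly min(requested, MVL),
    and the *new* VL (not the old CSR value) lands in rd."""
    global VL
    VL = min(requested, MVL)
    if rd != 0:               # store into x0 is silently ignored
        regs[rd] = VL
    return VL

assert vsetvl(5, 3) == 3 and regs[5] == 3     # within MVL: exact value
assert vsetvl(5, 200) == 8 and regs[5] == 8   # clamped to MVL
vsetvl(0, 4); assert regs[0] == 0             # x0 untouched
```

The second assertion is the key SV-specific rule: the clamped value is
deterministic, so a context-switch routine can rely on it without a
software loop.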

## SUBVL - Sub Vector Length

This is a "group by quantity" that effectively asks each iteration
of the hardware loop to load SUBVL elements of width elwidth at a
time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1
operation issued, SUBVL operations are issued.

Another way to view SUBVL is that each element in the VL length vector is
now SUBVL times elwidth bits in length.

The primary use case for SUBVL is for 3D FP Vectors. A Vector of 3D
coordinates X,Y,Z for example may be loaded and multiplied then stored,
per VL element iteration, rather than having to set VL to three times
larger.

Legal values are 1, 2, 3 and 4, and the STATE CSR must hold the 2-bit
values 0b00 through 0b11.

Setting this CSR to 0 must raise an exception. Setting it to a value
greater than 4 likewise.

The main effect of SUBVL is that predication bits are applied per **group**,
rather than by individual element.

This saves a not insignificant number of instructions when handling 3D
vectors, as otherwise a much longer predicate mask would have to be set
up with regularly-repeated bit patterns.

See SUBVL Pseudocode illustration for details.
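
The per-group application of predicate bits can be sketched as follows
(an illustrative model only; `src`, `dest` and `pred` are invented for
the example):

```python
# With SUBVL, one predicate bit covers a whole SUBVL-sized group.
VL, SUBVL = 3, 3                  # three 3D vectors (x, y, z)
src = list(range(1, 10))          # 9 elements = VL * SUBVL
dest = [0] * 9
pred = 0b101                      # one bit per *group*, not per element

for i in range(VL):
    if pred & (1 << i):           # predicate tested once per group
        for j in range(SUBVL):
            dest[i * SUBVL + j] = src[i * SUBVL + j]

print(dest)   # [1, 2, 3, 0, 0, 0, 7, 8, 9]
```

Without SUBVL, the same effect would need a 9-bit predicate with the
pattern 0b111000111 set up by hand: exactly the regularly-repeated bit
patterns the text above describes.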

## STATE

This is a standard CSR that contains sufficient information for a
full context save/restore. It contains (and permits setting of):

* MVL
* VL
* SUBVL
* the destination element offset of the current parallel instruction
  being executed
* and, for twin-predication, the source element offset as well.

Interestingly STATE may hypothetically also be used to make the
immediately-following instruction skip a certain number of elements,
by playing with destoffs and srcoffs.

Setting destoffs and srcoffs is realistically intended for saving state
so that exceptions (page faults in particular) may be serviced and the
hardware loop that was being executed at the time of the trap, from
user-mode (or Supervisor-mode), may be returned to and continued from
exactly where it left off. The reason why this works is because
User-Mode STATE will not change (not be used) in M-Mode or S-Mode
(and is entirely why M-Mode and S-Mode have their own STATE CSRs).

The format of the STATE CSR is as follows:

| (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
| -------- | -------- | -------- | -------- | ------- | ------- |
| rsvd     | subvl    | destoffs | srcoffs  | vl      | maxvl   |

When setting this CSR, the following characteristics will be enforced:

* **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
* **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 values; however
  if VL is not a multiple of SUBVL an exception will be raised.
* **srcoffs** will be truncated to be within the range 0 to VL-1
* **destoffs** will be truncated to be within the range 0 to VL-1
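
The layout in the table above can be packed and unpacked as follows. This
is a sketch: `pack_state`/`unpack_state` are invented names, MAXVL, VL and
SUBVL are assumed stored minus-one (consistent with the offset-by-one
encodings described elsewhere in this document), and truncation/enforcement
is omitted for brevity.

```python
def pack_state(maxvl, vl, srcoffs, destoffs, subvl):
    # MAXVL, VL and SUBVL are stored minus-one to maximise the bitspace
    return ((maxvl - 1)
            | (vl - 1) << 6
            | srcoffs << 12
            | destoffs << 18
            | (subvl - 1) << 24)

def unpack_state(state):
    return {"maxvl":    (state       & 0x3f) + 1,
            "vl":       (state >> 6  & 0x3f) + 1,
            "srcoffs":   state >> 12 & 0x3f,
            "destoffs":  state >> 18 & 0x3f,
            "subvl":    (state >> 24 & 0x7)  + 1}

s = pack_state(maxvl=16, vl=8, srcoffs=2, destoffs=3, subvl=4)
assert unpack_state(s) == {"maxvl": 16, "vl": 8,
                           "srcoffs": 2, "destoffs": 3, "subvl": 4}
```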

## MVL and VL Pseudocode

The pseudo-code for get and set of VL and MVL uses the following internal
functions:

    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes, returning the new value, NOT the old CSR
        return VL

    get_vl_csr(rd):
        regs[rd] = VL
        return VL

Note that where setting MVL behaves as a normal CSR (returns the old
value), unlike standard CSR behaviour, setting VL will return the **new**
value of VL **not** the old one.

For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
maximise the effectiveness, an immediate of 0 is used to set VL=1,
an immediate of 1 is used to set VL=2 and so on:

    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)

However for CSRRW the following pseudocode is used for MVL and VL,
where setting the value to zero will cause an exception to be raised.
The reason is that if VL or MVL are set to zero, the STATE CSR is
not capable of returning that value.

    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_vl_csr(value, rd)

In this way, when CSRRW is utilised with a loop variable, the value
that goes into VL (and into the destination register) may be used
in an instruction-minimal fashion:

    CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
    CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
    CSRRWI MVL, 3          # sets MVL == **4** (not 3)
    j zerotest             # in case loop counter a0 already 0
    loop:
    CSRRW VL, t0, a0       # vl = t0 = min(mvl, a0)
    ld a3, a1              # load 4 registers a3-6 from x
    slli t1, t0, 3         # t1 = vl * 8 (in bytes)
    ld a7, a2              # load 4 registers a7-10 from y
    add a1, a1, t1         # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7  # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0         # n -= vl (t0)
    st a7, a2              # store 4 registers a7-10 to y
    add a2, a2, t1         # increment pointer to y by vl*8
    zerotest:
    bnez a0, loop          # repeat if n != 0
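
The control flow of the loop above (a strip-mined DAXPY) can be modelled
in software. This sketch captures only the loop structure, not the vector
register semantics; the function name and MVL value are chosen for the
example.

```python
def daxpy(n, a, x, y):
    # strip-mined loop: each pass covers vl = min(MVL, remaining)
    # elements, mirroring "CSRRW VL, t0, a0" in the assembly above
    MVL = 4
    i = 0
    while n > 0:                 # "bnez a0, loop"
        vl = min(MVL, n)         # t0 = min(mvl, a0)
        for j in range(vl):      # one vectorised ld/fmadd/st group
            y[i + j] += a * x[i + j]
        i += vl                  # pointers advance by vl*8 bytes
        n -= vl                  # "sub a0, a0, t0"
    return y

assert daxpy(6, 2.0, [1, 2, 3, 4, 5, 6], [0.0] * 6) == [2, 4, 6, 8, 10, 12]
```

Because VL is guaranteed to be exactly min(MVL, n), the final (partial)
pass needs no special-case code: the same loop body handles it.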

With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):

    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)
        MVL = set_mvl_csr(value[5:0]+1)
        VL = set_vl_csr(value[11:6]+1)
        srcoffs = value[17:12]
        destoffs = value[23:18]

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18
        return regs[rd]

In both cases, whilst CSR read of VL and MVL return the exact values
of VL and MVL respectively, reading and writing the STATE CSR returns
those values **minus one**. This is absolutely critical to implement
if the STATE CSR is to be used for fast context-switching.

## Register key-value (CAM) table <a name="regcsrtable" />

*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VLIW instruction format,
and entries may be overridden by the SVPrefix format*

The purpose of the Register table is four-fold:

* To mark integer and floating-point registers as requiring "redirection"
  if ever used as a source or destination in any given operation.
  This involves a level of indirection through a 5-to-7-bit lookup table,
  such that **unmodified** operands with 5 bits (3 for Compressed) may
  access up to **128** registers.
* To indicate whether, after redirection through the lookup table, the
  register is a vector (or remains a scalar).
* To over-ride the implicit or explicit bitwidth that the operation would
  normally give the register.

16 bit format:

| RegCAM | | 15      | (14..8)  | 7   | (6..5) | (4..0) |
| ------ | | -       | -        | -   | ------ | ------ |
| 0      | | isvec0  | regidx0  | i/f | vew0   | regkey |
| 1      | | isvec1  | regidx1  | i/f | vew1   | regkey |
| ..     | | isvec.. | regidx.. | i/f | vew..  | regkey |
| 15     | | isvec15 | regidx15 | i/f | vew15  | regkey |

8 bit format:

| RegCAM | | 7   | (6..5) | (4..0) |
| ------ | | -   | ------ | ------ |
| 0      | | i/f | vew0   | regnum |

i/f is set to "1" to indicate that the redirection/tag entry is to be applied
to integer registers; 0 indicates that it is relevant to floating-point
registers.

The 8 bit format is used for a much more compact expression. "isvec"
is implicit and, similar to [[sv-prefix-proposal]], the target vector
is "regnum<<2", implicitly. Contrast this with the 16-bit format where
the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
optionally set "scalar" mode.

Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
get the actual (7 bit) register number to use, there is not enough space
in the 8 bit format so "regnum<<2" is required.

vew has the following meanings, indicating that the instruction's
operand size is "over-ridden" in a polymorphic fashion:

| vew | bitwidth            |
| --- | ------------------- |
| 00  | default (XLEN/FLEN) |
| 01  | 8 bit               |
| 10  | 16 bit              |
| 11  | 32 bit              |

As the above table is a CAM (key-value store) it may be appropriate
(faster, implementation-wise) to expand it as follows:

    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CSRs?
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey           // INT/FP src/dst reg in opcode
        tb[idx].elwidth = CSRvec[i].elwidth
        tb[idx].regidx = CSRvec[i].regidx     // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed = CSRvec[i].packed     // SIMD or not
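
The 5-to-7-bit redirection that this expanded table enables can be
sketched as follows. The `Entry` class and `redirect` function are
invented for illustration and model only the integer side.

```python
# Sketch of operand redirection through the (expanded) register table.
class Entry:
    def __init__(self, regidx=None, isvector=False, elwidth=0):
        self.regidx = regidx      # 7-bit real register (None = untagged)
        self.isvector = isvector
        self.elwidth = elwidth

int_vec = [Entry() for _ in range(32)]

def redirect(opcode_reg):
    """5-bit register number in the opcode -> actual 7-bit register."""
    e = int_vec[opcode_reg]
    if e.regidx is None:
        return opcode_reg, False      # unmarked: plain scalar access
    return e.regidx, e.isvector       # redirected, possibly a vector

int_vec[3] = Entry(regidx=96, isvector=True)   # tag x3 -> real reg 96
assert redirect(3) == (96, True)    # tagged: reaches the extended file
assert redirect(4) == (4, False)    # untagged: behaves as standard RV
```

The key point the sketch shows: an **unmodified** 5-bit operand field can
reach registers (here, 96) far beyond the standard 32-entry file.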

## Predication Table <a name="predication_csr_table"></a>

*NOTE: in prior versions of SV, this table used to be writable and
accessible via CSRs. It is now stored in the VLIW instruction format,
and entries may be overridden by the SVPrefix format*

The Predication Table is a key-value store indicating whether, if a
given destination register (integer or floating-point) is referred to
in an instruction, it is to be predicated. Like the Register table, it
is an indirect lookup that allows the RV opcodes to not need modification.

It is particularly important to note that the *actual* register used
can be *different* from the one that is in the instruction, due to the
redirection through the lookup table.

* regidx is the register that, in combination with the i/f flag,
  if that integer or floating-point register is referred to in a
  (standard RV) instruction, results in the lookup table being
  referenced to find the predication mask to use for this operation.
* predidx is the *actual* (full, 7 bit) register to be used for the
  predication mask.
* inv indicates that the predication mask bits are to be inverted
  prior to use *without* actually modifying the contents of the
  register from which those bits originated.
* zeroing is either 1 or 0, and if set to 1, the operation must
  place zeros in any element position where the predication mask is
  set to zero. If zeroing is set to 0, unpredicated elements *must*
  be left alone. Some microarchitectures may choose to interpret
  this as skipping the operation entirely. Others which wish to
  stick more closely to a SIMD architecture may choose instead to
  interpret unpredicated elements as an internal "copy element"
  operation (which would be necessary in SIMD microarchitectures
  that perform register-renaming).

16 bit format:

| PrCSR | (15..11) | 10     | 9     | 8   | (7..1)  | 0       |
| ----- | -        | -      | -     | -   | ------- | ------- |
| 0     | predkey  | zero0  | inv0  | i/f | regidx  | rsvd    |
| 1     | predkey  | zero1  | inv1  | i/f | regidx  | rsvd    |
| ...   | predkey  | .....  | ....  | i/f | ....... | ....... |
| 15    | predkey  | zero15 | inv15 | i/f | regidx  | rsvd    |

8 bit format:

| PrCSR | 7     | 6    | 5   | (4..0) |
| ----- | -     | -    | -   | ------ |
| 0     | zero0 | inv0 | i/f | regnum |

The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9. The
regnum is still used to "activate" predication, in the same fashion as
described above.

The 16 bit Predication CSR Table is a key-value store, so implementation-wise
it will be faster to turn the table around (maintain topologically
equivalent state):

    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero = CSRpred[i].zero
        tb[idx].inv = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true

So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

This instead becomes an *indirect* reference using the *internal* state
table generated from the Predication CSR key-value store, which is used
as follows.

    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0

Note:

* d, s1 and s2 are booleans indicating whether destination,
  source1 and source2 are vector or scalar
* key-value CSR-redirection of rd, rs1 and rs2 has NOT been included
  above, for clarity. rd, rs1 and rs2 must ALSO go through
  register-level redirection (from the Register table) if they are
  vectors.

If written as a function, obtaining the predication mask (and whether
zeroing takes place) may be done as follows:

    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False          // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False          // all enabled; no zeroing
        predidx = tb[reg].predidx       // redirection occurs HERE
        predicate = intreg[predidx]     // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate      // invert ALL bits
        return predicate, tb[reg].zero

Note here, critically, that **only** if the register is marked
in its **register** table entry as being "active" does the testing
proceed further to check if the **predicate** table entry is
also active.

Note also that this is in direct contrast to branch operations
for the storage of comparisons: in these specific circumstances
the requirement for there to be an active *register* entry
is removed.
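
The function above can be made directly runnable. This is a sketch of the
integer side only, with invented flat tables (`int_reg_enabled`,
`int_pred`, `intregs`) standing in for the internal state; the two-stage
check (register table first, *then* predicate table) is preserved.

```python
ALL_ONES = ~0   # Python ints: effectively an unbounded mask of 1s

# invented stand-ins for the internal tables described above
int_reg_enabled = [False] * 32    # register-table "active" flags
int_pred = [dict(enabled=False, predidx=0, inv=False, zero=False)
            for _ in range(32)]
intregs = [0] * 32

def get_pred_val(reg):
    # only an "active" *register* table entry proceeds to the
    # *predicate* table lookup
    if not int_reg_enabled[reg]:
        return ALL_ONES, False        # all enabled; no zeroing
    p = int_pred[reg]
    if not p["enabled"]:
        return ALL_ONES, False        # all enabled; no zeroing
    predicate = intregs[p["predidx"]] # redirection occurs HERE
    if p["inv"]:
        predicate = ~predicate        # invert ALL bits
    return predicate, p["zero"]

int_reg_enabled[5] = True
int_pred[5] = dict(enabled=True, predidx=9, inv=False, zero=True)
intregs[9] = 0b0101                   # x9 holds the actual mask
assert get_pred_val(5) == (0b0101, True)
assert get_pred_val(6) == (ALL_ONES, False)   # untagged: all enabled
```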

## REMAP CSR <a name="remap" />

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There is one 32-bit CSR which may be used to indicate which registers,
if used in any operation, must be "reshaped" (re-mapped) from a linear
form to a 2D or 3D transposed form, or "offset" to permit arbitrary
access to elements within a register.

The 32-bit REMAP CSR may reshape up to 3 registers:

| 29..28 | 27..26 | 25..24 | 23 | 22..16  | 15 | 14..8   | 7  | 6..0    |
| ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
| shape2 | shape1 | shape0 | 0  | regidx2 | 0  | regidx1 | 0  | regidx0 |

regidx0-2 refer not to the Register CSR CAM entry but to the underlying
*real* register (see regidx, the value) and consequently are 7 bits wide.
When set to zero (referring to x0), clearly reshaping x0 is pointless,
so a value of zero is used to indicate "disabled".
shape0-2 refer to one of three SHAPE CSRs. A value of 0x3 is reserved.
Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.

It is anticipated that these specialist CSRs will not be very often used.
Unlike the CSR Register and Predication tables, the REMAP CSRs use
the full 7-bit regidx so that they can be set once and left alone,
whilst the CSR Register entries pointing to them are disabled, instead.

## SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
which have the same format. When each SHAPE CSR is set entirely to zeros,
remapping is disabled: the register's elements are a linear (1D) vector.

| 26..24  | 23      | 22..16 | 15      | 14..8  | 7       | 6..0   |
| ------- | --      | ------ | --      | ------ | --      | ------ |
| permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |

offs is a 3-bit field, spread out across bits 7, 15 and 23, which
is added to the element index during the loop calculation.

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
that the array dimensionality for that dimension is 1. A value of xdimsz=2
would indicate that in the first dimension there are 3 elements in the
array. The format of the array is therefore as follows:

    array[xdim+1][ydim+1][zdim+1]

However whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5, with values of 6 and 7 being reserved, and not legal). The table
below shows how the permutation dimensionality order works:

| permute | order | array format             |
| ------- | ----- | ------------------------ |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a python program:

    # mapidx = REMAP.shape2
    xdim = 3 # SHAPE[mapidx].xdim_sz+1
    ydim = 4 # SHAPE[mapidx].ydim_sz+1
    zdim = 5 # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0]  # starting indices
    order = [1,0,2] # experiment with different permutations, here
    offs = 0        # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0

Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document where a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index, instead. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:

    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }

By changing remappings, 2D matrices may be transposed "in-place" for one
operation, followed by setting a different permutation order without
having to move the values in the registers to or from memory. Also,
the reason for having REMAP separate from the three SHAPE CSRs is so
that in a chain of matrix multiplications and additions, for example,
the SHAPE CSRs need only be set up once; only the REMAP CSR need be
changed to target different registers.
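
As a concrete illustration of in-place transposition, the index algorithm
above can be collected into a function (the name `remap_indices` is
invented) and run with permute order (1,0,2), which walks a row-major 2D
array column-first:

```python
def remap_indices(xdim, ydim, zdim, order, offs=0):
    # same algorithm as the SHAPE/REMAP python above, returning the
    # remapped element indices as a list instead of printing them
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

# 3x2 matrix in row-major registers; permute (1,0,2) yields the transpose
assert remap_indices(3, 2, 1, order=[0, 1, 2]) == [0, 1, 2, 3, 4, 5]
assert remap_indices(3, 2, 1, order=[1, 0, 2]) == [0, 3, 1, 4, 2, 5]
```

An ADD whose source is remapped with the second order reads the matrix
transposed, while the register contents themselves never move.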
732
Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown.
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  either to use a predicated MV, skipping the first elements, or
  to perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

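As a concrete illustration, the reindexing algorithm above can be wrapped
into a small executable sketch (plain Python; `remap_indices` is an
illustrative helper name, not part of the specification) to show how a
permutation order of [1,0,2] walks a 2x3 matrix in transposed
(column-major) order:

```python
def remap_indices(xdim, ydim, zdim, order, offs=0):
    """Return the remapped element index for each hardware-loop iteration."""
    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        # odometer-style increment, in the chosen permutation order
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

# identity order walks a 2x3 matrix row-by-row...
print(remap_indices(2, 3, 1, order=[0, 1, 2]))  # [0, 1, 2, 3, 4, 5]
# ...whereas incrementing y first yields the transposed (column-major) walk
print(remap_indices(2, 3, 1, order=[1, 0, 2]))  # [0, 2, 4, 1, 3, 5]
```

Every permutation visits each element exactly once; only the order of
presentation to the hardware loop changes.
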
Clearly here some considerable care needs to be taken as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). With the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain
their complete functionality, intact*, despite no operations being added.
Note that if RV64G ever gained a MV.X as well as FCLIP, the full
functionality of RVV-Base would be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions are the fundamental core basis of SV, so are left
  as scalar. Whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  extreme care would need to be taken if these were parallelised.
  Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
  left as scalar.
* LR/SC could hypothetically be parallelised however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions therefore is guaranteed to
  not be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  parallelisable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSRs.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.

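The hardware loop can also be modelled executably. The following sketch
(Python) uses simplified illustrative stand-ins for the real structures:
`ireg`, `int_vec` and an always-all-ones `get_pred_val` are assumptions
made for demonstration, and register redirection (regidx) is omitted:

```python
VL = 4
ireg = [0] * 32                  # integer register file (stand-in)
int_vec = {}                     # regnum -> {"isvector": bool} (stand-in)

def get_pred_val(is_fp, rd):
    return (1 << VL) - 1         # no predication: all elements enabled

def op_add(rd, rs1, rs2):
    id = irs1 = irs2 = 0
    predval = get_pred_val(False, rd)
    rd_vec  = int_vec.get(rd,  {}).get("isvector", False)
    rs1_vec = int_vec.get(rs1, {}).get("isvector", False)
    rs2_vec = int_vec.get(rs2, {}).get("isvector", False)
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not rd_vec:
                break            # scalar destination ends the loop early
        if rd_vec:  id += 1
        if rs1_vec: irs1 += 1
        if rs2_vec: irs2 += 1

# vector-vector add: x8..x11 = x16..x19 + x24..x27
for r in (8, 16, 24):
    int_vec[r] = {"isvector": True}
ireg[16:20] = [1, 2, 3, 4]
ireg[24:28] = [10, 20, 30, 40]
op_add(8, 16, 24)
print(ireg[8:12])   # [11, 22, 33, 44]
```

Untagging all three registers would make the same call behave as a
perfectly ordinary scalar ADD, which is the entire point of the scheme.
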
## SUBVL Pseudocode

Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by (i * SUBVL + s), predicate bits are
indexed by i.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        for (s = 0; s < SUBVL; s++)
          if (predval & 1<<i) # predication uses intregs
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector)  { id += 1; }
          if (int_vec[rs1].isvector)  { irs1 += 1; }
          if (int_vec[rs2].isvector)  { irs2 += 1; }

NOTE: the pseudocode is greatly simplified: zeroing, proper predicate
handling, elwidth handling etc. are all left out.

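The index relationship stated above can be made concrete in a tiny sketch
(Python); `pairs` is just an illustrative name for the (element index,
predicate bit) mapping:

```python
VL, SUBVL = 4, 3   # e.g. a vector of four vec3 sub-vectors

pairs = []
for i in range(VL):          # predicate bit index
    for s in range(SUBVL):   # sub-element within the vec3
        elem = i * SUBVL + s
        pairs.append((elem, i))

# elements 0,1,2 all share predicate bit 0; element 3 starts bit 1
print(pairs[:4])   # [(0, 0), (1, 0), (2, 0), (3, 1)]
```

One predicate bit therefore enables or masks an entire sub-vector at a
time, never an individual sub-element.
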
## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV, FCVT, and
LOAD/STORE depending on CSR configurations for bitwidth and predication.
**Everything** becomes parallelised. *This includes Compressed
instructions* as well as any future instructions and Custom Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering") setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

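The core of the predicated-compare flow above can be modelled in
executable form. This sketch (Python) covers only the result-bitmask
construction, with the branch decision and register redirection omitted;
`predicated_cmp` is an illustrative name, not part of the specification:

```python
import operator

def predicated_cmp(vl, ps, src1, src2, zeroing, prev_result, cmp):
    """Build the result predicate bitmask for a vectorised branch-compare."""
    # with zeroing (or no rd), the result starts from zero;
    # otherwise untested bits retain the previous destination predicate
    result = 0 if zeroing else prev_result
    for i in range(vl):
        if zeroing and not (ps & (1 << i)):
            result &= ~(1 << i)        # masked-out bit forced to zero
        elif ps & (1 << i):
            if cmp(src1[i], src2[i]):
                result |= 1 << i
            else:
                result &= ~(1 << i)
    return result

# all four elements enabled, BEQ-style compare: elements 0 and 2 match
r = predicated_cmp(4, 0b1111, [1, 2, 3, 4], [1, 0, 3, 0],
                   zeroing=True, prev_result=0, cmp=operator.eq)
print(bin(r))   # 0b101
```

The branch is then taken only if the result equals the source predicate
`ps`, i.e. every non-masked comparison succeeded.
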
TODO: predication now taken from src2. also branch goes ahead
if all compares are successful.

Note also that where normally, predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine because the
standard RVF compare can always be followed up with an integer BEQ or a
BNE (or a compressed comparison to zero or non-zero), in predication
terms that omission has more of an impact. To deal with this, SV's
predication has had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a10 is equivalent to beqz a10,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
but is initially counterintuitive. Clearly it is **not** recommended to
redirect x0 with a CSR register entry, however as a means to opaquely
obtain a predication target it is the only sensible option that does not
involve additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register are marked as vectors
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table data="""
15 12 | 11 7 | 6 2 | 1 0 |
funct4 | rd | rs | op |
4 | 5 | 5 | 2 |
C.MV | dest | src | C0 |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src | dest | predication | op |
scalar | vector | none | VSPLAT |
scalar | vector | destination | sparse VSPLAT |
scalar | vector | 1-bit dest | VINSERT |
vector | scalar | 1-bit? src | VEXTRACT |
vector | vector | none | VCOPY |
vector | vector | src | Vector Gather |
vector | vector | dest | Vector Scatter |
vector | vector | src & dest | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY |
"""]]

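Two rows of the table can be demonstrated with an executable sketch of the
twin-predicated loop (Python). The structures are simplified illustrative
stand-ins, and the predicates are assumed to contain enough set bits for
the loop to terminate cleanly:

```python
VL = 4

def op_mv(ireg, rd, rs, rd_isvec, rs_isvec, ps, pd):
    i = j = 0
    while i < VL and j < VL:
        if rs_isvec:
            while not (ps & (1 << i)): i += 1   # skip masked-out src
        if rd_isvec:
            while not (pd & (1 << j)): j += 1   # skip masked-out dest
        ireg[rd + j] = ireg[rs + i]
        if rs_isvec: i += 1
        if rd_isvec:
            j += 1
        else:
            break                               # scalar destination

# VSPLAT: scalar x5 copied into all four elements of vector x8..x11
regs = [0] * 32
regs[5] = 99
op_mv(regs, 8, 5, rd_isvec=True, rs_isvec=False, ps=0b1111, pd=0b1111)
print(regs[8:12])   # [99, 99, 99, 99]

# sparse VSPLAT: only destination elements 1 and 3 are written
regs2 = [0] * 32
regs2[5] = 7
op_mv(regs2, 8, 5, rd_isvec=True, rs_isvec=False, ps=0b1111, pd=0b1010)
print(regs2[8:12])  # [0, 7, 0, 7]
```

Swapping which side is the vector (vector src, scalar dest) gives
VEXTRACT; making both vectors with a src-side predicate gives a gather.
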
Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not
implemented, MV may be used, but that is a pseudo-operation mapping to
addi rd, rs, 0. Note that the behaviour is **different** from C.MV
because with addi the predication mask to use is taken **only** from rd
and is applied against all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type
conversion operation of the appropriate size covering the source and
destination register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a single-precision floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element and the higher 32-bits *also* converted to floating-point
and stored in the second. The 32 bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1 element width is to be taken as 32.

Similar rules apply to the destination register.

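The FCVT.S.L example above can be illustrated numerically. This sketch
(Python) only models the *element splitting*, not IEEE754 single-precision
rounding; `fcvt_s_l_packed` is an illustrative name:

```python
import struct

def fcvt_s_l_packed(reg64):
    """Treat one 64-bit register as two signed 32-bit elements; convert each."""
    # reinterpret the 64-bit value as two little-endian signed 32-bit ints
    lo, hi = struct.unpack('<ii', struct.pack('<Q', reg64 & (2**64 - 1)))
    return float(lo), float(hi)

# first (low) element 3, second (high) element -1, packed in one register
packed = (0xFFFFFFFF << 32) | 3
print(fcvt_s_l_packed(packed))   # (3.0, -1.0)
```

With elwidth at default, the same instruction would instead consume the
whole 64-bit value as a single element, exactly as in scalar RV.
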
## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).

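The srcbase decision-making can be demonstrated executably. This sketch
(Python) drops predication entirely and models memory as a plain dict of
word addresses; all names are illustrative stand-ins:

```python
VL = 3
XLEN = 64

def op_ld(ireg, mem, rd, rs, rs_isvec, imm_offs=0):
    for i in range(VL):
        if rs_isvec:
            srcbase = ireg[rs + i]               # indirect (multi) mode
        else:
            srcbase = ireg[rs] + i * XLEN // 8   # unit-stride mode
        ireg[rd + i] = mem[srcbase + imm_offs]

mem = {0x1000: 11, 0x1008: 22, 0x1010: 33, 0x2000: 44}
regs = [0] * 32

regs[5] = 0x1000
op_ld(regs, mem, 8, 5, rs_isvec=False)       # unit stride from x5
print(regs[8:11])    # [11, 22, 33]

regs[16:19] = [0x1010, 0x1000, 0x2000]       # vector of addresses
op_ld(regs, mem, 8, 16, rs_isvec=True)       # per-element indirection
print(regs[8:11])    # [33, 11, 44]
```

The same single LD opcode therefore serves as both a strided vector load
and a gather, purely on the basis of the CSR "isvector" tag on rs.
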
## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t  actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t  b[0]; // array of type uint8_t
        uint16_t s[0];
        uint32_t i[0];
        uint64_t l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where when accessing any individual regfile[n].b entry it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software /
pseudo-code representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt to access beyond the "real" register
bytes is ever made.

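The "overspill" behaviour can be demonstrated with the register file
modelled as one flat byte array (Python sketch; the helper names are
illustrative). An 8-bit-elwidth vector with VL greater than XLEN/8 simply
runs on into the next 64-bit register:

```python
import struct

XLEN = 64
regfile = bytearray(32 * XLEN // 8)   # 32 x 64-bit registers, flat bytes

def write_elem_b(reg, offset, val):
    # byte element `offset` of the vector starting at register `reg`
    regfile[reg * 8 + offset] = val & 0xFF

def read_reg_l(reg):
    # read one whole 64-bit register, little-endian
    return struct.unpack_from('<Q', regfile, reg * 8)[0]

# VL=10 with 8-bit elwidth starting at x5: elements 8 and 9 spill into x6
for i in range(10):
    write_elem_b(5, i, i + 1)
print(hex(read_reg_l(5)))   # 0x807060504030201
print(hex(read_reg_l(6)))   # 0xa09
```

A real implementation must additionally trap any element access that
would run past the end of the physical register file.
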
Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non"-polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        ...
        ...
        // TODO, calculate if over-run occurs, for each elwidth
        if (elwidth == 8) {
           int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                    int_regfile[rs2].b[irs2];
        } else if elwidth == 16 {
           int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                    int_regfile[rs2].s[irs2];
        } else if elwidth == 32 {
           int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                    int_regfile[rs2].i[irs2];
        } else { // elwidth == 64
           int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                    int_regfile[rs2].l[irs2];
        }
        ...
        ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
following sequentially on respectively from the same) are "type-cast"
to 8-bit; for 16-bit entries likewise and so on.

1390 However that only covers the case where the element widths are the same.
1391 Where the element widths are different, the following algorithm applies:
1392
1393 * Analyse the bitwidth of all source operands and work out the
1394 maximum. Record this as "maxsrcbitwidth"
1395 * If any given source operand requires sign-extension or zero-extension
1396 (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
1397 sign-extension / zero-extension or whatever is specified in the standard
1398 RV specification, **change** that to sign-extending from the respective
1399 individual source operand's bitwidth from the CSR table out to
1400 "maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
source operands as specifically required for that operation, carry out the
operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
this may be a "null" (copy) operation, and that with FCVT, the changes
to the source and destination bitwidths may also turn FCVT effectively
into a copy).
* If the destination operand requires sign-extension or zero-extension,
instead of a mandatory fixed size (typically 32-bit for arithmetic,
for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh,
32-bit for sw etc.), overload the RV specification with the bitwidth
from the destination register's elwidth entry.
1412 * Finally, store the (optionally) sign/zero-extended value into its
1413 destination: memory for sb/sw etc., or an offset section of the register
1414 file for an arithmetic operation.
1415
1416 In this way, polymorphic bitwidths are achieved without requiring a
1417 massive 64-way permutation of calculations **per opcode**, for example
1418 (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
1419 rd bitwidths). The pseudo-code is therefore as follows:
1420
1421 typedef union {
1422 uint8_t b;
1423 uint16_t s;
1424 uint32_t i;
1425 uint64_t l;
1426 } el_reg_t;
1427
1428 bw(elwidth):
1429 if elwidth == 0:
1430 return xlen
1431 if elwidth == 1:
1432 return xlen / 2
1433 if elwidth == 2:
1434 return xlen * 2
1435 // elwidth == 3:
1436 return 8
1437
1438 get_max_elwidth(rs1, rs2):
1439 return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
1440 bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1441
    get_polymorphed_reg(reg, bitwidth, offset):
       el_reg_t res;
       res.l = 0; // TODO: going to need sign-extending / zero-extending
       if bitwidth == 8:
          res.b = int_regfile[reg].b[offset]
       elif bitwidth == 16:
          res.s = int_regfile[reg].s[offset]
       elif bitwidth == 32:
          res.i = int_regfile[reg].i[offset]
       elif bitwidth == 64:
          res.l = int_regfile[reg].l[offset]
       return res
1454
1455 set_polymorphed_reg(reg, bitwidth, offset, val):
1456 if (!int_csr[reg].isvec):
1457 # sign/zero-extend depending on opcode requirements, from
1458 # the reg's bitwidth out to the full bitwidth of the regfile
1459 val = sign_or_zero_extend(val, bitwidth, xlen)
1460 int_regfile[reg].l[0] = val
1461 elif bitwidth == 8:
1462 int_regfile[reg].b[offset] = val
1463 elif bitwidth == 16:
1464 int_regfile[reg].s[offset] = val
1465 elif bitwidth == 32:
1466 int_regfile[reg].i[offset] = val
1467 elif bitwidth == 64:
1468 int_regfile[reg].l[offset] = val
1469
    maxsrcwid =  get_max_elwidth(rs1, rs2) # source element width(s)
    destwid = int_csr[rd].elwidth          # destination element width
    for (i = 0; i < VL; i++)
       if (predval & 1<<i) # predication uses intregs
          // TODO, calculate if over-run occurs, for each elwidth
          src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
          // TODO, sign/zero-extend src1 and src2 as operation requires
          if (op_requires_sign_extend_src1)
              src1 = sign_extend(src1, maxsrcwid)
          src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
          result = src1 + src2 # actual add here
          // TODO, sign/zero-extend result, as operation requires
          if (op_requires_sign_extend_dest)
              result = sign_extend(result, maxsrcwid)
          set_polymorphed_reg(rd, destwid, ird, result)
          if (!int_vec[rd].isvector) break
       if (int_vec[rd ].isvector)  { ird += 1; }
       if (int_vec[rs1].isvector)  { irs1 += 1; }
       if (int_vec[rs2].isvector)  { irs2 += 1; }
1489
Whilst the specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:
1493
1494 * the source operands are extended out to the maximum bitwidth of all
1495 source operands
1496 * the operation takes place at that maximum source bitwidth (the
1497 destination bitwidth is not involved at this point, at all)
1498 * the result is extended (or potentially even, truncated) before being
1499 stored in the destination. i.e. truncation (if required) to the
1500 destination width occurs **after** the operation **not** before.
1501 * when the destination is not marked as "vectorised", the **full**
1502 (standard, scalar) register file entry is taken up, i.e. the
1503 element is either sign-extended or zero-extended to cover the
1504 full register bitwidth (XLEN) if it is not already XLEN bits long.
1505
1506 Implementors are entirely free to optimise the above, particularly
1507 if it is specifically known that any given operation will complete
1508 accurately in less bits, as long as the results produced are
1509 directly equivalent and equal, for all inputs and all outputs,
1510 to those produced by the above algorithm.
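As a concrete, non-normative illustration of the above algorithm, here
is a Python sketch of a polymorphic *unsigned* add with differing
source and destination element widths (all function names here are
invented for illustration):

```python
def bw_mask(bits):
    return (1 << bits) - 1

def poly_add(src1, w1, src2, w2, destw):
    # 1. operate at the maximum of the *source* bitwidths
    maxw = max(w1, w2)
    s1 = src1 & bw_mask(w1)   # zero-extend each source (add is unsigned)
    s2 = src2 & bw_mask(w2)
    result = (s1 + s2) & bw_mask(maxw)
    # 2. truncate (or zero-extend) to the destination width *afterwards*
    return result & bw_mask(destw)

# rs1 8-bit (0xFF), rs2 16-bit (0x0001): op at 16 bits gives 0x0100,
# then truncation to the 8-bit destination gives 0x00
out = poly_add(0xFF, 8, 0x0001, 16, 8)
```

Note that the 16-bit intermediate result 0x100 is truncated to the
8-bit destination **after** the add, exactly as the algorithm requires.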
1511
1512 ## Polymorphic floating-point operation exceptions and error-handling
1513
1514 For floating-point operations, conversion takes place without
1515 raising any kind of exception. Exactly as specified in the standard
1516 RV specification, NAN (or appropriate) is stored if the result
1517 is beyond the range of the destination, and, again, exactly as
1518 with the standard RV specification just as with scalar
1519 operations, the floating-point flag is raised (FCSR). And, again, just as
1520 with scalar operations, it is software's responsibility to check this flag.
1521 Given that the FCSR flags are "accrued", the fact that multiple element
1522 operations could have occurred is not a problem.
1523
1524 Note that it is perfectly legitimate for floating-point bitwidths of
1525 only 8 to be specified. However whilst it is possible to apply IEEE 754
1526 principles, no actual standard yet exists. Implementors wishing to
1527 provide hardware-level 8-bit support rather than throw a trap to emulate
1528 in software should contact the author of this specification before
1529 proceeding.
1530
1531 ## Polymorphic shift operators
1532
1533 A special note is needed for changing the element width of left and right
1534 shift operators, particularly right-shift. Even for standard RV base,
1535 in order for correct results to be returned, the second operand RS2 must
1536 be truncated to be within the range of RS1's bitwidth. spike's implementation
1537 of sll for example is as follows:
1538
1539 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1540
1541 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1542 range 0..31 so that RS1 will only be left-shifted by the amount that
1543 is possible to fit into a 32-bit register. Whilst this appears not
1544 to matter for hardware, it matters greatly in software implementations,
1545 and it also matters where an RV64 system is set to "RV32" mode, such
1546 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1547 each.
1548
1549 For SV, where each operand's element bitwidth may be over-ridden, the
1550 rule about determining the operation's bitwidth *still applies*, being
1551 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1552 **also applies to the truncation of RS2**. In other words, *after*
1553 determining the maximum bitwidth, RS2's range must **also be truncated**
1554 to ensure a correct answer. Example:
1555
1556 * RS1 is over-ridden to a 16-bit width
1557 * RS2 is over-ridden to an 8-bit width
1558 * RD is over-ridden to a 64-bit width
1559 * the maximum bitwidth is thus determined to be 16-bit - max(8,16)
1560 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1561
1562 Pseudocode (in spike) for this example would therefore be:
1563
1564 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1565
1566 This example illustrates that considerable care therefore needs to be
1567 taken to ensure that left and right shift operations are implemented
1568 correctly. The key is that
1569
1570 * The operation bitwidth is determined by the maximum bitwidth
1571 of the *source registers*, **not** the destination register bitwidth
* The result is then sign-extended (or truncated) as appropriate.
1573
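A short Python sketch (non-normative; helper names invented) of the
shift rule, showing RS2 being truncated to the *operation* width rather
than to XLEN:

```python
def poly_sll(rs1, w1, rs2, w2):
    # operation width = maximum of the *source* element widths
    maxw = max(w1, w2)
    # critically: RS2 is truncated to the range of the operation width,
    # exactly as scalar RV truncates it to 0..XLEN-1
    shamt = rs2 & (maxw - 1)
    return (rs1 << shamt) & ((1 << maxw) - 1)

# RS1 16-bit, RS2 8-bit: maxw=16, so RS2=17 is truncated to 17 & 15 = 1
out = poly_sll(0x0001, 16, 17, 8)
```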
1574 ## Polymorphic MULH/MULHU/MULHSU
1575
MULH is designed to return the top half (the MSBs) of a multiply result
that does not fit within the width of the source operands, such that
smaller-width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.
1581
1582 Here again special attention has to be paid to the rules regarding
1583 bitwidth, which, again, are that the operation is performed at
1584 the maximum bitwidth of the **source** registers. Therefore:
1585
1586 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1587 be shifted down by 8 bits
1588 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1589 be shifted down by 16 bits (top 8 bits being zero)
1590 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1591 be shifted down by 16 bits
1592 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1593 be shifted down by 32 bits
1594 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1595 be shifted down by 32 bits
1596
1597 So again, just as with shift-left and shift-right, the result
1598 is shifted down by the maximum of the two source register bitwidths.
1599 And, exactly again, truncation or sign-extension is performed on the
1600 result. If sign-extension is to be carried out, it is performed
1601 from the same maximum of the two source register bitwidths out
1602 to the result element's bitwidth.
1603
1604 If truncation occurs, i.e. the top MSBs of the result are lost,
1605 this is "Officially Not Our Problem", i.e. it is assumed that the
1606 programmer actually desires the result to be truncated. i.e. if the
1607 programmer wanted all of the bits, they would have set the destination
1608 elwidth to accommodate them.
1609
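The rule above can be sketched as follows (non-normative Python; an
unsigned MULHU variant only, for brevity, with invented helper names):

```python
def poly_mulhu(rs1, w1, rs2, w2, destw):
    maxw = max(w1, w2)
    prod = (rs1 & ((1 << w1) - 1)) * (rs2 & ((1 << w2) - 1))
    # shift the double-width product down by the maximum source width...
    hi = prod >> maxw
    # ...then truncate (or, for unsigned, zero-extend) to the destination
    return hi & ((1 << destw) - 1)

# 8-bit x 8-bit: 0xFF * 0xFF = 0xFE01; the top half (>> 8) is 0xFE
out = poly_mulhu(0xFF, 8, 0xFF, 8, 8)
```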
1610 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1611
1612 Polymorphic element widths in vectorised form means that the data
1613 being loaded (or stored) across multiple registers needs to be treated
1614 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1615 the source register's element width is **independent** from the destination's.
1616
1617 This makes for a slightly more complex algorithm when using indirection
1618 on the "addressed" register (source for LOAD and destination for STORE),
1619 particularly given that the LOAD/STORE instruction provides important
1620 information about the width of the data to be reinterpreted.
1621
1622 Let's illustrate the "load" part, where the pseudo-code for elwidth=default
1623 was as follows, and i is the loop from 0 to VL-1:
1624
1625 srcbase = ireg[rs+i];
1626 return mem[srcbase + imm]; // returns XLEN bits
1627
1628 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1629 chunks are taken from the source memory location addressed by the current
1630 indexed source address register, and only when a full 32-bits-worth
1631 are taken will the index be moved on to the next contiguous source
1632 address register:
1633
1634 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1635 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1636 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1637 offs = i % elsperblock; // modulo
1638 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1639
1640 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1641 and 128 for LQ.
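A Python sketch of the address calculation (non-normative: this variant
returns a byte address directly rather than re-casting a pointer as the
pseudo-code above does, and clamps elsperblock to a minimum of 1 as
required when the element width exceeds the operation width):

```python
def lw_element_addr(ireg, rs, imm, i, elwidth_bits, opwidth_bits=32):
    # how many elwidth-sized elements fit into one LW-sized (32-bit) chunk
    elsperblock = max(1, opwidth_bits // elwidth_bits)
    srcbase = ireg[rs + i // elsperblock]   # next address reg per full block
    offs = i % elsperblock                  # element offset within the block
    return srcbase + imm + offs * (elwidth_bits // 8)

# elwidth=8 on the address register: four elements per 32-bit LW block,
# so element 5 uses the *second* address register (x6), at byte offset 1
ireg = {5: 0x1000, 6: 0x2000}
addr = lw_element_addr(ireg, 5, 0, 5, 8)
```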
1642
1643 The principle is basically exactly the same as if the srcbase were pointing
1644 at the memory of the *register* file: memory is re-interpreted as containing
1645 groups of elwidth-wide discrete elements.
1646
1647 When storing the result from a load, it's important to respect the fact
1648 that the destination register has its *own separate element width*. Thus,
1649 when each element is loaded (at the source element width), any sign-extension
1650 or zero-extension (or truncation) needs to be done to the *destination*
1651 bitwidth. Also, the storing has the exact same analogous algorithm as
1652 above, where in fact it is just the set\_polymorphed\_reg pseudocode
1653 (completely unchanged) used above.
1654
1655 One issue remains: when the source element width is **greater** than
1656 the width of the operation, it is obvious that a single LB for example
1657 cannot possibly obtain 16-bit-wide data. This condition may be detected
1658 where, when using integer divide, elsperblock (the width of the LOAD
1659 divided by the bitwidth of the element) is zero.
1660
The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1664
1665 The elements, if the element bitwidth is larger than the LD operation's
1666 size, will then be sign/zero-extended to the full LD operation size, as
1667 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1668 being passed on to the second phase.
1669
1670 As LOAD/STORE may be twin-predicated, it is important to note that
1671 the rules on twin predication still apply, except where in previous
1672 pseudo-code (elwidth=default for both source and target) it was
1673 the *registers* that the predication was applied to, it is now the
1674 **elements** that the predication is applied to.
1675
1676 Thus the full pseudocode for all LD operations may be written out
1677 as follows:
1678
1679 function LBU(rd, rs):
1680 load_elwidthed(rd, rs, 8, true)
1681 function LB(rd, rs):
1682 load_elwidthed(rd, rs, 8, false)
1683 function LH(rd, rs):
1684 load_elwidthed(rd, rs, 16, false)
1685 ...
1686 ...
1687 function LQ(rd, rs):
1688 load_elwidthed(rd, rs, 128, false)
1689
    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
       elwidth = int_csr[rs].elwidth
       bitwidth = bw(elwidth);
       elsperblock = max(1, opwidth / bitwidth)
       srcbase = ireg[rs+i/(elsperblock)];
       offs = i % elsperblock;
       return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
1698
    function load_elwidthed(rd, rs, opwidth, unsigned):
       destwid = int_csr[rd].elwidth       # destination element width
       bitwidth = bw(int_csr[rs].elwidth)  # source element width
       rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
       rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
       ps = get_pred_val(FALSE, rs); # predication on src
       pd = get_pred_val(FALSE, rd); # ... AND on dest
       for (int i = 0, int j = 0; i < VL && j < VL;):
          if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
          if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
          val = load_memory(rs, imm, i, opwidth)
          if unsigned:
              val = zero_extend(val, min(opwidth, bitwidth))
          else:
              val = sign_extend(val, min(opwidth, bitwidth))
          set_polymorphed_reg(rd, destwid, j, val)
          if (int_csr[rs].isvec) i++;
          if (int_csr[rd].isvec) j++; else break;
1716
1717 Note:
1718
1719 * when comparing against for example the twin-predicated c.mv
1720 pseudo-code, the pattern of independent incrementing of rd and rs
1721 is preserved unchanged.
1722 * just as with the c.mv pseudocode, zeroing is not included and must be
1723 taken into account (TODO).
1724 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1725 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1726 VSCATTER characteristics.
1727 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1728 a destination that is not vectorised (marked as scalar) will
1729 result in the element being fully sign-extended or zero-extended
1730 out to the full register file bitwidth (XLEN). When the source
1731 is also marked as scalar, this is how the compatibility with
1732 standard RV LOAD/STORE is preserved by this algorithm.
1733
1734 ### Example Tables showing LOAD elements
1735
1736 This section contains examples of vectorised LOAD operations, showing
1737 how the two stage process works (three if zero/sign-extension is included).
1738
1739
#### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1741
1742 This is:
1743
1744 * a 64-bit load, with an offset of zero
1745 * with a source-address elwidth of 16-bit
1746 * into a destination-register with an elwidth of 32-bit
1747 * where VL=7
1748 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1749 * RV64, where XLEN=64 is assumed.
1750
First, the memory table. Due to the element width being 16 and the
operation being LD (64-bit), the 64 bits loaded from memory are
subdivided into groups of **four** elements. And, with VL being 7
(deliberately, to illustrate that this is reasonable and possible),
the first four are sourced from the offset addresses pointed to by x5,
and the next three from the offset addresses pointed to by the next
contiguous register, x6:
1758
1759 [[!table data="""
1760 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1761 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1762 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1763 """]]
1764
1765 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1766 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1767
[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]
1779
1780 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1781 byte-addressable "memory". That "memory" happens to cover registers
1782 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1783
1784 [[!table data="""
1785 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1786 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1787 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1788 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1789 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1790 """]]
1791
1792 Thus we have data that is loaded from the **addresses** pointed to by
1793 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1794 x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.
1798
1799 Note that whilst the memory addressing table is shown left-to-right byte order,
1800 the registers are shown in right-to-left (MSB) order. This does **not**
1801 imply that bit or byte-reversal is carried out: it's just easier to visualise
1802 memory as being contiguous bytes, and emphasises that registers are not
1803 really actually "memory" as such.
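The complete two-stage (plus zero-extension) process can be simulated
in a few lines of Python (non-normative; the element values here are
invented purely for illustration):

```python
# memory at the two source addresses: seven 16-bit elements (VL=7)
elems = [0x1111, 0x2222, 0x3333, 0x4444, 0x5555, 0x6666, 0x7777]

# stage 2: zero-extend each 16-bit element to 32-bit
extended = [e & 0xFFFFFFFF for e in elems]

# stage 3: pack pairs of 32-bit results into 64-bit destination registers,
# starting at x8, exactly as if the regfile were byte-addressable memory
regs = {}
for j, val in enumerate(extended):
    regname, shift = 8 + j // 2, (j % 2) * 32
    regs[regname] = regs.get(regname, 0) | (val << shift)

# x8 = elem1:elem0, x9 = elem3:elem2, x10 = elem5:elem4,
# x11 low half = elem6 (upper half would be left unmodified in hardware)
```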
1804
1805 ## Why SV bitwidth specification is restricted to 4 entries
1806
The four entries for SV element bitwidths only allow three over-rides:
1808
1809 * 8 bit
* 16 bit
1811 * 32 bit
1812
This would seem inadequate: surely it would be better to have 3 bits or
more, and to allow 64, 128 and other options besides. The answer is
that it gets too complex, that no RV128 implementation yet exists, and
that RV64's default is in any case 64 bit, so the 4 major element
widths are covered anyway.
1817
There is an absolutely crucial aspect of SV here that explicitly
needs spelling out: whether the "vectorised" bit is set in
the register's CSR entry.
1821
If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, on a destination (RD),
sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will overwrite the **full** register entry
(64-bit if RV64).
1827
1828 When vectorised is *set*, this indicates that the operation now treats
1829 **elements** as if they were independent registers, so regardless of
1830 the length, any parts of a given actual register that are not involved
1831 in the operation are **NOT** modified, but are **PRESERVED**.
1832
1833 For example:
1834
1835 * when the vector bit is clear and elwidth set to 16 on the destination
1836 register, operations are truncated to 16 bit and then sign or zero
1837 extended to the *FULL* XLEN register width.
1838 * when the vector bit is set, elwidth is 16 and VL=1 (or other value where
1839 groups of elwidth sized elements do not fill an entire XLEN register),
1840 the "top" bits of the destination register do *NOT* get modified, zero'd
1841 or otherwise overwritten.
1842
SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.
1846
1847 Other microarchitectures may choose to provide byte-level write-enable
1848 lines on the register file, such that each 64 bit register in an RV64
1849 system requires 8 WE lines. Scalar RV64 operations would require
1850 activation of all 8 lines, where SV elwidth based operations would
1851 activate the required subset of those byte-level write lines.
1852
1853 Example:
1854
1855 * rs1, rs2 and rd are all set to 8-bit
1856 * VL is set to 3
1857 * RV64 architecture is set (UXL=64)
1858 * add operation is carried out
1859 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1860 concatenated with similar add operations on bits 15..8 and 7..0
1861 * bits 24 through 63 **remain as they originally were**.
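The write-enable behaviour in the example above (bits 0-23 modified,
bits 24-63 preserved) can be sketched as a masked register write
(non-normative Python; the register contents are invented):

```python
def masked_reg_write(old, new, byte_we):
    # byte_we: 8-bit mask, one write-enable line per byte of a 64-bit register
    mask = 0
    for b in range(8):
        if byte_we & (1 << b):
            mask |= 0xFF << (b * 8)
    # enabled bytes come from the new value; disabled bytes are preserved
    return (old & ~mask) | (new & mask)

# elwidth=8, VL=3: only bytes 0-2 are enabled; bits 24-63 are preserved
old = 0xDEADBEEFCAFEF00D
res = masked_reg_write(old, 0x0000000000030201, 0b00000111)
```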
1862
1863 Example SIMD micro-architectural implementation:
1864
1865 * SIMD architecture works out the nearest round number of elements
1866 that would fit into a full RV64 register (in this case: 8)
1867 * SIMD architecture creates a hidden predicate, binary 0b00000111
1868 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1869 * SIMD architecture goes ahead with the add operation as if it
1870 was a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
(which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
and stores them in rd.
1875
1876 This requires a read on rd, however this is required anyway in order
1877 to support non-zeroing mode.
1878
1879 ## Polymorphic floating-point
1880
1881 Standard scalar RV integer operations base the register width on XLEN,
1882 which may be changed (UXL in USTATUS, and the corresponding MXL and
1883 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1884 arithmetic operations are therefore restricted to an active XLEN bits,
1885 with sign or zero extension to pad out the upper bits when XLEN has
1886 been dynamically set to less than the actual register size.
1887
1888 For scalar floating-point, the active (used / changed) bits are
1889 specified exclusively by the operation: ADD.S specifies an active
1890 32-bits, with the upper bits of the source registers needing to
1891 be all 1s ("NaN-boxed"), and the destination upper bits being
1892 *set* to all 1s (including on LOAD/STOREs).
1893
1894 Where elwidth is set to default (on any source or the destination)
1895 it is obvious that this NaN-boxing behaviour can and should be
1896 preserved. When elwidth is non-default things are less obvious,
1897 so need to be thought through. Here is a normal (scalar) sequence,
1898 assuming an RV64 which supports Quad (128-bit) FLEN:
1899
1900 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1901 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1902 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1903 top 64 MSBs ignored.
1904
1905 Therefore it makes sense to mirror this behaviour when, for example,
1906 elwidth is set to 32. Assume elwidth set to 32 on all source and
1907 destination registers:
1908
1909 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1910 floating-point numbers.
1911 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1912 in bits 0-31 and the second in bits 32-63.
1913 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1914
1915 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1916 of the registers either during the FLD **or** the ADD.D. The reason
1917 is that, effectively, the top 64 MSBs actually represent a completely
1918 independent 64-bit register, so overwriting it is not only gratuitous
1919 but may actually be harmful for a future extension to SV which may
1920 have a way to directly access those top 64 bits.
1921
1922 The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
1924 when "isvec" is false in a given register's CSR entry. Only when the
1925 elwidth is set to default **and** isvec is false will the standard
1926 RV behaviour be followed, namely that the upper bits be modified.
1927
1928 Ultimately if elwidth is default and isvec false on *all* source
1929 and destination registers, a SimpleV instruction defaults completely
1930 to standard RV scalar behaviour (this holds true for **all** operations,
1931 right across the board).
1932
1933 The nice thing here is that ADD.S, ADD.D and ADD.Q when elwidth are
1934 non-default values are effectively all the same: they all still perform
1935 multiple ADD operations, just at different widths. A future extension
1936 to SimpleV may actually allow ADD.S to access the upper bits of the
1937 register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.
1939
1940 In the meantime, although when e.g. setting VL to 8 it would technically
1941 make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
1942 using ADD.Q may be an easy way to signal to the microarchitecture that
1943 it is to receive a higher VL value. On a superscalar OoO architecture
1944 there may be absolutely no difference, however on simpler SIMD-style
1945 microarchitectures they may not necessarily have the infrastructure in
1946 place to know the difference, such that when VL=8 and an ADD.D instruction
1947 is issued, it completes in 2 cycles (or more) rather than one, where
1948 if an ADD.Q had been issued instead on such simpler microarchitectures
1949 it would complete in one.
1950
1951 ## Specific instruction walk-throughs
1952
1953 This section covers walk-throughs of the above-outlined procedure
1954 for converting standard RISC-V scalar arithmetic operations to
1955 polymorphic widths, to ensure that it is correct.
1956
1957 ### add
1958
1959 Standard Scalar RV32/RV64 (xlen):
1960
1961 * RS1 @ xlen bits
1962 * RS2 @ xlen bits
1963 * add @ xlen bits
1964 * RD @ xlen bits
1965
1966 Polymorphic variant:
1967
1968 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
1969 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
1970 * add @ max(rs1, rs2) bits
* RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate
1972
1973 Note here that polymorphic add zero-extends its source operands,
1974 where addw sign-extends.
1975
1976 ### addw
1977
1978 The RV Specification specifically states that "W" variants of arithmetic
1979 operations always produce 32-bit signed values. In a polymorphic
1980 environment it is reasonable to assume that the signed aspect is
1981 preserved, where it is the length of the operands and the result
1982 that may be changed.
1983
1984 Standard Scalar RV64 (xlen):
1985
1986 * RS1 @ xlen bits
1987 * RS2 @ xlen bits
1988 * add @ xlen bits
1989 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
1990
1991 Polymorphic variant:
1992
1993 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
1994 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
1995 * add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate
1997
1998 Note here that polymorphic addw sign-extends its source operands,
1999 where add zero-extends.
2000
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals that of rs2, no sign-extension occurs. It is only where
the bitwidths of rs1 and rs2 differ that the lesser-width operand
will be sign-extended.
2005
2006 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
2007 where for add they are both zero-extended. This holds true for all arithmetic
2008 operations ending with "W".
2009
2010 ### addiw
2011
2012 Standard Scalar RV64I:
2013
2014 * RS1 @ xlen bits, truncated to 32-bit
2015 * immed @ 12 bits, sign-extended to 32-bit
2016 * add @ 32 bits
* RD @ rd bits: sign-extend to rd if rd > 32, otherwise truncate.
2018
2019 Polymorphic variant:
2020
2021 * RS1 @ rs1 bits
2022 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
2023 * add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate
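The add and addw walk-throughs above differ only in the choice of
zero- versus sign-extension. A combined Python sketch (non-normative;
the helper names are invented):

```python
def ext(val, frombits, tobits, signed):
    # zero- or sign-extend a frombits-wide value out to tobits
    val &= (1 << frombits) - 1
    if signed and val & (1 << (frombits - 1)):
        val |= ((1 << tobits) - 1) & ~((1 << frombits) - 1)
    return val

def poly_op(rs1, w1, rs2, w2, destw, signed):
    # add zero-extends its sources; addw sign-extends (the "W" rule)
    maxw = max(w1, w2)
    res = (ext(rs1, w1, maxw, signed) +
           ext(rs2, w2, maxw, signed)) & ((1 << maxw) - 1)
    # extend (or truncate) the result to the destination width
    return ext(res, maxw, destw, signed) if destw > maxw \
        else res & ((1 << destw) - 1)

# 8-bit -1 plus 16-bit 0: addw sign-extends (0xFFFF); add zero-extends (0x00FF)
addw_out = poly_op(0xFF, 8, 0x0000, 16, 16, signed=True)
add_out  = poly_op(0xFF, 8, 0x0000, 16, 16, signed=False)
```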
2025
2026 # Predication Element Zeroing
2027
2028 The introduction of zeroing on traditional vector predication is usually
2029 intended as an optimisation for lane-based microarchitectures with register
2030 renaming to be able to save power by avoiding a register read on elements
2031 that are passed through en-masse through the ALU. Simpler microarchitectures
2032 do not have this issue: they simply do not pass the element through to
2033 the ALU at all, and therefore do not store it back in the destination.
2034 More complex non-lane-based micro-architectures can, when zeroing is
2035 not set, use the predication bits to simply avoid sending element-based
2036 operations to the ALUs, entirely: thus, over the long term, potentially
2037 keeping all ALUs 100% occupied even when elements are predicated out.
2038
2039 SimpleV's design principle is not based on or influenced by
2040 microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.
2045
2046 ## Single-predication (based on destination register)
2047
2048 Zeroing on predication for arithmetic operations is taken from
2049 the destination register's predicate. i.e. the predication *and*
2050 zeroing settings to be applied to the whole operation come from the
2051 CSR Predication table entry for the destination register.
2052 Thus when zeroing is set on predication of a destination element,
2053 if the predication bit is clear, then the destination element is *set*
2054 to zero (twin-predication is slightly different, and will be covered
2055 next).
2056
Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:
2059
    for (i = 0; i < VL; i++)
       if not zeroing: # an optimisation
          while (!(predval & 1<<i) && i < VL)
             if (int_vec[rd ].isvector)  { ird += 1; }
             if (int_vec[rs1].isvector)  { irs1 += 1; }
             if (int_vec[rs2].isvector)  { irs2 += 1; }
          if i == VL:
             break
       if (predval & 1<<i)
          src1 = ....
          src2 = ...
          result = src1 + src2 # actual add (or other op) here
          set_polymorphed_reg(rd, destwid, ird, result)
          if (!int_vec[rd].isvector) break
       else if zeroing:
          result = 0
          set_polymorphed_reg(rd, destwid, ird, result)
       if (int_vec[rd ].isvector)  { ird += 1; }
       else if (predval & 1<<i) break;
       if (int_vec[rs1].isvector)  { irs1 += 1; }
       if (int_vec[rs2].isvector)  { irs2 += 1; }
2082
The optimisation to skip elements entirely is only possible for certain
micro-architectures, and only when zeroing is not set. For lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.
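The zeroing and non-zeroing behaviours can be modelled in a few lines. This is an illustrative sketch only: the register file is a plain Python list, and `predicated_add` is an invented name, not part of this specification.

```python
# Illustrative model of single-predication with optional zeroing.
# The register file is a plain list; rd/rs1/rs2 are the starting
# indices of vectors of length vl.

def predicated_add(regs, rd, rs1, rs2, vl, predval, zeroing):
    for i in range(vl):
        if predval & (1 << i):
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]
        elif zeroing:
            regs[rd + i] = 0   # masked-out element is overwritten with zero
        # non-zeroing: masked-out destination element is left untouched

regs = [0] * 32
regs[8:12]  = [10, 20, 30, 40]    # rs1 vector at x8..x11
regs[16:20] = [1, 2, 3, 4]        # rs2 vector at x16..x19
regs[24:28] = [99, 99, 99, 99]    # rd vector at x24..x27, pre-existing

predicated_add(regs, 24, 8, 16, 4, predval=0b0101, zeroing=True)
print(regs[24:28])   # [11, 0, 33, 0]
```

With `zeroing=False` the same call would leave the pre-existing values 99 in the masked-out elements instead of zeroing them.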

## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, a clear predicate bit indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).

When zeroing is set on the destination and not the source, then, just
as with single-predicated operations, a zero is stored into the destination
element (or target memory address for a STORE) when the destination
predicate bit is clear.

Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: only where both the source
predicate bit *and* the destination predicate bit are set does source data
reach the destination; wherever either bit is zero,
a zero element will ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.
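The per-element effect of zeroing on both source and destination can be tabulated directly. `dest_element` is an invented helper, used here purely to illustrate the combined gating, not spec code:

```python
# Per-element effect of twin-predication with zeroing on both the
# source and the destination: data survives only where both predicate
# bits are set.

def dest_element(data, ps_bit, pd_bit):
    src = data if ps_bit else 0    # source zeroing: clear bit passes a zero
    return src if pd_bit else 0    # dest zeroing: clear bit stores a zero

table = [(ps, pd, dest_element(7, ps, pd))
         for ps in (0, 1) for pd in (0, 1)]
print(table)   # only the (1, 1) case lets the value 7 through
```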

Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV, not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.
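The pseudo-code above can be turned into a small executable model. This is an illustrative sketch only: the register file is a plain Python list, variable element widths are omitted, and all names (`twin_pred_mv`, the keyword flags) are ours, not normative.

```python
# Executable sketch of the twin-predicated MV loop, with variable
# element widths omitted.

def twin_pred_mv(ireg, rd, rs, vl, ps, pd, zerosrc=False, zerodst=False,
                 src_isvec=True, dst_isvec=True):
    i = j = 0
    while i < vl and j < vl:
        # skip-ahead over clear predicate bits: only legal when not zeroing
        if src_isvec and not zerosrc:
            while i < vl and not (ps & (1 << i)):
                i += 1
        if dst_isvec and not zerodst:
            while j < vl and not (pd & (1 << j)):
                j += 1
        if i >= vl or j >= vl:
            break
        if pd & (1 << j):
            # source zeroing: a clear source predicate bit passes a zero
            ireg[rd + j] = ireg[rs + i] if (ps & (1 << i)) else 0
        elif zerodst:
            ireg[rd + j] = 0
        if not dst_isvec and ((pd & (1 << j)) or zerodst):
            break   # scalar destination written: hardware loop ends
        if src_isvec:
            i += 1
        if dst_isvec:
            j += 1

ireg = [0] * 16
ireg[4:8] = [5, 6, 7, 8]            # source vector at x4..x7
twin_pred_mv(ireg, rd=8, rs=4, vl=4, ps=0b1010, pd=0b1111)
print(ireg[8:12])   # non-zeroing source skip compacts elements: [6, 8, 0, 0]
```

With `zerosrc=True` and `zerodst=True` the skip-ahead is disabled and the same call yields `[0, 6, 0, 8]`: zeros are passed through wherever the source predicate bit is clear.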

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.
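A minimal sketch of why the STATE CSR must hold the progress index: a trap in the middle of the element loop must be resumable at the faulting element, with earlier elements already (visibly, sequentially) completed. `ElementTrap`, `vec_op` and the dict standing in for the STATE CSR are all invented for illustration.

```python
# A trap mid-loop saves the current element index; re-issuing the same
# instruction after the trap handler resumes from that index.

class ElementTrap(Exception):
    pass

def vec_op(regs, rd, rs1, vl, state, op):
    i = state.get("vstart", 0)          # resume point saved by a prior trap
    while i < vl:
        try:
            regs[rd + i] = op(regs[rs1 + i])
        except ElementTrap:
            state["vstart"] = i         # save progress for the trap handler
            raise
        i += 1
    state["vstart"] = 0                 # loop complete: reset progress

faulted = []
def double_once_faulting(x):
    if x == 30 and not faulted:         # fault once, on element 2
        faulted.append(True)
        raise ElementTrap()
    return x * 2

regs = [0] * 16
regs[0:4] = [10, 20, 30, 40]
state = {}
try:
    vec_op(regs, 8, 0, 4, state, double_once_faulting)
except ElementTrap:
    pass                                # the trap handler would run here
vec_op(regs, 8, 0, 4, state, double_once_faulting)  # resumes at element 2
print(regs[8:12], state["vstart"])      # [20, 40, 60, 80] 0
```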

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15    | 14:12 | 11:10 | 9:8   | 7    | 6:0     |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx  xxxxxxxxxxxxxxxx   | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |

VL/MAXVL/SubVL Block:

| 31-30 | 29:28 | 27:22  | 21:17              | 16  |
| ----- | ----- | ------ | ------------------ | --- |
| 0     | SubVL | VLdest | VLEN               | vlt |
| 1     | SubVL | VLdest | VLEN (spans 21:16) |     |

Note: this format is very similar to that used in [[sv_prefix_proposal]].

If vlt is 0, VLEN is a 5 bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1 and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VLIW
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2, and so on.
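The VL-setting semantics just described can be sketched as follows. CSR state and the register file are plain Python objects here, and `set_vl_from_block` is an invented name; the point is the offset-by-one immediate, the truncation to MIN(VL, MAXVL), and the suppressed regfile store when VLdest is zero.

```python
# Sketch of the VL Block's VL-setting rules.

def set_vl_from_block(csrs, regs, vlt, vlen_field, vldest):
    if vlt:
        requested = regs[vlen_field]   # vlt=1: VL taken from a scalar register
    else:
        requested = vlen_field + 1     # vlt=0: 5-bit immediate, offset by one
    vl = min(requested, csrs["MAXVL"]) # truncate to MIN(VL, MAXVL)
    csrs["VL"] = vl
    if vldest != 0:                    # VLdest=0: no store in the regfile
        regs[vldest] = vl
    return vl

csrs = {"MAXVL": 8, "VL": 0}
regs = [0] * 32
set_vl_from_block(csrs, regs, vlt=0, vlen_field=0b11111, vldest=5)
print(csrs["VL"], regs[5])   # immediate requests 32, truncated to MAXVL: 8 8
```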

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.

CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the predicate block format is the full 16 bit format
  (1) or the compact, less expressive format (0). In the 8 bit format,
  pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
  it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
  (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
  of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
  immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
  otherwise 0 to 6) follow the (optional) RegCam entries.
* Bits 14 to 12 (IL) define the actual length of the instruction: the total
  number of bits is 80 + 16 times IL. Standard RV32, RVC and also
  SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
  (optional) VL / RegCam / PredCam entries.
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
  RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
  *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
  the values set by the last instruction (whether a CSRRW or the VL
  Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
  VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
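The prefix field extraction described in the notes can be sketched in a few lines. This is illustrative only: `decode_vliw_prefix` is an invented name, and the doubling of pplen/rplen in the compact 8-bit format follows the notes above.

```python
# Bit-field extraction for the 16-bit VLIW prefix
# (vlset | 16xil | pplen | rplen | mode | 1111111).

def decode_vliw_prefix(halfword):
    assert halfword & 0x7F == 0x7F, "not a VLIW prefix"
    mode  = (halfword >> 7)  & 0x1   # 1: 16-bit blocks, 0: compact 8-bit
    rplen = (halfword >> 8)  & 0x3   # RegCam entry count field
    pplen = (halfword >> 10) & 0x3   # PredCam entry count field
    il    = (halfword >> 12) & 0x7   # 16xil length field
    vlset = (halfword >> 15) & 0x1   # VL Block present?
    if mode == 0:                    # compact format: counts are doubled
        rplen *= 2
        pplen *= 2
    return dict(vlset=vlset, il=il, pplen=pplen, rplen=rplen, mode=mode,
                total_bits=80 + 16 * il)

hw = (1 << 15) | (2 << 12) | (1 << 10) | (3 << 8) | (1 << 7) | 0x7F
print(decode_vliw_prefix(hw)["total_bits"])   # 112
```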

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four 32-bit
opcodes: the CSRRW itself, plus the LI / LUI sequence needed to set up
the value in the RS register of the CSRRW. To get 64-bit
data into the register in order to put it into the CSR(s), LOAD operations
from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that is potentially six to eight 32-bit instructions, just to
establish the Vector State!

Not only that: even a CSRRW on VL and MAXVL requires 64 bits (even more
bits if VL needs to be set to greater than 32). Bear in mind that in SV,
both MAXVL and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAXVL/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.
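To put rough numbers on the comparison, using the instruction counts quoted in the text (purely illustrative arithmetic, not measured data):

```python
# Six 32-bit instructions to establish Vector State via CSRRW sequences
# (the lower bound of the "six to eight" quoted above), versus a 16-bit
# VLIW prefix plus a 16-bit VL/MAXVL/SubVL block.

csrrw_setup_bits = 6 * 32          # CSRRW route: 192 bits
vliw_overhead_bits = 16 + 16       # VLIW route: prefix + VL block = 32 bits
print(csrrw_setup_bits, vliw_overhead_bits)   # 192 32
```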

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
  using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
  limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
  are sequential?
* Could a new sub-instruction opcode format be used, one that does not
  conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
  There would be no need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
  because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.

The VStart indices in the STATE CSR may be similarly regarded as another
sub-execution context, giving in effect two sets of nested sub-levels
of the RISC-V Program Counter.

In addition, as the xepcvliw CSRs are relative to the beginning of the VLIW
block, branches MUST be restricted to within the block, i.e. addressing
is now restricted to the start (and very short length) of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump and a normal function call may only be taken by letting
the VLIW group end, returning to "normal" standard RV mode, using RVC,
32 bit or P48/64-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
actually specifically an "option", it is worth noting.

## RV32G

In standard RV32 it does not normally make much sense to have
RV32G. The critical instructions that are missing from standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however, it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bit given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the
recommendation is:

* For the unused elements in an integer register, the used element
  closest to the MSB is sign-extended on write and the unused elements
  are ignored on read.
* The unused elements in a floating-point register are treated as-if
  they are set to all ones on write and are ignored on read, matching the
  existing standard for storing smaller FP values in larger registers.
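The two recommendations above can be modelled on 16-bit elements in a 64-bit register with one slot unused. Function names are invented for illustration; the FP behaviour is analogous to NaN-boxing of narrow FP values in wider registers.

```python
# Integer file: the used element closest to the MSB is sign-extended
# on write. FP file: unused element slots are written as all-ones.

def pack_int_elements(elems, elwidth=16, regwidth=64):
    val = 0
    for k, e in enumerate(elems):
        val |= (e & ((1 << elwidth) - 1)) << (k * elwidth)
    used = len(elems) * elwidth
    top = elems[-1] & ((1 << elwidth) - 1)
    if top >> (elwidth - 1):                 # sign bit of topmost element
        val |= ((1 << (regwidth - used)) - 1) << used
    return val

def pack_fp_elements(elems, elwidth=16, regwidth=64):
    used = len(elems) * elwidth
    val = ((1 << (regwidth - used)) - 1) << used   # unused slots: all ones
    for k, e in enumerate(elems):
        val |= (e & ((1 << elwidth) - 1)) << (k * elwidth)
    return val

print(hex(pack_int_elements([1, 2, 0x8000])))   # 0xffff800000020001
print(hex(pack_fp_elements([1, 2, 3])))         # 0xffff000300020001
```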

---

Info register:

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

Answer: they're not vectorised, so not a problem.

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
  XLEN if elwidth == default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
  *32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of the info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove the RegCam and PredCam CSRs, just use SVprefix
and the VLIW format.