[libreriscv.git] / simple_v_extension / specification.mdwn
1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
5 * Last edited: 21 jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, without adding any new opcodes, thus allowing
28 the implementor to focus on adding hardware only where it is needed.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size (by extending the effectiveness of RV opcodes, RVC in particular) and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
50 where the implementor wishes to focus on certain aspects of SV without
51 committing unnecessary time and resources to silicon, whilst also conforming
52 strictly with the API. A good area to punt to software would be the
53 polymorphic element width capability for example.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
65 * A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths overridden on any src or dest register.
83 * A "Vector Length" CSR is set, indicating the span of any future
84 "parallel" operations.
85 * If any operation (a **scalar** standard RV opcode) uses a register
86 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
87 is activated, of length VL, that effectively issues **multiple**
88 identical instructions using contiguous sequentially-incrementing
89 register numbers, based on the "tags".
90 * **Whether they be executed sequentially or in parallel or a
91 mixture of both or punted to software-emulation in a trap handler
92 is entirely up to the implementor**.
93
94 In this way an entire scalar algorithm may be vectorised with
95 the minimum of modification to the hardware and to compiler toolchains.
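
The "macro-unrolling loop" described above can be sketched as follows. This is an illustrative Python model only, not normative: the register file, the ADD operation and the single `vectorised` tag set are simplified stand-ins for the real tag lookup.

```python
# Illustrative model of the SV "macro-unrolling" hardware loop.
# regfile, ADD and the 'vectorised' tag set are simplified assumptions.

VL = 4                     # Vector Length CSR
vectorised = {3}           # register x3 has been "tagged" as a vector

def execute_add(rd, rs1, rs2, regfile):
    """Issue one scalar ADD, or VL element ADDs if any register is tagged."""
    if rd in vectorised or rs1 in vectorised or rs2 in vectorised:
        # the PC does not advance until all VL elements have been issued
        for i in range(VL):
            regfile[rd + i] = regfile[rs1 + i] + regfile[rs2 + i]
    else:
        regfile[rd] = regfile[rs1] + regfile[rs2]

regfile = list(range(32))
execute_add(3, 8, 16, regfile)   # x3..x6 = x8..x11 + x16..x19
```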
96
97 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
98 on hidden context that augments *scalar* RISCV instructions.
99
100 # CSRs <a name="csrs"></a>
101
102 * An optional "reshaping" CSR key-value table which remaps from a 1D
103 linear shape to 2D or 3D, including full transposition.
104
105 There are also five additional User-mode CSRs:
106
107 * uMVL (the Maximum Vector Length)
108 * uVL (which has different characteristics from standard CSRs)
109 * uSUBVL (effectively a kind of SIMD)
110 * uEPCVLIW (a copy of the sub-execution Program Counter, that is relative
111 to the start of the current VLIW Group, set on a trap).
112 * uSTATE (useful for saving and restoring during context switch,
113 and for providing fast transitions)
114
115 There are also five additional CSRs for Supervisor-Mode:
116
117 * SMVL
118 * SVL
119 * SSUBVL
120 * SEPCVLIW
121 * SSTATE
122
123 And likewise for M-Mode:
124
125 * MMVL
126 * MVL
127 * MSUBVL
128 * MEPCVLIW
129 * MSTATE
130
131 Both Supervisor and M-Mode have their own CSR registers, independent
132 of the other privilege levels, in order to make it easier to use
133 Vectorisation in each level without affecting other privilege levels.
134
135 The access pattern for these groups of CSRs in each mode follows the
136 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
137
138 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
139 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
140 identical
141 to changing the S-Mode CSRs. Accessing and changing the U-Mode
142 CSRs is permitted.
143 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
144 is prohibited.
145
146 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
147 M-Mode MVL, the M-Mode STATE and so on that influences the processor
148 behaviour. Likewise for S-Mode, and likewise for U-Mode.
149
150 This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
151 up, for context-switching to take place, and, on return back to the higher
152 privileged mode, the CSRs of that mode will be exactly as they were.
153 Thus, it becomes possible for example to set up CSRs suited best to aiding
154 and assisting low-latency fast context-switching *once and only once*
155 (for example at boot time), without the need for re-initialising the
156 CSRs needed to do so.
157
158 Another interesting side effect of separate S Mode CSRs is that Vectorised
159 saving of the entire register file to the stack is a single instruction
160 (accidental provision of LOAD-MULTI semantics). If the SVPrefix P64-LD-type format is used, LOAD-MULTI may even be done with a single standalone 64 bit opcode (P64 may set up both VL and MVL from an immediate field). It can even be predicated,
161 which opens up some very interesting possibilities.
162
163 The (x)EPCVLIW CSRs must be treated exactly like their corresponding (x)epc
164 equivalents. See VLIW section for details.
165
166 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
167
168 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
169 is variable length and may be dynamically set. MVL is
170 however limited to the regfile bitwidth XLEN (1-32 for RV32,
171 1-64 for RV64 and so on).
172
173 The reason for setting this limit is so that predication registers, when
174 marked as such, may fit into a single register as opposed to fanning out
175 over several registers. This keeps the hardware implementation a little simpler.
176
177 The other important factor to note is that the actual MVL is internally
178 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
179 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
180 return an exception. This is expressed more clearly in the "pseudocode"
181 section, where there are subtle differences between CSRRW and CSRRWI.
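
A minimal sketch of this offset-by-one storage (hypothetical helper names; RV64 assumed):

```python
# MVL is stored offset by one: a stored value of 0 means MVL=1, so
# 6 bits (RV64) cover the full range 1..64.  MVL=0 itself is illegal.
XLEN = 64                      # RV64 assumed for this sketch

def encode_mvl(mvl):           # hypothetical helper, not a real CSR op
    assert 1 <= mvl <= XLEN    # attempting to set MVL=0 raises an exception
    return mvl - 1             # fits in 6 bits

def decode_mvl(stored):
    return stored + 1
```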
182
183 ## Vector Length (VL) <a name="vl" />
184
185 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
186 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
187
188 VL = rd = MIN(vlen, MVL)
189
190 where 1 <= MVL <= XLEN
191
192 However just like MVL it is important to note that the range for VL has
193 subtle design implications, covered in the "CSR pseudocode" section
194
195 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
196 to switch the entire bank of registers using a single instruction (see
197 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
198 is down to the fact that predication bits fit into a single register of
199 length XLEN bits.
200
201 The second and most important change is that, within the limits set by
202 MVL, the value passed in **must** be set in VL (and in the
203 destination register).
204
205 This has implication for the microarchitecture, as VL is required to be
206 set (limits from MVL notwithstanding) to the actual value
207 requested. RVV has the option to set VL to an arbitrary value that suits
208 the conditions and the micro-architecture: SV does *not* permit this.
209
210 The reason is so that if SV is to be used for a context-switch or as a
211 substitute for LOAD/STORE-Multiple, the operation can be done with only
212 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
213 single LD/ST operation). If VL does *not* get set to the register file
214 length when VSETVL is called, then a software-loop would be needed.
215 To avoid this need, VL *must* be set to exactly what is requested
216 (limits notwithstanding).
217
218 Therefore, in turn, unlike RVV, implementors *must* provide
219 pseudo-parallelism (using sequential loops in hardware) if actual
220 hardware-parallelism in the ALUs is not deployed. A hybrid is also
221 permitted (as used in Broadcom's VideoCore-IV) however this must be
222 *entirely* transparent to the ISA.
223
224 The third change is that VSETVL is implemented as a CSR, where the
225 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
226 the *new* value in the destination register, **not** the old value.
227 Where context-load/save is to be implemented in the usual fashion
228 by using a single CSRRW instruction to obtain the old value, the
229 *secondary* CSR must be used (SVSTATE). This CSR by contrast behaves
230 exactly as standard CSRs, and contains more than just VL.
231
232 One interesting side-effect of using CSRRWI to set VL is that this
233 may be done with a single instruction, useful particularly for a
234 context-load/save. There are however limitations: CSRRWI's immediate
235 is limited to 0-31 (representing VL=1-32).
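
The immediate-to-VL mapping can be sketched as follows (hypothetical helper name; clamping to MVL follows set_vl_csr, described later):

```python
# CSRRWI's 5-bit immediate (0..31) maps to VL = imm + 1 (i.e. 1..32),
# still clamped to MVL as with any other way of setting VL.
def csrrwi_set_vl(imm, mvl):   # hypothetical helper, not a real CSR op
    assert 0 <= imm <= 31      # only 5 bits of immediate available
    return min(imm + 1, mvl)
```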
236
237 Note that when VL is set to 1, parallel operations cease: the
238 hardware loop is reduced to a single element: scalar operations.
239 This is in effect the default, normal operating mode. However it is
240 important to appreciate that this does **not** result in the Register
241 table or SUBVL being disabled. Only when the Register table is empty
242 (P48/64 prefix fields notwithstanding) would SV have no effect.
246
247 ## SUBVL - Sub Vector Length
248
249 This is a "group by quantity" that effectively causes each iteration of the hardware loop to operate on SUBVL elements of width elwidth at a time. SUBVL thus acts like a SIMD multiplier: instead of just one operation being issued, SUBVL operations are issued.
250
251 Another way to view SUBVL is that each element in the VL-length vector
252 is now SUBVL times elwidth bits in length, and comprises SUBVL discrete
253 sub-operations: in effect, an inner SUBVL for-loop within a VL for-loop,
254 with the sub-element index incremented in the innermost loop. This is
255 best illustrated in the (simplified) pseudocode example, later.
259
260 The primary use case for SUBVL is 3D FP Vectors. A Vector of 3D coordinates X,Y,Z for example may be loaded, multiplied and then stored, per VL element iteration, rather than having to set VL three times larger.
261
262 Legal values are 1, 2, 3 and 4; the STATE CSR holds the value minus one, i.e. the 2 bit values 0b00 thru 0b11.
263
264 Setting this CSR to 0 must raise an exception. Setting it to a value
265 greater than 4 likewise.
266
267 The main effect of SUBVL is that predication bits are applied per **group**,
268 rather than by individual element.
269
270 This saves a not insignificant number of instructions when handling 3D
271 vectors, as otherwise a much longer predicate mask would have to be set
272 up with regularly-repeated bit patterns.
273
274 See SUBVL Pseudocode illustration for details.
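
The per-group behaviour can be sketched as an inner SUBVL loop inside the VL loop, with one predicate bit per group (illustrative Python; the flat element numbering is an assumption of this sketch):

```python
# Sketch of the VL x SUBVL nested loop with per-GROUP predication.
VL, SUBVL = 4, 3          # e.g. four 3D vectors (X, Y, Z)
predicate = 0b1011        # ONE predicate bit per group of SUBVL elements

issued = []
for i in range(VL):
    if not (predicate >> i) & 1:
        continue          # the whole group of SUBVL elements is skipped
    for s in range(SUBVL):
        issued.append(i * SUBVL + s)   # flat element index (assumption)
```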
275
276 ## STATE
277
278 This is a standard CSR that contains sufficient information for a
279 full context save/restore. It contains (and permits setting of):
280
281 * MVL
282 * VL
283 * the destination element offset of the current parallel instruction
284 being executed
285 * and, for twin-predication, the source element offset as well.
286 * SUBVL
287 * the subvector destination element offset of the current parallel instruction
288 being executed
289 * and, for twin-predication, the subvector source element offset as well.
290
291 Interestingly, STATE may hypothetically also be used to make the
292 immediately-following instruction skip a certain number of elements,
293 by playing with destoffs and srcoffs (and the subvector offsets as well).
295
296 Setting destoffs and srcoffs is realistically intended for saving state
297 so that exceptions (page faults in particular) may be serviced and the
298 hardware-loop that was being executed at the time of the trap, from
299 user-mode (or Supervisor-mode), may be returned to and continued from exactly
300 where it left off. The reason why this works is that the User-Mode
301 STATE CSR is neither used nor changed by M-Mode or S-Mode
302 (and is entirely why M-Mode and S-Mode have their own STATE CSRs).
303
304 The format of the STATE CSR is as follows:
305
306 | (30..29) | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5..0) |
307 | ------- | -------- | -------- | -------- | -------- | ------- | ------- |
308 | dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
309
310 When setting this CSR, the following characteristics will be enforced:
311
312 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
313 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
314 * **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values, so no truncation is needed
315 * **srcoffs** will be truncated to be within the range 0 to VL-1
316 * **destoffs** will be truncated to be within the range 0 to VL-1
317 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
318 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
319
320 NOTE: if the following instruction is not a twin predicated instruction, and destoffs or dsvoffs has been set to non-zero, subsequent execution behaviour is undefined. **USE WITH CARE**.
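
A sketch of packing and unpacking these fields per the bit layout above (hypothetical helper names; MVL, VL and SUBVL are stored minus one, the offsets as-is):

```python
# Pack/unpack of the STATE CSR fields, following the table above.
# pack_state/unpack_state are hypothetical helpers for illustration.
def pack_state(mvl, vl, srcoffs, destoffs, subvl, ssvoffs, dsvoffs):
    return ((mvl - 1)             # bits 5..0:   maxvl, stored minus one
            | (vl - 1) << 6       # bits 11..6:  vl, stored minus one
            | srcoffs << 12       # bits 17..12
            | destoffs << 18      # bits 23..18
            | (subvl - 1) << 24   # bits 26..24: subvl, stored minus one
            | ssvoffs << 27       # bits 28..27
            | dsvoffs << 29)      # bits 30..29

def unpack_state(v):
    return dict(mvl=(v & 0x3f) + 1,       vl=((v >> 6) & 0x3f) + 1,
                srcoffs=(v >> 12) & 0x3f, destoffs=(v >> 18) & 0x3f,
                subvl=((v >> 24) & 0x7) + 1,
                ssvoffs=(v >> 27) & 0x3,  dsvoffs=(v >> 29) & 0x3)
```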
321
322 ### Rules for when to increment STATE offsets
323
324 The offsets inside STATE are like the indices in a loop, except in hardware. They are also partially (conceptually) similar to a "sub-execution Program Counter". As such, and to allow proper context switching and to define correct exception behaviour, the following rules must be observed:
325
326 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
327 * Each instruction that contains a "tagged" register shall start execution at the *current* value of srcoffs (and destoffs in the case of twin predication)
328 * Unpredicated bits (in nonzeroing mode) shall cause the element operation to skip, incrementing the srcoffs (or destoffs)
329 * On execution of an element operation, Exceptions shall **NOT** cause srcoffs or destoffs to increment.
330 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs = VL-1 after the last element is executed), both srcoffs and destoffs shall be reset to zero.
331
332 The latter rule is why srcoffs and destoffs may be stored as values from 0 to XLEN-1 in the STATE CSR: as loop indices they refer to elements, and never need to be set to VL, so their maximum operating values are limited to 0 to VL-1.
333
334 The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
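
The rules above can be sketched as a resumable element loop (illustrative Python; `element_op` is an assumed stand-in for one element's work):

```python
# Sketch: resuming the hardware loop from a saved offset after a trap.
VL = 8
destoffs = 3          # saved into STATE when a trap hit element 3

executed = []
for i in range(destoffs, VL):   # restart at the *current* offset
    executed.append(i)          # element_op(i) would be issued here
destoffs = 0                    # reset on completion of the full loop
```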
335
336 ## MVL and VL Pseudocode
337
338 The pseudo-code for get and set of VL and MVL use the following internal
339 functions as follows:
340
341 set_mvl_csr(value, rd):
342 regs[rd] = MVL # yes returning the OLD value, as with standard CSRs
343 MVL = MIN(value, XLEN)
344
345 get_mvl_csr(rd):
346 regs[rd] = MVL
347
348 set_vl_csr(value, rd):
349 VL = MIN(value, MVL)
350 regs[rd] = VL # yes returning the new value NOT the old CSR
351 return VL
352
353 get_vl_csr(rd):
354 regs[rd] = VL
355 return VL
356
357 Note that where setting MVL behaves as a normal CSR (returns the old
358 value), unlike standard CSR behaviour, setting VL will return the **new**
359 value of VL **not** the old one.
360
361 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
362 maximise the effectiveness, an immediate of 0 is used to set VL=1,
363 an immediate of 1 is used to set VL=2 and so on:
364
365 CSRRWI_Set_MVL(value):
366 set_mvl_csr(value+1, x0)
367
368 CSRRWI_Set_VL(value):
369 set_vl_csr(value+1, x0)
370
371 However for CSRRW the following pseudocode is used for MVL and VL,
372 where setting the value to zero will cause an exception to be raised.
373 The reason is that if VL or MVL are set to zero, the STATE CSR is
374 not capable of returning that value.
375
376 CSRRW_Set_MVL(rs1, rd):
377 value = regs[rs1]
378 if value == 0 or value > XLEN:
379 raise Exception
380 set_mvl_csr(value, rd)
381
382 CSRRW_Set_VL(rs1, rd):
383 value = regs[rs1]
384 if value == 0 or value > XLEN:
385 raise Exception
386 set_vl_csr(value, rd)
387
388 In this way, when CSRRW is utilised with a loop variable, the value
389 that goes into VL (and into the destination register) may be used
390 in an instruction-minimal fashion:
391
392 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
393 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
394 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
395 j zerotest # in case loop counter a0 already 0
396 loop:
397 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
398 ld a3, a1 # load 4 registers a3-6 from x
399 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
400 ld a7, a2 # load 4 registers a7-10 from y
401 add a1, a1, t1 # increment pointer to x by vl*8
402 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
403 sub a0, a0, t0 # n -= vl (t0)
404 st a7, a2 # store 4 registers a7-10 to y
405 add a2, a2, t1 # increment pointer to y by vl*8
406 zerotest:
407 bnez a0, loop # repeat if n != 0
408
409 With the STATE CSR, just like with CSRRWI, in order to maximise the
410 utilisation of the limited bitspace, "000000" in binary represents
411 VL==1, "000001" represents VL==2 and so on (likewise for MVL):
412
413 CSRRW_Set_SV_STATE(rs1, rd):
414 value = regs[rs1]
415 get_state_csr(rd)
416 MVL = set_mvl_csr(value[5:0]+1)
417 VL = set_vl_csr(value[11:6]+1)
418 srcoffs = value[17:12]
419 destoffs = value[23:18]
420
421 get_state_csr(rd):
422 regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
423 (destoffs)<<18
424 return regs[rd]
425
426 In both cases, whilst CSR read of VL and MVL return the exact values
427 of VL and MVL respectively, reading and writing the STATE CSR returns
428 those values **minus one**. This is absolutely critical to implement
429 if the STATE CSR is to be used for fast context-switching.
430
431 ## VL, MVL and SUBVL instruction aliases
432
433 | alias | CSR |
434 | - | - |
435 | SETVL rd, rs | CSRRW VL, rd, rs |
436 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
437 | GETVL rd | CSRRW VL, rd, x0 |
438 | SETMVL rd, rs | CSRRW MVL, rd, rs |
439 | SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
440 | GETMVL rd | CSRRW MVL, rd, x0 |
441
442 Note: CSRRC and other bit-setting forms may still be used; they are however not particularly useful (very obscure).
443
444 ## Register key-value (CAM) table <a name="regcsrtable" />
445
446 *NOTE: in prior versions of SV, this table used to be writable and
447 accessible via CSRs. It is now stored in the VLIW instruction format,
448 and entries may be overridden temporarily by the SVPrefix P48/64 format*
449
450 The purpose of the Register table is three-fold:
451
452 * To mark integer and floating-point registers as requiring "redirection"
453 if it is ever used as a source or destination in any given operation.
454 This involves a level of indirection through a 5-to-7-bit lookup table,
455 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
456 access up to **128** registers.
457 * To indicate whether, after redirection through the lookup table, the
458 register is a vector (or remains a scalar).
459 * To over-ride the implicit or explicit bitwidth that the operation would
460 normally give the register.
461
462 Note: clearly, if an RVC operation uses a 3-bit spec'd register (x8-x15) and the Register table contains entries that only refer to registers x1-x7 or x16-x31, such operations will *never* activate the VL hardware loop!
463
464 If however the (16 bit) Register table does contain such an entry (x8-x15, or x2 in the case of LWSP), that src or dest reg may be redirected anywhere to the *full* 128 register range. Thus, RVC becomes far more powerful and has many more opportunities to reduce code size than in standard RV32/RV64 executables.
465
466 16 bit format:
467
468 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
469 | ------ | | - | - | - | ------ | ------- |
470 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
471 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
472 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
473 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
474
475 8 bit format:
476
477 | RegCAM | | 7 | (6..5) | (4..0) |
478 | ------ | | - | ------ | ------- |
479 | 0 | | i/f | vew0 | regnum |
480
481 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
482 to integer registers; 0 indicates that it is relevant to floating-point
483 registers.
484
485 The 8 bit format is used for a much more compact expression. "isvec"
486 is implicit and, similar to [[sv_prefix_proposal]], the target vector
487 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
488 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
489 optionally set "scalar" mode.
490
491 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
492 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
493 get the actual (7 bit) register number to use, there is not enough space
494 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
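
A sketch of decoding an 8-bit entry per the table above (`decode_regcam8` is a hypothetical helper; field positions follow the 8 bit format):

```python
# Decode of the 8-bit RegCAM format: the target vector register is
# implicitly regnum<<2.  decode_regcam8 is a hypothetical helper.
def decode_regcam8(byte):
    regnum = byte & 0x1f            # bits 4..0
    vew    = (byte >> 5) & 0x3      # bits 6..5: element width override
    is_int = bool((byte >> 7) & 1)  # bit 7: i/f (1 = integer, 0 = FP)
    return dict(regkey=regnum, regidx=regnum << 2,  # implicit redirection
                vew=vew, is_int=is_int)
```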
495
496 vew has the following meanings, indicating that the instruction's
497 operand size is "over-ridden" in a polymorphic fashion:
498
499 | vew | bitwidth |
500 | --- | ------------------- |
501 | 00 | default (XLEN/FLEN) |
502 | 01 | 8 bit |
503 | 10 | 16 bit |
504 | 11 | 32 bit |
505
506 As the above table is a CAM (key-value store) it may be appropriate
507 (faster, implementation-wise) to expand it as follows:
508
509 struct vectorised fp_vec[32], int_vec[32];
510
511 for (i = 0; i < 16; i++) // 16 CSRs?
512 tb = int_vec if CSRvec[i].type == 0 else fp_vec
513 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
514 tb[idx].elwidth = CSRvec[i].elwidth
515 tb[idx].regidx = CSRvec[i].regidx // indirection
516 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
517 tb[idx].packed = CSRvec[i].packed // SIMD or not
518
519
520
521 ## Predication Table <a name="predication_csr_table"></a>
522
523 *NOTE: in prior versions of SV, this table used to be writable and
524 accessible via CSRs. It is now stored in the VLIW instruction format,
525 and entries may be overridden by the SVPrefix format*
526
527 The Predication Table is a key-value store indicating whether, if a
528 given destination register (integer or floating-point) is referred to
529 in an instruction, it is to be predicated. Like the Register table, it
530 is an indirect lookup that allows the RV opcodes to not need modification.
531
532 It is particularly important to note
533 that the *actual* register used can be *different* from the one that is
534 in the instruction, due to the redirection through the lookup table.
535
536 * regidx is the register that, in combination with the i/f flag,
537 activates predication: if that integer or floating-point register
538 is referred to in a (standard RV) instruction, the lookup table is
539 referenced to find the predication mask to use for this operation.
540 * predidx is the *actual* (full, 7 bit) register to be used for the
541 predication mask.
542 * inv indicates that the predication mask bits are to be inverted
543 prior to use *without* actually modifying the contents of the
544 register from which those bits originated.
546 * zeroing is either 1 or 0, and if set to 1, the operation must
547 place zeros in any element position where the predication mask is
548 set to zero. If zeroing is set to 0, unpredicated elements *must*
549 be left alone. Some microarchitectures may choose to interpret
550 this as skipping the operation entirely. Others which wish to
551 stick more closely to a SIMD architecture may choose instead to
552 interpret unpredicated elements as an internal "copy element"
553 operation (which would be necessary in SIMD microarchitectures
554 that perform register-renaming)
555
556 16 bit format:
557
558 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
559 | ----- | - | - | - | - | ------- | ------- |
560 | 0 | predkey | zero0 | inv0 | i/f | regidx | rsrvd |
561 | 1 | predkey | zero1 | inv1 | i/f | regidx | rsvd |
562 | ... | predkey | ..... | .... | i/f | ....... | ....... |
563 | 15 | predkey | zero15 | inv15 | i/f | regidx | rsvd |
564
565
566 8 bit format:
567
568 | PrCSR | 7 | 6 | 5 | (4..0) |
569 | ----- | - | - | - | ------- |
570 | 0 | zero0 | inv0 | i/f | regnum |
571
572 The 8 bit format is a compact and less expressive variant of the full
573 16 bit format. Using the 8 bit format is very different: the predicate
574 register to use is implicit, and numbering begins implicitly from x9. The
575 regnum is still used to "activate" predication, in the same fashion as
576 described above.
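
A sketch of decoding the 8 bit predication format (`decode_pred8` is a hypothetical helper; applying the implicit "numbering begins from x9" rule per-entry is an assumption of this illustration):

```python
# Decode of the 8-bit Predication format per the table above.
# decode_pred8 is a hypothetical helper; predidx = 9 + entry is an
# assumed reading of "numbering begins implicitly from x9".
def decode_pred8(byte, entry=0):
    return dict(regnum=byte & 0x1f,            # bits 4..0: activates predication
                is_int=bool((byte >> 5) & 1),  # bit 5: i/f
                inv=bool((byte >> 6) & 1),     # bit 6: invert the mask
                zero=bool((byte >> 7) & 1),    # bit 7: zeroing mode
                predidx=9 + entry)             # implicit predicate register
```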
577
578 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
579 it will be faster to turn the table around (maintain topologically
580 equivalent state):
581
582 struct pred {
583 bool zero;
584 bool inv;
585 bool enabled;
586 int predidx; // redirection: actual int register to use
587 }
588
589 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
590 struct pred int_pred_reg[32]; // 64 in future (bank=1)
591
592 for (i = 0; i < 16; i++)
593 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
594 idx = CSRpred[i].regidx
595 tb[idx].zero = CSRpred[i].zero
596 tb[idx].inv = CSRpred[i].inv
597 tb[idx].predidx = CSRpred[i].predidx
598 tb[idx].enabled = true
599
600 So when an operation is to be predicated, it is the internal state that
601 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
602 pseudo-code for operations is given, where p is the explicit (direct)
603 reference to the predication register to be used:
604
605 for (int i=0; i<vl; ++i)
606 if ([!]preg[p][i])
607 (d ? vreg[rd][i] : sreg[rd]) =
608 iop(s1 ? vreg[rs1][i] : sreg[rs1],
609 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
610
611 This instead becomes an *indirect* reference using the *internal* state
612 table generated from the Predication CSR key-value store, which is used
613 as follows.
614
615 if type(iop) == INT:
616 preg = int_pred_reg[rd]
617 else:
618 preg = fp_pred_reg[rd]
619
620 for (int i=0; i<vl; ++i)
621 predicate, zeroing = get_pred_val(type(iop) == INT, rd)
622 if (predicate & (1<<i))
623 (d ? regfile[rd+i] : regfile[rd]) =
624 iop(s1 ? regfile[rs1+i] : regfile[rs1],
625 s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
626 else if (zeroing)
627 (d ? regfile[rd+i] : regfile[rd]) = 0
628
629 Note:
630
631 * d, s1 and s2 are booleans indicating whether destination,
632 source1 and source2 are vector or scalar
633 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
634 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
635 register-level redirection (from the Register table) if they are
636 vectors.
637
638 If written as a function, obtaining the predication mask (and whether
639 zeroing takes place) may be done as follows:
640
641 def get_pred_val(bool is_fp_op, int reg):
642 tb = fp_reg if is_fp_op else int_reg
643 if (!tb[reg].enabled):
644 return ~0x0, False // all enabled; no zeroing
645 tb = fp_pred if is_fp_op else int_pred
646 if (!tb[reg].enabled):
647 return ~0x0, False // all enabled; no zeroing
648 predidx = tb[reg].predidx // redirection occurs HERE
649 predicate = intreg[predidx] // actual predicate HERE
650 if (tb[reg].inv):
651 predicate = ~predicate // invert ALL bits
652 return predicate, tb[reg].zero
653
654 Note here, critically, that **only** if the register is marked
655 in its **register** table entry as being "active" does the testing
656 proceed further to check if the **predicate** table entry is
657 also active.
658
659 Note also that this is in direct contrast to branch operations
660 for the storage of comparisions: in these specific circumstances
661 the requirement for there to be an active *register* entry
662 is removed.
663
664 ## REMAP CSR <a name="remap" />
665
666 (Note: both the REMAP and SHAPE sections are best read after the
667 rest of the document has been read)
668
669 There is one 32-bit CSR which may be used to indicate which registers,
670 if used in any operation, must be "reshaped" (re-mapped) from a linear
671 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
672 access to elements within a register.
673
674 The 32-bit REMAP CSR may reshape up to 3 registers:
675
676 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
677 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
678 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
679
680 regidx0-2 refer not to the Register CSR CAM entry but to the underlying
681 *real* register (see regidx, the value) and are consequently 7 bits wide.
682 Since reshaping x0 is clearly pointless, a value of zero (referring to
683 x0) is used to indicate "disabled".
684 shape0-2 each refer to one of three SHAPE CSRs. A value of 0x3 is reserved.
685 Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
686
687 It is anticipated that these specialist CSRs not be very often used.
688 Unlike the CSR Register and Predication tables, the REMAP CSRs use
689 the full 7-bit regidx so that they can be set once and left alone,
690 whilst the CSR Register entries pointing to them are disabled, instead.
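
Building a REMAP CSR value per the layout above can be sketched as follows (`pack_remap` is a hypothetical helper name):

```python
# Pack of the 32-bit REMAP CSR, following the bit layout above.
# pack_remap is a hypothetical helper for illustration.
def pack_remap(regidx0=0, regidx1=0, regidx2=0,
               shape0=0, shape1=0, shape2=0):
    for s in (shape0, shape1, shape2):
        assert s != 3             # shape value 0x3 is reserved
    return (regidx0               # bits 6..0  (bits 7, 15, 23 stay zero)
            | regidx1 << 8        # bits 14..8
            | regidx2 << 16       # bits 22..16
            | shape0 << 24        # bits 25..24
            | shape1 << 26        # bits 27..26
            | shape2 << 28)       # bits 29..28

# remap *real* register 68 (a 7-bit regidx) through SHAPE1; the other
# regidx fields stay zero (x0), meaning "disabled"
v = pack_remap(regidx0=68, shape0=1)
```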
691
692 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
693
694 (Note: both the REMAP and SHAPE sections are best read after the
695 rest of the document has been read)
696
697 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
698 which have the same format. When each SHAPE CSR is set entirely to zeros,
699 remapping is disabled: the register's elements are a linear (1D) vector.
700
701 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
702 | ------- | -- | ------- | -- | ------- | -- | ------- |
703 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
704
705 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
706 is added to the element index during the loop calculation.
707
708 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
709 that the array dimensionality for that dimension is 1. A value of xdimsz=2
710 would indicate that in the first dimension there are 3 elements in the
711 array. The format of the array is therefore as follows:
712
713 array[xdim+1][ydim+1][zdim+1]
714
715 However whilst illustrative of the dimensionality, that does not take the
716 "permute" setting into account. "permute" may be any one of six values
717 (0-5, with values of 6 and 7 being reserved, and not legal). The table
718 below shows how the permutation dimensionality order works:
719
720 | permute | order | array format |
721 | ------- | ----- | ------------------------ |
722 | 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
723 | 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
724 | 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
725 | 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
726 | 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
727 | 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |
728
729 In other words, the "permute" option changes the order in which
730 nested for-loops over the array would be done. The algorithm below
731 shows this more clearly, and may be executed as a python program:
732
    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]   # starting indices
    order = [1, 0, 2]  # experiment with different permutations, here
    offs = 0           # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if idxs[order[i]] != lims[order[i]]:
                break
            print()
            idxs[order[i]] = 0
752
753 Here, it is assumed that this algorithm be run within all pseudo-code
754 throughout this document where a (parallelism) for-loop would normally
755 run from 0 to VL-1 to refer to contiguous register
756 elements; instead, where REMAP indicates to do so, the element index
757 is run through the above algorithm to work out the **actual** element
758 index, instead. Given that there are three possible SHAPE entries, up to
759 three separate registers in any given operation may be simultaneously
760 remapped:
761
    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                      ireg[rs2+remap(irs2)];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }
774
775 By changing remappings, 2D matrices may be transposed "in-place" for one
776 operation, followed by setting a different permutation order without
777 having to move the values in the registers to or from memory. Also,
778 the reason for having REMAP separate from the three SHAPE CSRs is so
779 that in a chain of matrix multiplications and additions, for example,
780 the SHAPE CSRs need only be set up once; only the REMAP CSR need be
781 changed to target different registers.
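
The remapping algorithm above can be wrapped into a small helper to
confirm, for example, that permute option 010 reads a row-major 2x3
matrix in transposed order. This is an illustrative sketch only: the
function name and the list-based representation are not part of the
specification.

```python
def remap_indices(dims, order, offs=0):
    """Generate element indices for one SHAPE/REMAP configuration.

    dims  = [xdim, ydim, zdim] as actual sizes (i.e. field value + 1)
    order = permutation of [0, 1, 2] (the "permute" field)
    offs  = offset added to every generated index
    """
    xdim, ydim, zdim = dims
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != dims[order[i]]:
                break
            idxs[order[i]] = 0  # this dimension wrapped: carry to the next
    return out

# a 2x3 matrix (xdim=3, ydim=2), stored row-major in consecutive elements
print(remap_indices([3, 2, 1], [0, 1, 2]))  # → [0, 1, 2, 3, 4, 5]
# permute 010 (order 1,0,2) walks the same elements column-first: a transpose
print(remap_indices([3, 2, 1], [1, 0, 2]))  # → [0, 3, 1, 4, 2, 5]
```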
782
783 Note that:
784
785 * Over-running the register file clearly has to be detected and
786 an illegal instruction exception thrown
787 * When non-default elwidths are set, the exact same algorithm still
788 applies (i.e. it offsets elements *within* registers rather than
789 entire registers).
790 * If permute option 000 is utilised, the actual order of the
791 reindexing does not change!
792 * If two or more dimensions are set to zero, the actual order does not change!
793 * The above algorithm is pseudo-code **only**. Actual implementations
794 will need to take into account the fact that the element for-looping
795 must be **re-entrant**, due to the possibility of exceptions occurring.
796 See MSTATE CSR, which records the current element index.
797 * Twin-predicated operations require **two** separate and distinct
798 element offsets. The above pseudo-code algorithm will be applied
799 separately and independently to each, should each of the two
800 operands be remapped. *This even includes C.LDSP* and other operations
801 in that category, where in that case it will be the **offset** that is
802 remapped (see Compressed Stack LOAD/STORE section).
803 * Offset is especially useful, on its own, for accessing elements
804 within the middle of a register. Without offsets, it is necessary
805 to either use a predicated MV, skipping the first elements, or
806 performing a LOAD/STORE cycle to memory.
807 With offsets, the data does not have to be moved.
808 * Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
809 less than MVL is **perfectly legal**, albeit very obscure. It permits
810 entries to be regularly presented to operands **more than once**, thus
811 allowing the same underlying registers to act as an accumulator of
812 multiple vector or matrix operations, for example.
813
814 Clearly here some considerable care needs to be taken as the remapping
815 could hypothetically create arithmetic operations that target the
816 exact same underlying registers, resulting in data corruption due to
817 pipeline overlaps. Out-of-order / Superscalar micro-architectures with
818 register-renaming will have an easier time dealing with this than
819 DSP-style SIMD micro-architectures.
820
821 # Instruction Execution Order
822
823 Simple-V behaves as if it is a hardware-level "macro expansion system",
824 substituting and expanding a single instruction into multiple sequential
825 instructions with contiguous and sequentially-incrementing registers.
826 As such, it does **not** modify - or specify - the behaviour and semantics of
827 the execution order: that may be deduced from the **existing** RV
828 specification in each and every case.
829
830 So for example if a particular micro-architecture permits out-of-order
831 execution, and it is augmented with Simple-V, then wherever instructions
832 may be out-of-order then so may the "post-expansion" SV ones.
833
834 If on the other hand there are memory guarantees which specifically
835 prevent and prohibit certain instructions from being re-ordered
836 (such as the Atomicity Axiom, or FENCE constraints), then clearly
837 those constraints **MUST** also be obeyed "post-expansion".
838
839 It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
841 design, or about changing the RISC-V Specification.
842 It is **purely** about compacting what would otherwise be contiguous
843 instructions that use sequentially-increasing register numbers down
844 to the **one** instruction.
845
846 # Instructions <a name="instructions" />
847
Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, although xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Although every RVV opcode is removed,
with the exception of CLIP and VSELECT.X
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.
858
859 Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
860 equivalents, so are left out of Simple-V. VSELECT could be included if
861 there existed a MV.X instruction in RV (MV.X is a hypothetical
862 non-immediate variant of MV that would allow another register to
863 specify which register was to be copied). Note that if any of these three
864 instructions are added to any given RV extension, their functionality
865 will be inherently parallelised.
866
867 With some exceptions, where it does not make sense or is simply too
868 challenging, all RV-Base instructions are parallelised:
869
* CSR instructions are the fundamental core basis of SV, so are not
  parallelised. Whilst a case could be made for fast-polling of a CSR
  into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  extreme care would need to be taken if these were parallelised.
  Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
876 * LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
877 left as scalar.
878 * LR/SC could hypothetically be parallelised however their purpose is
879 single (complex) atomic memory operations where the LR must be followed
880 up by a matching SC. A sequence of parallel LR instructions followed
881 by a sequence of parallel SC instructions therefore is guaranteed to
882 not be useful. Not least: the guarantees of a Multi-LR/SC
883 would be impossible to provide if emulated in a trap.
884 * EBREAK, NOP, FENCE and others do not use registers so are not inherently
885 paralleliseable anyway.
886
887 All other operations using registers are automatically parallelised.
888 This includes AMOMAX, AMOSWAP and so on, where particular care and
889 attention must be paid.
890
891 Example pseudo-code for an integer ADD operation (including scalar operations).
892 Floating-point uses fp csrs.
893
    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
        rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
        rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            if (predval & 1<<i) # predication uses intregs
                ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                if (!int_vec[rd ].isvector) break;
            if (int_vec[rd ].isvector)  { id += 1; }
            if (int_vec[rs1].isvector)  { irs1 += 1; }
            if (int_vec[rs2].isvector)  { irs2 += 1; }
908
909 Note that for simplicity there is quite a lot missing from the above
910 pseudo-code: element widths, zeroing on predication, dimensional
911 reshaping and offsets and so on. However it demonstrates the basic
912 principle. Augmentations that produce the full pseudo-code are covered in
913 other sections.
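
The pseudo-code above can be modelled directly in Python. This is a
sketch only: the `tags` dictionary (an `isvector` flag plus a `regidx`
value per register) is a stand-in for the CSR register table, and
elwidth, zeroing and reshaping are all omitted, exactly as in the
pseudo-code.

```python
def op_add(ireg, tags, rd, rs1, rs2, VL, predval):
    """Model of the scalar-ADD hardware loop: each tagged (vector) operand
    steps through contiguous registers; scalar operands stay put."""
    id_ = irs1 = irs2 = 0
    rd_t, rs1_t, rs2_t = tags[rd], tags[rs1], tags[rs2]
    rd  = rd_t["regidx"]  if rd_t["isvector"]  else rd
    rs1 = rs1_t["regidx"] if rs1_t["isvector"] else rs1
    rs2 = rs2_t["regidx"] if rs2_t["isvector"] else rs2
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd + id_] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
            if not rd_t["isvector"]:
                break  # scalar destination: one result only
        if rd_t["isvector"]:  id_ += 1
        if rs1_t["isvector"]: irs1 += 1
        if rs2_t["isvector"]: irs2 += 1

# vector rd (redirected to x8), vector rs1 (x4), scalar rs2 (x3):
# a vector-scalar "broadcast" add
regs = [0] * 16
regs[4:8] = [10, 20, 30, 40]
regs[3] = 5
tags = {n: {"isvector": False, "regidx": n} for n in range(16)}
tags[1] = {"isvector": True, "regidx": 8}
tags[2] = {"isvector": True, "regidx": 4}
op_add(regs, tags, rd=1, rs1=2, rs2=3, VL=4, predval=0b1111)
print(regs[8:12])  # → [15, 25, 35, 45]
```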
914
915 ## SUBVL Pseudocode
916
Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by (i * SUBVL + s), predicate bits are
indexed by i.
920
    function op_add(rd, rs1, rs2) # add not VADD!
        int i, id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, rd);
        rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
        rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
        rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
        for (i = 0; i < VL; i++)
            xSTATE.srcoffs = i # save context
            for (s = 0; s < SUBVL; s++)
                xSTATE.ssvoffs = s # save context
                if (predval & 1<<i) # predication uses intregs
                    # actual add is here (at last)
                    ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
                    if (!int_vec[rd ].isvector) break;
                if (int_vec[rd ].isvector)  { id += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
                if (id == VL or irs1 == VL or irs2 == VL) {
                    # end VL hardware loop
                    xSTATE.srcoffs = 0; # reset
                    xSTATE.ssvoffs = 0; # reset
                    return;
                }
944
945
946 NOTE: pseudocode simplified greatly: zeroing, proper predicate handling, elwidth handling etc. all left out.
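
The indexing rule can be spelled out with a trivial sketch (illustrative
only; the helper name is not part of the specification):

```python
def subvl_element_map(VL, SUBVL):
    """Pair each predicate bit (indexed by i) with the register elements
    it covers (indexed by i * SUBVL + s)."""
    return [(i, i * SUBVL + s) for i in range(VL) for s in range(SUBVL)]

# one predicate bit gates a whole subvector of SUBVL consecutive elements
print(subvl_element_map(VL=2, SUBVL=2))  # → [(0, 0), (0, 1), (1, 2), (1, 3)]
```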
947
948 ## Instruction Format
949
950 It is critical to appreciate that there are
951 **no operations added to SV, at all**.
952
953 Instead, by using CSRs to tag registers as an indication of "changed
954 behaviour", SV *overloads* pre-existing branch operations into predicated
955 variants, and implicitly overloads arithmetic operations, MV, FCVT, and
956 LOAD/STORE depending on CSR configurations for bitwidth and predication.
957 **Everything** becomes parallelised. *This includes Compressed
958 instructions* as well as any future instructions and Custom Extensions.
959
960 Note: CSR tags to change behaviour of instructions is nothing new, including
961 in RISC-V. UXL, SXL and MXL change the behaviour so that XLEN=32/64/128.
962 FRM changes the behaviour of the floating-point unit, to alter the rounding
963 mode. Other architectures change the LOAD/STORE byte-order from big-endian
964 to little-endian on a per-instruction basis. SV is just a little more...
965 comprehensive in its effect on instructions.
966
967 ## Branch Instructions
968
969 ### Standard Branch <a name="standard_branch"></a>
970
971 Branch operations use standard RV opcodes that are reinterpreted to
972 be "predicate variants" in the instance where either of the two src
973 registers are marked as vectors (active=1, vector=1).
974
975 Note that the predication register to use (if one is enabled) is taken from
976 the *first* src register, and that this is used, just as with predicated
977 arithmetic operations, to mask whether the comparison operations take
978 place or not. The target (destination) predication register
979 to use (if one is enabled) is taken from the *second* src register.
980
981 If either of src1 or src2 are scalars (whether by there being no
982 CSR register entry or whether by the CSR entry specifically marking
983 the register as "scalar") the comparison goes ahead as vector-scalar
984 or scalar-vector.
985
In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).
991
Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.
1002
1003 Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
1005 src1 and src2.
1006
1007 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
1008 for predicated compare operations of function "cmp":
1009
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                              s2 ? vreg[rs2][i] : sreg[rs2]);
1014
1015 With associated predication, vector-length adjustments and so on,
1016 and temporarily ignoring bitwidth (which makes the comparisons more
1017 complex), this becomes:
1018
    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch
1056
1057 Notes:
1058
1059 * Predicated SIMD comparisons would break src1 and src2 further down
1060 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
1061 Reordering") setting Vector-Length times (number of SIMD elements) bits
1062 in Predicate Register rd, as opposed to just Vector-Length bits.
1063 * The execution of "parallelised" instructions **must** be implemented
1064 as "re-entrant" (to use a term from software). If an exception (trap)
1065 occurs during the middle of a vectorised
1066 Branch (now a SV predicated compare) operation, the partial results
1067 of any comparisons must be written out to the destination
1068 register before the trap is permitted to begin. If however there
1069 is no predicate, the **entire** set of comparisons must be **restarted**,
1070 with the offset loop indices set back to zero. This is because
1071 there is no place to store the temporary result during the handling
1072 of traps.
1073
1074 TODO: predication now taken from src2. also branch goes ahead
1075 if all compares are successful.
1076
1077 Note also that where normally, predication requires that there must
1078 also be a CSR register entry for the register being used in order
1079 for the **predication** CSR register entry to also be active,
1080 for branches this is **not** the case. src2 does **not** have
1081 to have its CSR register entry marked as active in order for
1082 predication on src2 to be active.
1083
1084 Also note: SV Branch operations are **not** twin-predicated
1085 (see Twin Predication section). This would require three
1086 element offsets: one to track src1, one to track src2 and a third
1087 to track where to store the accumulation of the results. Given
1088 that the element offsets need to be exposed via CSRs so that
1089 the parallel hardware looping may be made re-entrant on traps
1090 and exceptions, the decision was made not to make SV Branches
1091 twin-predicated.
1092
1093 ### Floating-point Comparisons
1094
There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it is not actually a Branch at all: it is a compare.
1099
1100 In RV (scalar) Base, a branch on a floating-point compare is
1101 done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
1102 This does extend to SV, as long as x1 (in the example sequence given)
1103 is vectorised. When that is the case, x1..x(1+VL-1) will also be
1104 set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
1105 The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
1106 so on. Consequently, unlike integer-branch, FP Compare needs no
1107 modification in its behaviour.
1108
In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine because the
standard RVF compare can always be followed up with an integer BEQ or BNE
(or a compressed comparison to zero or non-zero), in predication terms it
has more of an impact. To deal with this, SV's predication has
had "invert" added to it.
1115
1116 Also: note that FP Compare may be predicated, using the destination
1117 integer register (rd) to determine the predicate. FP Compare is **not**
1118 a twin-predication operation, as, again, just as with SV Branches,
1119 there are three registers involved: FP src1, FP src2 and INT rd.
1120
1121 ### Compressed Branch Instruction
1122
1123 Compressed Branch instructions are, just like standard Branch instructions,
1124 reinterpreted to be vectorised and predicated based on the source register
1125 (rs1s) CSR entries. As however there is only the one source register,
1126 given that c.beqz a10 is equivalent to beqz a10,x0, the optional target
to store the results of the comparisons is taken from CSR predication
1128 table entries for **x0**.
1129
The specific required use of x0 is, with a little thought, quite obvious,
yet initially counterintuitive. Clearly it is **not** recommended to redirect
1132 x0 with a CSR register entry, however as a means to opaquely obtain
1133 a predication target it is the only sensible option that does not involve
1134 additional special CSRs (or, worse, additional special opcodes).
1135
1136 Note also that, just as with standard branches, the 2nd source
1137 (in this case x0 rather than src2) does **not** have to have its CSR
1138 register table marked as "active" in order for predication to work.
1139
1140 ## Vectorised Dual-operand instructions
1141
1142 There is a series of 2-operand instructions involving copying (and
1143 sometimes alteration):
1144
1145 * C.MV
1146 * FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
1147 * C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
1148 * LOAD(-FP) and STORE(-FP)
1149
1150 All of these operations follow the same two-operand pattern, so it is
1151 *both* the source *and* destination predication masks that are taken into
1152 account. This is different from
1153 the three-operand arithmetic instructions, where the predication mask
1154 is taken from the *destination* register, and applied uniformly to the
1155 elements of the source register(s), element-for-element.
1156
1157 The pseudo-code pattern for twin-predicated operations is as
1158 follows:
1159
    function op(rd, rs):
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            xSTATE.srcoffs = i # save context
            xSTATE.destoffs = j # save context
            reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break
1173
1174 This pattern covers scalar-scalar, scalar-vector, vector-scalar
1175 and vector-vector, and predicated variants of all of those.
1176 Zeroing is not presently included (TODO). As such, when compared
1177 to RVV, the twin-predicated variants of C.MV and FMV cover
1178 **all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
1179 VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.
1180
1181 Note that:
1182
1183 * elwidth (SIMD) is not covered in the pseudo-code above
1184 * ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
1185 not covered
1186 * zero predication is also not shown (TODO).
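
As a sketch of how the pattern behaves, the pseudo-code above can be
modelled in Python. The `tags` table stands in for the CSR entries, the
names are illustrative, and the predicates here are assumed to have
enough bits set that the skip loops terminate (zeroing, as noted, is
left out).

```python
def twin_pred_op(reg, tags, rd, rs, VL, ps, pd, op=lambda x: x):
    """Twin-predicated two-operand pattern: ps masks the source element
    stream, pd masks the destination element stream, independently."""
    rdv = tags[rd]["regidx"] if tags[rd]["isvec"] else rd
    rsv = tags[rs]["regidx"] if tags[rs]["isvec"] else rs
    i = j = 0
    while i < VL and j < VL:
        if tags[rs]["isvec"]:
            while not (ps & (1 << i)): i += 1  # skip masked-out src elements
        if tags[rd]["isvec"]:
            while not (pd & (1 << j)): j += 1  # skip masked-out dest elements
        reg[rdv + j] = op(reg[rsv + i])
        if tags[rs]["isvec"]:
            i += 1
        if tags[rd]["isvec"]:
            j += 1
        else:
            break  # scalar destination: done after one element

# scalar src + vector dest + no masking == VSPLAT
reg = list(range(16))
tags = {n: {"isvec": False, "regidx": n} for n in range(16)}
tags[1] = {"isvec": True, "regidx": 8}
twin_pred_op(reg, tags, rd=1, rs=2, VL=4, ps=0b1111, pd=0b1111)
print(reg[8:12])  # → [2, 2, 2, 2]
```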
1187
1188 ### C.MV Instruction <a name="c_mv"></a>
1189
1190 There is no MV instruction in RV however there is a C.MV instruction.
1191 It is used for copying integer-to-integer registers (vectorised FMV
1192 is used for copying floating-point).
1193
1194 If either the source or the destination register are marked as vectors
1195 C.MV is reinterpreted to be a vectorised (multi-register) predicated
1196 move operation. The actual instruction's format does not change:
1197
1198 [[!table data="""
1199 15 12 | 11 7 | 6 2 | 1 0 |
1200 funct4 | rd | rs | op |
1201 4 | 5 | 5 | 2 |
1202 C.MV | dest | src | C0 |
1203 """]]
1204
1205 A simplified version of the pseudocode for this operation is as follows:
1206
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            xSTATE.srcoffs = i # save context
            xSTATE.destoffs = j # save context
            ireg[rd+j] <= ireg[rs+i];
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break
1220
1221 There are several different instructions from RVV that are covered by
1222 this one opcode:
1223
1224 [[!table data="""
1225 src | dest | predication | op |
1226 scalar | vector | none | VSPLAT |
1227 scalar | vector | destination | sparse VSPLAT |
1228 scalar | vector | 1-bit dest | VINSERT |
1229 vector | scalar | 1-bit? src | VEXTRACT |
1230 vector | vector | none | VCOPY |
1231 vector | vector | src | Vector Gather |
1232 vector | vector | dest | Vector Scatter |
1233 vector | vector | src & dest | Gather/Scatter |
1234 vector | vector | src == dest | sparse VCOPY |
1235 """]]
1236
1237 Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
1238 operations with inversion on the src and dest predication for one of the
1239 two C.MV operations.
1240
Note that in the instance where the Compressed Extension is not implemented,
MV may be used instead; MV is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].
1246
1247 ### FMV, FNEG and FABS Instructions
1248
1249 These are identical in form to C.MV, except covering floating-point
1250 register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
1253 operation of the appropriate size covering the source and destination
1254 register bitwidths.
1255
1256 (Note that FMV, FNEG and FABS are all actually pseudo-instructions)
1257
### FCVT Instructions
1259
1260 These are again identical in form to C.MV, except that they cover
1261 floating-point to integer and integer to floating-point. When element
1262 width in each vector is set to default, the instructions behave exactly
1263 as they are defined for standard RV (scalar) operations, except vectorised
1264 in exactly the same fashion as outlined in C.MV.
1265
1266 However when the source or destination element width is not set to default,
1267 the opcode's explicit element widths are *over-ridden* to new definitions,
1268 and the opcode's element width is taken as indicative of the SIMD width
1269 (if applicable i.e. if packed SIMD is requested) instead.
1270
1271 For example FCVT.S.L would normally be used to convert a 64-bit
1272 integer in register rs1 to a 64-bit floating-point number in rd.
1273 If however the source rs1 is set to be a vector, where elwidth is set to
1274 default/2 and "packed SIMD" is enabled, then the first 32 bits of
1275 rs1 are converted to a floating-point number to be stored in rd's
1276 first element and the higher 32-bits *also* converted to floating-point
1277 and stored in the second. The 32 bit size comes from the fact that
1278 FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
1279 divide that by two it means that rs1 element width is to be taken as 32.
1280
1281 Similar rules apply to the destination register.
1282
1283 ## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>
1284
1285 An earlier draft of SV modified the behaviour of LOAD/STORE (modified
1286 the interpretation of the instruction fields). This
1287 actually undermined the fundamental principle of SV, namely that there
1288 be no modifications to the scalar behaviour (except where absolutely
1289 necessary), in order to simplify an implementor's task if considering
1290 converting a pre-existing scalar design to support parallelism.
1291
1292 So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
1293 do not change in SV, however just as with C.MV it is important to note
1294 that dual-predication is possible.
1295
1296 In vectorised architectures there are usually at least two different modes
1297 for LOAD/STORE:
1298
1299 * Read (or write for STORE) from sequential locations, where one
1300 register specifies the address, and the one address is incremented
1301 by a fixed amount. This is usually known as "Unit Stride" mode.
1302 * Read (or write) from multiple indirected addresses, where the
1303 vector elements each specify separate and distinct addresses.
1304
1305 To support these different addressing modes, the CSR Register "isvector"
1306 bit is used. So, for a LOAD, when the src register is set to
1307 scalar, the LOADs are sequentially incremented by the src register
1308 element width, and when the src register is set to "vector", the
1309 elements are treated as indirection addresses. Simplified
1310 pseudo-code would look like this:
1311
    function op_ld(rd, rs) # LD not VLD!
        rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            if (int_csr[rs].isvec)
                # indirect mode (multi mode)
                srcbase = ireg[rsv+i];
            else
                # unit stride mode
                srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
            ireg[rdv+j] <= mem[srcbase + imm_offs];
            if (!int_csr[rs].isvec &&
                !int_csr[rd].isvec) break # scalar-scalar LD
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++;
1331
1332 Notes:
1333
1334 * For simplicity, zeroing and elwidth is not included in the above:
1335 the key focus here is the decision-making for srcbase; vectorised
1336 rs means use sequentially-numbered registers as the indirection
1337 address, and scalar rs is "offset" mode.
1338 * The test towards the end for whether both source and destination are
1339 scalar is what makes the above pseudo-code provide the "standard" RV
1340 Base behaviour for LD operations.
1341 * The offset in bytes (XLEN/8) changes depending on whether the
operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
1343 (8 bytes), and also whether the element width is over-ridden
1344 (see special element width section).
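
The srcbase decision can be sketched in Python (illustrative only:
memory is a simple dict, predication and elwidth are left out, and the
names are not part of the specification):

```python
def op_ld(mem, ireg, tags, rd, rs, imm, VL, width=8):
    """Unit-stride vs indirect LOAD: a scalar src register supplies one
    base address (stepped by the access width); a vector src supplies
    one address per element."""
    rdv = tags[rd]["regidx"] if tags[rd]["isvec"] else rd
    rsv = tags[rs]["regidx"] if tags[rs]["isvec"] else rs
    for i in range(VL):
        if tags[rs]["isvec"]:
            srcbase = ireg[rsv + i]          # indirect: element is an address
        else:
            srcbase = ireg[rsv] + i * width  # unit stride: base + i * width
        ireg[rdv + i] = mem[srcbase + imm]
        if not tags[rs]["isvec"] and not tags[rd]["isvec"]:
            break  # scalar-scalar: standard RV Base LD

# unit stride: base address in (scalar) x2, vector destination at x8
mem = {0x100: 7, 0x108: 8, 0x110: 9}
ireg = [0] * 16
ireg[2] = 0x100
tags = {n: {"isvec": False, "regidx": n} for n in range(16)}
tags[1] = {"isvec": True, "regidx": 8}
op_ld(mem, ireg, tags, rd=1, rs=2, imm=0, VL=3)
print(ireg[8:11])  # → [7, 8, 9]
```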
1345
1346 ## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>
1347
1348 C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
1349 where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
1350 It is therefore possible to use predicated C.LWSP to efficiently
1351 pop registers off the stack (by predicating x2 as the source), cherry-picking
1352 which registers to store to (by predicating the destination). Likewise
1353 for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.
1354
1355 The two modes ("unit stride" and multi-indirection) are still supported,
1356 as with standard LD/ST. Essentially, the only difference is that the
1357 use of x2 is hard-coded into the instruction.
1358
1359 **Note**: it is still possible to redirect x2 to an alternative target
1360 register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
1361 general-purpose LOAD/STORE operations.
1362
1363 ## Compressed LOAD / STORE Instructions
1364
1365 Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
1366 where the same rules apply and the same pseudo-code apply as for
1367 non-compressed LOAD/STORE. Again: setting scalar or vector mode
1368 on the src for LOAD and dest for STORE switches mode from "Unit Stride"
1369 to "Multi-indirection", respectively.
1370
1371 # Element bitwidth polymorphism <a name="elwidth"></a>
1372
1373 Element bitwidth is best covered as its own special section, as it
1374 is quite involved and applies uniformly across-the-board. SV restricts
1375 bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.
1376
1377 The effect of setting an element bitwidth is to re-cast each entry
1378 in the register table, and for all memory operations involving
1379 load/stores of certain specific sizes, to a completely different width.
1380 Thus, in C-style terms, on an RV64 architecture, each register
1381 effectively now looks like this:
1382
1383 typedef union {
1384 uint8_t b[8];
1385 uint16_t s[4];
1386 uint32_t i[2];
1387 uint64_t l[1];
1388 } reg_t;
1389
1390 // integer table: assume maximum SV 7-bit regfile size
1391 reg_t int_regfile[128];
1392
1393 where the CSR Register table entry (not the instruction alone) determines
1394 which of those union entries is to be used on each operation, and the
1395 VL element offset in the hardware-loop specifies the index into each array.
1396
1397 However a naive interpretation of the data structure above masks the
1398 fact that, when VL is set greater than 8 (for example) and the bitwidth
1399 is 8, access to one specific register "spills over" to the following
1400 entries of the register file in a sequential fashion. So a much more
1401 accurate way to reflect this would be:
1402
1403 typedef union {
1404 uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
1405 uint8_t b[0]; // array of type uint8_t
1406 uint16_t s[0];
1407 uint32_t i[0];
1408 uint64_t l[0];
1409 uint128_t d[0];
1410 } reg_t;
1411
1412 reg_t int_regfile[128];
1413
1414 where, when accessing any individual regfile[n].b entry, it is permitted
1415 (in C) to arbitrarily over-run the *declared* length of the array (zero),
1416 and thus "overspill" into consecutive register file entries in a fashion
1417 that is completely transparent to a greatly-simplified software / pseudo-code
1418 representation.
1419 It is however critical to note that it is the responsibility of
1420 the implementor to ensure that, towards the end of the register file,
1421 an exception is thrown if an attempt is ever made to access beyond
1422 the "real" register bytes.
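The "overspill" behaviour can be modelled with the register file as one flat byte array (a sketch only: little-endian layout and a 32-register file are assumptions made for brevity):

```python
XLEN_BYTES = 8
regfile = bytearray(32 * XLEN_BYTES)  # flat model: registers are just bytes

def get_elem(reg, elwidth_bits, offset):
    step = elwidth_bits // 8
    base = reg * XLEN_BYTES + offset * step  # may cross into reg+1, reg+2...
    return int.from_bytes(regfile[base:base + step], "little")

def set_elem(reg, elwidth_bits, offset, val):
    step = elwidth_bits // 8
    base = reg * XLEN_BYTES + offset * step
    regfile[base:base + step] = val.to_bytes(step, "little")

# with elwidth=8, element 9 of "x5" actually lands in x6's second byte
set_elem(5, 8, 9, 0xAB)
assert get_elem(6, 8, 1) == 0xAB
```

A real implementation must additionally raise an exception on any access that runs past the end of the real register bytes, as noted above.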
1423
1424 Now we may modify the pseudo-code for an operation where all element
1425 bitwidths have been set to the same size; this pseudo-code is otherwise
1426 identical to its non-polymorphic version (above):
1427
1428 function op_add(rd, rs1, rs2) # add not VADD!
1429 ...
1430 ...
1431  for (i = 0; i < VL; i++)
1432 ...
1433 ...
1434 // TODO, calculate if over-run occurs, for each elwidth
1435 if (elwidth == 8) {
1436    int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
1437     int_regfile[rs2].b[irs2];
1438 } else if elwidth == 16 {
1439    int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
1440     int_regfile[rs2].s[irs2];
1441 } else if elwidth == 32 {
1442    int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
1443     int_regfile[rs2].i[irs2];
1444 } else { // elwidth == 64
1445    int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
1446     int_regfile[rs2].l[irs2];
1447 }
1448 ...
1449 ...
1450
1451 So here we can see clearly: for 8-bit entries, rd, rs1 and rs2 (and the
1452 registers following sequentially on from each of them) are "type-cast"
1453 to 8-bit; for 16-bit entries likewise, and so on.
1454
1455 However that only covers the case where the element widths are the same.
1456 Where the element widths are different, the following algorithm applies:
1457
1458 * Analyse the bitwidth of all source operands and work out the
1459 maximum. Record this as "maxsrcbitwidth"
1460 * If any given source operand requires sign-extension or zero-extension
1461 (ldb, div, rem, mul, sll, srl, sra etc.), instead of the mandatory 32-bit
1462 sign-extension / zero-extension (or whatever is specified in the standard
1463 RV specification), **change** that to sign-extending from the respective
1464 individual source operand's bitwidth from the CSR table out to
1465 "maxsrcbitwidth" (previously calculated).
1466 * Following separate and distinct (optional) sign/zero-extension of all
1467 source operands as specifically required for that operation, carry out the
1468 operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
1469 this may be a "null" (copy) operation, and that with FCVT, the changes
1470 to the source and destination bitwidths may also turn FCVT effectively
1471 into a copy).
1472 * If the destination operand requires sign-extension or zero-extension,
1473 instead of a mandatory fixed size (typically 32-bit for arithmetic,
1474 for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
1475 etc.), overload the RV specification with the bitwidth from the
1476 destination register's elwidth entry.
1477 * Finally, store the (optionally) sign/zero-extended value into its
1478 destination: memory for sb/sw etc., or an offset section of the register
1479 file for an arithmetic operation.
1480
1481 In this way, polymorphic bitwidths are achieved without requiring a
1482 massive 64-way permutation of calculations **per opcode**, for example
1483 (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
1484 rd bitwidths). The pseudo-code is therefore as follows:
1485
1486 typedef union {
1487 uint8_t b;
1488 uint16_t s;
1489 uint32_t i;
1490 uint64_t l;
1491 } el_reg_t;
1492
1493 bw(elwidth):
1494 if elwidth == 0:
1495 return xlen
1496 if elwidth == 1:
1497 return xlen / 2
1498 if elwidth == 2:
1499 return xlen * 2
1500 // elwidth == 3:
1501 return 8
1502
1503 get_max_elwidth(rs1, rs2):
1504 return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
1505 bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1506
1507 get_polymorphed_reg(reg, bitwidth, offset):
1508 el_reg_t res;
1509 res.l = 0; // TODO: going to need sign-extending / zero-extending
1510 if bitwidth == 8:
1511 res.b = int_regfile[reg].b[offset]
1512 elif bitwidth == 16:
1513 res.s = int_regfile[reg].s[offset]
1514 elif bitwidth == 32:
1515 res.i = int_regfile[reg].i[offset]
1516 elif bitwidth == 64:
1517 res.l = int_regfile[reg].l[offset]
1518 return res
1519
1520 set_polymorphed_reg(reg, bitwidth, offset, val):
1521 if (!int_csr[reg].isvec):
1522 # sign/zero-extend depending on opcode requirements, from
1523 # the reg's bitwidth out to the full bitwidth of the regfile
1524 val = sign_or_zero_extend(val, bitwidth, xlen)
1525 int_regfile[reg].l[0] = val
1526 elif bitwidth == 8:
1527 int_regfile[reg].b[offset] = val
1528 elif bitwidth == 16:
1529 int_regfile[reg].s[offset] = val
1530 elif bitwidth == 32:
1531 int_regfile[reg].i[offset] = val
1532 elif bitwidth == 64:
1533 int_regfile[reg].l[offset] = val
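The key rule in set\_polymorphed\_reg — scalar destinations occupy the full XLEN register, vectorised destinations touch only the element's own bits — can be exercised with a simplified model (assumptions: one integer per register, zero-extension only, and the element fitting within a single register, i.e. no cross-register spill):

```python
XLEN = 64
regfile = [0] * 32          # simplified: one XLEN-wide integer per register
isvec = [False] * 32

def set_polymorphed_reg(reg, bitwidth, offset, val):
    if not isvec[reg]:
        # scalar: the value is extended (zero-extension shown; sign-
        # extension depends on the opcode) out to the full XLEN register
        regfile[reg] = val & ((1 << bitwidth) - 1)
        return
    # vectorised: only the element's own bits change; the rest is preserved
    shift = bitwidth * offset
    mask = ((1 << bitwidth) - 1) << shift
    regfile[reg] = (regfile[reg] & ~mask) | ((val << shift) & mask)

regfile[3] = 0xFFFF_FFFF_FFFF_FFFF
isvec[3] = True
set_polymorphed_reg(3, 16, 1, 0x1234)
assert regfile[3] == 0xFFFF_FFFF_1234_FFFF  # other elements preserved
isvec[3] = False
set_polymorphed_reg(3, 16, 0, 0x1234)
assert regfile[3] == 0x1234                 # full register overwritten
```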
1534
1535 maxsrcwid = get_max_elwidth(rs1, rs2) # source element width(s)
1536 destwid = int_csr[rd].elwidth # destination element width
1537  for (i = 0; i < VL; i++)
1538 if (predval & 1<<i) # predication uses intregs
1539 // TODO, calculate if over-run occurs, for each elwidth
1540 src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
1541 // TODO, sign/zero-extend src1 and src2 as operation requires
1542 if (op_requires_sign_extend_src1)
1543 src1 = sign_extend(src1, maxsrcwid)
1544 src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
1545 result = src1 + src2 # actual add here
1546 // TODO, sign/zero-extend result, as operation requires
1547 if (op_requires_sign_extend_dest)
1548 result = sign_extend(result, maxsrcwid)
1549 set_polymorphed_reg(rd, destwid, ird, result)
1550 if (!int_csr[rd].isvec) break
1551 if (int_csr[rd ].isvec)  { id += 1; }
1552 if (int_csr[rs1].isvec)  { irs1 += 1; }
1553 if (int_csr[rs2].isvec)  { irs2 += 1; }
1554
1555 Whilst specific sign-extension and zero-extension pseudocode call
1556 details are left out, due to each operation being different, the above
1557 should make clear that:
1558
1559 * the source operands are extended out to the maximum bitwidth of all
1560 source operands
1561 * the operation takes place at that maximum source bitwidth (the
1562 destination bitwidth is not involved at this point, at all)
1563 * the result is extended (or potentially even, truncated) before being
1564 stored in the destination. i.e. truncation (if required) to the
1565 destination width occurs **after** the operation **not** before.
1566 * when the destination is not marked as "vectorised", the **full**
1567 (standard, scalar) register file entry is taken up, i.e. the
1568 element is either sign-extended or zero-extended to cover the
1569 full register bitwidth (XLEN) if it is not already XLEN bits long.
1570
1571 Implementors are entirely free to optimise the above, particularly
1572 if it is specifically known that any given operation will complete
1573 accurately in less bits, as long as the results produced are
1574 directly equivalent and equal, for all inputs and all outputs,
1575 to those produced by the above algorithm.
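The three-step rule (extend sources to the maximum source width, operate at that width, truncate or extend to the destination only afterwards) can be condensed into a few lines. This is a sketch for a sign-extending (addw-style) operation; the helper names are invented, and a zero-extending operation such as plain add would substitute zero-extension:

```python
def sign_extend(val, frombits):
    sign = 1 << (frombits - 1)
    return (val & (sign - 1)) - (val & sign)

def poly_op(src1, w1, src2, w2, destw, op):
    maxw = max(w1, w2)
    a = sign_extend(src1, w1)          # each source extended from its own
    b = sign_extend(src2, w2)          # elwidth up to maxw
    result = op(a, b)                  # operation at maxw, not destw
    return result & ((1 << destw) - 1) # truncate/extend only at the end

# 8-bit -1 plus 16-bit 0x0100, stored to a 32-bit destination
assert poly_op(0xFF, 8, 0x0100, 16, 32, lambda a, b: a + b) == 0x00FF
```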
1576
1577 ## Polymorphic floating-point operation exceptions and error-handling
1578
1579 For floating-point operations, conversion takes place without
1580 raising any kind of exception. Exactly as specified in the standard
1581 RV specification, NAN (or appropriate) is stored if the result
1582 is beyond the range of the destination, and, exactly as with the
1583 standard RV specification for scalar
1584 operations, the floating-point flag is raised (FCSR). And, again, just as
1585 with scalar operations, it is software's responsibility to check this flag.
1586 Given that the FCSR flags are "accrued", the fact that multiple element
1587 operations could have occurred is not a problem.
1588
1589 Note that it is perfectly legitimate for floating-point bitwidths of
1590 only 8 to be specified. However whilst it is possible to apply IEEE 754
1591 principles, no actual standard yet exists. Implementors wishing to
1592 provide hardware-level 8-bit support rather than throw a trap to emulate
1593 in software should contact the author of this specification before
1594 proceeding.
1595
1596 ## Polymorphic shift operators
1597
1598 A special note is needed for changing the element width of left and right
1599 shift operators, particularly right-shift. Even for standard RV base,
1600 in order for correct results to be returned, the second operand RS2 must
1601 be truncated to be within the range of RS1's bitwidth. spike's implementation
1602 of sll for example is as follows:
1603
1604 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1605
1606 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1607 range 0..31 so that RS1 will only be left-shifted by the amount that
1608 is possible to fit into a 32-bit register. Whilst this appears not
1609 to matter for hardware, it matters greatly in software implementations,
1610 and it also matters where an RV64 system is set to "RV32" mode, such
1611 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1612 each.
1613
1614 For SV, where each operand's element bitwidth may be over-ridden, the
1615 rule about determining the operation's bitwidth *still applies*, being
1616 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1617 **also applies to the truncation of RS2**. In other words, *after*
1618 determining the maximum bitwidth, RS2's range must **also be truncated**
1619 to ensure a correct answer. Example:
1620
1621 * RS1 is over-ridden to a 16-bit width
1622 * RS2 is over-ridden to an 8-bit width
1623 * RD is over-ridden to a 64-bit width
1624 * the maximum bitwidth is thus determined to be 16-bit - max(8,16)
1625 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1626
1627 Pseudocode (in spike) for this example would therefore be:
1628
1629 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1630
1631 This example illustrates that considerable care therefore needs to be
1632 taken to ensure that left and right shift operations are implemented
1633 correctly. The key is that
1634
1635 * The operation bitwidth is determined by the maximum bitwidth
1636 of the *source registers*, **not** the destination register bitwidth
1637 * The result is then sign-extended (or truncated) as appropriate.
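A minimal sketch of the rule above (the helper name `poly_sll` is invented, and sign-extension of the result to the destination is elided for brevity): RS2 is masked to the *operation* width, max(rs1 width, rs2 width), not to XLEN and not to the destination width.

```python
def poly_sll(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    opwidth = max(rs1_w, rs2_w)
    shamt = rs2_val & (opwidth - 1)       # truncate RS2's range, as spike does
    result = (rs1_val << shamt) & ((1 << opwidth) - 1)
    return result & ((1 << rd_w) - 1)     # then fit to the destination

# RS1 16-bit, RS2 8-bit, RD 64-bit: the shift amount is masked to 0..15
assert poly_sll(0x0001, 16, 17, 8, 64) == 0x0002  # 17 & 15 == 1
```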
1638
1639 ## Polymorphic MULH/MULHU/MULHSU
1640
1641 MULH is designed to take the top half MSBs of a multiply that
1642 does not fit within the range of the source operands, such that
1643 smaller width operations may produce a full double-width multiply
1644 in two cycles. The issue is: SV allows the source operands to
1645 have variable bitwidth.
1646
1647 Here again special attention has to be paid to the rules regarding
1648 bitwidth, which, again, are that the operation is performed at
1649 the maximum bitwidth of the **source** registers. Therefore:
1650
1651 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1652 be shifted down by 8 bits
1653 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1654 be shifted down by 16 bits (top 8 bits being zero)
1655 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1656 be shifted down by 16 bits
1657 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1658 be shifted down by 32 bits
1659 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1660 be shifted down by 32 bits
1661
1662 So again, just as with shift-left and shift-right, the result
1663 is shifted down by the maximum of the two source register bitwidths.
1664 And, exactly again, truncation or sign-extension is performed on the
1665 result. If sign-extension is to be carried out, it is performed
1666 from the same maximum of the two source register bitwidths out
1667 to the result element's bitwidth.
1668
1669 If truncation occurs, i.e. the top MSBs of the result are lost,
1670 this is "Officially Not Our Problem", i.e. it is assumed that the
1671 programmer actually desires the result to be truncated. i.e. if the
1672 programmer wanted all of the bits, they would have set the destination
1673 elwidth to accommodate them.
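The shift-down-by-max-source-width rule for MULH can be sketched as follows (signed variant only; `poly_mulh` and `sign_extend` are invented helper names):

```python
def sign_extend(val, frombits):
    sign = 1 << (frombits - 1)
    return (val & (sign - 1)) - (val & sign)

def poly_mulh(a, w1, b, w2, destw):
    maxw = max(w1, w2)
    prod = sign_extend(a, w1) * sign_extend(b, w2)
    # the product is shifted down by max(w1, w2), then truncated (or
    # sign-extended, elided here) to the destination elwidth
    return (prod >> maxw) & ((1 << destw) - 1)

# 8-bit x 8-bit: a 16-bit product, shifted down by 8
assert poly_mulh(0x7F, 8, 0x7F, 8, 8) == 0x3F  # 127*127 = 0x3F01
```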
1674
1675 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1676
1677 Polymorphic element widths in vectorised form means that the data
1678 being loaded (or stored) across multiple registers needs to be treated
1679 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1680 the source register's element width is **independent** from the destination's.
1681
1682 This makes for a slightly more complex algorithm when using indirection
1683 on the "addressed" register (source for LOAD and destination for STORE),
1684 particularly given that the LOAD/STORE instruction provides important
1685 information about the width of the data to be reinterpreted.
1686
1687 Let's illustrate the "load" part, where the pseudo-code for elwidth=default
1688 was as follows, and i is the loop from 0 to VL-1:
1689
1690 srcbase = ireg[rs+i];
1691 return mem[srcbase + imm]; // returns XLEN bits
1692
1693 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1694 chunks are taken from the source memory location addressed by the current
1695 indexed source address register, and only when a full 32-bits-worth
1696 are taken will the index be moved on to the next contiguous source
1697 address register:
1698
1699 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1700 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1701 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1702 offs = i % elsperblock; // modulo
1703 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1704
1705 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1706 and 128 for LQ.
1707
1708 The principle is basically exactly the same as if the srcbase were pointing
1709 at the memory of the *register* file: memory is re-interpreted as containing
1710 groups of elwidth-wide discrete elements.
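The chunked address calculation can be sketched as below. Note one deliberate difference from the pseudo-code above: here `offs` is scaled into bytes rather than re-casting the memory pointer, and `elsperblock` is clamped to a minimum of 1 (the helper name is invented):

```python
def ld_elem_addr(ireg, rs, i, imm, opwidth, elwidth):
    elsperblock = max(1, opwidth // elwidth)   # clamp to a minimum of 1
    srcbase = ireg[rs + i // elsperblock]      # integer divide
    offs = (i % elsperblock) * (elwidth // 8)  # byte offset within block
    return srcbase + imm + offs

# LW (32-bit) with elwidth=16: two elements per block, so the indirection
# register only advances every second element
regs = [0] * 8
regs[5], regs[6] = 0x1000, 0x2000
assert ld_elem_addr(regs, 5, 0, 0, 32, 16) == 0x1000
assert ld_elem_addr(regs, 5, 1, 0, 32, 16) == 0x1002
assert ld_elem_addr(regs, 5, 2, 0, 32, 16) == 0x2000  # moved on to x6
```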
1711
1712 When storing the result from a load, it's important to respect the fact
1713 that the destination register has its *own separate element width*. Thus,
1714 when each element is loaded (at the source element width), any sign-extension
1715 or zero-extension (or truncation) needs to be done to the *destination*
1716 bitwidth. Also, the storing has the exact same analogous algorithm as
1717 above, where in fact it is just the set\_polymorphed\_reg pseudocode
1718 (completely unchanged) used above.
1719
1720 One issue remains: when the source element width is **greater** than
1721 the width of the operation, it is obvious that a single LB for example
1722 cannot possibly obtain 16-bit-wide data. This condition may be detected
1723 where, when using integer divide, elsperblock (the width of the LOAD
1724 divided by the bitwidth of the element) is zero.
1725
1726 The issue is "fixed" by ensuring that elsperblock is a minimum of 1:
1727
1728 elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1729
1730 The elements, if the element bitwidth is larger than the LD operation's
1731 size, will then be sign/zero-extended to the full LD operation size, as
1732 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1733 being passed on to the second phase.
1734
1735 As LOAD/STORE may be twin-predicated, it is important to note that
1736 the rules on twin predication still apply, except where in previous
1737 pseudo-code (elwidth=default for both source and target) it was
1738 the *registers* that the predication was applied to, it is now the
1739 **elements** that the predication is applied to.
1740
1741 Thus the full pseudocode for all LD operations may be written out
1742 as follows:
1743
1744 function LBU(rd, rs):
1745 load_elwidthed(rd, rs, 8, true)
1746 function LB(rd, rs):
1747 load_elwidthed(rd, rs, 8, false)
1748 function LH(rd, rs):
1749 load_elwidthed(rd, rs, 16, false)
1750 ...
1751 ...
1752 function LQ(rd, rs):
1753 load_elwidthed(rd, rs, 128, false)
1754
1755 # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
1756 function load_memory(rs, imm, i, opwidth):
1757 elwidth = int_csr[rs].elwidth
1758 bitwidth = bw(elwidth);
1759 elsperblock = max(1, opwidth / bitwidth)
1760 srcbase = ireg[rs+i/(elsperblock)];
1761 offs = i % elsperblock;
1762 return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
1763
1764 function load_elwidthed(rd, rs, opwidth, unsigned):
1765 destwid = bw(int_csr[rd].elwidth) # destination element width
1766  rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
1767  rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
1768  ps = get_pred_val(FALSE, rs); # predication on src
1769  pd = get_pred_val(FALSE, rd); # ... AND on dest
1770  for (int i = 0, int j = 0; i < VL && j < VL;):
1771 if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
1772 if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
1773 val = load_memory(rs, imm, i, opwidth)
1774 if unsigned:
1775 val = zero_extend(val, min(opwidth, bw(int_csr[rs].elwidth)))
1776 else:
1777 val = sign_extend(val, min(opwidth, bw(int_csr[rs].elwidth)))
1778 set_polymorphed_reg(rd, destwid, j, val)
1779 if (int_csr[rs].isvec) i++;
1780 if (int_csr[rd].isvec) j++; else break;
1781
1782 Note:
1783
1784 * when comparing against for example the twin-predicated c.mv
1785 pseudo-code, the pattern of independent incrementing of rd and rs
1786 is preserved unchanged.
1787 * just as with the c.mv pseudocode, zeroing is not included and must be
1788 taken into account (TODO).
1789 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1790 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1791 VSCATTER characteristics.
1792 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1793 a destination that is not vectorised (marked as scalar) will
1794 result in the element being fully sign-extended or zero-extended
1795 out to the full register file bitwidth (XLEN). When the source
1796 is also marked as scalar, this is how the compatibility with
1797 standard RV LOAD/STORE is preserved by this algorithm.
1798
1799 ### Example Tables showing LOAD elements
1800
1801 This section contains examples of vectorised LOAD operations, showing
1802 how the two stage process works (three if zero/sign-extension is included).
1803
1804
1805 #### Example: LD x8, 0(x5), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1806
1807 This is:
1808
1809 * a 64-bit load, with an offset of zero
1810 * with a source-address elwidth of 16-bit
1811 * into a destination-register with an elwidth of 32-bit
1812 * where VL=7
1813 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1814 * RV64, where XLEN=64 is assumed.
1815
1816 First, the memory table: because the
1817 element width is 16 and the operation is LD (64-bit), the 64 bits
1818 loaded from memory are subdivided into groups of **four** elements.
1819 And, with VL being 7 (deliberately to illustrate that this is reasonable
1820 and possible), the first four are sourced from the offset addresses pointed
1821 to by x5, and the next three from the offset addresses pointed to by
1822 the next contiguous register, x6:
1823
1824 [[!table data="""
1825 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1826 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1827 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1828 """]]
1829
1830 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1831 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1832
1833 [[!table data="""
1834 byte 3 | byte 2 | byte 1 | byte 0 |
1835 0x0 | 0x0 | elem0 ||
1836 0x0 | 0x0 | elem1 ||
1837 0x0 | 0x0 | elem2 ||
1838 0x0 | 0x0 | elem3 ||
1839 0x0 | 0x0 | elem4 ||
1840 0x0 | 0x0 | elem5 ||
1841 0x0 | 0x0 | elem6 ||
1843 """]]
1844
1845 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1846 byte-addressable "memory". That "memory" happens to cover registers
1847 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1848
1849 [[!table data="""
1850 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1851 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1852 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1853 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1854 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1855 """]]
1856
1857 Thus we have data that is loaded from the **addresses** pointed to by
1858 x5 and x6, zero-extended from 16-bit to 32-bit, stored in the **registers**
1859 x8 through to half of x11.
1860 The end result is that elements 0 and 1 end up in x8, with element 1 being
1861 shifted up 32 bits, and so on, until finally element 6 is in the
1862 LSBs of x11.
1863
1864 Note that whilst the memory addressing table is shown left-to-right byte order,
1865 the registers are shown in right-to-left (MSB) order. This does **not**
1866 imply that bit or byte-reversal is carried out: it's just easier to visualise
1867 memory as being contiguous bytes, and emphasises that registers are not
1868 really actually "memory" as such.
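The register-side result of the worked example above can be cross-checked in a few lines, modelling registers as a little-endian byte array (a sketch: the element values are invented, and the file is poisoned with 0xFF so that unintended writes are visible):

```python
XB = 8  # bytes per RV64 register
regfile = bytearray(b"\xFF" * 16 * XB)  # x0..x15, poisoned to spot writes

elems = [0x0101 * (n + 1) for n in range(7)]  # the 7 loaded 16-bit values
for j, el in enumerate(elems):
    base = 8 * XB + j * 4          # x8 onwards, 32-bit destination slots
    regfile[base:base + 4] = el.to_bytes(4, "little")  # zero-extended

x8 = int.from_bytes(regfile[8 * XB:9 * XB], "little")
x11 = int.from_bytes(regfile[11 * XB:12 * XB], "little")
assert x8 == (elems[1] << 32) | elems[0]   # elem 1 in the upper 32 bits
assert x11 >> 32 == 0xFFFF_FFFF            # top half of x11 UNMODIFIED
assert x11 & 0xFFFF_FFFF == elems[6]
```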
1869
1870 ## Why SV bitwidth specification is restricted to 4 entries
1871
1872 The four entries for SV element bitwidths only allows three over-rides:
1873
1874 * 8 bit
1875 * 16 bit
1876 * 32 bit
1877
1878 This would seem inadequate: surely it would be better to have 3 bits or
1879 more and allow 64, 128 and some other options besides. The answer here
1880 is that it gets too complex; no RV128 implementation yet exists, RV64's
1881 default is 64 bit, and so the 4 major element widths are covered anyway.
1882
1883 There is an absolutely crucial aspect of SV here that explicitly
1884 needs spelling out, and it's whether the "vectorised" bit is set in
1885 the Register's CSR entry.
1886
1887 If "vectorised" is clear (not set), this indicates that the operation
1888 is "scalar". Under these circumstances, on a destination (RD),
1889 sign-extension and zero-extension, whilst changed to match the
1890 override bitwidth (if set), will overwrite the **full** register entry
1891 (64-bit if RV64).
1892
1893 When vectorised is *set*, this indicates that the operation now treats
1894 **elements** as if they were independent registers, so regardless of
1895 the length, any parts of a given actual register that are not involved
1896 in the operation are **NOT** modified, but are **PRESERVED**.
1897
1898 For example:
1899
1900 * when the vector bit is clear and elwidth set to 16 on the destination
1901 register, operations are truncated to 16 bit and then sign or zero
1902 extended to the *FULL* XLEN register width.
1903 * when the vector bit is set, elwidth is 16 and VL=1 (or other value where
1904 groups of elwidth sized elements do not fill an entire XLEN register),
1905 the "top" bits of the destination register do *NOT* get modified, zero'd
1906 or otherwise overwritten.
1907
1908 SIMD micro-architectures may implement this by using predication on
1909 any elements in a given actual register that are beyond the end of
1910 multi-element operation.
1911
1912 Other microarchitectures may choose to provide byte-level write-enable
1913 lines on the register file, such that each 64 bit register in an RV64
1914 system requires 8 WE lines. Scalar RV64 operations would require
1915 activation of all 8 lines, where SV elwidth based operations would
1916 activate the required subset of those byte-level write lines.
1917
1918 Example:
1919
1920 * rs1, rs2 and rd are all set to 8-bit
1921 * VL is set to 3
1922 * RV64 architecture is set (UXL=64)
1923 * add operation is carried out
1924 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1925 concatenated with similar add operations on bits 15..8 and 7..0
1926 * bits 24 through 63 **remain as they originally were**.
1927
1928 Example SIMD micro-architectural implementation:
1929
1930 * SIMD architecture works out the nearest round number of elements
1931 that would fit into a full RV64 register (in this case: 8)
1932 * SIMD architecture creates a hidden predicate, binary 0b00000111
1933 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1934 * SIMD architecture goes ahead with the add operation as if it
1935 was a full 8-wide batch of 8 adds
1936 * SIMD architecture passes the top 5 elements through the adders
1937 (which are "disabled" due to zero-bit predication)
1938 * SIMD architecture gets the top 5 8-bit elements back unmodified
1939 and stores them in rd.
1940
1941 This requires a read on rd, however this is required anyway in order
1942 to support non-zeroing mode.
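The hidden-predicate SIMD implementation above can be sketched as follows (the function name and register values are invented; a single 64-bit register holds eight 8-bit lanes):

```python
def simd_add8(rd_old, rs1, rs2, vl):
    pred = (1 << vl) - 1        # hidden predicate: bottom VL lanes set
    out = 0
    for lane in range(8):
        if pred & (1 << lane):
            a = (rs1 >> (8 * lane)) & 0xFF
            b = (rs2 >> (8 * lane)) & 0xFF
            byte = (a + b) & 0xFF
        else:                   # disabled lane: old rd byte passes through
            byte = (rd_old >> (8 * lane)) & 0xFF
        out |= byte << (8 * lane)
    return out

# VL=3: bits 0-23 are the three 8-bit adds; bits 24-63 remain as they were
rd = simd_add8(0xDEAD_BEEF_CAFE_0000, 0x0101_0101, 0x0202_0202, 3)
assert rd & 0xFF_FFFF == 0x030303
assert rd >> 24 == 0xDEAD_BEEF_CA
```

Note that, exactly as stated above, this scheme requires reading the old rd value.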
1943
1944 ## Polymorphic floating-point
1945
1946 Standard scalar RV integer operations base the register width on XLEN,
1947 which may be changed (UXL in USTATUS, and the corresponding MXL and
1948 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1949 arithmetic operations are therefore restricted to an active XLEN bits,
1950 with sign or zero extension to pad out the upper bits when XLEN has
1951 been dynamically set to less than the actual register size.
1952
1953 For scalar floating-point, the active (used / changed) bits are
1954 specified exclusively by the operation: ADD.S specifies an active
1955 32-bits, with the upper bits of the source registers needing to
1956 be all 1s ("NaN-boxed"), and the destination upper bits being
1957 *set* to all 1s (including on LOAD/STOREs).
1958
1959 Where elwidth is set to default (on any source or the destination)
1960 it is obvious that this NaN-boxing behaviour can and should be
1961 preserved. When elwidth is non-default things are less obvious,
1962 so need to be thought through. Here is a normal (scalar) sequence,
1963 assuming an RV64 which supports Quad (128-bit) FLEN:
1964
1965 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1966 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1967 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1968 top 64 MSBs ignored.
1969
1970 Therefore it makes sense to mirror this behaviour when, for example,
1971 elwidth is set to 32. Assume elwidth set to 32 on all source and
1972 destination registers:
1973
1974 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1975 floating-point numbers.
1976 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1977 in bits 0-31 and the second in bits 32-63.
1978 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1979
1980 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1981 of the registers either during the FLD **or** the ADD.D. The reason
1982 is that, effectively, the top 64 MSBs actually represent a completely
1983 independent 64-bit register, so overwriting it is not only gratuitous
1984 but may actually be harmful for a future extension to SV which may
1985 have a way to directly access those top 64 bits.
1986
1987 The decision is therefore **not** to touch the upper parts of floating-point
1988 registers wherever elwidth is set to non-default values, including
1989 when "isvec" is false in a given register's CSR entry. Only when the
1990 elwidth is set to default **and** isvec is false will the standard
1991 RV behaviour be followed, namely that the upper bits be modified.
1992
1993 Ultimately if elwidth is default and isvec false on *all* source
1994 and destination registers, a SimpleV instruction defaults completely
1995 to standard RV scalar behaviour (this holds true for **all** operations,
1996 right across the board).
1997
1998 The nice thing here is that ADD.S, ADD.D and ADD.Q when elwidth are
1999 non-default values are effectively all the same: they all still perform
2000 multiple ADD operations, just at different widths. A future extension
2001 to SimpleV may actually allow ADD.S to access the upper bits of the
2002 register, effectively breaking down a 128-bit register into a bank
2003 of 4 independently-accessible 32-bit registers.
2004
2005 In the meantime, although when e.g. setting VL to 8 it would technically
2006 make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
2007 using ADD.Q may be an easy way to signal to the microarchitecture that
2008 it is to receive a higher VL value. On a superscalar OoO architecture
2009 there may be absolutely no difference, however on simpler SIMD-style
2010 microarchitectures they may not necessarily have the infrastructure in
2011 place to know the difference, such that when VL=8 and an ADD.D instruction
2012 is issued, it completes in 2 cycles (or more) rather than one, where
2013 if an ADD.Q had been issued instead on such simpler microarchitectures
2014 it would complete in one.
2015
2016 ## Specific instruction walk-throughs
2017
2018 This section covers walk-throughs of the above-outlined procedure
2019 for converting standard RISC-V scalar arithmetic operations to
2020 polymorphic widths, to ensure that it is correct.
2021
2022 ### add
2023
2024 Standard Scalar RV32/RV64 (xlen):
2025
2026 * RS1 @ xlen bits
2027 * RS2 @ xlen bits
2028 * add @ xlen bits
2029 * RD @ xlen bits
2030
2031 Polymorphic variant:
2032
2033 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
2034 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
2035 * add @ max(rs1, rs2) bits
2036 * RD @ rd bits: zero-extend to rd if rd > max(rs1, rs2), otherwise truncate
2037
2038 Note here that polymorphic add zero-extends its source operands,
2039 where addw sign-extends.

### addw

The RV Specification specifically states that "W" variants of arithmetic
operations always produce 32-bit signed values. In a polymorphic
environment it is reasonable to assume that the signed aspect is
preserved, where it is the length of the operands and the result
that may be changed.

Standard Scalar RV64 (xlen):

* RS1 @ xlen bits
* RS2 @ xlen bits
* add @ xlen bits
* RD @ xlen bits: truncate add to 32-bit and sign-extend to xlen.

Polymorphic variant:

* RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
* RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
* add @ max(rs1, rs2) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, rs2), otherwise truncate

Note here that polymorphic addw sign-extends its source operands,
where add zero-extends.

This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
only where the bitwidths of rs1 and rs2 differ that the lesser-width
operand will be sign-extended.

Effectively, however, both rs1 and rs2 are being sign-extended (or
truncated), where for add they are both zero-extended. This holds true
for all arithmetic operations ending with "W".
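The sign-extension behaviour can be sketched the same way (again, the
function and parameter names are invented for illustration):

```python
def sign_ext(value, width, to):
    """Treat the low 'width' bits as signed; return as a 'to'-bit pattern."""
    value &= (1 << width) - 1
    if value & (1 << (width - 1)):
        value -= 1 << width
    return value & ((1 << to) - 1)

def polymorphic_addw(rs1_val, rs1_w, rs2_val, rs2_w, rd_w):
    """Sketch of polymorphic addw: sources sign-extended, result signed."""
    opw = max(rs1_w, rs2_w)
    a = sign_ext(rs1_val, rs1_w, opw)
    b = sign_ext(rs2_val, rs2_w, opw)
    result = (a + b) & ((1 << opw) - 1)
    # destination: sign-extend if rd is wider, truncate if narrower
    return sign_ext(result, opw, rd_w)
```

Thus an 8-bit -1 (0xFF) added to a 16-bit +1 gives zero, where the
zero-extending plain add above would have given 0x100.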

### addiw

Standard Scalar RV64I:

* RS1 @ xlen bits, truncated to 32-bit
* immed @ 12 bits, sign-extended to 32-bit
* add @ 32 bits
* RD @ rd bits: sign-extend to rd if rd > 32, otherwise truncate.

Polymorphic variant:

* RS1 @ rs1 bits
* immed @ 12 bits, sign-extend to max(rs1, 12) bits
* add @ max(rs1, 12) bits
* RD @ rd bits: sign-extend to rd if rd > max(rs1, 12), otherwise truncate
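A sketch of the addiw variant follows. Note one assumption made here that
the list above leaves open: following the "W ops sign-extend" rule, rs1
is sign-extended when it is narrower than the 12-bit immediate.

```python
def sign_ext(value, width, to):
    """Treat the low 'width' bits as signed; return as a 'to'-bit pattern."""
    value &= (1 << width) - 1
    if value & (1 << (width - 1)):
        value -= 1 << width
    return value & ((1 << to) - 1)

def polymorphic_addiw(rs1_val, rs1_w, imm12, rd_w):
    """Sketch of polymorphic addiw with a 12-bit immediate."""
    opw = max(rs1_w, 12)
    a = sign_ext(rs1_val, rs1_w, opw)  # assumption: W-rule sign-extension
    imm = sign_ext(imm12, 12, opw)     # immediate sign-extended to opw
    result = (a + imm) & ((1 << opw) - 1)
    return sign_ext(result, opw, rd_w)
```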

# Predication Element Zeroing

The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with
register renaming, allowing them to save power by avoiding a register
read on elements that are passed through en-masse to the ALU. Simpler
microarchitectures do not have this issue: they simply do not pass the
element through to the ALU at all, and therefore do not store it back
in the destination. More complex non-lane-based micro-architectures can,
when zeroing is not set, use the predication bits to avoid sending
element-based operations to the ALUs entirely: thus, over the long term,
potentially keeping all ALUs 100% occupied even when elements are
predicated out.

SimpleV's design principle is not based on or influenced by
microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(whether fewer instructions are needed for certain scenarios),
given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.

## Single-predication (based on destination register)

Zeroing on predication for arithmetic operations is taken from
the destination register's predicate, i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
Thus when zeroing is set on predication of a destination element,
if the predication bit is clear, then the destination element is *set*
to zero (twin-predication is slightly different, and will be covered
next).

Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:

    for (i = 0; i < VL; i++)
       if not zeroing: # an optimisation
          while (!(predval & 1<<i) && i < VL)
             if (int_vec[rd ].isvector)  { ird += 1; }
             if (int_vec[rs1].isvector)  { irs1 += 1; }
             if (int_vec[rs2].isvector)  { irs2 += 1; }
          if i == VL:
             break
       if (predval & 1<<i)
          src1 = ....
          src2 = ...
          result = src1 + src2 # actual add (or other op) here
          set_polymorphed_reg(rd, destwid, ird, result)
          if (!int_vec[rd].isvector) break
       else if zeroing:
          result = 0
          set_polymorphed_reg(rd, destwid, ird, result)
       if (int_vec[rd ].isvector)  { ird += 1; }
       else if (predval & 1<<i) break;
       if (int_vec[rs1].isvector)  { irs1 += 1; }
       if (int_vec[rs2].isvector)  { irs2 += 1; }

The optimisation to skip elements entirely is only possible for certain
micro-architectures when zeroing is not set. However, for lane-based
micro-architectures this optimisation may not be practical, as it
implies that elements end up in different "lanes". Under these
circumstances it is perfectly fine to simply have the lanes
"inactive" for predicated elements, even though it results in
less than 100% ALU utilisation.
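The predicated loop may be modelled, ignoring the scalar/vector
register-stepping and element widths, with a short Python sketch
(`predicated_vec_add` is an invented name; `None` marks a destination
element left untouched):

```python
def predicated_vec_add(vl, src1, src2, predval, zeroing):
    """Sketch of a single-predicated vector add (widths/stepping omitted)."""
    dest = [None] * vl           # None = destination element untouched
    for i in range(vl):
        if predval & (1 << i):
            dest[i] = src1[i] + src2[i]
        elif zeroing:
            dest[i] = 0          # zeroing: masked-out element set to zero
        # non-zeroing: masked-out element is simply skipped
    return dest
```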

## Twin-predication (based on source and destination register)

Twin-predication is not that much different, except that
the source is independently zero-predicated from the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated *or both*, or neither.

When, with twin-predication, zeroing is set on the source and not
the destination, a clear predicate bit indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up
an *address* of zero).

When zeroing is set on the destination and not the source, then just
as with single-predicated operations, a zero is stored into the
destination element (or target memory address for a STORE).

Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: wherever either the source
predicate OR the destination predicate bit is 0, a zero element will
ultimately end up in the destination register.

However: this may not necessarily be the case for all operations;
implementors, particularly of custom instructions, clearly need to
think through the implications in each and every case.

Here is pseudo-code for a twin zero-predicated operation:

    function op_mv(rd, rs) # MV not VMV!
       rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
       rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
       ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
       pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
       for (int i = 0, int j = 0; i < VL && j < VL):
          if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
          if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
          if ((pd & 1<<j))
             if ((ps & 1<<i))
                sourcedata = ireg[rs+i];
             else
                sourcedata = 0
             ireg[rd+j] <= sourcedata
          else if (zerodst)
             ireg[rd+j] <= 0
          if (int_csr[rs].isvec)
             i++;
          if (int_csr[rd].isvec)
             j++;
          else
             if ((pd & 1<<j))
                break;

Note that in the instance where the destination is a scalar, the hardware
loop is ended the moment a value *or a zero* is placed into the destination
register/element. Also note that, for clarity, variable element widths
have been left out of the above.
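The four zeroing combinations can be explored with a Python model of the
pseudo-code (a sketch under two assumptions stated here: both registers
are vectorised, and element widths are again omitted; `twin_pred_mv` is
an invented name):

```python
def twin_pred_mv(vl, src, ps, zerosrc, pd, zerodst):
    """Sketch of twin-predicated MV; both rs and rd assumed vectorised."""
    dest = [None] * vl           # None = destination element untouched
    i = j = 0
    while i < vl and j < vl:
        # non-zeroing predication skips masked-out elements entirely
        if not zerosrc:
            while i < vl and not (ps & (1 << i)):
                i += 1
        if not zerodst:
            while j < vl and not (pd & (1 << j)):
                j += 1
        if i >= vl or j >= vl:
            break
        if pd & (1 << j):
            # source-zeroing: a clear source bit passes a zero through
            dest[j] = src[i] if (ps & (1 << i)) else 0
        elif zerodst:
            dest[j] = 0          # dest-zeroing: masked-out element zeroed
        i += 1
        j += 1
    return dest
```

With both zeroing flags set, real data lands only where both predicate
bits are set: ps=0b0011, pd=0b0101 gives data in element 0 only, zeros
elsewhere.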

# Exceptions

TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and operation must be returned to where it left off. That in
turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.

The implications are that all underlying individual scalar operations
"issued" by the parallelisation have to appear to be executed sequentially.
The further implications are that if two or more individual element
operations are underway, and one with an earlier index causes an exception,
it may be necessary for the microarchitecture to **discard** or terminate
operations with higher indices.

This being somewhat dissatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.

# Hints

A "HINT" is an operation that has no effect on architectural state,
where its use may, by agreed convention, give advance notification
to the microarchitecture: branch prediction notification would be
a good example. Usually HINTs are where rd=x0.

With Simple-V being capable of issuing *parallel* instructions where
rd=x0, the space for possible HINTs is expanded considerably. VL
could be used to indicate different hints. In addition, if predication
is set, the predication register itself could hypothetically be passed
in as a *parameter* to the HINT operation.

No specific hints are yet defined in Simple-V.

# VLIW Format <a name="vliw-format"></a>

One issue with SV is the setup and teardown time of the CSRs. The cost
of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
therefore makes sense.

A suitable prefix, which fits the Expanded Instruction-Length encoding
for "(80 + 16 times instruction_length)", as defined in Section 1.5
of the RISC-V ISA, is as follows:

| 15    | 14:12 | 11:10 | 9:8   | 7    | 6:0     |
| ----- | ----- | ----- | ----- | ---- | ------- |
| vlset | 16xil | pplen | rplen | mode | 1111111 |

An optional VL Block, optional predicate entries, optional register
entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
follow.

The variable-length format from Section 1.5 of the RISC-V ISA:

| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx  xxxxxxxxxxxxxxxx   | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |

VL/MAXVL/SubVL Block:

| 31-30 | 29:28 | 27:22  | 21:17 - 16 |
| ----- | ----- | ------ | ---------- |
| 0     | SubVL | VLdest | VLEN vlt   |
| 1     | SubVL | VLdest | VLEN       |

Note: this format is very similar to that used in [[sv_prefix_proposal]]

If vlt is 0, VLEN is a 5-bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1 and so on). If vlt is 1,
it specifies the scalar register from which VL is set by this VLIW
instruction group. VL, whether set from the register or the immediate,
is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
in the scalar register specified in VLdest. If VLdest is zero, no store
in the regfile occurs (however VL is still set).

This option will typically be used to start vectorised loops, where
the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
sequence (in compact form).

When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
VLEN (again, offset by one), which is 6 bits in length, and the same
value stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2 and so on.
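The two VL-setting behaviours can be sketched in Python (illustrative
only: `decode_vl` and its parameter names are invented for this example):

```python
def decode_vl(vlset, vlt, vlen_field, reg_val, maxvl):
    """Sketch of the VL Block semantics; returns (VL, MAXVL)."""
    if vlset:
        # bit 15 set: 6-bit immediate, offset by one, sets VL and MAXVL
        vl = vlen_field + 1
        return vl, vl
    # bit 15 clear: vlt selects a scalar register value (1) or a 5-bit
    # immediate offset by one (0); VL is then truncated to MIN(VL, MAXVL)
    vl = reg_val if vlt else vlen_field + 1
    return min(vl, maxvl), maxvl
```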

This option will typically not be used so much for loops as it will be
for one-off instructions such as saving the entire register file to the
stack with a single one-off Vectorised and predicated LD/ST, or as a way
to save or restore registers in a function call with a single instruction.

CSRs needed:

* mepcvliw
* sepcvliw
* uepcvliw
* hepcvliw

Notes:

* Bit 7 specifies if the prefix block format is the full 16 bit format
(1) or the compact, less expressive format (0). In the 8 bit format,
pplen is multiplied by 2.
* 8 bit format predicate numbering is implicit and begins from x9. Thus
it is critical to put blocks in the correct order as required.
* Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
(0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
immediately follows the VLIW instruction Prefix
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
otherwise 0 to 6) follow the (optional) RegCam entries
* Bits 14 to 12 (IL) define the actual length of the instruction: total
number of bits is 80 + 16 times IL. Standard RV32, RVC and also
SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
(optional) VL / RegCam / PredCam entries
* Anything - any registers - within the VLIW-prefixed format *MUST* have the
RegCam and PredCam entries applied to it.
* At the end of the VLIW Group, the RegCam and PredCam entries
*no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
the values set by the last instruction (whether a CSRRW or the VL
Block header).
* Although an inefficient use of resources, it is fine to set the MAXVL,
VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
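The prefix fields in the table above can be unpacked with a short Python
sketch (`decode_vliw_prefix` is an invented name; the doubling of pplen
and rplen in the compact 8-bit formats follows the bit-7 notes above):

```python
def decode_vliw_prefix(insn16):
    """Sketch: unpack the 16-bit SV VLIW prefix fields."""
    assert insn16 & 0x7F == 0x7F, "not a VLIW prefix"
    mode  = (insn16 >> 7) & 0x1    # 16-bit (1) or compact 8-bit (0) blocks
    rplen = (insn16 >> 8) & 0x3    # number of RegCam entries
    pplen = (insn16 >> 10) & 0x3   # number of PredCam entries
    il    = (insn16 >> 12) & 0x7   # instruction-length field
    vlset = (insn16 >> 15) & 0x1   # VL Block present
    if mode == 0:
        # compact 8-bit block format: entry counts are doubled
        pplen *= 2
        rplen *= 2
    return dict(vlset=vlset, il=il, pplen=pplen, rplen=rplen,
                mode=mode, total_bits=80 + 16 * il)
```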

All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires three, even four 32-bit
opcodes: the CSR itself, a LI, and the setting up of the value into the
RS register of the CSR, which, again, requires a LI / LUI to get the
32-bit data into the CSR. To get 64-bit data into the register in order
to put it into the CSR(s), LOAD operations from memory are needed!

Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially six to eight 32-bit instructions, just to
establish the Vector State!

Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more bits
if VL needs to be set to greater than 32). Bear in mind that in SV, both
MAXVL and VL need to be set.

By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
only 16 bits, and as long as not too many predicates and register vector
qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.

In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
a VLIW format makes a lot of sense.

Open Questions:

* Is it necessary to stick to the RISC-V 1.5 format? Why not go with
using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
limit to 256 bits (16 times 0-11).
* Could a "hint" be used to set which operations are parallel and which
are sequential?
* Could a new sub-instruction opcode format be used, one that does not
conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
No need for byte or bit-alignment.
* Could a hardware compression algorithm be deployed? Quite likely,
because of the sub-execution context (sub-VLIW PC).

## Limitations on instructions

To greatly simplify implementations, it is required to treat the VLIW
group as a separate sub-program with its own separate PC. The sub-PC
advances separately whilst the main PC remains pointing at the beginning
of the VLIW instruction (not to be confused with how VL works, which
is exactly the same principle, except it is VStart in the STATE CSR
that increments).

This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub-extension of the xepc set of CSRs. Thus, the xepcvliw
CSRs must be context-switched and saved / restored in traps.

The srcoffs and destoffs indices in the STATE CSR may be similarly
regarded as another sub-execution context, giving in effect two sets of
nested sub-levels of the RISC-V Program Counter (actually, three,
including SUBVL and ssvoffs).

In addition, as the xepcvliw CSRs are relative to the beginning of the
VLIW block, branches MUST be restricted to within (relative to) the
block, i.e. addressing is now restricted to the start (and very short)
length of the block.

Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.

A normal jump, normal branch and a normal function call may only be
taken by letting the VLIW group end, returning to "normal" standard
RV mode, and then using standard RVC, 32 bit or P48/64-\*-type opcodes.

## Links

* <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>

# Subsets of RV functionality

This section describes the differences when SV is implemented on top of
different subsets of RV.

## Common options

It is permitted to only implement SVprefix and not the VLIW instruction
format option. UNIX Platforms **MUST** raise an illegal instruction
exception on seeing a VLIW opcode, so that traps may emulate the format.

It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX
Platforms *MUST* raise an illegal instruction exception on
implementations that do not support VL or SUBVL.

It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.

## RV32 / RV32F

When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
specifically an "option", it is worth noting.

## RV32G

Normally in standard RV32 it does not make much sense to have RV32G.
The critical instructions that are missing in standard RV32 are those
for moving data to and from the double-width floating-point registers
into the integer ones, as well as the FCVT routines.

In an earlier draft of SV, it was possible to specify an elwidth
of double the standard register size: this had to be dropped,
and may be reintroduced in future revisions.

## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)

When floating-point is not implemented, the size of the User Register and
Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
per table).

## RV32E

In embedded scenarios the User Register and Predication CSRs may be
dropped entirely, or optionally limited to 1 CSR, such that the combined
number of entries from the M-Mode CSR Register table plus U-Mode
CSR Register table is either 4 16-bit entries or (if the U-Mode is
zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
the Predication CSR tables.

RV32E is the most likely candidate for simply detecting that registers
are marked as "vectorised", and generating an appropriate exception
for the VL loop to be implemented in software.

## RV128

RV128 has not been especially considered here; however it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.

# Under consideration <a name="issues"></a>

For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), recommend:

* For the unused elements in an integer register, the used element
closest to the MSB is sign-extended on write and the unused elements
are ignored on read.
* The unused elements in a floating-point register are treated as-if
they are set to all ones on write and are ignored on read, matching the
existing standard for storing smaller FP values in larger registers.

---

info register,

> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).

> That CSR would have to have a flag to make a read trap so
> a hypervisor can simulate different values.

----

> And what about instructions like JALR?

answer: they're not vectorised, so not a problem

----

* if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
XLEN if elwidth==default
* if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
*32* if elwidth == default

---

TODO: document different lengths for INT / FP regfiles, and provide
as part of info register. 00=32, 01=64, 10=128, 11=reserved.

---

TODO: update to remove RegCam and PredCam CSRs; just use SVprefix and
VLIW format.