1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.6
5 * Last edited: 21 jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
Simple-V is a uniform parallelism API for RISC-V hardware that has
several unplanned side-effects, including code-size reduction and
expansion of HINT space. The reason for creating it is to provide a
manageable way to turn a pre-existing design into a parallel one in a
step-by-step, incremental fashion, without adding any new opcodes, thus
allowing the implementor to focus on adding hardware only where it is
needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
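As an illustration only (not a mandated implementation), the tagging-and-loop behaviour described above can be sketched in software. The names `VL`, `tags` and `regfile` here are stand-ins for the real CSR and register-file state:

```python
# Illustrative sketch of the SV "macro-unrolling" loop (not normative).
# VL, tags and regfile stand in for the real CSR / register-file state.

VL = 4                     # Vector Length CSR
tags = {3}                 # register x3 has been "tagged" as vectorised
regfile = list(range(32))  # scalar integer register file

def issue_add(rd, rs1, rs2):
    """A standard scalar ADD: if any operand register is tagged,
    the PC stalls while VL operations are issued on contiguously
    incrementing register numbers."""
    if rd in tags or rs1 in tags or rs2 in tags:
        for i in range(VL):  # PC may not advance until the loop ends
            regfile[rd + i] = regfile[rs1 + i] + regfile[rs2 + i]
    else:                    # untagged: ordinary scalar behaviour
        regfile[rd] = regfile[rs1] + regfile[rs2]
```

Issuing `issue_add(3, 10, 20)` with the state above writes four results into x3-x6, exactly as if four scalar ADDs had been issued back-to-back.
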
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
and may be suited to implementation in RV32E, and to situations where
the implementor wishes to focus on certain aspects of SV without
investing unnecessary time and resources in silicon, whilst still
conforming strictly with the API. The polymorphic element width
capability, for example, would be a good area to punt to software.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
* A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * Standard RV instructions are "prefixed" (extended) through a 48/64
79 bit format (single instruction option) or a variable
80 length VLIW-like prefix (multi or "grouped" option).
81 * The prefix(es) indicate which registers are "tagged" as
82 "vectorised". Predicates can also be added, and element widths overridden on any src or dest register.
83 * A "Vector Length" CSR is set, indicating the span of any future
84 "parallel" operations.
85 * If any operation (a **scalar** standard RV opcode) uses a register
86 that has been so "marked" ("tagged"), a hardware "macro-unrolling loop"
87 is activated, of length VL, that effectively issues **multiple**
88 identical instructions using contiguous sequentially-incrementing
89 register numbers, based on the "tags".
90 * **Whether they be executed sequentially or in parallel or a
91 mixture of both or punted to software-emulation in a trap handler
92 is entirely up to the implementor**.
93
94 In this way an entire scalar algorithm may be vectorised with
95 the minimum of modification to the hardware and to compiler toolchains.
96
97 To reiterate: **There are *no* new opcodes**. The scheme works *entirely*
98 on hidden context that augments *scalar* RISCV instructions.
99
100 # CSRs <a name="csrs"></a>
101
102 * An optional "reshaping" CSR key-value table which remaps from a 1D
103 linear shape to 2D or 3D, including full transposition.
104
There are also five additional User-Mode CSRs:
106
107 * uMVL (the Maximum Vector Length)
108 * uVL (which has different characteristics from standard CSRs)
109 * uSUBVL (effectively a kind of SIMD)
110 * uEPCVLIW (a copy of the sub-execution Program Counter, that is relative
111 to the start of the current VLIW Group, set on a trap).
112 * uSTATE (useful for saving and restoring during context switch,
113 and for providing fast transitions)
114
115 There are also five additional CSRs for Supervisor-Mode:
116
117 * SMVL
118 * SVL
119 * SSUBVL
120 * SEPCVLIW
121 * SSTATE
122
123 And likewise for M-Mode:
124
125 * MMVL
126 * MVL
127 * MSUBVL
128 * MEPCVLIW
129 * MSTATE
130
131 Both Supervisor and M-Mode have their own CSR registers, independent
132 of the other privilege levels, in order to make it easier to use
133 Vectorisation in each level without affecting other privilege levels.
134
135 The access pattern for these groups of CSRs in each mode follows the
136 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
137
138 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
139 * In S-Mode, accessing and changing of the M-Mode CSRs is transparently
140 identical
141 to changing the S-Mode CSRs. Accessing and changing the U-Mode
142 CSRs is permitted.
143 * In U-Mode, accessing and changing of the S-Mode and U-Mode CSRs
144 is prohibited.
145
146 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
147 M-Mode MVL, the M-Mode STATE and so on that influences the processor
148 behaviour. Likewise for S-Mode, and likewise for U-Mode.
149
150 This has the interesting benefit of allowing M-Mode (or S-Mode) to be set
151 up, for context-switching to take place, and, on return back to the higher
152 privileged mode, the CSRs of that mode will be exactly as they were.
153 Thus, it becomes possible for example to set up CSRs suited best to aiding
154 and assisting low-latency fast context-switching *once and only once*
155 (for example at boot time), without the need for re-initialising the
156 CSRs needed to do so.
157
Another interesting side effect of separate S-Mode CSRs is that
Vectorised saving of the entire register file to the stack becomes a
single instruction (accidental provision of LOAD-MULTI semantics).
If the SVPrefix P64-LD-type format is used, LOAD-MULTI may even be
done with a single standalone 64-bit opcode (P64 may set up both VL
and MVL from an immediate field). It can even be predicated, which
opens up some very interesting possibilities.
162
163 The (x)EPCVLIW CSRs must be treated exactly like their corresponding (x)epc
164 equivalents. See VLIW section for details.
165
166 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
167
168 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
169 is variable length and may be dynamically set. MVL is
170 however limited to the regfile bitwidth XLEN (1-32 for RV32,
171 1-64 for RV64 and so on).
172
173 The reason for setting this limit is so that predication registers, when
174 marked as such, may fit into a single register as opposed to fanning out
175 over several registers. This keeps the hardware implementation a little simpler.
176
177 The other important factor to note is that the actual MVL is internally
178 stored **offset by one**, so that it can fit into only 6 bits (for RV64)
179 and still cover a range up to XLEN bits. Attempts to set MVL to zero will
180 return an exception. This is expressed more clearly in the "pseudocode"
181 section, where there are subtle differences between CSRRW and CSRRWI.
182
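A small sketch (illustrative helper names, not part of the specification) of the offset-by-one storage described above:

```python
# Illustrative sketch: MVL is stored offset by one, so a 6-bit field
# (RV64) covers the range 1..64 and a value of zero can never be stored.
def mvl_store(mvl, xlen=64):
    if mvl == 0 or mvl > xlen:
        raise ValueError("exception: MVL must be in the range 1..XLEN")
    return mvl - 1    # 0b000000 means MVL=1, 0b111111 means MVL=64

def mvl_load(field):
    return field + 1
```
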
183 ## Vector Length (VL) <a name="vl" />
184
185 VSETVL is slightly different from RVV. Similar to RVV, VL is set to be within
186 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
187
188 VL = rd = MIN(vlen, MVL)
189
190 where 1 <= MVL <= XLEN
191
192 However just like MVL it is important to note that the range for VL has
193 subtle design implications, covered in the "CSR pseudocode" section
194
195 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
196 to switch the entire bank of registers using a single instruction (see
197 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
198 is down to the fact that predication bits fit into a single register of
199 length XLEN bits.
200
201 The second change is that when VSETVL is requested to be stored
202 into x0, it is *ignored* silently (VSETVL x0, x5)
203
204 The third and most important change is that, within the limits set by
205 MVL, the value passed in **must** be set in VL (and in the
206 destination register).
207
208 This has implication for the microarchitecture, as VL is required to be
209 set (limits from MVL notwithstanding) to the actual value
210 requested. RVV has the option to set VL to an arbitrary value that suits
211 the conditions and the micro-architecture: SV does *not* permit this.
212
213 The reason is so that if SV is to be used for a context-switch or as a
214 substitute for LOAD/STORE-Multiple, the operation can be done with only
215 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
216 single LD/ST operation). If VL does *not* get set to the register file
217 length when VSETVL is called, then a software-loop would be needed.
218 To avoid this need, VL *must* be set to exactly what is requested
219 (limits notwithstanding).
220
221 Therefore, in turn, unlike RVV, implementors *must* provide
222 pseudo-parallelism (using sequential loops in hardware) if actual
223 hardware-parallelism in the ALUs is not deployed. A hybrid is also
224 permitted (as used in Broadcom's VideoCore-IV) however this must be
225 *entirely* transparent to the ISA.
226
227 The fourth change is that VSETVL is implemented as a CSR, where the
228 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
229 the *new* value in the destination register, **not** the old value.
230 Where context-load/save is to be implemented in the usual fashion
231 by using a single CSRRW instruction to obtain the old value, the
232 *secondary* CSR must be used (SVSTATE). This CSR behaves
233 exactly as standard CSRs, and contains more than just VL.
234
235 One interesting side-effect of using CSRRWI to set VL is that this
236 may be done with a single instruction, useful particularly for a
context-load/save. There are however limitations: CSRRWI's immediate
is limited to 0-31 (representing VL=1-32).
239
240 Note that when VL is set to 1, all parallel operations cease: the
241 hardware loop is reduced to a single element: scalar operations.
242
243 ## SUBVL - Sub Vector Length
244
This is a "group by quantity" that effectively asks each iteration of the hardware loop to load SUBVL elements of width elwidth at a time. Effectively, SUBVL is like a SIMD multiplier: instead of just 1 operation being issued, SUBVL operations are issued.
246
247 Another way to view SUBVL is that each element in the VL length vector is now SUBVL times elwidth bits in length.
248
The primary use case for SUBVL is 3D FP Vectors. A Vector of 3D coordinates X,Y,Z, for example, may be loaded, multiplied, then stored, per VL element iteration, rather than having to set VL to be three times larger.
250
Legal values are 1, 2, 3 and 4; the STATE CSR must hold the 2-bit values 0b00 through 0b11.
252
253 Setting this CSR to 0 must raise an exception. Setting it to a value
254 greater than 4 likewise.
255
256 The main effect of SUBVL is that predication bits are applied per **group**,
257 rather than by individual element.
258
259 This saves a not insignificant number of instructions when handling 3D
260 vectors, as otherwise a much longer predicate mask would have to be set
261 up with regularly-repeated bit patterns.
262
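In software terms the grouping might look like this (an illustrative sketch; `predicated_groups` is a hypothetical helper, not part of the specification):

```python
# Illustrative sketch: with SUBVL, one predicate bit enables or
# disables a whole group of SUBVL elements, not a single element.
VL = 4
SUBVL = 3  # e.g. X,Y,Z of a 3D coordinate

def predicated_groups(predicate):
    """Return the flat element indices enabled by the predicate."""
    enabled = []
    for i in range(VL):             # one predicate bit per group
        if predicate & (1 << i):
            for j in range(SUBVL):  # the whole group is enabled
                enabled.append(i * SUBVL + j)
    return enabled
```

With `predicate=0b0101`, groups 0 and 2 are enabled, covering elements 0-2 and 6-8; a per-element mask would have needed the regularly-repeated pattern 0b000111000111.
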
263 See SUBVL Pseudocode illustration for details.
264
265 ## STATE
266
267 This is a standard CSR that contains sufficient information for a
268 full context save/restore. It contains (and permits setting of):
269
270 * MVL
271 * VL
272 * the destination element offset of the current parallel instruction
273 being executed
274 * and, for twin-predication, the source element offset as well.
275 * SUBVL
276 * the subvector destination element offset of the current parallel instruction
277 being executed
278 * and, for twin-predication, the subvector source element offset as well.
279
Interestingly, STATE may hypothetically also be used to make the
immediately-following instruction skip a certain number of elements,
by playing with destoffs and srcoffs (and the subvector offsets as
well).
284
285 Setting destoffs and srcoffs is realistically intended for saving state
286 so that exceptions (page faults in particular) may be serviced and the
287 hardware-loop that was being executed at the time of the trap, from
288 user-mode (or Supervisor-mode), may be returned to and continued from exactly
where it left off. This works because User-Mode STATE is neither
changed nor used in M-Mode or S-Mode (which is entirely why M-Mode
and S-Mode have their own STATE CSRs).
292
293 The format of the STATE CSR is as follows:
294
| (30..29) | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5..0) |
296 | ------- | -------- | -------- | -------- | -------- | ------- | ------- |
297 | dsvoffs | ssvoffs | subvl | destoffs | srcoffs | vl | maxvl |
298
299 When setting this CSR, the following characteristics will be enforced:
300
301 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
302 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
* **SUBVL**, which sets a SIMD-like quantity, has only 4 legal values, so no truncation is needed
304 * **srcoffs** will be truncated to be within the range 0 to VL-1
305 * **destoffs** will be truncated to be within the range 0 to VL-1
306 * **ssvoffs** will be truncated to be within the range 0 to SUBVL-1
307 * **dsvoffs** will be truncated to be within the range 0 to SUBVL-1
308
309 NOTE: if the following instruction is not a twin predicated instruction, and destoffs or dsvoffs has been set to non-zero, subsequent execution behaviour is undefined. **USE WITH CARE**.
310
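For illustration, the packing implied by the table and truncation rules above might be sketched as follows (`pack_state` is a hypothetical helper; field positions are taken from the table, with MVL, VL and SUBVL stored offset by one):

```python
# Illustrative sketch of STATE CSR packing (field positions taken
# from the STATE table; MVL, VL and SUBVL are stored offset-by-one).
def pack_state(mvl, vl, srcoffs, destoffs, subvl, ssvoffs, dsvoffs):
    return ((mvl - 1)            # bits 5..0   maxvl
            | (vl - 1) << 6      # bits 11..6  vl
            | srcoffs << 12      # bits 17..12 srcoffs
            | destoffs << 18     # bits 23..18 destoffs
            | (subvl - 1) << 24  # bits 26..24 subvl
            | ssvoffs << 27      # bits 28..27 ssvoffs
            | dsvoffs << 29)     # bits 30..29 dsvoffs
```
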
311 ### Rules for when to increment STATE offsets
312
313 The offsets inside STATE are like the indices in a loop, except in hardware. They are also partially (conceptually) similar to a "sub-execution Program Counter". As such, and to allow proper context switching and to define correct exception behaviour, the following rules must be observed:
314
315 * When the VL CSR is set, srcoffs and destoffs are reset to zero.
316 * Each instruction that contains a "tagged" register shall start execution at the *current* value of srcoffs (and destoffs in the case of twin predication)
317 * Unpredicated bits (in nonzeroing mode) shall cause the element operation to skip, incrementing the srcoffs (or destoffs)
318 * On execution of an element operation, Exceptions shall **NOT** cause srcoffs or destoffs to increment.
319 * On completion of the full Vector Loop (srcoffs = VL-1 or destoffs = VL-1 after the last element is executed), both srcoffs and destoffs shall be reset to zero.
320
321 This latter is why srcoffs and destoffs may be stored as values from 0 to XLEN-1 in the STATE CSR, because as loop indices they refer to elements. srcoffs and destoffs never need to be set to VL: their maximum operating values are limited to 0 to VL-1.
322
323 The same corresponding rules apply to SUBVL, svsrcoffs and svdestoffs.
324
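The rules above can be illustrated with a sketch of a resumable element loop (illustrative only; `trap_at` simulates a page fault occurring on a given element):

```python
# Illustrative sketch: srcoffs acts as a resumable loop index.
# An exception on an element leaves the offset pointing AT that
# element; re-executing the instruction continues from there.
def run_element_loop(VL, srcoffs, trap_at=None):
    """Return (elements executed, saved srcoffs)."""
    executed = []
    i = srcoffs
    while i < VL:
        if i == trap_at:
            return executed, i  # trap: offset NOT incremented
        executed.append(i)      # element operation completes
        i += 1
    return executed, 0          # loop complete: offsets reset to zero
```

A trap at element 2 returns with srcoffs=2; restoring STATE and re-executing the same instruction resumes at element 2, and completing the vector loop resets the offset to zero.
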
325 ## MVL and VL Pseudocode
326
327 The pseudo-code for get and set of VL and MVL use the following internal
328 functions as follows:
329
    set_mvl_csr(value, rd):
        regs[rd] = MVL
        MVL = MIN(value, MVL)

    get_mvl_csr(rd):
        regs[rd] = MVL

    set_vl_csr(value, rd):
        VL = MIN(value, MVL)
        regs[rd] = VL # yes returning the new value NOT the old CSR
        return VL

    get_vl_csr(rd):
        regs[rd] = VL
        return VL
345
346 Note that where setting MVL behaves as a normal CSR (returns the old
347 value), unlike standard CSR behaviour, setting VL will return the **new**
348 value of VL **not** the old one.
349
For CSRRWI, the range of the immediate is restricted to 5 bits. To
make the most of this limited range, an immediate of 0 is used to set
VL=1, an immediate of 1 is used to set VL=2, and so on:
353
    CSRRWI_Set_MVL(value):
        set_mvl_csr(value+1, x0)

    CSRRWI_Set_VL(value):
        set_vl_csr(value+1, x0)
359
360 However for CSRRW the following pseudocode is used for MVL and VL,
361 where setting the value to zero will cause an exception to be raised.
362 The reason is that if VL or MVL are set to zero, the STATE CSR is
363 not capable of returning that value.
364
    CSRRW_Set_MVL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_mvl_csr(value, rd)

    CSRRW_Set_VL(rs1, rd):
        value = regs[rs1]
        if value == 0 or value > XLEN:
            raise Exception
        set_vl_csr(value, rd)
376
377 In this way, when CSRRW is utilised with a loop variable, the value
378 that goes into VL (and into the destination register) may be used
379 in an instruction-minimal fashion:
380
381 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
382 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
383 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
384 j zerotest # in case loop counter a0 already 0
385 loop:
386 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
387 ld a3, a1 # load 4 registers a3-6 from x
388 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
389 ld a7, a2 # load 4 registers a7-10 from y
390 add a1, a1, t1 # increment pointer to x by vl*8
391 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
392 sub a0, a0, t0 # n -= vl (t0)
393 st a7, a2 # store 4 registers a7-10 to y
394 add a2, a2, t1 # increment pointer to y by vl*8
395 zerotest:
396 bnez a0, loop # repeat if n != 0
397
398 With the STATE CSR, just like with CSRRWI, in order to maximise the
utilisation of the limited bitspace, "000000" in binary represents
VL==1, "000001" represents VL==2 and so on (likewise for MVL):
401
    CSRRW_Set_SV_STATE(rs1, rd):
        value = regs[rs1]
        get_state_csr(rd)             # rd receives the *old* STATE
        set_mvl_csr(value[5:0]+1, x0)
        set_vl_csr(value[11:6]+1, x0)
        srcoffs = value[17:12]
        destoffs = value[23:18]

    get_state_csr(rd):
        regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
                   (destoffs)<<18
        return regs[rd]
414
415 In both cases, whilst CSR read of VL and MVL return the exact values
416 of VL and MVL respectively, reading and writing the STATE CSR returns
417 those values **minus one**. This is absolutely critical to implement
418 if the STATE CSR is to be used for fast context-switching.
419
420 ## VL, MVL and SUBVL instruction aliases
421
422 | alias | CSR |
423 | - | - |
424 | SETVL rd, rs | CSRRW VL, rd, rs |
425 | SETVLi rd, #n | CSRRWI VL, rd, #n-1 |
426 | GETVL rd | CSRRW VL, rd, x0 |
427 | SETMVL rd, rs | CSRRW MVL, rd, rs |
428 | SETMVLi rd, #n | CSRRWI MVL,rd, #n-1 |
429 | GETMVL rd | CSRRW MVL, rd, x0 |
430
Note: CSRRC and other bit-setting operations may still be used; they are however not particularly useful (very obscure).
432
433 ## Register key-value (CAM) table <a name="regcsrtable" />
434
435 *NOTE: in prior versions of SV, this table used to be writable and
436 accessible via CSRs. It is now stored in the VLIW instruction format,
437 and entries may be overridden temporarily by the SVPrefix P48/64 format*
438
439 The purpose of the Register table is three-fold:
440
441 * To mark integer and floating-point registers as requiring "redirection"
442 if it is ever used as a source or destination in any given operation.
443 This involves a level of indirection through a 5-to-7-bit lookup table,
444 such that **unmodified** operands with 5 bits (3 for some RVC ops) may
445 access up to **128** registers.
446 * To indicate whether, after redirection through the lookup table, the
447 register is a vector (or remains a scalar).
448 * To over-ride the implicit or explicit bitwidth that the operation would
449 normally give the register.
450
451 16 bit format:
452
453 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
454 | ------ | | - | - | - | ------ | ------- |
455 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
456 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
457 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
458 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
459
460 8 bit format:
461
462 | RegCAM | | 7 | (6..5) | (4..0) |
463 | ------ | | - | ------ | ------- |
464 | 0 | | i/f | vew0 | regnum |
465
466 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
467 to integer registers; 0 indicates that it is relevant to floating-point
468 registers.
469
470 The 8 bit format is used for a much more compact expression. "isvec"
is implicit and, similar to [[sv_prefix_proposal]], the target vector
472 is "regnum<<2", implicitly. Contrast this with the 16-bit format where
473 the target vector is *explicitly* named in bits 8 to 14, and bit 15 may
474 optionally set "scalar" mode.
475
476 Note that whilst SVPrefix adds one extra bit to each of rd, rs1 etc.,
477 and thus the "vector" mode need only shift the (6 bit) regnum by 1 to
478 get the actual (7 bit) register number to use, there is not enough space
479 in the 8 bit format (only 5 bits for regnum) so "regnum<<2" is required.
480
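An illustrative decode of the 8-bit format (hypothetical helper name; field positions are those given in the 8-bit format table above):

```python
# Illustrative sketch: unpacking the 8-bit register-tag format
# (i/f in bit 7, vew in bits 6..5, regnum in bits 4..0) and applying
# the implicit "regnum << 2" rule for the 7-bit vector target.
def decode_8bit_regtag(byte):
    regnum = byte & 0b11111
    vew = (byte >> 5) & 0b11
    is_int = (byte >> 7) & 0b1  # 1 = integer, 0 = floating-point
    target = regnum << 2        # implicit 7-bit vector start register
    return is_int, vew, target
```
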
481 vew has the following meanings, indicating that the instruction's
482 operand size is "over-ridden" in a polymorphic fashion:
483
484 | vew | bitwidth |
485 | --- | ------------------- |
486 | 00 | default (XLEN/FLEN) |
487 | 01 | 8 bit |
488 | 10 | 16 bit |
489 | 11 | 32 bit |
490
491 As the above table is a CAM (key-value store) it may be appropriate
492 (faster, implementation-wise) to expand it as follows:
493
    struct vectorised fp_vec[32], int_vec[32];

    for (i = 0; i < 16; i++) // 16 CSRs?
        tb = int_vec if CSRvec[i].type == 0 else fp_vec
        idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
        tb[idx].elwidth = CSRvec[i].elwidth
        tb[idx].regidx = CSRvec[i].regidx // indirection
        tb[idx].isvector = CSRvec[i].isvector // 0=scalar
        tb[idx].packed = CSRvec[i].packed // SIMD or not
503
504
505
506 ## Predication Table <a name="predication_csr_table"></a>
507
508 *NOTE: in prior versions of SV, this table used to be writable and
509 accessible via CSRs. It is now stored in the VLIW instruction format,
510 and entries may be overridden by the SVPrefix format*
511
512 The Predication Table is a key-value store indicating whether, if a
513 given destination register (integer or floating-point) is referred to
514 in an instruction, it is to be predicated. Like the Register table, it
515 is an indirect lookup that allows the RV opcodes to not need modification.
516
517 It is particularly important to note
518 that the *actual* register used can be *different* from the one that is
519 in the instruction, due to the redirection through the lookup table.
520
521 * regidx is the register that in combination with the
522 i/f flag, if that integer or floating-point register is referred to
523 in a (standard RV) instruction
524 results in the lookup table being referenced to find the predication
525 mask to use for this operation.
526 * predidx is the
527 *actual* (full, 7 bit) register to be used for the predication mask.
528 * inv indicates that the predication mask bits are to be inverted
529 prior to use *without* actually modifying the contents of the
530 registerfrom which those bits originated.
531 * zeroing is either 1 or 0, and if set to 1, the operation must
532 place zeros in any element position where the predication mask is
533 set to zero. If zeroing is set to 0, unpredicated elements *must*
534 be left alone. Some microarchitectures may choose to interpret
535 this as skipping the operation entirely. Others which wish to
536 stick more closely to a SIMD architecture may choose instead to
537 interpret unpredicated elements as an internal "copy element"
538 operation (which would be necessary in SIMD microarchitectures
539 that perform register-renaming)
540
541 16 bit format:
542
| PrCSR | (15..11) | 10     | 9     | 8   | (7..1)  | 0    |
| ----- | -------- | ------ | ----- | --- | ------- | ---- |
| 0     | predkey  | zero0  | inv0  | i/f | regidx  | rsvd |
| 1     | predkey  | zero1  | inv1  | i/f | regidx  | rsvd |
| ...   | predkey  | .....  | ....  | i/f | ....... | .... |
| 15    | predkey  | zero15 | inv15 | i/f | regidx  | rsvd |
549
550
551 8 bit format:
552
553 | PrCSR | 7 | 6 | 5 | (4..0) |
554 | ----- | - | - | - | ------- |
555 | 0 | zero0 | inv0 | i/f | regnum |
556
The 8 bit format is a compact and less expressive variant of the full
16 bit format. Using the 8 bit format is very different: the predicate
register to use is implicit, and numbering begins implicitly from x9.
The regnum is still used to "activate" predication, in the same
fashion as described above.
562
563 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
564 it will be faster to turn the table around (maintain topologically
565 equivalent state):
566
    struct pred {
        bool zero;
        bool inv;
        bool enabled;
        int predidx; // redirection: actual int register to use
    }

    struct pred fp_pred_reg[32];  // 64 in future (bank=1)
    struct pred int_pred_reg[32]; // 64 in future (bank=1)

    for (i = 0; i < 16; i++)
        tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
        idx = CSRpred[i].regidx
        tb[idx].zero = CSRpred[i].zero
        tb[idx].inv = CSRpred[i].inv
        tb[idx].predidx = CSRpred[i].predidx
        tb[idx].enabled = true
584
585 So when an operation is to be predicated, it is the internal state that
586 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
587 pseudo-code for operations is given, where p is the explicit (direct)
588 reference to the predication register to be used:
589
    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
            (d ? vreg[rd][i] : sreg[rd]) =
                iop(s1 ? vreg[rs1][i] : sreg[rs1],
                    s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
595
596 This instead becomes an *indirect* reference using the *internal* state
597 table generated from the Predication CSR key-value store, which is used
598 as follows.
599
    if type(iop) == INT:
        preg = int_pred_reg[rd]
    else:
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        predicate, zeroing = get_pred_val(type(iop) == INT, rd)
        if (predicate & (1<<i))
            (d ? regfile[rd+i] : regfile[rd]) =
                iop(s1 ? regfile[rs1+i] : regfile[rs1],
                    s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
        else if (zeroing)
            (d ? regfile[rd+i] : regfile[rd]) = 0
613
614 Note:
615
616 * d, s1 and s2 are booleans indicating whether destination,
617 source1 and source2 are vector or scalar
618 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
619 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
620 register-level redirection (from the Register table) if they are
621 vectors.
622
623 If written as a function, obtaining the predication mask (and whether
624 zeroing takes place) may be done as follows:
625
    def get_pred_val(bool is_fp_op, int reg):
        tb = fp_reg if is_fp_op else int_reg
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        tb = fp_pred if is_fp_op else int_pred
        if (!tb[reg].enabled):
            return ~0x0, False // all enabled; no zeroing
        predidx = tb[reg].predidx   // redirection occurs HERE
        predicate = intreg[predidx] // actual predicate HERE
        if (tb[reg].inv):
            predicate = ~predicate  // invert ALL bits
        return predicate, tb[reg].zero
638
639 Note here, critically, that **only** if the register is marked
640 in its **register** table entry as being "active" does the testing
641 proceed further to check if the **predicate** table entry is
642 also active.
643
644 Note also that this is in direct contrast to branch operations
645 for the storage of comparisions: in these specific circumstances
646 the requirement for there to be an active *register* entry
647 is removed.
648
649 ## REMAP CSR <a name="remap" />
650
651 (Note: both the REMAP and SHAPE sections are best read after the
652 rest of the document has been read)
653
654 There is one 32-bit CSR which may be used to indicate which registers,
655 if used in any operation, must be "reshaped" (re-mapped) from a linear
656 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
657 access to elements within a register.
658
659 The 32-bit REMAP CSR may reshape up to 3 registers:
660
661 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
662 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
663 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
664
665 regidx0-2 refer not to the Register CSR CAM entry but to the underlying
666 *real* register (see regidx, the value) and consequently is 7-bits wide.
Since reshaping x0 would be pointless, a value of zero (referring to
x0) is used to indicate "disabled".
669 shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
670 Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
671
It is anticipated that these specialist CSRs will not be used very often.
673 Unlike the CSR Register and Predication tables, the REMAP CSRs use
674 the full 7-bit regidx so that they can be set once and left alone,
675 whilst the CSR Register entries pointing to them are disabled, instead.
676
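For illustration, packing the REMAP CSR from its fields might be sketched as follows (`pack_remap` is a hypothetical helper; reserved bits 7, 15, 23, 30 and 31 are left at zero):

```python
# Illustrative sketch of REMAP CSR packing, following the bit layout
# in the table above. regidx values are the full 7-bit *real* register
# numbers; zero means "disabled". shape values of 0b11 are reserved.
def pack_remap(regidx0, regidx1, regidx2, shape0, shape1, shape2):
    for s in (shape0, shape1, shape2):
        assert s != 0b11         # 0b11 is reserved
    for r in (regidx0, regidx1, regidx2):
        assert 0 <= r < 128      # 7 bits each
    return (regidx0              # bits 6..0
            | regidx1 << 8       # bits 14..8
            | regidx2 << 16      # bits 22..16
            | shape0 << 24       # bits 25..24
            | shape1 << 26       # bits 27..26
            | shape2 << 28)      # bits 29..28
```
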
677 ## SHAPE 1D/2D/3D vector-matrix remapping CSRs
678
679 (Note: both the REMAP and SHAPE sections are best read after the
680 rest of the document has been read)
681
682 There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32-bits in each,
683 which have the same format. When each SHAPE CSR is set entirely to zeros,
684 remapping is disabled: the register's elements are a linear (1D) vector.
685
686 | 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
687 | ------- | -- | ------- | -- | ------- | -- | ------- |
688 | permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |
689
690 offs is a 3-bit field, spread out across bits 7, 15 and 23, which
691 is added to the element index during the loop calculation.
692
693 xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
694 that the array dimensionality for that dimension is 1. A value of xdimsz=2
695 would indicate that in the first dimension there are 3 elements in the
696 array. The format of the array is therefore as follows:
697
698 array[xdim+1][ydim+1][zdim+1]
699
However whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5, with values of 6 and 7 being reserved, and not legal). The table
below shows how the permutation dimensionality order works:

| permute | order | array format             |
| ------- | ----- | ------------------------ |
| 000     | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001     | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010     | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011     | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100     | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101     | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a python program:

    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0, 0, 0]   # starting indices
    order = [1, 0, 2]  # experiment with different permutations, here
    offs = 0           # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=" ")
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if idxs[order[i]] != lims[order[i]]:
                break
            print()
            idxs[order[i]] = 0

Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index, instead. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:

    function op_add(rd, rs1, rs2)  # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

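The remap() used in the pseudo-code above can be modelled as a standalone
Python function. This is an illustrative sketch only: the dict-based SHAPE
descriptor (actual dimension sizes, i.e. already incremented by one, a
permute order list and a flat offs) is a modelling convenience, not the
CSR bit encoding.

```python
def remap(shape, idx):
    """Model of REMAP: turn a linear element index into a remapped
    index, by stepping three nested dimension counters (nesting order
    given by shape["permute"]) idx times.  The shape dict is a
    modelling assumption, not the CSR layout."""
    lims = [shape["xdim"], shape["ydim"], shape["zdim"]]
    order = shape["permute"]  # e.g. [1, 0, 2]
    idxs = [0, 0, 0]
    for _ in range(idx):
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != lims[order[i]]:
                break
            idxs[order[i]] = 0  # wrapped: carry into next dimension
    return (shape["offs"] + idxs[0]
            + idxs[1] * shape["xdim"]
            + idxs[2] * shape["xdim"] * shape["ydim"])
```

With permute=[1,0,2] on a 2x2 shape, indices 0..3 map to 0,2,1,3: an
in-place transpose, exactly as described below.
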
By changing remappings, 2D matrices may be transposed "in-place" for one
operation, followed by setting a different permutation order without
having to move the values in the registers to or from memory. Also,
the reason for having REMAP separate from the three SHAPE CSRs is so
that in a chain of matrix multiplications and additions, for example,
the SHAPE CSRs need only be set up once; only the REMAP CSR need be
changed to target different registers.

Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  either to use a predicated MV, skipping the first elements, or
  to perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

Clearly here some considerable care needs to be taken as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.

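As a concrete illustration of the "macro expansion" view, the following
Python sketch (a model for this document, not part of the specification)
expands one register-tagged instruction into its VL scalar equivalents:

```python
def sv_expand(op, rd, rs1, rs2, VL, vec=("rd", "rs1", "rs2")):
    """Expand a single SV-tagged instruction into VL sequential scalar
    instructions.  Registers tagged as vectors increment per element;
    scalar registers repeat unchanged."""
    expanded = []
    for i in range(VL):
        expanded.append((op,
                         rd  + (i if "rd"  in vec else 0),
                         rs1 + (i if "rs1" in vec else 0),
                         rs2 + (i if "rs2" in vec else 0)))
    return expanded
```

sv_expand("add", 8, 16, 24, 3) yields add x8,x16,x24; add x9,x17,x25;
add x10,x18,x26: precisely the contiguous sequence that SV compacts into
one instruction.
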
# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all operations,
with the exception of CLIP and VSELECT.X,
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions, whilst a case could be made for fast-polling of
  a CSR into multiple registers, or for being able to copy multiple
  contiguously addressed CSRs into contiguous registers, and so on,
  are the fundamental core basis of SV. If parallelised, extreme
  care would need to be taken. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI, AUIPC are not suitable for parallelising so are
  left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions therefore is guaranteed to
  not be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers so are not inherently
  parallelisable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSRs.

    function op_add(rd, rs1, rs2)  # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.

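The hardware loop above can also be expressed as a small executable Python
model. The tags dictionary standing in for the CSR register table, and
passing the predicate value in directly, are simplifications assumed for
this sketch:

```python
def op_add(ireg, tags, VL, predval, rd, rs1, rs2):
    """Executable model of the SV ADD hardware loop.  tags maps a
    register number to (isvector, regidx); untagged registers are
    scalar.  predval is the predicate bitmask (all-ones when no
    predication is configured)."""
    rd_v,  rd_r  = tags.get(rd,  (False, rd))
    rs1_v, rs1_r = tags.get(rs1, (False, rs1))
    rs2_v, rs2_r = tags.get(rs2, (False, rs2))
    id_ = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):
            ireg[rd_r + id_] = ireg[rs1_r + irs1] + ireg[rs2_r + irs2]
            if not rd_v:
                break  # scalar destination: loop ends after one write
        if rd_v:  id_  += 1
        if rs1_v: irs1 += 1
        if rs2_v: irs2 += 1
```

With all three registers tagged as vectors this performs an element-wise
predicated vector add; with no tags at all it degenerates to a single
scalar add, exactly as the pseudo-code intends.
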
## SUBVL Pseudocode

Adding in support for SUBVL is a matter of adding an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Note that the predication is still taken from the VL index.

So whilst elements are indexed by (i * SUBVL + s), predicate bits are
indexed by i.

    function op_add(rd, rs1, rs2)  # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        xSTATE.srcoffs = i # save context
        for (s = 0; s < SUBVL; s++)
          xSTATE.ssvoffs = s # save context
          if (predval & 1<<i) # predication uses intregs
             ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
             if (!int_vec[rd ].isvector) break;
          if (int_vec[rd ].isvector)  { id += 1; }
          if (int_vec[rs1].isvector)  { irs1 += 1; }
          if (int_vec[rs2].isvector)  { irs2 += 1; }

NOTE: pseudocode simplified greatly: zeroing, proper predicate handling,
elwidth handling etc. all left out.

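The relationship between element index and predicate bit under SUBVL can
be shown with a short Python sketch: a model of the indexing only, with
the arithmetic and register plumbing deliberately stripped out.

```python
def subvl_elements(VL, SUBVL, predval):
    """Return (element_index, predicate_bit) pairs touched by an SV
    loop with sub-vectors: elements are indexed by i*SUBVL+s, but
    predication is tested per group, using bit i only."""
    touched = []
    for i in range(VL):
        for s in range(SUBVL):
            if predval & (1 << i):
                touched.append((i * SUBVL + s, i))
    return touched
```

With VL=2, SUBVL=3 and predicate 0b10, only the second group runs:
elements 3, 4 and 5, all gated by predicate bit 1.
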
## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV, FCVT, and
LOAD/STORE depending on CSR configurations for bitwidth and predication.
**Everything** becomes parallelised. *This includes Compressed
instructions* as well as any future instructions and Custom Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers are marked as vectors (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 are scalars (whether by there being no
CSR register entry or whether by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src registers
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering"), setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

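The intent of the result-mask construction above can be captured in a
short executable Python model. Zeroing semantics and the existence test
on rd are simplified to plain parameters, and elwidth is ignored; these
simplifications are assumptions of this sketch.

```python
def predicated_cmp(cmp, reg, src1, src2, s1, s2, VL, ps,
                   zeroing, init_result):
    """Build the branch result bitmask element by element: where the
    predicate bit is set, run the compare; where it is clear, either
    zero the result bit (zeroing) or leave it untouched."""
    result = 0 if zeroing else init_result
    for i in range(VL):
        bit = 1 << i
        if ps & bit:
            a = reg[src1 + i] if s1 else reg[src1]
            b = reg[src2 + i] if s2 else reg[src2]
            result = (result | bit) if cmp(a, b) else (result & ~bit)
        elif zeroing:
            result &= ~bit
    return result
```

The branch is then taken when the returned mask equals ps, i.e. every
compare that was not predicated out succeeded.
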
TODO: predication now taken from src2. Also, branch goes ahead
if all compares are successful.

Note also that where normally predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is
missing, and whilst in ordinary branch code this is fine because the
standard RVF compare can always be followed up with an integer BEQ or a
BNE (or a compressed comparison to zero or non-zero), in predication
terms that becomes more of an impact. To deal with this, SV's predication
has had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz rs1 is equivalent to beqz rs1,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
but is counterintuitive. Clearly it is **not** recommended to redirect
x0 with a CSR register entry, however as a means to opaquely obtain
a predication target it is the only sensible option that does not involve
additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register are marked as vectors,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table  data="""
15    12 | 11   7 | 6  2 | 1  0 |
funct4   | rd     | rs   | op   |
4        | 5      | 5    | 2    |
C.MV     | dest   | src  | C0   |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs)  # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        xSTATE.srcoffs = i # save context
        xSTATE.destoffs = j # save context
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table  data="""
src    | dest   | predication | op             |
scalar | vector | none        | VSPLAT         |
scalar | vector | destination | sparse VSPLAT  |
scalar | vector | 1-bit dest  | VINSERT        |
vector | scalar | 1-bit? src  | VEXTRACT       |
vector | vector | none        | VCOPY          |
vector | vector | src         | Vector Gather  |
vector | vector | dest        | Vector Scatter |
vector | vector | src & dest  | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY   |
"""]]

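A Python model of the twin-predicated loop makes it easy to see how the
table rows fall out of one opcode. The flat register list and explicit
isvec flags are modelling assumptions of this sketch (and, as with the
pseudo-code, predicates are assumed to have enough set bits for the loop
to terminate):

```python
def twin_pred_mv(reg, rd, rs, rd_vec, rs_vec, pd, ps, VL):
    """Twin-predicated MV: source and destination independently skip
    past their own masked-out elements before each copy."""
    i = j = 0
    while i < VL and j < VL:
        if rs_vec:
            while not (ps & (1 << i)):
                i += 1
        if rd_vec:
            while not (pd & (1 << j)):
                j += 1
        reg[rd + j] = reg[rs + i]
        if rs_vec:
            i += 1
        if rd_vec:
            j += 1
        else:
            break  # scalar destination ends the loop
```

A scalar source with a vector destination reproduces the VSPLAT row of
the table; predicating the vector source gives a gather-style copy.
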
Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not
implemented, MV may be used, but that is a pseudo-operation mapping to
addi rd, rs, 0. Note that the behaviour is **different** from C.MV
because with addi the predication mask to use is taken **only** from rd
and is applied against all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same double-predication rules also apply.
However when elwidth is not set to default the instruction is implicitly
and automatically converted to a (vectorised) floating-point type
conversion operation of the appropriate size covering the source and
destination register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 64-bit floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element and the higher 32-bits *also* converted to floating-point
and stored in the second. The 32 bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1 element width is to be taken as 32.

Similar rules apply to the destination register.

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
do not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs)  # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth is not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).

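The srcbase decision is the heart of the pseudo-code above, and can be
modelled on its own in a few lines of Python. Predication, the
scalar-scalar early exit and element widths are deliberately left out of
this sketch:

```python
def op_ld(mem, ireg, rd, rs, rs_vec, imm, VL, step=8):
    """Model of the two SV LD addressing modes: a scalar rs gives
    unit-stride loads from one base address; a vector rs treats each
    element as a separate indirection address."""
    for e in range(VL):
        if rs_vec:
            base = ireg[rs + e]          # indirect (multi) mode
        else:
            base = ireg[rs] + e * step   # unit stride mode
        ireg[rd + e] = mem[base + imm]
```

The same base register therefore serves both modes; only its
vector/scalar tag selects between them.
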
## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, effectively each register
now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However a naive interpretation of the data structure above masks the
fact that setting VL greater than 8, for example, when the bitwidth is 8,
accessing one specific register "spills over" to the following parts of
the register file in a sequential fashion. So a much more accurate way
to reflect this would be:

    typedef union {
        uint8_t   actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
        uint8_t   b[0]; // array of type uint8_t
        uint16_t  s[0];
        uint32_t  i[0];
        uint64_t  l[0];
        uint128_t d[0];
    } reg_t;

    reg_t int_regfile[128];

where when accessing any individual regfile[n].b entry it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" to consecutive register file entries in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that, towards the end of the register file,
an exception is thrown if an attempt to access beyond the "real" register
bytes is ever made.

Now we may modify the pseudo-code of an operation where all element
bitwidths have been set to the same size, where this pseudo-code is
otherwise identical to its "non"-polymorphic versions (above):

    function op_add(rd, rs1, rs2) # add not VADD!
        ...
        ...
        for (i = 0; i < VL; i++)
            ...
            ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
            ...
            ...

So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and the
registers following sequentially on from each, respectively) are "type-cast"
to 8-bit; for 16-bit entries likewise, and so on.

However that only covers the case where the element widths are the same.
Where the element widths are different, the following algorithm applies:

* Analyse the bitwidth of all source operands and work out the
maximum. Record this as "maxsrcbitwidth".
* If any given source operand requires sign-extension or zero-extension
(ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
sign-extension / zero-extension or whatever is specified in the standard
RV specification, **change** that to sign-extending from the respective
individual source operand's bitwidth from the CSR table out to
"maxsrcbitwidth" (previously calculated), instead.
* Following separate and distinct (optional) sign/zero-extension of all
source operands as specifically required for that operation, carry out the
operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
this may be a "null" (copy) operation, and that with FCVT, the changes
to the source and destination bitwidths may also turn FCVT effectively
into a copy).
* If the destination operand requires sign-extension or zero-extension,
instead of a mandatory fixed size (typically 32-bit for arithmetic,
for subw for example, and otherwise various: 8-bit for sb, 16-bit for sh
etc.), overload the RV specification with the bitwidth from the
destination register's elwidth entry.
* Finally, store the (optionally) sign/zero-extended value into its
destination: memory for sb/sh/sw etc., or an offset section of the register
file for an arithmetic operation.

In this way, polymorphic bitwidths are achieved without requiring a
massive 64-way permutation of calculations **per opcode**, for example
(4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
rd bitwidths). The pseudo-code is therefore as follows:

    typedef union {
        uint8_t  b;
        uint16_t s;
        uint32_t i;
        uint64_t l;
    } el_reg_t;

    bw(elwidth):
        if elwidth == 0:
            return xlen
        if elwidth == 1:
            return xlen / 2
        if elwidth == 2:
            return xlen * 2
        // elwidth == 3:
        return 8

    get_max_elwidth(rs1, rs2):
        return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
                   bw(int_csr[rs2].elwidth)) # again XLEN if no entry

    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!int_csr[reg].isvec):
            # sign/zero-extend depending on opcode requirements, from
            # the reg's bitwidth out to the full bitwidth of the regfile
            val = sign_or_zero_extend(val, bitwidth, xlen)
            int_regfile[reg].l[0] = val
        elif bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == 64:
            int_regfile[reg].l[offset] = val

    maxsrcwid = get_max_elwidth(rs1, rs2)  # source element width(s)
    destwid = bw(int_csr[rd].elwidth)      # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
            // TODO, calculate if over-run occurs, for each elwidth
            src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
            // TODO, sign/zero-extend src1 and src2 as operation requires
            if (op_requires_sign_extend_src1)
                src1 = sign_extend(src1, maxsrcwid)
            src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
            result = src1 + src2 # actual add here
            // TODO, sign/zero-extend result, as operation requires
            if (op_requires_sign_extend_dest)
                result = sign_extend(result, maxsrcwid)
            set_polymorphed_reg(rd, destwid, id, result)
            if (!int_vec[rd].isvector) break
        if (int_vec[rd].isvector)  { id += 1; }
        if (int_vec[rs1].isvector) { irs1 += 1; }
        if (int_vec[rs2].isvector) { irs2 += 1; }

Whilst specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:

* the source operands are extended out to the maximum bitwidth of all
source operands
* the operation takes place at that maximum source bitwidth (the
destination bitwidth is not involved at this point, at all)
* the result is extended (or potentially even truncated) before being
stored in the destination. i.e. truncation (if required) to the
destination width occurs **after** the operation, **not** before.
* when the destination is not marked as "vectorised", the **full**
(standard, scalar) register file entry is taken up, i.e. the
element is either sign-extended or zero-extended to cover the
full register bitwidth (XLEN) if it is not already XLEN bits long.

Implementors are entirely free to optimise the above, particularly
if it is specifically known that any given operation will complete
accurately in fewer bits, as long as the results produced are
directly equivalent and equal, for all inputs and all outputs,
to those produced by the above algorithm.

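The algorithm can be condensed into a small executable sketch. This is a non-normative Python model (the names `extend` and `poly_add` are illustrative) of the three steps: extend the sources to the maximum source width, operate at that width, then truncate or extend to the destination width:

```python
def extend(v, w, signed):
    # interpret a w-bit value as signed or unsigned
    v &= (1 << w) - 1
    return v - (1 << w) if signed and (v >> (w - 1)) else v

def poly_add(v1, w1, v2, w2, wd, signed=False):
    maxw = max(w1, w2)                   # operation width: max of sources
    res = extend(v1, w1, signed) + extend(v2, w2, signed)
    res = extend(res, maxw, signed)      # the result exists at maxw bits...
    return res & ((1 << wd) - 1)         # ...then truncate/extend to rd

# 8-bit 0xFF plus 16-bit 0x01, zero-extended: operation at 16 bits
print(hex(poly_add(0xFF, 8, 0x01, 16, 16)))   # 0x100
```

With `signed=True` the 8-bit source 0xFF is instead sign-extended to -1, illustrating how the choice of extension (per opcode) changes the outcome while the width rules stay identical.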
## Polymorphic floating-point operation exceptions and error-handling

For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NAN (or appropriate) is stored if the result
is beyond the range of the destination, and, again exactly as with the
standard RV specification, just as with scalar operations, the
floating-point flag is raised (FCSR). And, again, just as
with scalar operations, it is software's responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.

Note that it is perfectly legitimate for floating-point bitwidths of
only 8 to be specified. However whilst it is possible to apply IEEE 754
principles, no actual standard yet exists. Implementors wishing to
provide hardware-level 8-bit support rather than throw a trap to emulate
in software should contact the author of this specification before
proceeding.

## Polymorphic shift operators

A special note is needed for changing the element width of left and right
shift operators, particularly right-shift. Even for standard RV base,
in order for correct results to be returned, the second operand RS2 must
be truncated to be within the range of RS1's bitwidth. spike's implementation
of sll for example is as follows:

    WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));

which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
range 0..31 so that RS1 will only be left-shifted by the amount that
is possible to fit into a 32-bit register. Whilst this appears not
to matter for hardware, it matters greatly in software implementations,
and it also matters where an RV64 system is set to "RV32" mode, such
that the underlying registers RS1 and RS2 comprise 64 hardware bits
each.

For SV, where each operand's element bitwidth may be over-ridden, the
rule about determining the operation's bitwidth *still applies*, being
defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
**also applies to the truncation of RS2**. In other words, *after*
determining the maximum bitwidth, RS2's range must **also be truncated**
to ensure a correct answer. Example:

* RS1 is over-ridden to a 16-bit width
* RS2 is over-ridden to an 8-bit width
* RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit - max(8,16)
* RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)

Pseudocode (in spike) for this example would therefore be:

    WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));

This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key is that

* the operation bitwidth is determined by the maximum bitwidth
of the *source registers*, **not** the destination register bitwidth
* the result is then sign-extended (or truncated) as appropriate.

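The worked example above (16-bit RS1, 8-bit RS2, 64-bit RD) can be condensed into a short non-normative Python sketch (`poly_sll` is an illustrative name), showing RS2 being truncated to the maximum *source* bitwidth before the shift takes place:

```python
def poly_sll(rs1, w1, rs2, w2, wd):
    maxw = max(w1, w2)            # operation width from the sources only
    shamt = rs2 & (maxw - 1)      # RS2 truncated to the operation width
    res = (rs1 << shamt) & ((1 << maxw) - 1)  # shift performed at maxw
    return res & ((1 << wd) - 1)  # then truncated (or extended) to rd

# 16-bit RS1, 8-bit RS2, 64-bit RD: the shift amount is RS2 & (16-1)
print(hex(poly_sll(0x0001, 16, 17, 8, 64)))   # 17 & 15 == 1, so 0x2
```

Note that a shift of 0x8000 by 1 yields 0 here, because the bit is shifted out at the 16-bit operation width before any widening to the 64-bit destination occurs.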
## Polymorphic MULH/MULHU/MULHSU

MULH is designed to take the top half (MSBs) of a multiply that
does not fit within the range of the source operands, such that
smaller-width operations may produce a full double-width multiply
in two cycles. The issue is: SV allows the source operands to
have variable bitwidth.

Here again special attention has to be paid to the rules regarding
bitwidth, which, again, are that the operation is performed at
the maximum bitwidth of the **source** registers. Therefore:

* An 8-bit x 8-bit multiply will create a 16-bit result that must
be shifted down by 8 bits
* A 16-bit x 8-bit multiply will create a 24-bit result that must
be shifted down by 16 bits (top 8 bits being zero)
* A 16-bit x 16-bit multiply will create a 32-bit result that must
be shifted down by 16 bits
* A 32-bit x 16-bit multiply will create a 48-bit result that must
be shifted down by 32 bits
* A 32-bit x 8-bit multiply will create a 40-bit result that must
be shifted down by 32 bits

So again, just as with shift-left and shift-right, the result
is shifted down by the maximum of the two source register bitwidths.
And, exactly again, truncation or sign-extension is performed on the
result. If sign-extension is to be carried out, it is performed
from the same maximum of the two source register bitwidths out
to the result element's bitwidth.

If truncation occurs, i.e. the top MSBs of the result are lost,
this is "Officially Not Our Problem", i.e. it is assumed that the
programmer actually desires the result to be truncated. i.e. if the
programmer wanted all of the bits, they would have set the destination
elwidth to accommodate them.

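As a non-normative check of the table above, here is an unsigned sketch in Python (`poly_mulhu` is an illustrative name): the full product is formed, then shifted down by the maximum of the two source bitwidths:

```python
def poly_mulhu(v1, w1, v2, w2, wd):
    maxw = max(w1, w2)
    prod = (v1 & ((1 << w1) - 1)) * (v2 & ((1 << w2) - 1))
    res = prod >> maxw            # keep the MSBs above the operation width
    return res & ((1 << wd) - 1)  # truncate (or zero-extend) to rd's width

# 8-bit x 8-bit: 16-bit result, shifted down by 8
print(hex(poly_mulhu(0xFF, 8, 0xFF, 8, 8)))      # 0xfe
# 16-bit x 8-bit: 24-bit result, shifted down by 16
print(hex(poly_mulhu(0xFFFF, 16, 0xFF, 8, 16)))  # 0xfe
```

The signed variants (MULH/MULHSU) follow the same width rules, with sign-extension of the relevant source(s) before the multiply.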
## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>

Polymorphic element widths in vectorised form means that the data
being loaded (or stored) across multiple registers needs to be treated
(reinterpreted) as a contiguous stream of elwidth-wide items, where
the source register's element width is **independent** from the destination's.

This makes for a slightly more complex algorithm when using indirection
on the "addressed" register (source for LOAD and destination for STORE),
particularly given that the LOAD/STORE instruction provides important
information about the width of the data to be reinterpreted.

Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, and i is the loop from 0 to VL-1:

    srcbase = ireg[rs+i];
    return mem[srcbase + imm]; // returns XLEN bits

Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
chunks are taken from the source memory location addressed by the current
indexed source address register, and only when a full 32-bits-worth
are taken will the index be moved on to the next contiguous source
address register:

    bitwidth = bw(elwidth); // source elwidth from CSR reg entry
    elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
    srcbase = ireg[rs+i/(elsperblock)]; // integer divide
    offs = i % elsperblock; // modulo
    return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.

Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
and 128 for LQ.

The principle is basically exactly the same as if the srcbase were pointing
at the memory of the *register* file: memory is re-interpreted as containing
groups of elwidth-wide discrete elements.

When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. Also, the storing has the exact same analogous algorithm as
above, where in fact it is just the set\_polymorphed\_reg pseudocode
(completely unchanged) used above.

One issue remains: when the source element width is **greater** than
the width of the operation, it is obvious that a single LB for example
cannot possibly obtain 16-bit-wide data. This condition may be detected
where, when using integer divide, elsperblock (the width of the LOAD
divided by the bitwidth of the element) is zero.

The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)

The elements, if the element bitwidth is larger than the LD operation's
size, will then be sign/zero-extended to the full LD operation size, as
specified by the LOAD (LDU instead of LD, LBU instead of LB), before
being passed on to the second phase.

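The block-addressing rule, including the minimum-of-1 clamp, can be sketched as a non-normative Python helper (`load_elem_addr` is an illustrative name; a byte-addressed memory, with the element offset `offs` scaled to bytes, is assumed here):

```python
def load_elem_addr(ireg, rs, imm, i, opwidth, elwidth):
    # how many elwidth-wide elements one opwidth-wide LOAD covers,
    # clamped to a minimum of 1 for the case elwidth > opwidth
    elsperblock = max(1, opwidth // elwidth)
    srcbase = ireg[rs + i // elsperblock]  # next address reg per block
    offs = i % elsperblock                 # element index within the block
    return srcbase + imm + offs * (elwidth // 8)

# LW (32-bit) with 16-bit source elements: two elements per address register
ireg = {5: 0x1000, 6: 0x2000}
print([hex(load_elem_addr(ireg, 5, 0, i, 32, 16)) for i in range(4)])
# ['0x1000', '0x1002', '0x2000', '0x2002']
```

With LB (opwidth=8) and 16-bit elements the clamp takes effect: every element comes from the address in the *next* register, since one LB cannot span two elements.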
As LOAD/STORE may be twin-predicated, it is important to note that
the rules on twin predication still apply, except that where in previous
pseudo-code (elwidth=default for both source and target) it was
the *registers* that the predication was applied to, it is now the
**elements** that the predication is applied to.

Thus the full pseudocode for all LD operations may be written out
as follows:

    function LBU(rd, rs):
        load_elwidthed(rd, rs, 8, true)
    function LB(rd, rs):
        load_elwidthed(rd, rs, 8, false)
    function LH(rd, rs):
        load_elwidthed(rd, rs, 16, false)
    ...
    ...
    function LQ(rd, rs):
        load_elwidthed(rd, rs, 128, false)

    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes

    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth)  # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
            val = load_memory(rs, imm, i, opwidth)
            if unsigned:
                val = zero_extend(val, min(opwidth, bitwidth))
            else:
                val = sign_extend(val, min(opwidth, bitwidth))
            set_polymorphed_reg(rd, destwid, j, val)
            if (int_csr[rs].isvec) i++;
            if (int_csr[rd].isvec) j++; else break;

Note:

* when comparing against for example the twin-predicated c.mv
pseudo-code, the pattern of independent incrementing of rd and rs
is preserved unchanged.
* just as with the c.mv pseudocode, zeroing is not included and must be
taken into account (TODO).
* due to the use of a twin-predication algorithm, LOAD/STORE also
takes on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
VSCATTER characteristics.
* due to the use of the same set\_polymorphed\_reg pseudocode,
a destination that is not vectorised (marked as scalar) will
result in the element being fully sign-extended or zero-extended
out to the full register file bitwidth (XLEN). When the source
is also marked as scalar, this is how compatibility with
standard RV LOAD/STORE is preserved by this algorithm.

### Example Tables showing LOAD elements

This section contains examples of vectorised LOAD operations, showing
how the two-stage process works (three if zero/sign-extension is included).

#### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7

This is:

* a 64-bit load, with an offset of zero
* with a source-address elwidth of 16-bit
* into a destination-register with an elwidth of 32-bit
* where VL=7
* from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
* RV64, where XLEN=64 is assumed.

First, the memory table. Because the element width is 16 and the
operation is LD (64), the 64 bits loaded from memory are subdivided
into groups of **four** elements.
And, with VL being 7 (deliberately, to illustrate that this is reasonable
and possible), the first four are sourced from the offset addresses pointed
to by x5, and the next three from the offset addresses pointed to by
the next contiguous register, x6:

[[!table data="""
addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
@x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
@x6 | elem 4 || elem 5 || elem 6 || not loaded ||
"""]]

Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.

[[!table data="""
byte 3 | byte 2 | byte 1 | byte 0 |
0x0 | 0x0 | elem0 ||
0x0 | 0x0 | elem1 ||
0x0 | 0x0 | elem2 ||
0x0 | 0x0 | elem3 ||
0x0 | 0x0 | elem4 ||
0x0 | 0x0 | elem5 ||
0x0 | 0x0 | elem6 ||
"""]]

Lastly, the elements are stored in contiguous blocks, as if x8 were also
byte-addressable "memory". That "memory" happens to cover registers
x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:

[[!table data="""
reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
"""]]

Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the
**registers** x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1 being
shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.

Note that whilst the memory addressing table is shown in left-to-right
byte order, the registers are shown in right-to-left (MSB) order. This
does **not** imply that bit or byte-reversal is carried out: it's just
easier to visualise memory as being contiguous bytes, and it emphasises
that registers are not really actually "memory" as such.

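The three tables above can be reproduced with a short non-normative Python model (all names are illustrative; zero-extension is assumed, and predication, sign-extension and fault checks are omitted), loading seven 16-bit elements via the addresses in x5 and x6 and packing them as 32-bit elements into x8 onwards:

```python
def ld_elwidthed(mem, ireg, regbytes, rs, rd, vl, srcw, dstw, opwidth=64):
    elsperblock = max(1, opwidth // srcw)  # here: 4 x 16-bit elems per LD
    for i in range(vl):
        base = ireg[rs + i // elsperblock]
        offs = (i % elsperblock) * (srcw // 8)
        val = int.from_bytes(mem[base+offs : base+offs+srcw//8], "little")
        # store at the *destination* element width, packed contiguously
        dst = rd * 8 + i * (dstw // 8)     # byte offset into RV64 regfile
        regbytes[dst : dst+dstw//8] = val.to_bytes(dstw // 8, "little")

mem = bytearray(16)
elems = [0x1111, 0x2222, 0x3333, 0x4444, 0x5555, 0x6666, 0x7777]
for n, e in enumerate(elems):
    mem[n*2 : n*2+2] = e.to_bytes(2, "little")

regfile = bytearray(b"\xff" * 32 * 8)      # 0xff marks "unmodified"
ld_elwidthed(mem, {5: 0, 6: 8}, regfile, rs=5, rd=8, vl=7, srcw=16, dstw=32)
```

Afterwards x8 holds elem 0 in its lower 32 bits and elem 1 in its upper 32 bits, and the top half of x11 still reads all-ones: **UNMODIFIED**, exactly as the register table shows.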
## Why SV bitwidth specification is restricted to 4 entries

The four entries for SV element bitwidths only allow three over-rides:

* 8 bit
* 16 bit
* 32 bit

This would seem inadequate: surely it would be better to have 3 bits or
more and allow 64, 128 and some other options besides. The answer is
that it gets too complex, no RV128 implementation yet exists, and RV64's
default is 64 bit, so the 4 major element widths are covered anyway.

There is an absolutely crucial aspect of SV here that explicitly
needs spelling out, and it's whether the "vectorised" bit is set in
the Register's CSR entry.

If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, when set on a destination (RD),
then sign-extension and zero-extension, whilst changed to match the
override bitwidth (if set), will erase the **full** register entry
(64-bit if RV64).

When vectorised is *set*, this indicates that the operation now treats
**elements** as if they were independent registers, so regardless of
the length, any parts of a given actual register that are not involved
in the operation are **NOT** modified, but are **PRESERVED**.

For example:

* when the vector bit is clear and elwidth set to 16 on the destination
register, operations are truncated to 16 bit and then sign or zero
extended to the *FULL* XLEN register width.
* when the vector bit is set, elwidth is 16 and VL=1 (or any other value
where groups of elwidth-sized elements do not fill an entire XLEN register),
the "top" bits of the destination register do *NOT* get modified, zero'd
or otherwise overwritten.

SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.

Other microarchitectures may choose to provide byte-level write-enable
lines on the register file, such that each 64 bit register in an RV64
system requires 8 WE lines. Scalar RV64 operations would require
activation of all 8 lines, where SV elwidth based operations would
activate the required subset of those byte-level write lines.

Example:

* rs1, rs2 and rd are all set to 8-bit
* VL is set to 3
* RV64 architecture is set (UXL=64)
* add operation is carried out
* bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
concatenated with similar add operations on bits 15..8 and 7..0
* bits 24 through 63 **remain as they originally were**.

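A non-normative Python sketch of that example (`vec_add8` is an illustrative name; the regfile is modelled as a flat byte array, 8 bytes per register): three 8-bit adds are performed and every byte of rd beyond VL is left untouched:

```python
def vec_add8(regbytes, rd, rs1, rs2, vl):
    # 8-bit elementwise add: only the first `vl` bytes of rd are written
    for i in range(vl):
        a = regbytes[rs1 * 8 + i]
        b = regbytes[rs2 * 8 + i]
        regbytes[rd * 8 + i] = (a + b) & 0xFF

regfile = bytearray(b"\x55" * 32 * 8)   # pre-fill to detect modification
vec_add8(regfile, rd=3, rs1=1, rs2=2, vl=3)
# bytes 0-2 of x3 now hold 0xAA; bytes 3-7 still hold their old 0x55
```

This is the software-visible behaviour that either hidden SIMD predication or byte-level write-enable lines must reproduce.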
Example SIMD micro-architectural implementation:

* SIMD architecture works out the nearest round number of elements
that would fit into a full RV64 register (in this case: 8)
* SIMD architecture creates a hidden predicate, binary 0b00000111,
i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
* SIMD architecture goes ahead with the add operation as if it
were a full 8-wide batch of 8 adds
* SIMD architecture passes the top 5 elements through the adders
(which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
and stores them in rd.

This requires a read on rd; however this is required anyway in order
to support non-zeroing mode.

## Polymorphic floating-point

Standard scalar RV integer operations base the register width on XLEN,
which may be changed (UXL in USTATUS, and the corresponding MXL and
SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
arithmetic operations are therefore restricted to an active XLEN bits,
with sign or zero extension to pad out the upper bits when XLEN has
been dynamically set to less than the actual register size.

For scalar floating-point, the active (used / changed) bits are
specified exclusively by the operation: ADD.S specifies an active
32-bits, with the upper bits of the source registers needing to
be all 1s ("NaN-boxed"), and the destination upper bits being
*set* to all 1s (including on LOAD/STOREs).

Where elwidth is set to default (on any source or the destination)
it is obvious that this NaN-boxing behaviour can and should be
preserved. When elwidth is non-default things are less obvious,
so they need to be thought through. Here is a normal (scalar) sequence,
assuming an RV64 which supports Quad (128-bit) FLEN:

* FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s.
* ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory:
top 64 MSBs ignored.

Therefore it makes sense to mirror this behaviour when, for example,
elwidth is set to 32. Assume elwidth set to 32 on all source and
destination registers:

* FLD loads 64-bit wide from memory as **two** 32-bit single-precision
floating-point numbers.
* ADD.D performs **two** 32-bit-wide adds, storing one of the adds
in bits 0-31 and the second in bits 32-63.
* FSD stores lowest 64-bits from the 128-bit-wide register to memory.

Here's the thing: it does not make sense to overwrite the top 64 MSBs
of the registers either during the FLD **or** the ADD.D. The reason
is that, effectively, the top 64 MSBs actually represent a completely
independent 64-bit register, so overwriting it is not only gratuitous
but may actually be harmful for a future extension to SV which may
have a way to directly access those top 64 bits.

The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
when "isvec" is false in a given register's CSR entry. Only when the
elwidth is set to default **and** isvec is false will the standard
RV behaviour be followed, namely that the upper bits be modified.

Ultimately if elwidth is default and isvec false on *all* source
and destination registers, a SimpleV instruction defaults completely
to standard RV scalar behaviour (this holds true for **all** operations,
right across the board).

The nice thing here is that ADD.S, ADD.D and ADD.Q with elwidth at
non-default values are effectively all the same: they all still perform
multiple ADD operations, just at different widths. A future extension
to SimpleV may actually allow ADD.S to access the upper bits of the
register, effectively breaking down a 128-bit register into a bank
of 4 independently-accessible 32-bit registers.

In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however simpler SIMD-style
microarchitectures may not have the infrastructure in
place to know the difference, such that when VL=8 and an ADD.D instruction
is issued, it completes in 2 cycles (or more) rather than one, where
if an ADD.Q had been issued instead on such simpler microarchitectures
it would complete in one.

1993 ## Specific instruction walk-throughs
1994
1995 This section covers walk-throughs of the above-outlined procedure
1996 for converting standard RISC-V scalar arithmetic operations to
1997 polymorphic widths, to ensure that it is correct.
1998
1999 ### add
2000
2001 Standard Scalar RV32/RV64 (xlen):
2002
2003 * RS1 @ xlen bits
2004 * RS2 @ xlen bits
2005 * add @ xlen bits
2006 * RD @ xlen bits
2007
2008 Polymorphic variant:
2009
2010 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
2011 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
2012 * add @ max(rs1, rs2) bits
2013 * RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
2014
2015 Note here that polymorphic add zero-extends its source operands,
2016 where addw sign-extends.
2017
2018 ### addw
2019
2020 The RV Specification specifically states that "W" variants of arithmetic
2021 operations always produce 32-bit signed values. In a polymorphic
2022 environment it is reasonable to assume that the signed aspect is
2023 preserved, where it is the length of the operands and the result
2024 that may be changed.
2025
2026 Standard Scalar RV64 (xlen):
2027
2028 * RS1 @ xlen bits
2029 * RS2 @ xlen bits
2030 * add @ xlen bits
2031 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
2032
2033 Polymorphic variant:
2034
2035 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
2036 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
2037 * add @ max(rs1, rs2) bits
2038 * RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
2039
2040 Note here that polymorphic addw sign-extends its source operands,
2041 where add zero-extends.
2042
2043 This requires a little more in-depth analysis. Where the bitwidth of
2044 rs1 equals the bitwidth of rs2, no sign-extending will occur. It is
2045 only where the bitwidth of either rs1 or rs2 are different, will the
2046 lesser-width operand be sign-extended.
2047
2048 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
2049 where for add they are both zero-extended. This holds true for all arithmetic
2050 operations ending with "W".
2051
2052 ### addiw
2053
2054 Standard Scalar RV64I:
2055
2056 * RS1 @ xlen bits, truncated to 32-bit
2057 * immed @ 12 bits, sign-extended to 32-bit
2058 * add @ 32 bits
2059 * RD @ rd bits. sign-extend to rd if rd > 32, otherwise truncate.
2060
2061 Polymorphic variant:
2062
2063 * RS1 @ rs1 bits
2064 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
2065 * add @ max(rs1, 12) bits
* RD @ rd bits; sign-extend to rd if rd > max(rs1, 12), otherwise truncate
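A corresponding sketch for the polymorphic addiw rules above (again, `sext` and `poly_addiw` are invented names for illustration, not part of the specification):

```python
def sext(value, bits):
    """Interpret the low `bits` bits of value as two's-complement."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if (value >> (bits - 1)) & 1 else value

def poly_addiw(rs1, rs1_w, imm12, rd_w):
    """Polymorphic addiw: the 12-bit immediate is sign-extended to
    max(rs1_w, 12) bits, the add is performed at that width, then the
    result is sign-extended (or truncated) to rd_w bits."""
    opw = max(rs1_w, 12)
    total = (rs1 + sext(imm12, 12)) & ((1 << opw) - 1)
    return sext(total, opw) & ((1 << rd_w) - 1)
```

Note how an 8-bit rs1 forces the operation up to 12 bits, and a 16-bit destination then picks up the sign-extension of the 12-bit result.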
2067
2068 # Predication Element Zeroing
2069
The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, which can save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
More complex non-lane-based micro-architectures can, when zeroing is
not set, use the predication bits to simply avoid sending element-based
operations to the ALUs entirely: thus, over the long term, potentially
keeping all ALUs 100% occupied even when elements are predicated out.
2080
2081 SimpleV's design principle is not based on or influenced by
2082 microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(i.e. whether fewer instructions are needed for certain scenarios),
and given that a case can be made for zeroing *and* non-zeroing, the
decision was taken to add support for both.
2087
2088 ## Single-predication (based on destination register)
2089
Zeroing on predication for arithmetic operations is taken from
the destination register's predicate: i.e. the predication *and*
zeroing settings to be applied to the whole operation come from the
CSR Predication table entry for the destination register.
2094 Thus when zeroing is set on predication of a destination element,
2095 if the predication bit is clear, then the destination element is *set*
2096 to zero (twin-predication is slightly different, and will be covered
2097 next).
2098
Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:
2101
    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird  += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird  += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
2124
2125 The optimisation to skip elements entirely is only possible for certain
2126 micro-architectures when zeroing is not set. However for lane-based
2127 micro-architectures this optimisation may not be practical, as it
2128 implies that elements end up in different "lanes". Under these
2129 circumstances it is perfectly fine to simply have the lanes
2130 "inactive" for predicated elements, even though it results in
2131 less than 100% ALU utilisation.
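As a minimal illustration of the zeroing versus non-zeroing behaviour, the following sketch models a fully-vectorised predicated add. The helper name `pred_vec_add` is invented here; element-width polymorphism and the scalar-destination early-exit from the pseudo-code above are deliberately left out.

```python
def pred_vec_add(regs, rd, rs1, rs2, vl, predval, zeroing):
    """Predicated element-wise add over a flat register-file model.
    With zeroing set, masked-out destination elements are written as 0;
    with zeroing clear, they are skipped (left untouched)."""
    for i in range(vl):
        if (predval >> i) & 1:
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]
        elif zeroing:
            regs[rd + i] = 0
    return regs
```

Running the same operation twice, once with zeroing and once without, shows that only the masked-out destination elements differ.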
2132
2133 ## Twin-predication (based on source and destination register)
2134
Twin-predication is not that much different, except that the
source is zero-predicated independently of the destination.
This means that the source may be zero-predicated *or* the
destination zero-predicated, *or both*, or neither.
2139
When, with twin-predication, zeroing is set on the source and not
the destination, a clear predicate bit indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).
2146
2147 When zeroing is set on the destination and not the source, then just
2148 as with single-predicated operations, a zero is stored into the destination
2149 element (or target memory address for a STORE).
2150
Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: real data reaches the
destination only where both predicate bits are set, and where either
the source predicate OR the destination predicate is 0, a zero element
will ultimately end up in the destination register.
2155
2156 However: this may not necessarily be the case for all operations;
2157 implementors, particularly of custom instructions, clearly need to
2158 think through the implications in each and every case.
2159
2160 Here is pseudo-code for a twin zero-predicated operation:
2161
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;
2185
2186 Note that in the instance where the destination is a scalar, the hardware
2187 loop is ended the moment a value *or a zero* is placed into the destination
2188 register/element. Also note that, for clarity, variable element widths
2189 have been left out of the above.
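For clarity, the behaviour of a twin zero-predicated MV can be modelled in executable form for the all-vector case. The name `twin_pred_mv` is invented for illustration; scalar operands, element widths and the LOAD/STORE special cases are omitted, and bounds checks are added to the skip loops.

```python
def twin_pred_mv(src, vl, ps, zerosrc, pd, zerodst):
    """Twin-predicated MV, both source and destination vectors.
    None in the result marks a destination element left untouched."""
    dest = [None] * vl
    i = j = 0
    while i < vl and j < vl:
        if not zerosrc:                       # skip masked-out sources
            while i < vl and not (ps >> i) & 1:
                i += 1
        if not zerodst:                       # skip masked-out dests
            while j < vl and not (pd >> j) & 1:
                j += 1
        if i >= vl or j >= vl:
            break
        if (pd >> j) & 1:
            # with source zeroing, a clear source bit passes a zero through
            dest[j] = src[i] if (ps >> i) & 1 else 0
        elif zerodst:
            dest[j] = 0                       # dest zeroing: store zero
        i += 1
        j += 1
    return dest
```

With source zeroing off, masked-out source elements are skipped (compacting the copy); with it on, they are replaced by zeroes in place.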
2190
2191 # Exceptions
2192
TODO: expand. Exceptions may occur at any time, in any given underlying
scalar operation. This implies that context-switching (traps) may
occur, and that execution must afterwards resume where it left off. That
in turn implies that the full state - including the current parallel
element being processed - has to be saved and restored. This is
what the **STATE** CSR is for.
2199
2200 The implications are that all underlying individual scalar operations
2201 "issued" by the parallelisation have to appear to be executed sequentially.
2202 The further implications are that if two or more individual element
2203 operations are underway, and one with an earlier index causes an exception,
2204 it may be necessary for the microarchitecture to **discard** or terminate
2205 operations with higher indices.
2206
2207 This being somewhat dissatisfactory, an "opaque predication" variant
2208 of the STATE CSR is being considered.
2209
2210 # Hints
2211
2212 A "HINT" is an operation that has no effect on architectural state,
2213 where its use may, by agreed convention, give advance notification
2214 to the microarchitecture: branch prediction notification would be
2215 a good example. Usually HINTs are where rd=x0.
2216
2217 With Simple-V being capable of issuing *parallel* instructions where
2218 rd=x0, the space for possible HINTs is expanded considerably. VL
2219 could be used to indicate different hints. In addition, if predication
2220 is set, the predication register itself could hypothetically be passed
2221 in as a *parameter* to the HINT operation.
2222
No specific hints are yet defined in Simple-V.
2224
2225 # VLIW Format <a name="vliw-format"></a>
2226
One issue with SV is the setup and teardown time of the CSRs. The cost
of using a full CSRRW (which requires an LI) is quite high. A VLIW format
therefore makes sense.
2230
2231 A suitable prefix, which fits the Expanded Instruction-Length encoding
2232 for "(80 + 16 times instruction_length)", as defined in Section 1.5
2233 of the RISC-V ISA, is as follows:
2234
2235 | 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
2236 | - | ----- | ----- | ----- | --- | ------- |
2237 | vlset | 16xil | pplen | rplen | mode | 1111111 |
2238
2239 An optional VL Block, optional predicate entries, optional register
2240 entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes
2241 follow.
2242
2243 The variable-length format from Section 1.5 of the RISC-V ISA:
2244
| base+4 ... base+2          | base             | number of bits             |
| -------------------------- | ---------------- | -------------------------- |
| ..xxxx xxxxxxxxxxxxxxxx    | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
| {ops}{Pred}{Reg}{VL Block} | SV Prefix        |                            |
2249
2250 VL/MAXVL/SubVL Block:
2251
| 31:30 | 29:28 | 27:22  | 21:17 - 16 |
| ----- | ----- | ------ | ---------- |
| 0     | SubVL | VLdest | VLEN, vlt  |
| 1     | SubVL | VLdest | VLEN       |
2256
2257 Note: this format is very similar to that used in [[sv_prefix_proposal]]
2258
If vlt is 0, VLEN is a 5 bit immediate value, offset by one (i.e.
a bit sequence of 0b00000 represents VL=1 and so on). If vlt is 1,
2261 it specifies the scalar register from which VL is set by this VLIW
2262 instruction group. VL, whether set from the register or the immediate,
2263 is then modified (truncated) to be MIN(VL, MAXVL), and the result stored
2264 in the scalar register specified in VLdest. If VLdest is zero, no store
2265 in the regfile occurs (however VL is still set).
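The VL-setting rule above can be sketched as follows; `vl_block_set_vl` is an invented helper name, and the VLdest write-back is omitted for brevity.

```python
def vl_block_set_vl(vlt, vlen_field, maxvl, regfile):
    """VL Block sketch (bit 15 clear): VLEN is either a 5-bit immediate
    offset by one (vlt=0), or read from the scalar register it names
    (vlt=1); the result is then clamped to MIN(VL, MAXVL)."""
    if vlt == 0:
        vl = (vlen_field & 0b11111) + 1   # 0b00000 encodes VL=1
    else:
        vl = regfile[vlen_field]          # VL taken from a scalar reg
    return min(vl, maxvl)
```

Note how the offset-by-one encoding makes VL=0 unrepresentable, and the clamp guarantees VL never exceeds MAXVL.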
2266
2267 This option will typically be used to start vectorised loops, where
2268 the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL"
2269 sequence (in compact form).
2270
2271 When bit 15 is set to 1, MAXVL and VL are both set to the immediate,
2272 VLEN (again, offset by one), which is 6 bits in length, and the same
2273 value stored in scalar register VLdest (if that register is nonzero).
A value of 0b000000 will set MAXVL=VL=1, a value of 0b000001 will
set MAXVL=VL=2, and so on.
2276
2277 This option will typically not be used so much for loops as it will be
2278 for one-off instructions such as saving the entire register file to the
2279 stack with a single one-off Vectorised and predicated LD/ST, or as a way
2280 to save or restore registers in a function call with a single instruction.
2281
2282 CSRs needed:
2283
2284 * mepcvliw
2285 * sepcvliw
2286 * uepcvliw
2287 * hepcvliw
2288
2289 Notes:
2290
* Bit 7 specifies if the predicate block format is the full 16 bit format
(1) or the compact less expressive format (0). In the 8 bit format,
pplen is multiplied by 2.
2294 * 8 bit format predicate numbering is implicit and begins from x9. Thus
2295 it is critical to put blocks in the correct order as required.
2296 * Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit
2297 (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number
2298 of entries are needed the last may be set to 0x00, indicating "unused".
2299 * Bit 15 specifies if the VL Block is present. If set to 1, the VL Block
2300 immediately follows the VLIW instruction Prefix
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1,
otherwise 0 to 6) follow the (optional) VL Block.
2303 * Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1,
2304 otherwise 0 to 6) follow the (optional) RegCam entries
2305 * Bits 14 to 12 (IL) define the actual length of the instruction: total
2306 number of bits is 80 + 16 times IL. Standard RV32, RVC and also
2307 SVPrefix (P48/64-\*-Type) instructions fit into this space, after the
2308 (optional) VL / RegCam / PredCam entries
2309 * Anything - any registers - within the VLIW-prefixed format *MUST* have the
2310 RegCam and PredCam entries applied to it.
2311 * At the end of the VLIW Group, the RegCam and PredCam entries
2312 *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at
2313 the values set by the last instruction (whether a CSRRW or the VL
2314 Block header).
2315 * Although an inefficient use of resources, it is fine to set the MAXVL,
2316 VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
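The bit assignments in the notes above can be captured in a small field-extraction sketch; `decode_vliw_prefix` is an invented name, not part of the specification.

```python
def decode_vliw_prefix(hw):
    """Pull the fields out of the 16-bit VLIW prefix halfword:
    | vlset | 16xil | pplen | rplen | mode | 1111111 |"""
    assert hw & 0x7F == 0x7F, "not a VLIW prefix (bits 6:0 must be 1111111)"
    il = (hw >> 12) & 0x7
    return {
        "mode":  (hw >> 7) & 0x1,    # bit 7: 16-bit vs 8-bit block formats
        "rplen": (hw >> 8) & 0x3,    # bits 9:8: number of RegCam entries
        "pplen": (hw >> 10) & 0x3,   # bits 11:10: number of PredCam entries
        "il": il,                    # bits 14:12: length field
        "vlset": (hw >> 15) & 0x1,   # bit 15: VL Block present
        "total_bits": 80 + 16 * il,  # expanded instruction length
    }
```

For example, il=3 gives a total group length of 80 + 16*3 = 128 bits.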
2317
All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires 3, even 4, 32-bit
opcodes: the CSRRW itself, plus the setting up of the value in the RS
register of the CSR, which in turn requires a LI / LUI pair to get 32-bit
data into that register. To get 64-bit data into the register in order
to put it into the CSR(s), LOAD operations from memory are needed!
2324
Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that is potentially six to eight 32-bit instructions, just to
establish the Vector State!
2328
2329 Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more bits if
2330 VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
2331 and VL need to be set.
2332
2333 By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
2334 only 16 bits, and as long as not too many predicates and register vector
2335 qualifiers are specified, several 32-bit and 16-bit opcodes can fit into
the format. If the full flexibility of the 16 bit block formats is not
needed, more space is saved by using the 8 bit formats.
2338
2339 In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
2340 a VLIW format makes a lot of sense.
2341
2342 Open Questions:
2343
2344 * Is it necessary to stick to the RISC-V 1.5 format? Why not go with
2345 using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
2346 limit to 256 bits (16 times 0-11).
2347 * Could a "hint" be used to set which operations are parallel and which
2348 are sequential?
2349 * Could a new sub-instruction opcode format be used, one that does not
2350 conform precisely to RISC-V rules, but *unpacks* to RISC-V opcodes?
2351 no need for byte or bit-alignment
2352 * Could a hardware compression algorithm be deployed? Quite likely,
2353 because of the sub-execution context (sub-VLIW PC)
2354
## Limitations on instructions
2356
2357 To greatly simplify implementations, it is required to treat the VLIW
2358 group as a separate sub-program with its own separate PC. The sub-pc
2359 advances separately whilst the main PC remains pointing at the beginning
2360 of the VLIW instruction (not to be confused with how VL works, which
2361 is exactly the same principle, except it is VStart in the STATE CSR
2362 that increments).
2363
This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
as being a sub-extension of the xepc set of CSRs. Thus, the xepcvliw CSRs
must be context-switched and saved / restored in traps.
2368
The srcoffs and destoffs indices in the STATE CSR may be similarly
regarded as another sub-execution context, giving in effect two sets of
nested sub-levels of the RISC-V Program Counter (actually, three,
including SUBVL and ssvoffs).
2372
In addition, as the xepcvliw CSRs are relative to the beginning of the
VLIW block, branches MUST be restricted to within the block, i.e.
addressing is now relative to the start of the (very short) block.
2376
Also: calling subroutines is inadvisable, unless they can be entirely
accomplished within a block.
2379
A normal jump, branch or function call may only be taken by letting
the VLIW group end, returning to "normal" standard RV mode, and then
using standard RVC, 32 bit or P48/64-\*-type opcodes.
2383
2384 ## Links
2385
2386 * <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>
2387
2388 # Subsets of RV functionality
2389
2390 This section describes the differences when SV is implemented on top of
2391 different subsets of RV.
2392
2393 ## Common options
2394
It is permitted to only implement SVprefix and not the VLIW instruction
format option. UNIX Platforms **MUST** raise an illegal instruction
exception on seeing a VLIW opcode, so that traps may emulate the format.
2397
It is permitted in SVprefix to either not implement VL or not implement
SUBVL (see [[sv_prefix_proposal]] for full details). Again, UNIX
Platforms *MUST* raise an illegal instruction exception on
implementations that do not support VL or SUBVL.
2399
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.
2404
2405 ## RV32 / RV32F
2406
When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
maximum limit for predication is also restricted to 32 bits. Whilst not
strictly an "option", it is worth noting.
2410
2411 ## RV32G
2412
Normally in standard RV32 it does not make much sense to have RV32G.
The critical instructions that are missing in standard RV32
are those for moving data to and from the double-width floating-point
registers into the integer ones, as well as the FCVT routines.
2417
2418 In an earlier draft of SV, it was possible to specify an elwidth
2419 of double the standard register size: this had to be dropped,
2420 and may be reintroduced in future revisions.
2421
2422 ## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
2423
2424 When floating-point is not implemented, the size of the User Register and
2425 Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
2426 per table).
2427
2428 ## RV32E
2429
2430 In embedded scenarios the User Register and Predication CSRs may be
2431 dropped entirely, or optionally limited to 1 CSR, such that the combined
2432 number of entries from the M-Mode CSR Register table plus U-Mode
2433 CSR Register table is either 4 16-bit entries or (if the U-Mode is
2434 zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
2435 the Predication CSR tables.
2436
2437 RV32E is the most likely candidate for simply detecting that registers
2438 are marked as "vectorised", and generating an appropriate exception
2439 for the VL loop to be implemented in software.
2440
2441 ## RV128
2442
RV128 has not been especially considered here; however, it has some
extremely large possibilities: double the element width implies
256-bit operands, spanning 2 128-bit registers each, and predication
of total length 128 bits, given that XLEN is now 128.
2447
2448 # Under consideration <a name="issues"></a>
2449
For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register, for example), the recommendations are:
2452
2453 * For the unused elements in an integer register, the used element
2454 closest to the MSB is sign-extended on write and the unused elements
2455 are ignored on read.
2456 * The unused elements in a floating-point register are treated as-if
2457 they are set to all ones on write and are ignored on read, matching the
2458 existing standard for storing smaller FP values in larger registers.
2459
2460 ---
2461
2462 info register,
2463
> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).
2471
2472 > That CSR would have to have a flag to make a read trap so
2473 > a hypervisor can simulate different values.
2474
2475 ----
2476
2477 > And what about instructions like JALR? 
2478
2479 answer: they're not vectorised, so not a problem
2480
2481 ----
2482
2483 * if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
2484 XLEN if elwidth==default
2485 * if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
2486 *32* if elwidth == default
2487
2488 ---
2489
2490 TODO: document different lengths for INT / FP regfiles, and provide
2491 as part of info register. 00=32, 01=64, 10=128, 11=reserved.
2492
2493 ---
2494
TODO: update to remove the RegCam and PredCam CSRs; just use SVprefix and
the VLIW format.