1 # Simple-V (Parallelism Extension Proposal) Specification
2
3 * Copyright (C) 2017, 2018, 2019 Luke Kenneth Casson Leighton
4 * Status: DRAFTv0.5
5 * Last edited: 19 Jun 2019
6 * Ancillary resource: [[opcodes]] [[sv_prefix_proposal]]
7
8 With thanks to:
9
10 * Allen Baum
11 * Bruce Hoult
12 * comp.arch
13 * Jacob Bachmeyer
14 * Guy Lemurieux
15 * Jacob Lifshay
16 * Terje Mathisen
17 * The RISC-V Founders, without whom this all would not be possible.
18
19 [[!toc ]]
20
21 # Summary and Background: Rationale
22
23 Simple-V is a uniform parallelism API for RISC-V hardware that has several
24 unplanned side-effects including code-size reduction, expansion of
25 HINT space and more. The reason for
26 creating it is to provide a manageable way to turn a pre-existing design
27 into a parallel one, in a step-by-step incremental fashion, allowing
28 the implementor to focus on adding hardware where it is needed and necessary.
29 The primary target is for mobile-class 3D GPUs and VPUs, with secondary
30 goals being to reduce executable size and reduce context-switch latency.
31
32 Critically: **No new instructions are added**. The parallelism (if any
33 is implemented) is implicitly added by tagging *standard* scalar registers
34 for redirection. When such a tagged register is used in any instruction,
35 it indicates that the PC shall **not** be incremented; instead a loop
36 is activated where *multiple* instructions are issued to the pipeline
37 (as determined by a length CSR), with contiguously incrementing register
38 numbers starting from the tagged register. When the last "element"
39 has been reached, only then is the PC permitted to move on. Thus
40 Simple-V effectively sits (slots) *in between* the instruction decode phase
41 and the ALU(s).
42
43 The barrier to entry with SV is therefore very low. The minimum
44 compliant implementation is software-emulation (traps), requiring
45 only the CSRs and CSR tables, and that an exception be thrown if an
46 instruction's registers are detected to have been tagged. The looping
47 that would otherwise be done in hardware is thus carried out in software,
48 instead. Whilst much slower, it is "compliant" with the SV specification,
49 and may be suited for implementation in RV32E and also in situations
50 where the implementor wishes to focus on certain aspects of SV without
51 investing unnecessary time and resources in silicon, whilst still conforming
52 strictly with the API. A good candidate for punting to software would be,
53 for example, the polymorphic element width capability.
54
55 Hardware Parallelism, if any, is therefore added at the implementor's
56 discretion to turn what would otherwise be a sequential loop into a
57 parallel one.
58
59 To emphasise that clearly: Simple-V (SV) is *not*:
60
61 * A SIMD system
62 * A SIMT system
63 * A Vectorisation Microarchitecture
64 * A microarchitecture of any specific kind
65 * A mandatory parallel processor microarchitecture of any kind
66 * A supercomputer extension
67
68 SV does **not** tell implementors how or even if they should implement
69 parallelism: it is a hardware "API" (Application Programming Interface)
70 that, if implemented, presents a uniform and consistent way to *express*
71 parallelism, at the same time leaving the choice of if, how, how much,
72 when and whether to parallelise operations **entirely to the implementor**.
73
74 # Basic Operation
75
76 The principle of SV is as follows:
77
78 * CSRs indicating which registers are "tagged" as "vectorised"
79 (potentially parallel, depending on the microarchitecture)
80 must be set up
81 * A "Vector Length" CSR is set, indicating the span of any future
82 "parallel" operations.
83 * A **scalar** operation, just after the decode phase and before the
84 execution phase, checks the CSR register tables to see if any of
85 its registers have been marked as "vectorised"
86 * If so, a hardware "macro-unrolling loop" is activated, of length
87 VL, that effectively issues **multiple** identical instructions
88 using contiguous sequentially-incrementing registers.
89 **Whether they be executed sequentially or in parallel or a
90 mixture of both or punted to software-emulation in a trap handler
91 is entirely up to the implementor**.
92
93 In this way an entire scalar algorithm may be vectorised with
94 the minimum of modification to the hardware and to compiler toolchains.
95 There are **no** new opcodes.
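The principle above can be sketched in a few lines. This is a purely illustrative model (not from the spec): the register numbers, the `vectorised` tag table and `issue_add` are invented here to show how a tagged operand triggers the macro-unrolling loop while untagged operands behave as plain scalars.

```python
VL = 4                      # Vector Length CSR
vectorised = {3: True}      # hypothetical tag table: register x3 is "vectorised"

regs = list(range(32))      # dummy integer register file

def issue_add(rd, rs1, rs2):
    """Issue a scalar ADD; if any operand is tagged, macro-unroll to VL."""
    if vectorised.get(rd) or vectorised.get(rs1) or vectorised.get(rs2):
        for i in range(VL):                  # hardware macro-unrolling loop
            regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]
    else:
        regs[rd] = regs[rs1] + regs[rs2]     # plain scalar behaviour
```

Whether the loop body executes sequentially, in parallel, or in a trap handler is, as stated above, entirely the implementor's choice.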
96
97 # CSRs <a name="csrs"></a>
98
99 For U-Mode there are two CSR key-value stores needed to create lookup
100 tables which are used at the register decode phase.
101
102 * A register CSR key-value table (typically 8 32-bit CSRs of 2 16-bit entries each)
103 * A predication CSR key-value table (again, 8 32-bit CSRs of 2 16-bit entries each)
104 * Small U-Mode and S-Mode register and predication CSR key-value tables
105 (2 32-bit CSRs of 2x 16-bit entries each).
106 * An optional "reshaping" CSR key-value table which remaps from a 1D
107 linear shape to 2D or 3D, including full transposition.
108
109 There are also four additional CSRs for User-Mode:
110
111 * CFG subsets the CSR tables
112 * MVL (the Maximum Vector Length)
113 * VL (which has different characteristics from standard CSRs)
114 * STATE (useful for saving and restoring during context switch,
115 and for providing fast transitions)
116
117 There are also three additional CSRs for Supervisor-Mode:
118
119 * SMVL
120 * SVL
121 * SSTATE
122
123 And likewise for M-Mode:
124
125 * MMVL
126 * MVL
127 * MSTATE
128
129 Both Supervisor and M-Mode have their own (small) CSR register and
130 predication tables of only 4 entries each.
131
132 The access pattern for these groups of CSRs in each mode follows the
133 same pattern for other CSRs that have M-Mode and S-Mode "mirrors":
134
135 * In M-Mode, the S-Mode and U-Mode CSRs are separate and distinct.
136 * In S-Mode, accessing and changing of the M-Mode CSRs is identical
137 to changing the S-Mode CSRs. Accessing and changing the U-Mode
138 CSRs is permitted.
139 * In U-Mode, accessing and changing of the M-Mode and S-Mode CSRs
140 is prohibited.
141
142 In M-Mode, only the M-Mode CSRs are in effect, i.e. it is only the
143 M-Mode MVL, the M-Mode STATE and so on that influences the processor
144 behaviour. Likewise for S-Mode, and likewise for U-Mode.
145
146 This has the interesting benefit of allowing M-Mode (or S-Mode)
147 to be set up, for context-switching to take place, and, on return
148 back to the higher privileged mode, the CSRs of that mode will be
149 exactly as they were. Thus, it becomes possible for example to
150 set up CSRs suited best to aiding and assisting low-latency fast
151 context-switching *once and only once*, without the need for
152 re-initialising the CSRs needed to do so.
153
154 ## CFG
155
156 This CSR may be used to switch between subsets of the CSR Register and
157 Predication Tables: it is kept to 5 bits so that a single CSRRWI instruction
158 can be used. A setting of all ones is reserved to indicate that SimpleV
159 is disabled.
160
161 | (4..3) | (2...0) |
162 | ------ | ------- |
163 | size | bank |
164
165 Bank is 3 bits in size, and indicates the starting index of the CSR
166 Register and Predication Table entries that are "enabled". Given that
167 each CSR table row is 32 bits and contains 2 16-bit CAM entries, there
168 are only 8 CSRs to cover in each table, so 3 bits is sufficient.
169
170 Size is 2 bits. With the exception of when bank == 7 and size == 3,
171 the number of elements enabled is taken by left-shifting 2 by size:
172
173 | size | elements |
174 | ------ | -------- |
175 | 0 | 2 |
176 | 1 | 4 |
177 | 2 | 8 |
178 | 3 | 16 |
179
180 Given that there are 2 16-bit CAM entries per CSR table row, this
181 may also be viewed as the number of CSR rows to enable, which is 2
182 raised to the power of size.
183
184 Examples:
185
186 * When bank = 0 and size = 3, SVREGCFG0 through to SVREGCFG7 are
187 enabled, and SVPREDCFG0 through to SVPREDCFG7 are enabled.
188 * When bank = 1 and size = 3, SVREGCFG1 through to SVREGCFG7 are
189 enabled, and SVPREDCFG1 through to SVPREDCFG7 are enabled.
190 * When bank = 3 and size = 0, SVREGCFG3 and SVPREDCFG3 are enabled.
191 * When bank = 3 and size = 1, SVREGCFG3-4 and SVPREDCFG3-4 are enabled.
192 * When bank = 7 and size = 1, SVREGCFG7 and SVPREDCFG7 are enabled
193 (because there are only 8 32-bit CSRs, there does not exist an
194 SVREGCFG8 or SVPREDCFG8 to enable).
195 * When bank = 7 and size = 3, SimpleV is entirely disabled.
196
197 In this way it is possible to enable and disable SimpleV with a
198 single instruction, and, furthermore, on context-switching the quantity
199 of CSRs to be saved and restored is greatly reduced.
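The bank/size rules above can be captured in a short decode sketch. This is non-normative; `decode_cfg` is a name invented here, and the function simply reproduces the enabled-row behaviour shown in the worked examples, including the all-ones "disabled" encoding and the clamp at row 7.

```python
def decode_cfg(cfg):
    """Return the list of enabled CSR row indices, or [] when SV is disabled."""
    bank = cfg & 0b111          # bits 2..0: starting row index
    size = (cfg >> 3) & 0b11    # bits 4..3: row count selector
    if bank == 7 and size == 3:
        return []               # all ones: SimpleV entirely disabled
    nrows = 1 << size           # 2**size rows == (2 << size) 16-bit entries
    return list(range(bank, min(bank + nrows, 8)))  # no row 8 exists: clamp
```

Running this against the examples above (e.g. bank=7, size=1 yielding only row 7) confirms the clamping behaviour.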
200
201 ## MAXVECTORLENGTH (MVL) <a name="mvl" />
202
203 MAXVECTORLENGTH is the same concept as MVL in RVV, except that it
204 is variable length and may be dynamically set. MVL is
205 however limited to the regfile bitwidth XLEN (1-32 for RV32,
206 1-64 for RV64 and so on).
207
208 The reason for setting this limit is so that predication registers, when
209 marked as such, may fit into a single register as opposed to fanning out
210 over several registers. This keeps the implementation a little simpler.
211
212 The other important factor to note is that the actual MVL is **offset
213 by one**, so that it can fit into only 6 bits (for RV64) and still cover
214 a range up to XLEN bits. So, when setting the MVL CSR to 0, this actually
215 means that MVL==1. When setting the MVL CSR to 3, this actually means
216 that MVL==4, and so on. This is expressed more clearly in the "pseudocode"
217 section, where there are subtle differences between CSRRW and CSRRWI.
218
219 ## Vector Length (VL) <a name="vl" />
220
221 VSETVL is slightly different from RVV. Like RVV, VL is set to be within
222 the range 1 <= VL <= MVL (where MVL in turn is limited to 1 <= MVL <= XLEN)
223
224 VL = rd = MIN(vlen, MVL)
225
226 where 1 <= MVL <= XLEN
227
228 However just like MVL it is important to note that the range for VL has
229 subtle design implications, covered in the "CSR pseudocode" section
230
231 The fixed (specific) setting of VL allows vector LOAD/STORE to be used
232 to switch the entire bank of registers using a single instruction (see
233 Appendix, "Context Switch Example"). The reason for limiting VL to XLEN
234 is down to the fact that predication bits fit into a single register of
235 length XLEN bits.
236
237 The second change is that when VSETVL is requested to be stored
238 into x0, it is *ignored* silently (VSETVL x0, x5)
239
240 The third and most important change is that, within the limits set by
241 MVL, the value passed in **must** be set in VL (and in the
242 destination register).
243
244 This has implication for the microarchitecture, as VL is required to be
245 set (limits from MVL notwithstanding) to the actual value
246 requested. RVV has the option to set VL to an arbitrary value that suits
247 the conditions and the micro-architecture: SV does *not* permit this.
248
249 The reason is so that if SV is to be used for a context-switch or as a
250 substitute for LOAD/STORE-Multiple, the operation can be done with only
251 2-3 instructions (setup of the CSRs, VSETVL x0, x0, #{regfilelen-1},
252 single LD/ST operation). If VL does *not* get set to the register file
253 length when VSETVL is called, then a software-loop would be needed.
254 To avoid this need, VL *must* be set to exactly what is requested
255 (limits notwithstanding).
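A minimal model of this rule, under the assumption that MVL has already been set and ignoring the XLEN bound for brevity: VL is set to exactly MIN(requested, MVL), and that same *new* value is returned in rd. The function name and the zero-check exception are illustrative (the zero case is covered formally in the CSR pseudocode section).

```python
MVL = 4  # assumed already set via the MVL CSR

def vsetvl(requested):
    """Model of SV VSETVL: returns the value written to both VL and rd."""
    if requested == 0:
        raise ValueError("zero raises an exception (see CSR pseudocode)")
    return min(requested, MVL)   # the *new* VL, not the old CSR contents
```

A strip-mined loop then relies on the returned value being exact: subtracting it from the remaining element count terminates the loop correctly, with no software fix-up pass.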
256
257 Therefore, in turn, unlike RVV, implementors *must* provide
258 pseudo-parallelism (using sequential loops in hardware) if actual
259 hardware-parallelism in the ALUs is not deployed. A hybrid is also
260 permitted (as used in Broadcom's VideoCore-IV) however this must be
261 *entirely* transparent to the ISA.
262
263 The fourth change is that VSETVL is implemented as a CSR, where the
264 behaviour of CSRRW (and CSRRWI) must be changed to specifically store
265 the *new* value in the destination register, **not** the old value.
266 Where context-load/save is to be implemented in the usual fashion
267 by using a single CSRRW instruction to obtain the old value, the
268 *secondary* CSR must be used (SVSTATE). This CSR behaves
269 exactly as standard CSRs, and contains more than just VL.
270
271 One interesting side-effect of using CSRRWI to set VL is that this
272 may be done with a single instruction, useful particularly for a
273 context-load/save. There are, however, limitations: CSRRWI's immediate
274 is limited to 0-31 (representing VL=1-32).
275
276 Note that when VL is set to 1, all parallel operations cease: the
277 hardware loop is reduced to a single element: scalar operations.
278
279 ## STATE
280
281 This is a standard CSR that contains sufficient information for a
282 full context save/restore. It contains (and permits setting of)
283 MVL, VL, CFG, the destination element offset of the current parallel
284 instruction being executed, and, for twin-predication, the source
285 element offset as well. Interestingly it may hypothetically
286 also be used to make the immediately-following instruction to skip a
287 certain number of elements, however the recommended method to do
288 this is predication or using the offset mode of the REMAP CSRs.
289
290 Setting destoffs and srcoffs is realistically intended for saving state
291 so that exceptions (page faults in particular) may be serviced and the
292 hardware-loop that was being executed at the time of the trap, from
293 user-mode (or Supervisor-mode), may be returned to and continued from
294 where it left off. The reason why this works is that the User-Mode
295 STATE CSR is neither used nor altered whilst in M-Mode or S-Mode
296 (which is entirely why M-Mode and S-Mode have their own STATE CSRs).
297
298 The format of the STATE CSR is as follows:
299
300 | (28..27) | (26..24) | (23..18) | (17..12) | (11..6) | (5...0) |
301 | -------- | -------- | -------- | -------- | ------- | ------- |
302 | size | bank | destoffs | srcoffs | vl | maxvl |
303
304 When setting this CSR, the following characteristics will be enforced:
305
306 * **MAXVL** will be truncated (after offset) to be within the range 1 to XLEN
307 * **VL** will be truncated (after offset) to be within the range 1 to MAXVL
308 * **srcoffs** will be truncated to be within the range 0 to VL-1
309 * **destoffs** will be truncated to be within the range 0 to VL-1
310
311 ## MVL, VL and CSR Pseudocode
312
313 The pseudo-code for get and set of VL and MVL are as follows:
314
315 set_mvl_csr(value, rd):
316 regs[rd] = MVL
317 MVL = MIN(value, XLEN)
318
319 get_mvl_csr(rd):
320 regs[rd] = MVL
321
322 set_vl_csr(value, rd):
323 VL = MIN(value, MVL)
324 regs[rd] = VL # yes returning the new value NOT the old CSR
325
326 get_vl_csr(rd):
327 regs[rd] = VL
328
329 Note that where setting MVL behaves as a normal CSR, unlike standard CSR
330 behaviour, setting VL will return the **new** value of VL **not** the old
331 one.
332
333 For CSRRWI, the range of the immediate is restricted to 5 bits. In order to
334 maximise the effectiveness, an immediate of 0 is used to set VL=1,
335 an immediate of 1 is used to set VL=2 and so on:
336
337 CSRRWI_Set_MVL(value):
338 set_mvl_csr(value+1, x0)
339
340 CSRRWI_Set_VL(value):
341 set_vl_csr(value+1, x0)
342
343 However for CSRRW the following pseudocode is used for MVL and VL,
344 where setting the value to zero will cause an exception to be raised.
345 The reason is that if VL or MVL are set to zero, the STATE CSR is
346 not capable of returning that value.
347
348 CSRRW_Set_MVL(rs1, rd):
349 value = regs[rs1]
350 if value == 0:
351 raise Exception
352 set_mvl_csr(value, rd)
353
354 CSRRW_Set_VL(rs1, rd):
355 value = regs[rs1]
356 if value == 0:
357 raise Exception
358 set_vl_csr(value, rd)
359
360 In this way, when CSRRW is utilised with a loop variable, the value
361 that goes into VL (and into the destination register) may be used
362 in an instruction-minimal fashion:
363
364 CSRvect1 = {type: F, key: a3, val: a3, elwidth: dflt}
365 CSRvect2 = {type: F, key: a7, val: a7, elwidth: dflt}
366 CSRRWI MVL, 3 # sets MVL == **4** (not 3)
367 j zerotest # in case loop counter a0 already 0
368 loop:
369 CSRRW VL, t0, a0 # vl = t0 = min(mvl, a0)
370 ld a3, a1 # load 4 registers a3-6 from x
371 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
372 ld a7, a2 # load 4 registers a7-10 from y
373 add a1, a1, t1 # increment pointer to x by vl*8
374 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
375 sub a0, a0, t0 # n -= vl (t0)
376 st a7, a2 # store 4 registers a7-10 to y
377 add a2, a2, t1 # increment pointer to y by vl*8
378 zerotest:
379 bnez a0, loop # repeat if n != 0
380
381 With the STATE CSR, just like with CSRRWI, in order to maximise the
382 utilisation of the limited bitspace, "000000" in binary represents
383 VL==1, "000001" represents VL==2 and so on (likewise for MVL):
384
385 CSRRW_Set_SV_STATE(rs1, rd):
386 value = regs[rs1]
387 get_state_csr(rd)
388 set_mvl_csr(value[5:0]+1, x0)
389 set_vl_csr(value[11:6]+1, x0)
390 srcoffs = value[17:12]
391 destoffs = value[23:18]
392 CFG = value[28:24]
393
394 get_state_csr(rd):
395 regs[rd] = (MVL-1) | (VL-1)<<6 | (srcoffs)<<12 |
396 (destoffs)<<18 | (CFG)<<24
397 return regs[rd]
398
399 In both cases, whilst CSR read of VL and MVL return the exact values
400 of VL and MVL respectively, reading and writing the STATE CSR returns
401 those values **minus one**. This is absolutely critical to implement
402 if the STATE CSR is to be used for fast context-switching.
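An illustrative pack/unpack pair (names invented here, not part of the spec) makes the minus-one encoding concrete. Field positions follow the STATE format table above: maxvl in bits 5:0, vl in 11:6, srcoffs in 17:12, destoffs in 23:18, and CFG (bank plus size) in 28:24.

```python
def pack_state(MVL, VL, srcoffs, destoffs, CFG):
    """Build a STATE CSR value; VL and MVL are stored minus one."""
    return ((MVL - 1) | (VL - 1) << 6 | srcoffs << 12
            | destoffs << 18 | CFG << 24)

def unpack_state(value):
    """Recover the fields, adding one back to VL and MVL."""
    MVL = (value & 0x3F) + 1            # bits 5:0 hold MVL-1
    VL = ((value >> 6) & 0x3F) + 1      # bits 11:6 hold VL-1
    srcoffs = (value >> 12) & 0x3F
    destoffs = (value >> 18) & 0x3F
    CFG = (value >> 24) & 0x1F
    return MVL, VL, srcoffs, destoffs, CFG
```

Note that MVL=1, VL=1 with all offsets zero packs to the all-zeros CSR value, which is why VL=0 or MVL=0 cannot be represented and must raise an exception when attempted via CSRRW.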
403
404 ## Register CSR key-value (CAM) table <a name="regcsrtable" />
405
406 The purpose of the Register CSR table is four-fold:
407
408 * To mark integer and floating-point registers as requiring "redirection"
409 if they are ever used as a source or destination in any given operation.
410 This involves a level of indirection through a 5-to-7-bit lookup table,
411 such that **unmodified** 5-bit (3-bit for Compressed) operands may
412 access up to **128** registers.
413 * To indicate whether, after redirection through the lookup table, the
414 register is a vector (or remains a scalar).
415 * To over-ride the implicit or explicit bitwidth that the operation would
416 normally give the register.
417
418 16 bit format:
419
420 | RegCAM | | 15 | (14..8) | 7 | (6..5) | (4..0) |
421 | ------ | | - | - | - | ------ | ------- |
422 | 0 | | isvec0 | regidx0 | i/f | vew0 | regkey |
423 | 1 | | isvec1 | regidx1 | i/f | vew1 | regkey |
424 | .. | | isvec.. | regidx.. | i/f | vew.. | regkey |
425 | 15 | | isvec15 | regidx15 | i/f | vew15 | regkey |
426
427 8 bit format:
428
429 | RegCAM | | 7 | (6..5) | (4..0) |
430 | ------ | | - | ------ | ------- |
431 | 0 | | i/f | vew0 | regnum |
432
433 i/f is set to "1" to indicate that the redirection/tag entry is to be applied
434 to integer registers; 0 indicates that it is relevant to floating-point
435 registers.
436
437 The 8 bit format is used for a much more compact expression. "isvec" is implicit and, as in [[sv_prefix_proposal]], the target vector is implicitly "regnum<<2". Contrast this with the 16-bit format, where the target vector is *explicitly* named in bits 8 to 14, and bit 15 may optionally set "scalar" mode.
438
439 vew has the following meanings, indicating that the instruction's
440 operand size is "over-ridden" in a polymorphic fashion:
441
442 | vew | bitwidth |
443 | --- | ------------------- |
444 | 00 | default (XLEN/FLEN) |
445 | 01 | 8 bit |
446 | 10 | 16 bit |
447 | 11 | 32 bit |
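A hypothetical decode of one 8-bit Register CSR entry ties the format table and the implicit "regnum<<2" redirection together. The function and table names are invented for illustration; `None` stands in for the default XLEN/FLEN width.

```python
VEW_BITS = {0b00: None, 0b01: 8, 0b10: 16, 0b11: 32}  # None = default XLEN/FLEN

def decode_8bit_entry(entry):
    """Split an 8-bit Register CSR table entry into its fields."""
    regnum = entry & 0x1F          # bits 4..0: register named in the opcode
    vew = (entry >> 5) & 0b11      # bits 6..5: element-width override
    is_int = (entry >> 7) & 1      # bit 7: 1 = integer regfile, 0 = FP
    target = regnum << 2           # implicit redirection target
    return is_int, VEW_BITS[vew], regnum, target
```

So, for example, an entry tagging integer register x3 with an 8-bit element width redirects accesses to real register 12.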
448
449 As the above table is a CAM (key-value store) it may be appropriate
450 (faster, implementation-wise) to expand it as follows:
451
452 struct vectorised fp_vec[32], int_vec[32];
453
454 for (i = 0; i < 16; i++) // 16 CSRs?
455 tb = int_vec if CSRvec[i].type == 0 else fp_vec
456 idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
457 tb[idx].elwidth = CSRvec[i].elwidth
458 tb[idx].regidx = CSRvec[i].regidx // indirection
459 tb[idx].isvector = CSRvec[i].isvector // 0=scalar
460 tb[idx].packed = CSRvec[i].packed // SIMD or not
461
462 The actual size of the CSR Register table depends on the platform
463 and on whether other Extensions are present (RV64G, RV32E, etc.).
464 For details see "Subsets" section.
465
466 There are two CSRs (per privilege level) for adding to and removing
467 entries from the table, which, conceptually may be viewed as either
468 a register window (similar to SPARC) or as the "top of a stack".
469
470 * SVREGTOP will push or pop entries onto the top of the "stack"
471 (highest non-zero indexed entry in the table)
472 * SVREGBOT will push or pop entries from the bottom (always the
473 element indexed as zero).
474
475 In addition, note that CSRRWI behaviour is completely different
476 from CSRRW when writing to these two CSR registers. The CSRRW
477 behaviour: the src register is subdivided into 16-bit chunks,
478 and each non-zero chunk is pushed/popped separately. The
479 CSRRWI behaviour: the immediate indicates the number of
480 entries in the table to be popped.
481
482 CSRRWI:
483
484 * The immediate indicates how many entries to pop from the
485 CAM table.
486 * "CSRRWI SVREGTOP, 3" indicates that the top 3
487 entries are to be zero'd and returned as the CSR return
488 result. The top entry is returned in bits 0-15, the
489 next entry down in bits 16-31, and when XLEN==64, an
490 extra 2 entries are also returned.
491 * "CSRRWI SVREGBOT, 3" indicates that the bottom 3 entries are
492 to be returned, and the entries with indices above 3 are
493 to be shuffled down. The first entry to be popped off the
494 bottom is returned in bits 0-15, the second entry as bits
495 16-31 and so on.
496 * If XLEN==32, only a maximum of 2 entries may be returned
497 (and shuffled). If XLEN==64, only a maximum of 4 entries
498 may be returned
499 * If however the destination register is x0 (zero), then
500 the exact number of entries requested will be removed
501 (shuffled down).
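The SVREGBOT pop-and-shuffle behaviour can be modelled as follows. This is a behavioural sketch only, with assumptions made explicit: XLEN=64, the CAM table held as a Python list (index 0 being the bottom), and the function name invented here.

```python
XLEN = 64

def csrrwi_svregbot(table, n):
    """Pop up to n entries from index 0; return (csr_result, new_table)."""
    n = min(n, XLEN // 16)           # at most 4 returned entries for XLEN=64
    result = 0
    for i, entry in enumerate(table[:n]):
        result |= entry << (16 * i)  # first popped entry lands in bits 0-15
    return result, table[n:]         # remaining entries shuffle down
```

The returned 64-bit value is exactly what would land in the destination register, ready to be stored on the stack during a context-switch loop.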
502
503 CSRRW when src == 0:
504
505 * When the src register is all zeros, this is a request to
506 pop one and only one 16-bit element from the table.
507 * "CSRRW SVREGTOP, 0" will return (and clear) the highest
508 non-zero 16-bit entry in the table
509 * "CSRRW SVREGBOT, 0" will return (and clear) the zero'th
510 16-bit entry in the table, and will shuffle down all
511 other entries (if any) by one index.
512
513 CSRRW when src != 0:
514
515 All other CSRRW behaviours are a "loop", taking 16-bits
516 at a time from the src register. Obviously, for XLEN=32
517 that can only be up to 2 16-bit entries, however for XLEN=64
518 it can be up to 4.
519
520 * When the src 16-bit chunk is non-zero and there already exists
521 an entry with the exact same "regkey" (bits 0-4), the
522 entry is **updated**. No other modifications are made.
523 * When the 16-bit chunk is non-zero and there does not exist
524 an entry, the new value will be placed at the end
525 (in the highest non-zero slot), or at the beginning
526 (shuffling up all other entries to make room).
527 * If there is not enough room, the entry at the opposite
528 end will become part of the CSR return result.
529 * The process is repeated for the next 16-bit chunk (starting
530 with bits 0-15 and moving next to 16-31 and so on), until
531 the limit of XLEN is reached or a chunk is all-zeros, at
532 which point the looping stops.
533 * Any 16-bit entries that are pushed out of the stack
534 (from either end) are concatenated in order (first entry
535 pushed out is bits 0-15 of the return result).
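The CSRRW push/update loop for SVREGTOP can likewise be sketched. Assumptions: XLEN=64, a 16-entry table (8 32-bit CSRs of 2 16-bit entries), the table as a Python list with index 0 at the bottom, and an invented function name. Each non-zero 16-bit chunk either updates the entry sharing its regkey (bits 0-4) or is pushed onto the top; entries pushed out of the bottom are concatenated into the return result.

```python
TABLE_SIZE = 16  # 8 32-bit CSRs x 2 16-bit CAM entries

def csrrw_svregtop(table, src, xlen=64):
    """Push/update 16-bit chunks of src onto the top; return displaced entries."""
    result, shifted_out = 0, 0
    for i in range(xlen // 16):
        chunk = (src >> (16 * i)) & 0xFFFF
        if chunk == 0:
            break                              # all-zero chunk stops the loop
        for j, entry in enumerate(table):
            if (entry & 0x1F) == (chunk & 0x1F):
                table[j] = chunk               # same regkey: update in place
                break
        else:
            table.append(chunk)                # push onto the top
            if len(table) > TABLE_SIZE:        # no room: the bottom entry is
                result |= table.pop(0) << (16 * shifted_out)  # returned
                shifted_out += 1
    return result
```

This is what gives the table its stack-like character: pushing a caller's saved entries back in displaces the callee's, and the displaced entries come out in the CSR result.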
536
537 What this behaviour basically does is allow the CAM table to
538 effectively be like the top entries of a stack. Entries that
539 get returned from CSRRW SVREGTOP can be *actually* stored on the stack,
540 such that after a function call exits, CSRRWI SVREGTOP may be used
541 to delete the callee's CAM entries, and the caller's entries may then
542 be pushed *back*, using CSRRW SVREGBOT.
543
544 Context-switching may be carried out in a loop, where CSRRWI may
545 be called to "pop" values that are tested for being non-zero, and
546 transferred onto the stack with C.SWSP using only around 4-5 instructions.
547 CSRRW may then be used in combination with C.LWSP to get the CAM entries
548 off the stack and back into the CAM table, again with a loop using
549 only around 4-5 instructions.
550
551 Contrast this with needing around 6-7 instructions (8-9 without SV on
552 RV64, and 16-17 on RV32) to do a context-switch of fixed-address CSRs:
553 a sequence of fixed-address C.LWSP with fixed offsets plus fixed-address
554 CSRRWs, and that is without testing if any of the entries are zero
555 or not.
556
557 ## Predication CSR <a name="predication_csr_table"></a>
558
559 TODO: update CSR tables, now 7-bit for regidx
560
561 The Predication CSR is a key-value store indicating whether, if a given
562 destination register (integer or floating-point) is referred to in an
563 instruction, it is to be predicated. It is particularly important to note
564 that the *actual* register used can be *different* from the one that is
565 in the instruction, due to the redirection through the lookup table.
566
567 * regidx is the actual register that in combination with the
568 i/f flag, if that integer or floating-point register is referred to,
569 results in the lookup table being referenced to find the predication
570 mask to use on the operation in which that (regidx) register has
571 been used
572 * predidx (in combination with the bank bit in the future) is the
573 *actual* register to be used for the predication mask. Note:
574 in effect predidx is actually a 6-bit register address, as the bank
575 bit is the MSB (and is nominally set to zero for now).
576 * inv indicates that the predication mask bits are to be inverted
577 prior to use *without* actually modifying the contents of the
578 register itself.
579 * zeroing is either 1 or 0, and if set to 1, the operation must
580 place zeros in any element position where the predication mask is
581 set to zero. If zeroing is set to 0, unpredicated elements *must*
582 be left alone. Some microarchitectures may choose to interpret
583 this as skipping the operation entirely. Others which wish to
584 stick more closely to a SIMD architecture may choose instead to
585 interpret unpredicated elements as an internal "copy element"
586 operation (which would be necessary in SIMD microarchitectures
587 that perform register-renaming)
588 * "packed" indicates if the register is to be interpreted as SIMD
589 i.e. containing multiple contiguous elements of size equal to "bitwidth".
590 (Note: in earlier drafts this was in the Register CSR table.
591 However after extending to 7 bits there was not enough space.
592 To use "unpredicated" packed SIMD, set the predicate to x0 and
593 set "invert". This has the effect of setting a predicate of all 1s)
594
595 16 bit format:
596
597 | PrCSR | (15..11) | 10 | 9 | 8 | (7..1) | 0 |
598 | ----- | - | - | - | - | ------- | ------- |
599 | 0 | predkey | zero0 | inv0 | i/f | regidx | rsrvd |
600 | 1 | predkey | zero1 | inv1 | i/f | regidx | packed1 |
601 | ... | predkey | ..... | .... | i/f | ....... | ....... |
602 | 15 | predkey | zero15 | inv15 | i/f | regidx | packed15|
603
604
605 8 bit format:
606
607 | PrCSR | 7 | 6 | 5 | (4..0) |
608 | ----- | - | - | - | ------- |
609 | 0 | zero0 | inv0 | i/f | regnum |
610
611 The 8 bit format is a compact and less expressive variant of the full 16 bit format. Using the 8 bit format is very different: the predicate register to use is implicit, and numbering begins implicitly from x9. The regnum is still used to "activate" predication.
612
613 The 16 bit Predication CSR Table is a key-value store, so implementation-wise
614 it will be faster to turn the table around (maintain topologically
615 equivalent state):
616
617 struct pred {
618 bool zero;
619 bool inv;
620 bool enabled;
621 int predidx; // redirection: actual int register to use
622 }
623
624 struct pred fp_pred_reg[32]; // 64 in future (bank=1)
625 struct pred int_pred_reg[32]; // 64 in future (bank=1)
626
627 for (i = 0; i < 16; i++)
628 tb = int_pred_reg if CSRpred[i].type == 0 else fp_pred_reg;
629 idx = CSRpred[i].regidx
630 tb[idx].zero = CSRpred[i].zero
631 tb[idx].inv = CSRpred[i].inv
632 tb[idx].predidx = CSRpred[i].predidx
633 tb[idx].enabled = true
634
635 So when an operation is to be predicated, it is the internal state that
636 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
637 pseudo-code for operations is given, where p is the explicit (direct)
638 reference to the predication register to be used:
639
640 for (int i=0; i<vl; ++i)
641 if ([!]preg[p][i])
642 (d ? vreg[rd][i] : sreg[rd]) =
643 iop(s1 ? vreg[rs1][i] : sreg[rs1],
644 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
645
646 This instead becomes an *indirect* reference using the *internal* state
647 table generated from the Predication CSR key-value store, which is used
648 as follows.
649
650 if type(iop) == INT:
651 preg = int_pred_reg[rd]
652 else:
653 preg = fp_pred_reg[rd]
654
655 for (int i=0; i<vl; ++i)
656 predicate, zeroing = get_pred_val(type(iop) == INT, rd)
657 if (predicate & (1<<i))
658 (d ? regfile[rd+i] : regfile[rd]) =
659 iop(s1 ? regfile[rs1+i] : regfile[rs1],
660 s2 ? regfile[rs2+i] : regfile[rs2]); // for insts with 2 inputs
661 else if (zeroing)
662 (d ? regfile[rd+i] : regfile[rd]) = 0
663
664 Note:
665
666 * d, s1 and s2 are booleans indicating whether destination,
667 source1 and source2 are vector or scalar
668 * key-value CSR-redirection of rd, rs1 and rs2 have NOT been included
669 above, for clarity. rd, rs1 and rs2 all also must ALSO go through
670 register-level redirection (from the Register CSR table) if they are
671 vectors.
672
673 If written as a function, obtaining the predication mask (and whether
674 zeroing takes place) may be done as follows:
675
676 def get_pred_val(bool is_int_op, int reg):
677 tb = int_vec if is_int_op else fp_vec // Register CSR table
678 if (!tb[reg].enabled):
679 return ~0x0, False // all enabled; no zeroing
680 tb = int_pred_reg if is_int_op else fp_pred_reg // Predication table
681 if (!tb[reg].enabled):
682 return ~0x0, False // all enabled; no zeroing
683 predidx = tb[reg].predidx // redirection occurs HERE
684 predicate = intreg[predidx] // actual predicate HERE
685 if (tb[reg].inv):
686 predicate = ~predicate // invert ALL bits
687 return predicate, tb[reg].zero
688
689 Note here, critically, that **only** if the register is marked
690 in its CSR **register** table entry as being "active" does the testing
691 proceed further to check if the CSR **predicate** table entry is
692 also active.
693
694 Note also that this is in direct contrast to branch operations
695 for the storage of comparisions: in these specific circumstances
696 the requirement for there to be an active CSR *register* entry
697 is removed.
698
699 ## REMAP CSR <a name="remap" />
700
701 (Note: both the REMAP and SHAPE sections are best read after the
702 rest of the document has been read)
703
704 There is one 32-bit CSR which may be used to indicate which registers,
705 if used in any operation, must be "reshaped" (re-mapped) from a linear
706 form to a 2D or 3D transposed form, or "offset" to permit arbitrary
707 access to elements within a register.
708
709 The 32-bit REMAP CSR may reshape up to 3 registers:
710
711 | 29..28 | 27..26 | 25..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
712 | ------ | ------ | ------ | -- | ------- | -- | ------- | -- | ------- |
713 | shape2 | shape1 | shape0 | 0 | regidx2 | 0 | regidx1 | 0 | regidx0 |
714
715 regidx0-2 refer not to the Register CSR CAM entry but to the underlying
716 *real* register (see regidx, the value) and consequently is 7-bits wide.
717 When set to zero (referring to x0), the entry indicates "disabled",
718 since reshaping x0 would clearly be pointless.
719 shape0-2 refers to one of three SHAPE CSRs. A value of 0x3 is reserved.
720 Bits 7, 15, 23, 30 and 31 are also reserved, and must be set to zero.
721
722 It is anticipated that these specialist CSRs not be very often used.
723 Unlike the CSR Register and Predication tables, the REMAP CSRs use
724 the full 7-bit regidx so that they can be set once and left alone,
725 whilst the CSR Register entries pointing to them are disabled, instead.
726
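The field layout above can be sanity-checked with a short sketch. This is
illustrative only: the function names (`remap_pack`, `remap_unpack`) are
invented for this example and are not part of the specification.

```python
def remap_pack(regidx, shape):
    """Pack three (regidx, shape) pairs into the 32-bit REMAP CSR.
    regidx entries are 7-bit real register numbers (0 = disabled);
    shape entries select SHAPE0-2 (value 3 is reserved)."""
    csr = 0
    for n in range(3):
        csr |= (regidx[n] & 0x7f) << (n * 8)     # bits 6..0, 14..8, 22..16
        csr |= (shape[n] & 0x3) << (24 + n * 2)  # bits 25..24, 27..26, 29..28
    return csr

def remap_unpack(csr):
    "Recover the three regidx and shape fields from a REMAP CSR value."
    regidx = [(csr >> (n * 8)) & 0x7f for n in range(3)]
    shape = [(csr >> (24 + n * 2)) & 0x3 for n in range(3)]
    return regidx, shape
```

A round-trip through pack and unpack confirms that the three 7-bit regidx
fields and the three 2-bit shape fields do not overlap.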
## SHAPE 1D/2D/3D vector-matrix remapping CSRs

(Note: both the REMAP and SHAPE sections are best read after the
rest of the document has been read)

There are three "shape" CSRs, SHAPE0, SHAPE1, SHAPE2, 32 bits each,
which have the same format. When a SHAPE CSR is set entirely to zeros,
remapping is disabled: the register's elements are a linear (1D) vector.

| 26..24 | 23 | 22..16 | 15 | 14..8 | 7 | 6..0 |
| ------- | -- | ------- | -- | ------- | -- | ------- |
| permute | offs[2] | zdimsz | offs[1] | ydimsz | offs[0] | xdimsz |

offs is a 3-bit field, spread out across bits 7, 15 and 23, which
is added to the element index during the loop calculation.

xdimsz, ydimsz and zdimsz are offset by 1, such that a value of 0 indicates
that the array dimensionality for that dimension is 1. A value of xdimsz=2
would indicate that in the first dimension there are 3 elements in the
array. The format of the array is therefore as follows:

    array[xdim+1][ydim+1][zdim+1]

However, whilst illustrative of the dimensionality, that does not take the
"permute" setting into account. "permute" may be any one of six values
(0-5, with values of 6 and 7 being reserved, and not legal). The table
below shows how the permutation dimensionality order works:

| permute | order | array format |
| ------- | ----- | ------------------------ |
| 000 | 0,1,2 | (xdim+1)(ydim+1)(zdim+1) |
| 001 | 0,2,1 | (xdim+1)(zdim+1)(ydim+1) |
| 010 | 1,0,2 | (ydim+1)(xdim+1)(zdim+1) |
| 011 | 1,2,0 | (ydim+1)(zdim+1)(xdim+1) |
| 100 | 2,0,1 | (zdim+1)(xdim+1)(ydim+1) |
| 101 | 2,1,0 | (zdim+1)(ydim+1)(xdim+1) |

In other words, the "permute" option changes the order in which
nested for-loops over the array would be done. The algorithm below
shows this more clearly, and may be executed as a python program:

    # mapidx = REMAP.shape2
    xdim = 3  # SHAPE[mapidx].xdim_sz+1
    ydim = 4  # SHAPE[mapidx].ydim_sz+1
    zdim = 5  # SHAPE[mapidx].zdim_sz+1

    lims = [xdim, ydim, zdim]
    idxs = [0,0,0]   # starting indices
    order = [1,0,2]  # experiment with different permutations, here
    offs = 0         # experiment with different offsets, here

    for idx in range(xdim * ydim * zdim):
        new_idx = offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim
        print(new_idx, end=' ')
        for i in range(3):
            idxs[order[i]] = idxs[order[i]] + 1
            if (idxs[order[i]] != lims[order[i]]):
                break
            print()
            idxs[order[i]] = 0

Here, it is assumed that this algorithm is run within all pseudo-code
throughout this document wherever a (parallelism) for-loop would normally
run from 0 to VL-1 to refer to contiguous register
elements; instead, where REMAP indicates to do so, the element index
is run through the above algorithm to work out the **actual** element
index. Given that there are three possible SHAPE entries, up to
three separate registers in any given operation may be simultaneously
remapped:

    function op_add(rd, rs1, rs2) # add not VADD!
      ...
      ...
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+remap(id)] <= ireg[rs1+remap(irs1)] +
                                 ireg[rs2+remap(irs2)];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

By changing remappings, 2D matrices may be transposed "in-place" for one
operation, followed by setting a different permutation order, without
having to move the values in the registers to or from memory. Also,
the reason for having REMAP separate from the three SHAPE CSRs is so
that in a chain of matrix multiplications and additions, for example,
the SHAPE CSRs need only be set up once; only the REMAP CSR need be
changed to target different registers.

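The in-place transpose can be demonstrated concretely. The sketch below is
a hedged illustration (the helper name `remap_indices` is invented, not part
of the spec): a 2x3 matrix stored linearly in six elements is walked in
transposed order simply by choosing `order = (1,0,2)`.

```python
def remap_indices(dims, order, offs=0):
    "Return the remapped element indices, per the SHAPE algorithm above."
    xdim, ydim, zdim = dims
    idxs = [0, 0, 0]
    out = []
    for _ in range(xdim * ydim * zdim):
        out.append(offs + idxs[0] + idxs[1] * xdim + idxs[2] * xdim * ydim)
        for i in range(3):
            idxs[order[i]] += 1
            if idxs[order[i]] != dims[order[i]]:
                break
            idxs[order[i]] = 0
    return out

# linear (identity) order vs transposed order for a 2x3 matrix
linear = remap_indices((2, 3, 1), (0, 1, 2))      # [0, 1, 2, 3, 4, 5]
transposed = remap_indices((2, 3, 1), (1, 0, 2))  # [0, 2, 4, 1, 3, 5]
```

No register values move: only the order in which the hardware-loop visits
the element indices changes.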
Note that:

* Over-running the register file clearly has to be detected and
  an illegal instruction exception thrown.
* When non-default elwidths are set, the exact same algorithm still
  applies (i.e. it offsets elements *within* registers rather than
  entire registers).
* If permute option 000 is utilised, the actual order of the
  reindexing does not change!
* If two or more dimensions are set to zero, the actual order does not change!
* The above algorithm is pseudo-code **only**. Actual implementations
  will need to take into account the fact that the element for-looping
  must be **re-entrant**, due to the possibility of exceptions occurring.
  See MSTATE CSR, which records the current element index.
* Twin-predicated operations require **two** separate and distinct
  element offsets. The above pseudo-code algorithm will be applied
  separately and independently to each, should each of the two
  operands be remapped. *This even includes C.LDSP* and other operations
  in that category, where in that case it will be the **offset** that is
  remapped (see Compressed Stack LOAD/STORE section).
* Offset is especially useful, on its own, for accessing elements
  within the middle of a register. Without offsets, it is necessary
  either to use a predicated MV, skipping the first elements, or
  to perform a LOAD/STORE cycle to memory.
  With offsets, the data does not have to be moved.
* Setting the total elements (xdim+1) times (ydim+1) times (zdim+1) to
  less than MVL is **perfectly legal**, albeit very obscure. It permits
  entries to be regularly presented to operands **more than once**, thus
  allowing the same underlying registers to act as an accumulator of
  multiple vector or matrix operations, for example.

Clearly some considerable care needs to be taken here, as the remapping
could hypothetically create arithmetic operations that target the
exact same underlying registers, resulting in data corruption due to
pipeline overlaps. Out-of-order / Superscalar micro-architectures with
register-renaming will have an easier time dealing with this than
DSP-style SIMD micro-architectures.

# Instruction Execution Order

Simple-V behaves as if it is a hardware-level "macro expansion system",
substituting and expanding a single instruction into multiple sequential
instructions with contiguous and sequentially-incrementing registers.
As such, it does **not** modify - or specify - the behaviour and semantics of
the execution order: that may be deduced from the **existing** RV
specification in each and every case.

So for example if a particular micro-architecture permits out-of-order
execution, and it is augmented with Simple-V, then wherever instructions
may be out-of-order then so may the "post-expansion" SV ones.

If on the other hand there are memory guarantees which specifically
prevent and prohibit certain instructions from being re-ordered
(such as the Atomicity Axiom, or FENCE constraints), then clearly
those constraints **MUST** also be obeyed "post-expansion".

It should be absolutely clear that SV is **not** about providing new
functionality or changing the existing behaviour of a micro-architectural
design, or about changing the RISC-V Specification.
It is **purely** about compacting what would otherwise be contiguous
instructions that use sequentially-increasing register numbers down
to the **one** instruction.

# Instructions <a name="instructions" />

Despite being a 98% complete and accurate topological remap of RVV
concepts and functionality, no new instructions are needed.
Compared to RVV: *all* RVV instructions can be re-mapped, however xBitManip
becomes a critical dependency for efficient manipulation of predication
masks (as a bit-field). Despite the removal of all operations,
with the exception of CLIP and VSELECT.X
*all instructions from RVV Base are topologically re-mapped and retain their
complete functionality, intact*. Note that if RV64G ever had
a MV.X added as well as FCLIP, the full functionality of RVV-Base would
be obtained in SV.

Three instructions, VSELECT, VCLIP and VCLIPI, do not have RV Standard
equivalents, so are left out of Simple-V. VSELECT could be included if
there existed a MV.X instruction in RV (MV.X is a hypothetical
non-immediate variant of MV that would allow another register to
specify which register was to be copied). Note that if any of these three
instructions are added to any given RV extension, their functionality
will be inherently parallelised.

With some exceptions, where it does not make sense or is simply too
challenging, all RV-Base instructions are parallelised:

* CSR instructions are the fundamental core basis of SV. Whilst a case
  could be made for fast-polling of a CSR into multiple registers, or
  for being able to copy multiple contiguously addressed CSRs into
  contiguous registers, and so on, extreme care would need to be taken
  if they were parallelised. Additionally, CSR reads are done
  using x0, and it is *really* inadvisable to tag x0.
* LUI, C.J, C.JR, WFI and AUIPC are not suitable for parallelising, so are
  left as scalar.
* LR/SC could hypothetically be parallelised, however their purpose is
  single (complex) atomic memory operations where the LR must be followed
  up by a matching SC. A sequence of parallel LR instructions followed
  by a sequence of parallel SC instructions is therefore guaranteed
  not to be useful. Not least: the guarantees of a Multi-LR/SC
  would be impossible to provide if emulated in a trap.
* EBREAK, NOP, FENCE and others do not use registers, so are not inherently
  parallelisable anyway.

All other operations using registers are automatically parallelised.
This includes AMOMAX, AMOSWAP and so on, where particular care and
attention must be paid.

Example pseudo-code for an integer ADD operation (including scalar
operations). Floating-point uses the FP CSR tables.

    function op_add(rd, rs1, rs2) # add not VADD!
      int i, id=0, irs1=0, irs2=0;
      predval = get_pred_val(FALSE, rd);
      rd  = int_vec[rd ].isvector ? int_vec[rd ].regidx : rd;
      rs1 = int_vec[rs1].isvector ? int_vec[rs1].regidx : rs1;
      rs2 = int_vec[rs2].isvector ? int_vec[rs2].regidx : rs2;
      for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
           ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
           if (!int_vec[rd ].isvector) break;
        if (int_vec[rd ].isvector)  { id += 1; }
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }

Note that for simplicity there is quite a lot missing from the above
pseudo-code: element widths, zeroing on predication, dimensional
reshaping and offsets and so on. However it demonstrates the basic
principle. Augmentations that produce the full pseudo-code are covered in
other sections.

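The pseudo-code above can be run as an executable model. The sketch below is
a hedged illustration only: the 8-entry register file, the hand-built
`int_vec` table and the `lookup` helper are invented for this example, and
predication, zeroing and element widths are omitted, exactly as in the
pseudo-code.

```python
VL = 3
ireg = [0, 10, 20, 30, 1, 2, 3, 0]          # toy register file
int_vec = {1: dict(isvector=True, regidx=1),  # x1 tagged as a vector
           4: dict(isvector=True, regidx=4)}  # x4 tagged as a vector

def lookup(r):
    "Return (real regidx, isvector) for a register, defaulting to scalar."
    e = int_vec.get(r)
    return (e["regidx"], e["isvector"]) if e else (r, False)

def op_add(rd, rs1, rs2):
    rd, dvec = lookup(rd)
    rs1, s1vec = lookup(rs1)
    rs2, s2vec = lookup(rs2)
    id = irs1 = irs2 = 0
    for _ in range(VL):
        ireg[rd + id] = ireg[rs1 + irs1] + ireg[rs2 + irs2]
        if not dvec:          # scalar destination: standard RV behaviour
            break
        id += 1
        if s1vec: irs1 += 1
        if s2vec: irs2 += 1

op_add(1, 1, 4)   # vector-vector: x1..x3 += x4..x6
```

After the call, `ireg[1:4]` holds `[11, 22, 33]`; calling `op_add` with all
three registers untagged degenerates to a single scalar add, which is the
"no new instructions" property in action.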
## Instruction Format

It is critical to appreciate that there are
**no operations added to SV, at all**.

Instead, by using CSRs to tag registers as an indication of "changed
behaviour", SV *overloads* pre-existing branch operations into predicated
variants, and implicitly overloads arithmetic operations, MV,
FCVT, and LOAD/STORE depending on CSR configurations for bitwidth
and predication. **Everything** becomes parallelised. *This includes
Compressed instructions* as well as any future instructions and Custom
Extensions.

Note: using CSR tags to change the behaviour of instructions is nothing
new, including in RISC-V. UXL, SXL and MXL change the behaviour so that
XLEN=32/64/128. FRM changes the behaviour of the floating-point unit, to
alter the rounding mode. Other architectures change the LOAD/STORE
byte-order from big-endian to little-endian on a per-instruction basis.
SV is just a little more... comprehensive in its effect on instructions.

## Branch Instructions

### Standard Branch <a name="standard_branch"></a>

Branch operations use standard RV opcodes that are reinterpreted to
be "predicate variants" in the instance where either of the two src
registers is marked as a vector (active=1, vector=1).

Note that the predication register to use (if one is enabled) is taken from
the *first* src register, and that this is used, just as with predicated
arithmetic operations, to mask whether the comparison operations take
place or not. The target (destination) predication register
to use (if one is enabled) is taken from the *second* src register.

If either of src1 or src2 is scalar (whether by there being no
CSR register entry or by the CSR entry specifically marking
the register as "scalar") the comparison goes ahead as vector-scalar
or scalar-vector.

In instances where no vectorisation is detected on either src register
the operation is treated as an absolutely standard scalar branch operation.
Where vectorisation is present on either or both src registers, the
branch may still go ahead if and only if *all* tests succeed (i.e. excluding
those tests that are predicated out).

Note that when zero-predication is enabled (from source rs1),
a cleared bit in the predicate indicates that the result
of the compare is set to "false", i.e. that the corresponding
destination bit (or result) be set to zero. Contrast this with
when zeroing is not set: bits in the destination predicate are
only *set*; they are **not** cleared. This is important to appreciate,
as there may be an expectation that, going into the hardware-loop,
the destination predicate is always expected to be set to zero:
this is **not** the case. The destination predicate is only set
to zero if **zeroing** is enabled.

Note that just as with the standard (scalar, non-predicated) branch
operations, BLE, BGT, BLEU and BGTU may be synthesised by inverting
src1 and src2.

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    s1 = reg_is_vectorised(src1);
    s2 = reg_is_vectorised(src2);

    if not s1 && not s2
        if cmp(rs1, rs2) # scalar compare
            goto branch
        return

    preg = int_pred_reg[rd]
    reg = int_regfile

    ps = get_pred_val(I/F==INT, rs1);
    rd = get_pred_val(I/F==INT, rs2); # this may not exist

    if not exists(rd) or zeroing:
        result = 0
    else
        result = preg[rd]

    for (int i = 0; i < VL; ++i)
        if (zeroing)
            if not (ps & (1<<i))
                result &= ~(1<<i);
        else if (ps & (1<<i))
            if (cmp(s1 ? reg[src1+i] : reg[src1],
                    s2 ? reg[src2+i] : reg[src2]))
                result |= 1<<i;
            else
                result &= ~(1<<i);

    if not exists(rd)
        if result == ps
            goto branch
    else
        preg[rd] = result # store in destination
        if preg[rd] == ps
            goto branch

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
  into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
  Reordering"), setting Vector-Length times (number of SIMD elements) bits
  in Predicate Register rd, as opposed to just Vector-Length bits.
* The execution of "parallelised" instructions **must** be implemented
  as "re-entrant" (to use a term from software). If an exception (trap)
  occurs during the middle of a vectorised
  Branch (now a SV predicated compare) operation, the partial results
  of any comparisons must be written out to the destination
  register before the trap is permitted to begin. If however there
  is no predicate, the **entire** set of comparisons must be **restarted**,
  with the offset loop indices set back to zero. This is because
  there is no place to store the temporary result during the handling
  of traps.

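The core of the non-zeroing loop above can be modelled in a few lines. This
is a hedged sketch (the function name `predicated_cmp` and its flat argument
list are invented for illustration); it assumes both sources are already
vectorised, zeroing is disabled, and the destination predicate exists.

```python
def predicated_cmp(vl, ps, src1, src2, result, cmp):
    """Non-zeroing predicated compare: bits in the destination predicate
    `result` are only ever *set*, never cleared, when masked out by ps."""
    for i in range(vl):
        if ps & (1 << i):
            if cmp(src1[i], src2[i]):
                result |= 1 << i
    return result

# elements 0 and 2 match, element 1 does not -> predicate 0b101
r = predicated_cmp(3, 0b111, [1, 2, 3], [1, 0, 3], 0, lambda a, b: a == b)
```

Note that a pre-existing set bit in `result` survives even when its element
is masked out, which is exactly the "bits are only set, not cleared"
behaviour described above.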
TODO: predication now taken from src2. also branch goes ahead
if all compares are successful.

Note also that where normally predication requires that there must
also be a CSR register entry for the register being used in order
for the **predication** CSR register entry to also be active,
for branches this is **not** the case. src2 does **not** have
to have its CSR register entry marked as active in order for
predication on src2 to be active.

Also note: SV Branch operations are **not** twin-predicated
(see Twin Predication section). This would require three
element offsets: one to track src1, one to track src2 and a third
to track where to store the accumulation of the results. Given
that the element offsets need to be exposed via CSRs so that
the parallel hardware looping may be made re-entrant on traps
and exceptions, the decision was made not to make SV Branches
twin-predicated.

### Floating-point Comparisons

There are no floating-point branch operations, only compares.
Interestingly no change is needed to the instruction format, because
FP Compare already stores a 1 or a zero in its "rd" integer register
target, i.e. it's not actually a Branch at all: it's a compare.

In RV (scalar) Base, a branch on a floating-point compare is
done via the sequence "FEQ x1, f0, f5; BEQ x1, x0, #jumploc".
This does extend to SV, as long as x1 (in the example sequence given)
is vectorised. When that is the case, x1..x(1+VL-1) will also be
set to 0 or 1 depending on whether f0==f5, f1==f6, f2==f7 and so on.
The BEQ that follows will *also* compare x1==x0, x2==x0, x3==x0 and
so on. Consequently, unlike integer-branch, FP Compare needs no
modification in its behaviour.

In addition, it is noted that an entry "FNE" (the opposite of FEQ) is missing,
and whilst in ordinary branch code this is fine because the standard
RVF compare can always be followed up with an integer BEQ or a BNE (or
a compressed comparison to zero or non-zero), in predication terms that
becomes more of an impact. To deal with this, SV's predication has
had "invert" added to it.

Also: note that FP Compare may be predicated, using the destination
integer register (rd) to determine the predicate. FP Compare is **not**
a twin-predication operation, as, again, just as with SV Branches,
there are three registers involved: FP src1, FP src2 and INT rd.

### Compressed Branch Instruction

Compressed Branch instructions are, just like standard Branch instructions,
reinterpreted to be vectorised and predicated based on the source register
(rs1s) CSR entries. As however there is only the one source register,
given that c.beqz a0 is equivalent to beqz a0,x0, the optional target
to store the results of the comparisons is taken from CSR predication
table entries for **x0**.

The specific required use of x0 is, with a little thought, quite obvious,
but is counterintuitive. Clearly it is **not** recommended to redirect
x0 with a CSR register entry, however as a means to opaquely obtain
a predication target it is the only sensible option that does not involve
additional special CSRs (or, worse, additional special opcodes).

Note also that, just as with standard branches, the 2nd source
(in this case x0 rather than src2) does **not** have to have its CSR
register table marked as "active" in order for predication to work.

## Vectorised Dual-operand instructions

There is a series of 2-operand instructions involving copying (and
sometimes alteration):

* C.MV
* FMV, FNEG, FABS, FCVT, FSGNJ, FSGNJN and FSGNJX
* C.LWSP, C.SWSP, C.LDSP, C.FLWSP etc.
* LOAD(-FP) and STORE(-FP)

All of these operations follow the same two-operand pattern, so it is
*both* the source *and* destination predication masks that are taken into
account. This is different from
the three-operand arithmetic instructions, where the predication mask
is taken from the *destination* register, and applied uniformly to the
elements of the source register(s), element-for-element.

The pseudo-code pattern for twin-predicated operations is as
follows:

    function op(rd, rs):
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        reg[rd+j] = SCALAR_OPERATION_ON(reg[rs+i])
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

This pattern covers scalar-scalar, scalar-vector, vector-scalar
and vector-vector, and predicated variants of all of those.
Zeroing is not presently included (TODO). As such, when compared
to RVV, the twin-predicated variants of C.MV and FMV cover
**all** standard vector operations: VINSERT, VSPLAT, VREDUCE,
VEXTRACT, VSCATTER, VGATHER, VCOPY, and more.

Note that:

* elwidth (SIMD) is not covered in the pseudo-code above
* ending the loop early in scalar cases (VINSERT, VEXTRACT) is also
  not covered
* zero predication is also not shown (TODO).

### C.MV Instruction <a name="c_mv"></a>

There is no MV instruction in RV, however there is a C.MV instruction.
It is used for copying integer-to-integer registers (vectorised FMV
is used for copying floating-point).

If either the source or the destination register is marked as a vector,
C.MV is reinterpreted to be a vectorised (multi-register) predicated
move operation. The actual instruction's format does not change:

[[!table data="""
15 12 | 11 7 | 6 2 | 1 0 |
funct4 | rd | rs | op |
4 | 5 | 5 | 2 |
C.MV | dest | src | C0 |
"""]]

A simplified version of the pseudocode for this operation is as follows:

    function op_mv(rd, rs) # MV not VMV!
      rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        ireg[rd+j] <= ireg[rs+i];
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++; else break

There are several different instructions from RVV that are covered by
this one opcode:

[[!table data="""
src | dest | predication | op |
scalar | vector | none | VSPLAT |
scalar | vector | destination | sparse VSPLAT |
scalar | vector | 1-bit dest | VINSERT |
vector | scalar | 1-bit? src | VEXTRACT |
vector | vector | none | VCOPY |
vector | vector | src | Vector Gather |
vector | vector | dest | Vector Scatter |
vector | vector | src & dest | Gather/Scatter |
vector | vector | src == dest | sparse VCOPY |
"""]]

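Two rows of the table can be exercised directly with an executable model of
the twin-predicated pattern. This is a hedged sketch: the function name
`twin_mv`, the flat argument list and the toy 8-register file are invented
for illustration (the real operation reads vector flags and predicates from
the CSR tables).

```python
def twin_mv(reg, vl, rs, rs_vec, ps, rd, rd_vec, pd):
    "Twin-predicated move, following the op_mv pseudo-code pattern."
    i = j = 0
    while i < vl and j < vl:
        if rs_vec:
            while not (ps & (1 << i)): i += 1  # skip masked-out src elements
        if rd_vec:
            while not (pd & (1 << j)): j += 1  # skip masked-out dest elements
        reg[rd + j] = reg[rs + i]
        if rs_vec: i += 1
        if rd_vec: j += 1
        else: break                            # scalar destination: one copy

# scalar src -> vector dest: VSPLAT (x0's value copied into x1..x3)
r = [7, 0, 0, 0, 0, 0, 0, 0]
twin_mv(r, 3, 0, False, 0b111, 1, True, 0b111)

# vector src -> scalar dest with a 1-bit src predicate: VEXTRACT
r2 = [10, 11, 12, 0, 0, 0, 0, 0]
twin_mv(r2, 3, 0, True, 0b100, 4, True, 0b1)
```

The same single loop produces VSPLAT when only the destination is a vector,
and VEXTRACT when a one-bit source predicate selects the element to pull out.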
Also, VMERGE may be implemented as back-to-back (macro-op fused) C.MV
operations with inversion on the src and dest predication for one of the
two C.MV operations.

Note that in the instance where the Compressed Extension is not implemented,
MV may be used, but that is a pseudo-operation mapping to addi rd, rs, 0.
Note that the behaviour is **different** from C.MV because with addi the
predication mask to use is taken **only** from rd and is applied against
all elements: rd[i] = rs[i].

### FMV, FNEG and FABS Instructions

These are identical in form to C.MV, except covering floating-point
register copying. The same twin-predication rules also apply.
However when elwidth is not set to default, the instruction is implicitly
and automatically converted to a (vectorised) floating-point type conversion
operation of the appropriate size covering the source and destination
register bitwidths.

(Note that FMV, FNEG and FABS are all actually pseudo-instructions)

### FCVT Instructions

These are again identical in form to C.MV, except that they cover
floating-point to integer and integer to floating-point. When element
width in each vector is set to default, the instructions behave exactly
as they are defined for standard RV (scalar) operations, except vectorised
in exactly the same fashion as outlined in C.MV.

However when the source or destination element width is not set to default,
the opcode's explicit element widths are *over-ridden* to new definitions,
and the opcode's element width is taken as indicative of the SIMD width
(if applicable i.e. if packed SIMD is requested) instead.

For example FCVT.S.L would normally be used to convert a 64-bit
integer in register rs1 to a 64-bit floating-point number in rd.
If however the source rs1 is set to be a vector, where elwidth is set to
default/2 and "packed SIMD" is enabled, then the first 32 bits of
rs1 are converted to a floating-point number to be stored in rd's
first element, and the higher 32 bits *also* converted to floating-point
and stored in the second. The 32-bit size comes from the fact that
FCVT.S.L's integer width is 64 bit, and with elwidth on rs1 set to
divide that by two it means that rs1's element width is to be taken as 32.

Similar rules apply to the destination register.

## LOAD / STORE Instructions and LOAD-FP/STORE-FP <a name="load_store"></a>

An earlier draft of SV modified the behaviour of LOAD/STORE (modified
the interpretation of the instruction fields). This
actually undermined the fundamental principle of SV, namely that there
be no modifications to the scalar behaviour (except where absolutely
necessary), in order to simplify an implementor's task if considering
converting a pre-existing scalar design to support parallelism.

So the original RISC-V scalar LOAD/STORE and LOAD-FP/STORE-FP functionality
does not change in SV, however just as with C.MV it is important to note
that dual-predication is possible.

In vectorised architectures there are usually at least two different modes
for LOAD/STORE:

* Read (or write for STORE) from sequential locations, where one
  register specifies the address, and the one address is incremented
  by a fixed amount. This is usually known as "Unit Stride" mode.
* Read (or write) from multiple indirected addresses, where the
  vector elements each specify separate and distinct addresses.

To support these different addressing modes, the CSR Register "isvector"
bit is used. So, for a LOAD, when the src register is set to
scalar, the LOADs are sequentially incremented by the src register
element width, and when the src register is set to "vector", the
elements are treated as indirection addresses. Simplified
pseudo-code would look like this:

    function op_ld(rd, rs) # LD not VLD!
      rdv = int_csr[rd].active ? int_csr[rd].regidx : rd;
      rsv = int_csr[rs].active ? int_csr[rs].regidx : rs;
      ps = get_pred_val(FALSE, rs); # predication on src
      pd = get_pred_val(FALSE, rd); # ... AND on dest
      for (int i = 0, int j = 0; i < VL && j < VL;):
        if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
        if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
        if (int_csr[rs].isvec)
          # indirect mode (multi mode)
          srcbase = ireg[rsv+i];
        else
          # unit stride mode
          srcbase = ireg[rsv] + i * XLEN/8; # offset in bytes
        ireg[rdv+j] <= mem[srcbase + imm_offs];
        if (!int_csr[rs].isvec &&
            !int_csr[rd].isvec) break # scalar-scalar LD
        if (int_csr[rs].isvec) i++;
        if (int_csr[rd].isvec) j++;

Notes:

* For simplicity, zeroing and elwidth are not included in the above:
  the key focus here is the decision-making for srcbase; vectorised
  rs means use sequentially-numbered registers as the indirection
  address, and scalar rs is "offset" mode.
* The test towards the end for whether both source and destination are
  scalar is what makes the above pseudo-code provide the "standard" RV
  Base behaviour for LD operations.
* The offset in bytes (XLEN/8) changes depending on whether the
  operation is a LB (1 byte), LH (2 bytes), LW (4 bytes) or LD
  (8 bytes), and also whether the element width is over-ridden
  (see special element width section).

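The srcbase decision-making can be checked with a small executable model.
This is a hedged sketch (the function name `op_ld` reuses the pseudo-code's,
but the flat argument list, the toy memory dict and the fixed XLEN=64 are
invented for this example; predication and imm_offs handling are reduced to
a default of zero).

```python
XLEN = 64

def op_ld(ireg, mem, vl, rd, rd_vec, rs, rs_vec, imm_offs=0):
    "LD model: unit-stride when rs is scalar, indirect when rs is a vector."
    for k in range(vl):
        if rs_vec:
            srcbase = ireg[rs + k]               # indirect (multi) mode
        else:
            srcbase = ireg[rs] + k * XLEN // 8   # unit-stride mode
        ireg[rd + k] = mem[srcbase + imm_offs]
        if not rs_vec and not rd_vec:
            break                                # scalar-scalar: standard LD

# unit stride: x0 holds the base address, x1..x3 receive the loads
ireg = [0x100, 0, 0, 0]
mem = {0x100: 1, 0x108: 2, 0x110: 3}
op_ld(ireg, mem, 3, 1, True, 0, False)

# indirect: x2..x3 each hold a distinct address
ireg2 = [0, 0, 0x200, 0x300]
mem2 = {0x200: 7, 0x300: 8}
op_ld(ireg2, mem2, 2, 0, True, 2, True)
```

The early `break` when both operands are scalar is what reproduces the
standard RV Base LD behaviour from the same loop.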
## Compressed Stack LOAD / STORE Instructions <a name="c_ld_st"></a>

C.LWSP / C.SWSP and floating-point etc. are also source-dest twin-predicated,
where it is implicit in C.LWSP/FLWSP etc. that x2 is the source register.
It is therefore possible to use predicated C.LWSP to efficiently
pop registers off the stack (by predicating x2 as the source), cherry-picking
which registers to store to (by predicating the destination). Likewise
for C.SWSP. In this way, LOAD/STORE-Multiple is efficiently achieved.

The two modes ("unit stride" and multi-indirection) are still supported,
as with standard LD/ST. Essentially, the only difference is that the
use of x2 is hard-coded into the instruction.

**Note**: it is still possible to redirect x2 to an alternative target
register. With care, this allows C.LWSP / C.SWSP (and C.FLWSP) to be used as
general-purpose LOAD/STORE operations.

## Compressed LOAD / STORE Instructions

Compressed LOAD and STORE are again exactly the same as scalar LOAD/STORE,
where the same rules and the same pseudo-code apply as for
non-compressed LOAD/STORE. Again: setting scalar or vector mode
on the src for LOAD and dest for STORE switches mode from "Unit Stride"
to "Multi-indirection", respectively.

# Element bitwidth polymorphism <a name="elwidth"></a>

Element bitwidth is best covered as its own special section, as it
is quite involved and applies uniformly across-the-board. SV restricts
bitwidth polymorphism to default, 8-bit, 16-bit and 32-bit.

The effect of setting an element bitwidth is to re-cast each entry
in the register table, and for all memory operations involving
load/stores of certain specific sizes, to a completely different width.
Thus, in C-style terms, on an RV64 architecture, each register
effectively now looks like this:

    typedef union {
        uint8_t  b[8];
        uint16_t s[4];
        uint32_t i[2];
        uint64_t l[1];
    } reg_t;

    // integer table: assume maximum SV 7-bit regfile size
    reg_t int_regfile[128];

where the CSR Register table entry (not the instruction alone) determines
which of those union entries is to be used on each operation, and the
VL element offset in the hardware-loop specifies the index into each array.

However, a naive interpretation of the data structure above masks the
fact that, when for example VL is set greater than 8 and the bitwidth
is 8, accesses to one specific register "spill over" sequentially into
the following parts of the register file. A much more accurate way
to reflect this is therefore:
1399
1400 typedef union {
1401 uint8_t actual_bytes[8]; // 8 for RV64, 4 for RV32, 16 for RV128
1402 uint8_t b[0]; // array of type uint8_t
1403 uint16_t s[0];
1404 uint32_t i[0];
1405 uint64_t l[0];
1406 uint128_t d[0];
1407 } reg_t;
1408
1409 reg_t int_regfile[128];
1410
where, when accessing any individual regfile[n].b entry, it is permitted
(in C) to arbitrarily over-run the *declared* length of the array (zero),
and thus "overspill" into consecutive register file entries, in a fashion
that is completely transparent to a greatly-simplified software / pseudo-code
representation.
It is however critical to note that it is clearly the responsibility of
the implementor to ensure that an exception is thrown if any attempt is
made to access beyond the "real" register bytes towards the end of the
register file.
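
This overspill behaviour may be modelled in a few lines of Python (an
illustrative sketch only: the helper names are invented here, and a flat
byte array stands in for the regfile):

```python
# Model the integer register file as one flat byte array:
# 128 registers (maximum SV 7-bit regfile) x 8 bytes each (RV64).
XLEN_BYTES = 8
NREGS = 128
int_regfile = bytearray(NREGS * XLEN_BYTES)

def read_elem(reg, elwidth_bytes, offset):
    """Read element `offset` of width `elwidth_bytes`, starting at
    register `reg`.  Accesses past the end of `reg` simply "overspill"
    into the following registers."""
    addr = reg * XLEN_BYTES + offset * elwidth_bytes
    assert addr + elwidth_bytes <= len(int_regfile), "beyond real regfile"
    return int.from_bytes(int_regfile[addr:addr + elwidth_bytes], "little")

def write_elem(reg, elwidth_bytes, offset, val):
    addr = reg * XLEN_BYTES + offset * elwidth_bytes
    assert addr + elwidth_bytes <= len(int_regfile), "beyond real regfile"
    int_regfile[addr:addr + elwidth_bytes] = val.to_bytes(elwidth_bytes,
                                                          "little")

# With elwidth=8, element 9 of x5 lives in the second byte of x6:
write_elem(5, 1, 9, 0xAB)
assert read_elem(6, 1, 1) == 0xAB
```

The asserts on the regfile bounds correspond to the exception that a
hardware implementation must raise at the end of the register file.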
1420
Now we may modify the pseudo-code for an operation where all element
bitwidths have been set to the same size; this pseudo-code is otherwise
identical to its "non"-polymorphic versions (above):
1424
    function op_add(rd, rs1, rs2) # add not VADD!
    ...
    ...
        for (i = 0; i < VL; i++)
    ...
    ...
            // TODO, calculate if over-run occurs, for each elwidth
            if (elwidth == 8) {
                int_regfile[rd].b[id] <= int_regfile[rs1].b[irs1] +
                                         int_regfile[rs2].b[irs2];
            } else if elwidth == 16 {
                int_regfile[rd].s[id] <= int_regfile[rs1].s[irs1] +
                                         int_regfile[rs2].s[irs2];
            } else if elwidth == 32 {
                int_regfile[rd].i[id] <= int_regfile[rs1].i[irs1] +
                                         int_regfile[rs2].i[irs2];
            } else { // elwidth == 64
                int_regfile[rd].l[id] <= int_regfile[rs1].l[irs1] +
                                         int_regfile[rs2].l[irs2];
            }
    ...
    ...
1447
1448 So here we can see clearly: for 8-bit entries rd, rs1 and rs2 (and registers
1449 following sequentially on respectively from the same) are "type-cast"
1450 to 8-bit; for 16-bit entries likewise and so on.
1451
However, that only covers the case where the element widths are the same.
Where the element widths differ, the following algorithm applies:
1454
1455 * Analyse the bitwidth of all source operands and work out the
1456 maximum. Record this as "maxsrcbitwidth"
1457 * If any given source operand requires sign-extension or zero-extension
1458 (ldb, div, rem, mul, sll, srl, sra etc.), instead of mandatory 32-bit
1459 sign-extension / zero-extension or whatever is specified in the standard
1460 RV specification, **change** that to sign-extending from the respective
1461 individual source operand's bitwidth from the CSR table out to
1462 "maxsrcbitwidth" (previously calculated), instead.
1463 * Following separate and distinct (optional) sign/zero-extension of all
1464 source operands as specifically required for that operation, carry out the
1465 operation at "maxsrcbitwidth". (Note that in the case of LOAD/STORE or MV
this may be a "null" (copy) operation, and that with FCVT, the changes
to the source and destination bitwidths may also turn FCVT effectively
into a copy).
* If the destination operand requires sign-extension or zero-extension,
instead of a mandatory fixed size (typically 32-bit for arithmetic,
for subw for example, and otherwise various: 8-bit for sb, 16-bit for
sh, 32-bit for sw etc.), overload the RV specification with the bitwidth
from the destination register's elwidth entry.
1474 * Finally, store the (optionally) sign/zero-extended value into its
1475 destination: memory for sb/sw etc., or an offset section of the register
1476 file for an arithmetic operation.
1477
1478 In this way, polymorphic bitwidths are achieved without requiring a
1479 massive 64-way permutation of calculations **per opcode**, for example
1480 (4 possible rs1 bitwidths times 4 possible rs2 bitwidths times 4 possible
1481 rd bitwidths). The pseudo-code is therefore as follows:
1482
1483 typedef union {
1484 uint8_t b;
1485 uint16_t s;
1486 uint32_t i;
1487 uint64_t l;
1488 } el_reg_t;
1489
1490 bw(elwidth):
1491 if elwidth == 0:
1492 return xlen
1493 if elwidth == 1:
1494 return xlen / 2
1495 if elwidth == 2:
1496 return xlen * 2
1497 // elwidth == 3:
1498 return 8
1499
1500 get_max_elwidth(rs1, rs2):
1501 return max(bw(int_csr[rs1].elwidth), # default (XLEN) if not set
1502 bw(int_csr[rs2].elwidth)) # again XLEN if no entry
1503
    get_polymorphed_reg(reg, bitwidth, offset):
        el_reg_t res;
        res.l = 0; // TODO: going to need sign-extending / zero-extending
        if bitwidth == 8:
            res.b = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res.s = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res.i = int_regfile[reg].i[offset]
        elif bitwidth == 64:
            res.l = int_regfile[reg].l[offset]
        return res
1516
1517 set_polymorphed_reg(reg, bitwidth, offset, val):
1518 if (!int_csr[reg].isvec):
1519 # sign/zero-extend depending on opcode requirements, from
1520 # the reg's bitwidth out to the full bitwidth of the regfile
1521 val = sign_or_zero_extend(val, bitwidth, xlen)
1522 int_regfile[reg].l[0] = val
1523 elif bitwidth == 8:
1524 int_regfile[reg].b[offset] = val
1525 elif bitwidth == 16:
1526 int_regfile[reg].s[offset] = val
1527 elif bitwidth == 32:
1528 int_regfile[reg].i[offset] = val
1529 elif bitwidth == 64:
1530 int_regfile[reg].l[offset] = val
1531
    maxsrcwid = get_max_elwidth(rs1, rs2)  # source element width(s)
    destwid = bw(int_csr[rd].elwidth)      # destination element width
    for (i = 0; i < VL; i++)
        if (predval & 1<<i) # predication uses intregs
1536 // TODO, calculate if over-run occurs, for each elwidth
1537 src1 = get_polymorphed_reg(rs1, maxsrcwid, irs1)
1538 // TODO, sign/zero-extend src1 and src2 as operation requires
1539 if (op_requires_sign_extend_src1)
1540 src1 = sign_extend(src1, maxsrcwid)
1541 src2 = get_polymorphed_reg(rs2, maxsrcwid, irs2)
1542 result = src1 + src2 # actual add here
1543 // TODO, sign/zero-extend result, as operation requires
1544 if (op_requires_sign_extend_dest)
1545 result = sign_extend(result, maxsrcwid)
1546 set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_csr[rd].isvec) break
            if (int_csr[rd ].isvec)  { ird += 1; }
            if (int_csr[rs1].isvec)  { irs1 += 1; }
            if (int_csr[rs2].isvec)  { irs2 += 1; }
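
A runnable distillation of the above loop, in Python (illustrative
sketch only: predication, mixed element widths and the CSR table are
omitted, and all names — `get_reg`, `set_reg`, `sv_add` — are invented):

```python
# Polymorphic add over a byte-addressable register file: sources are
# read at the element width, the sum is truncated at that width on
# store (RV64: 32 registers of 8 bytes each).
XLEN = 64
regfile = bytearray(32 * (XLEN // 8))

def get_reg(reg, bitwidth, offset):
    nbytes = bitwidth // 8
    addr = reg * (XLEN // 8) + offset * nbytes
    return int.from_bytes(regfile[addr:addr + nbytes], "little")

def set_reg(reg, bitwidth, offset, val):
    nbytes = bitwidth // 8
    addr = reg * (XLEN // 8) + offset * nbytes
    val &= (1 << bitwidth) - 1           # truncate at the element width
    regfile[addr:addr + nbytes] = val.to_bytes(nbytes, "little")

def sv_add(rd, rs1, rs2, elwidth, VL):
    """All three operands share `elwidth` here, for brevity."""
    for i in range(VL):
        set_reg(rd, elwidth, i,
                get_reg(rs1, elwidth, i) + get_reg(rs2, elwidth, i))

# Four 8-bit adds in "one instruction"; note 250+10 wraps to 4:
for i, (a, b) in enumerate([(1, 2), (250, 10), (3, 4), (5, 6)]):
    set_reg(10, 8, i, a); set_reg(11, 8, i, b)
sv_add(12, 10, 11, 8, 4)
assert [get_reg(12, 8, i) for i in range(4)] == [3, 4, 7, 11]
```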
1551
Whilst the specific sign-extension and zero-extension pseudocode call
details are left out, due to each operation being different, the above
should make clear that:
1555
1556 * the source operands are extended out to the maximum bitwidth of all
1557 source operands
1558 * the operation takes place at that maximum source bitwidth (the
1559 destination bitwidth is not involved at this point, at all)
1560 * the result is extended (or potentially even, truncated) before being
1561 stored in the destination. i.e. truncation (if required) to the
1562 destination width occurs **after** the operation **not** before.
1563 * when the destination is not marked as "vectorised", the **full**
1564 (standard, scalar) register file entry is taken up, i.e. the
1565 element is either sign-extended or zero-extended to cover the
1566 full register bitwidth (XLEN) if it is not already XLEN bits long.
1567
1568 Implementors are entirely free to optimise the above, particularly
1569 if it is specifically known that any given operation will complete
1570 accurately in less bits, as long as the results produced are
1571 directly equivalent and equal, for all inputs and all outputs,
1572 to those produced by the above algorithm.
1573
1574 ## Polymorphic floating-point operation exceptions and error-handling
1575
For floating-point operations, conversion takes place without
raising any kind of exception. Exactly as specified in the standard
RV specification, NaN (or an appropriate value) is stored if the result
is beyond the range of the destination, and, again exactly as with
standard scalar operations, the floating-point flag is raised (FCSR).
And, again, just as with scalar operations, it is software's
responsibility to check this flag.
Given that the FCSR flags are "accrued", the fact that multiple element
operations could have occurred is not a problem.
1585
1586 Note that it is perfectly legitimate for floating-point bitwidths of
1587 only 8 to be specified. However whilst it is possible to apply IEEE 754
1588 principles, no actual standard yet exists. Implementors wishing to
1589 provide hardware-level 8-bit support rather than throw a trap to emulate
1590 in software should contact the author of this specification before
1591 proceeding.
1592
1593 ## Polymorphic shift operators
1594
1595 A special note is needed for changing the element width of left and right
1596 shift operators, particularly right-shift. Even for standard RV base,
1597 in order for correct results to be returned, the second operand RS2 must
1598 be truncated to be within the range of RS1's bitwidth. spike's implementation
1599 of sll for example is as follows:
1600
1601 WRITE_RD(sext_xlen(zext_xlen(RS1) << (RS2 & (xlen-1))));
1602
1603 which means: where XLEN is 32 (for RV32), restrict RS2 to cover the
1604 range 0..31 so that RS1 will only be left-shifted by the amount that
1605 is possible to fit into a 32-bit register. Whilst this appears not
1606 to matter for hardware, it matters greatly in software implementations,
1607 and it also matters where an RV64 system is set to "RV32" mode, such
1608 that the underlying registers RS1 and RS2 comprise 64 hardware bits
1609 each.
1610
1611 For SV, where each operand's element bitwidth may be over-ridden, the
1612 rule about determining the operation's bitwidth *still applies*, being
1613 defined as the maximum bitwidth of RS1 and RS2. *However*, this rule
1614 **also applies to the truncation of RS2**. In other words, *after*
1615 determining the maximum bitwidth, RS2's range must **also be truncated**
1616 to ensure a correct answer. Example:
1617
1618 * RS1 is over-ridden to a 16-bit width
1619 * RS2 is over-ridden to an 8-bit width
1620 * RD is over-ridden to a 64-bit width
* the maximum bitwidth is thus determined to be 16-bit: max(8, 16)
1622 * RS2 is **truncated to a range of values from 0 to 15**: RS2 & (16-1)
1623
1624 Pseudocode (in spike) for this example would therefore be:
1625
1626 WRITE_RD(sext_xlen(zext_16bit(RS1) << (RS2 & (16-1))));
1627
This example illustrates that considerable care therefore needs to be
taken to ensure that left and right shift operations are implemented
correctly. The key points are:

* The operation bitwidth is determined by the maximum bitwidth
of the *source registers*, **not** the destination register bitwidth.
* The result is then sign-extended (or truncated) as appropriate.
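
The rule can be captured in a short Python sketch (illustrative only:
the function name `sv_sll` is invented, and sign-extension of the
sources is omitted for brevity):

```python
# Polymorphic shift-left: the shift amount is masked by the
# *operation* width (max of the source element widths), not XLEN,
# and the result is then truncated to fit the destination width.
def sv_sll(rs1_val, rs2_val, rs1_w, rs2_w, rd_w):
    opwidth = max(rs1_w, rs2_w)          # e.g. max(16, 8) == 16
    shamt = rs2_val & (opwidth - 1)      # RS2 truncated to 0..opwidth-1
    result = (rs1_val << shamt) & ((1 << opwidth) - 1)
    return result & ((1 << rd_w) - 1)    # truncate into RD if narrower

# RS1 16-bit, RS2 8-bit, RD 64-bit: a shift amount of 20 is first
# masked to 20 & (16-1) == 4, exactly as in the worked example above.
assert sv_sll(0x0001, 20, 16, 8, 64) == 0x0010
```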
1635
1636 ## Polymorphic MULH/MULHU/MULHSU
1637
1638 MULH is designed to take the top half MSBs of a multiply that
1639 does not fit within the range of the source operands, such that
1640 smaller width operations may produce a full double-width multiply
1641 in two cycles. The issue is: SV allows the source operands to
1642 have variable bitwidth.
1643
1644 Here again special attention has to be paid to the rules regarding
1645 bitwidth, which, again, are that the operation is performed at
1646 the maximum bitwidth of the **source** registers. Therefore:
1647
1648 * An 8-bit x 8-bit multiply will create a 16-bit result that must
1649 be shifted down by 8 bits
1650 * A 16-bit x 8-bit multiply will create a 24-bit result that must
1651 be shifted down by 16 bits (top 8 bits being zero)
1652 * A 16-bit x 16-bit multiply will create a 32-bit result that must
1653 be shifted down by 16 bits
1654 * A 32-bit x 16-bit multiply will create a 48-bit result that must
1655 be shifted down by 32 bits
1656 * A 32-bit x 8-bit multiply will create a 40-bit result that must
1657 be shifted down by 32 bits
1658
1659 So again, just as with shift-left and shift-right, the result
1660 is shifted down by the maximum of the two source register bitwidths.
1661 And, exactly again, truncation or sign-extension is performed on the
1662 result. If sign-extension is to be carried out, it is performed
1663 from the same maximum of the two source register bitwidths out
1664 to the result element's bitwidth.
1665
1666 If truncation occurs, i.e. the top MSBs of the result are lost,
1667 this is "Officially Not Our Problem", i.e. it is assumed that the
1668 programmer actually desires the result to be truncated. i.e. if the
1669 programmer wanted all of the bits, they would have set the destination
1670 elwidth to accommodate them.
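
A Python sketch of the unsigned case (the name `sv_mulhu` is invented,
and only unsigned handling is shown; MULH/MULHSU would additionally
sign-extend one or both sources):

```python
# Polymorphic MULHU: the product is computed at double the maximum
# source width, then shifted down by that maximum width to recover
# the "high half", and finally truncated to the destination elwidth.
def sv_mulhu(a, b, a_w, b_w, rd_w):
    opwidth = max(a_w, b_w)
    product = (a & ((1 << a_w) - 1)) * (b & ((1 << b_w) - 1))
    high = product >> opwidth            # e.g. 8x8 -> shift down by 8
    return high & ((1 << rd_w) - 1)      # truncate to the dest elwidth

# 8-bit x 8-bit: 0xFF * 0xFF == 0xFE01, high byte is 0xFE.
assert sv_mulhu(0xFF, 0xFF, 8, 8, 8) == 0xFE
# 16-bit x 8-bit: 24-bit result, shifted down by 16 (top 8 bits zero).
assert sv_mulhu(0xFFFF, 0xFF, 16, 8, 16) == 0x00FE
```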
1671
1672 ## Polymorphic elwidth on LOAD/STORE <a name="elwidth_loadstore"></a>
1673
1674 Polymorphic element widths in vectorised form means that the data
1675 being loaded (or stored) across multiple registers needs to be treated
1676 (reinterpreted) as a contiguous stream of elwidth-wide items, where
1677 the source register's element width is **independent** from the destination's.
1678
1679 This makes for a slightly more complex algorithm when using indirection
1680 on the "addressed" register (source for LOAD and destination for STORE),
1681 particularly given that the LOAD/STORE instruction provides important
1682 information about the width of the data to be reinterpreted.
1683
Let's illustrate the "load" part, where the pseudo-code for elwidth=default
was as follows, with i being the loop index from 0 to VL-1:
1686
1687 srcbase = ireg[rs+i];
1688 return mem[srcbase + imm]; // returns XLEN bits
1689
1690 Instead, when elwidth != default, for a LW (32-bit LOAD), elwidth-wide
1691 chunks are taken from the source memory location addressed by the current
1692 indexed source address register, and only when a full 32-bits-worth
1693 are taken will the index be moved on to the next contiguous source
1694 address register:
1695
1696 bitwidth = bw(elwidth); // source elwidth from CSR reg entry
1697 elsperblock = 32 / bitwidth // 1 if bw=32, 2 if bw=16, 4 if bw=8
1698 srcbase = ireg[rs+i/(elsperblock)]; // integer divide
1699 offs = i % elsperblock; // modulo
1700 return &mem[srcbase + imm + offs]; // re-cast to uint8_t*, uint16_t* etc.
1701
1702 Note that the constant "32" above is replaced by 8 for LB, 16 for LH, 64 for LD
1703 and 128 for LQ.
1704
1705 The principle is basically exactly the same as if the srcbase were pointing
1706 at the memory of the *register* file: memory is re-interpreted as containing
1707 groups of elwidth-wide discrete elements.
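
The indexing scheme can be sketched as follows (illustrative Python;
`src_index` is an invented helper returning the address-register step
and the intra-block element offset):

```python
# For a LOAD of width `opwidth` with source element width `bitwidth`,
# `elsperblock` elements are consumed per source address register
# before the index moves on to the next contiguous register.
def src_index(i, opwidth, bitwidth):
    elsperblock = max(1, opwidth // bitwidth)  # clamp: elwidth > opwidth
    return i // elsperblock, i % elsperblock   # (register step, offset)

# LW (32-bit), elwidth=8: elements 0..3 come via ireg[rs+0],
# element 4 via ireg[rs+1].
assert [src_index(i, 32, 8) for i in range(5)] == \
       [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0)]
# LB (8-bit), elwidth=16: elsperblock clamps to 1, one register per element.
assert src_index(3, 8, 16) == (3, 0)
```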
1708
When storing the result from a load, it's important to respect the fact
that the destination register has its *own separate element width*. Thus,
when each element is loaded (at the source element width), any sign-extension
or zero-extension (or truncation) needs to be done to the *destination*
bitwidth. The storing also follows the exact same analogous algorithm:
it is in fact just the set\_polymorphed\_reg pseudocode
(completely unchanged) from above.
1716
1717 One issue remains: when the source element width is **greater** than
1718 the width of the operation, it is obvious that a single LB for example
1719 cannot possibly obtain 16-bit-wide data. This condition may be detected
1720 where, when using integer divide, elsperblock (the width of the LOAD
1721 divided by the bitwidth of the element) is zero.
1722
The issue is "fixed" by ensuring that elsperblock is a minimum of 1:

    elsperblock = max(1, LD_OP_BITWIDTH / element_bitwidth)
1726
1727 The elements, if the element bitwidth is larger than the LD operation's
1728 size, will then be sign/zero-extended to the full LD operation size, as
1729 specified by the LOAD (LDU instead of LD, LBU instead of LB), before
1730 being passed on to the second phase.
1731
1732 As LOAD/STORE may be twin-predicated, it is important to note that
1733 the rules on twin predication still apply, except where in previous
1734 pseudo-code (elwidth=default for both source and target) it was
1735 the *registers* that the predication was applied to, it is now the
1736 **elements** that the predication is applied to.
1737
1738 Thus the full pseudocode for all LD operations may be written out
1739 as follows:
1740
1741 function LBU(rd, rs):
1742 load_elwidthed(rd, rs, 8, true)
1743 function LB(rd, rs):
1744 load_elwidthed(rd, rs, 8, false)
1745 function LH(rd, rs):
1746 load_elwidthed(rd, rs, 16, false)
1747 ...
1748 ...
1749 function LQ(rd, rs):
1750 load_elwidthed(rd, rs, 128, false)
1751
    # returns 1 byte of data when opwidth=8, 2 bytes when opwidth=16..
    function load_memory(rs, imm, i, opwidth):
        elwidth = int_csr[rs].elwidth
        bitwidth = bw(elwidth);
        elsperblock = max(1, opwidth / bitwidth)
        srcbase = ireg[rs+i/(elsperblock)];
        offs = i % elsperblock;
        return mem[srcbase + imm + offs]; # 1/2/4/8/16 bytes
1760
    function load_elwidthed(rd, rs, opwidth, unsigned):
        destwid = bw(int_csr[rd].elwidth)  # destination element width
        bitwidth = bw(int_csr[rs].elwidth) # source element width
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps = get_pred_val(FALSE, rs); # predication on src
        pd = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
          if (int_csr[rs].isvec) while (!(ps & 1<<i)) i++;
          if (int_csr[rd].isvec) while (!(pd & 1<<j)) j++;
          val = load_memory(rs, imm, i, opwidth)
          if unsigned:
              val = zero_extend(val, min(opwidth, bitwidth))
          else:
              val = sign_extend(val, min(opwidth, bitwidth))
          set_polymorphed_reg(rd, destwid, j, val)
          if (int_csr[rs].isvec) i++;
          if (int_csr[rd].isvec) j++; else break;
1778
1779 Note:
1780
1781 * when comparing against for example the twin-predicated c.mv
1782 pseudo-code, the pattern of independent incrementing of rd and rs
1783 is preserved unchanged.
1784 * just as with the c.mv pseudocode, zeroing is not included and must be
1785 taken into account (TODO).
1786 * that due to the use of a twin-predication algorithm, LOAD/STORE also
1787 take on the same VSPLAT, VINSERT, VREDUCE, VEXTRACT, VGATHER and
1788 VSCATTER characteristics.
1789 * that due to the use of the same set\_polymorphed\_reg pseudocode,
1790 a destination that is not vectorised (marked as scalar) will
1791 result in the element being fully sign-extended or zero-extended
1792 out to the full register file bitwidth (XLEN). When the source
1793 is also marked as scalar, this is how the compatibility with
1794 standard RV LOAD/STORE is preserved by this algorithm.
1795
1796 ### Example Tables showing LOAD elements
1797
1798 This section contains examples of vectorised LOAD operations, showing
1799 how the two stage process works (three if zero/sign-extension is included).
1800
1801
1802 #### Example: LD x8, x5(0), x8 CSR-elwidth=32, x5 CSR-elwidth=16, VL=7
1803
1804 This is:
1805
1806 * a 64-bit load, with an offset of zero
1807 * with a source-address elwidth of 16-bit
1808 * into a destination-register with an elwidth of 32-bit
1809 * where VL=7
1810 * from register x5 (actually x5-x6) to x8 (actually x8 to half of x11)
1811 * RV64, where XLEN=64 is assumed.
1812
First, the memory table. Because the element width is 16 and the
operation is LD (64-bit), the 64 bits loaded from memory are subdivided
into groups of **four** elements. And, with VL being 7 (deliberately,
to illustrate that this is reasonable and possible), the first four are
sourced from the offset addresses pointed to by x5, and the next three
from the offset addresses pointed to by the next contiguous register, x6:
1820
1821 [[!table data="""
1822 addr | byte 0 | byte 1 | byte 2 | byte 3 | byte 4 | byte 5 | byte 6 | byte 7 |
1823 @x5 | elem 0 || elem 1 || elem 2 || elem 3 ||
1824 @x6 | elem 4 || elem 5 || elem 6 || not loaded ||
1825 """]]
1826
1827 Next, the elements are zero-extended from 16-bit to 32-bit, as whilst
1828 the elwidth CSR entry for x5 is 16-bit, the destination elwidth on x8 is 32.
1829
1830 [[!table data="""
1831 byte 3 | byte 2 | byte 1 | byte 0 |
1832 0x0 | 0x0 | elem0 ||
1833 0x0 | 0x0 | elem1 ||
1834 0x0 | 0x0 | elem2 ||
1835 0x0 | 0x0 | elem3 ||
1836 0x0 | 0x0 | elem4 ||
1837 0x0 | 0x0 | elem5 ||
1838 0x0 | 0x0 | elem6 ||
1840 """]]
1841
1842 Lastly, the elements are stored in contiguous blocks, as if x8 was also
1843 byte-addressable "memory". That "memory" happens to cover registers
1844 x8, x9, x10 and x11, with the last 32 "bits" of x11 being **UNMODIFIED**:
1845
1846 [[!table data="""
1847 reg# | byte 7 | byte 6 | byte 5 | byte 4 | byte 3 | byte 2 | byte 1 | byte 0 |
1848 x8 | 0x0 | 0x0 | elem 1 || 0x0 | 0x0 | elem 0 ||
1849 x9 | 0x0 | 0x0 | elem 3 || 0x0 | 0x0 | elem 2 ||
1850 x10 | 0x0 | 0x0 | elem 5 || 0x0 | 0x0 | elem 4 ||
1851 x11 | **UNMODIFIED** |||| 0x0 | 0x0 | elem 6 ||
1852 """]]
1853
Thus we have data that is loaded from the **addresses** pointed to by
x5 and x6, zero-extended from 16-bit to 32-bit, and stored in the
**registers** x8 through to half of x11.
The end result is that elements 0 and 1 end up in x8, with element 1
being shifted up 32 bits, and so on, until finally element 6 is in the
LSBs of x11.
1860
1861 Note that whilst the memory addressing table is shown left-to-right byte order,
1862 the registers are shown in right-to-left (MSB) order. This does **not**
1863 imply that bit or byte-reversal is carried out: it's just easier to visualise
1864 memory as being contiguous bytes, and emphasises that registers are not
1865 really actually "memory" as such.
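
The tables above can be checked with a short simulation (illustrative
Python; the element values and the pre-existing contents of x11 are
invented for the purpose):

```python
# Seven 16-bit elements are loaded, zero-extended to 32-bit, and
# packed two-per-register into x8..x11; the top 32 bits of x11 are
# left untouched (non-destructive, as per the table).
elems = [0x1111, 0x2222, 0x3333, 0x4444, 0x5555, 0x6666, 0x7777]  # VL=7
regs = {8: 0, 9: 0, 10: 0, 11: 0xDEADBEEF00000000}  # x11 top half pre-set

for j, e in enumerate(elems):
    reg = 8 + j // 2                   # two 32-bit elements per 64-bit reg
    shift = 32 * (j % 2)
    regs[reg] &= ~(0xFFFFFFFF << shift)          # clear the target half
    regs[reg] |= (e & 0xFFFFFFFF) << shift       # zero-extended 16 -> 32

assert regs[8] == 0x0000222200001111             # elems 0 and 1
assert regs[11] == 0xDEADBEEF00007777            # top 32 bits unmodified
```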
1866
1867 ## Why SV bitwidth specification is restricted to 4 entries
1868
The four entries for SV element bitwidths allow only three over-rides:
1870
1871 * default bitwidth for a given operation *divided* by two
1872 * default bitwidth for a given operation *multiplied* by two
1873 * 8-bit
1874
1875 At first glance this seems completely inadequate: for example, RV64
1876 cannot possibly operate on 16-bit operations, because 64 divided by
1877 2 is 32. However, the reader may have forgotten that it is possible,
1878 at run-time, to switch a 64-bit application into 32-bit mode, by
1879 setting UXL. Once switched, opcodes that formerly had 64-bit
1880 meanings now have 32-bit meanings, and in this way, "default/2"
1881 now reaches **16-bit** where previously it meant "32-bit".
1882
There is however an absolutely crucial aspect of SV here that explicitly
needs spelling out: whether the "vectorised" bit is set in
the register's CSR entry.
1886
If "vectorised" is clear (not set), this indicates that the operation
is "scalar". Under these circumstances, on a destination (RD),
sign-extension and zero-extension, whilst adjusted to match the
override bitwidth (if set), will overwrite the **full** register entry
(64-bit if RV64).
1892
1893 When vectorised is *set*, this indicates that the operation now treats
1894 **elements** as if they were independent registers, so regardless of
1895 the length, any parts of a given actual register that are not involved
1896 in the operation are **NOT** modified, but are **PRESERVED**.
1897
SIMD micro-architectures may implement this by using predication on
any elements in a given actual register that are beyond the end of the
multi-element operation.
1901
1902 Example:
1903
1904 * rs1, rs2 and rd are all set to 8-bit
1905 * VL is set to 3
1906 * RV64 architecture is set (UXL=64)
1907 * add operation is carried out
1908 * bits 0-23 of RD are modified to be rs1[23..16] + rs2[23..16]
1909 concatenated with similar add operations on bits 15..8 and 7..0
1910 * bits 24 through 63 **remain as they originally were**.
1911
1912 Example SIMD micro-architectural implementation:
1913
1914 * SIMD architecture works out the nearest round number of elements
1915 that would fit into a full RV64 register (in this case: 8)
1916 * SIMD architecture creates a hidden predicate, binary 0b00000111
1917 i.e. the bottom 3 bits set (VL=3) and the top 5 bits clear
1918 * SIMD architecture goes ahead with the add operation as if it
1919 was a full 8-wide batch of 8 adds
1920 * SIMD architecture passes top 5 elements through the adders
1921 (which are "disabled" due to zero-bit predication)
* SIMD architecture gets the top 5 8-bit elements back unmodified
and stores them in rd.
1924
1925 This requires a read on rd, however this is required anyway in order
1926 to support non-zeroing mode.
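
A sketch of this micro-architectural trick (illustrative Python;
names invented, non-zeroing mode assumed):

```python
# Hidden-predicate construction: round VL up to the number of
# elements (lanes) that fit in one register, then set the bottom
# VL bits of the predicate.
def simd_hidden_predicate(VL, xlen, elwidth):
    lanes = xlen // elwidth            # 64 // 8 == 8 lanes
    assert VL <= lanes
    return (1 << VL) - 1               # bottom VL bits set

assert simd_hidden_predicate(3, 64, 8) == 0b00000111

def simd_add(rd, rs1, rs2, pred, lanes, elwidth):
    """Per-lane add; disabled lanes return rd's old value unmodified
    (this is the read on rd needed for non-zeroing mode)."""
    mask = (1 << elwidth) - 1
    out = 0
    for lane in range(lanes):
        sh = lane * elwidth
        if pred & (1 << lane):
            val = ((rs1 >> sh) + (rs2 >> sh)) & mask
        else:
            val = (rd >> sh) & mask    # predicated out: preserve
        out |= val << sh
    return out

# bits 24..63 of rd remain as they originally were:
assert simd_add(0xFFFFFFFFFFFFFFFF, 0x01010101, 0x01010101,
                0b00000111, 8, 8) == 0xFFFFFFFFFF020202
```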
1927
1928 ## Polymorphic floating-point
1929
1930 Standard scalar RV integer operations base the register width on XLEN,
1931 which may be changed (UXL in USTATUS, and the corresponding MXL and
1932 SXL in MSTATUS and SSTATUS respectively). Integer LOAD, STORE and
1933 arithmetic operations are therefore restricted to an active XLEN bits,
1934 with sign or zero extension to pad out the upper bits when XLEN has
1935 been dynamically set to less than the actual register size.
1936
1937 For scalar floating-point, the active (used / changed) bits are
1938 specified exclusively by the operation: ADD.S specifies an active
1939 32-bits, with the upper bits of the source registers needing to
1940 be all 1s ("NaN-boxed"), and the destination upper bits being
1941 *set* to all 1s (including on LOAD/STOREs).
1942
1943 Where elwidth is set to default (on any source or the destination)
1944 it is obvious that this NaN-boxing behaviour can and should be
1945 preserved. When elwidth is non-default things are less obvious,
1946 so need to be thought through. Here is a normal (scalar) sequence,
1947 assuming an RV64 which supports Quad (128-bit) FLEN:
1948
1949 * FLD loads 64-bit wide from memory. Top 64 MSBs are set to all 1s
1950 * ADD.D performs a 64-bit-wide add. Top 64 MSBs of destination set to 1s.
1951 * FSD stores lowest 64-bits from the 128-bit-wide register to memory:
1952 top 64 MSBs ignored.
1953
1954 Therefore it makes sense to mirror this behaviour when, for example,
1955 elwidth is set to 32. Assume elwidth set to 32 on all source and
1956 destination registers:
1957
1958 * FLD loads 64-bit wide from memory as **two** 32-bit single-precision
1959 floating-point numbers.
1960 * ADD.D performs **two** 32-bit-wide adds, storing one of the adds
1961 in bits 0-31 and the second in bits 32-63.
1962 * FSD stores lowest 64-bits from the 128-bit-wide register to memory
1963
1964 Here's the thing: it does not make sense to overwrite the top 64 MSBs
1965 of the registers either during the FLD **or** the ADD.D. The reason
1966 is that, effectively, the top 64 MSBs actually represent a completely
1967 independent 64-bit register, so overwriting it is not only gratuitous
1968 but may actually be harmful for a future extension to SV which may
1969 have a way to directly access those top 64 bits.
1970
1971 The decision is therefore **not** to touch the upper parts of floating-point
registers wherever elwidth is set to non-default values, including
1973 when "isvec" is false in a given register's CSR entry. Only when the
1974 elwidth is set to default **and** isvec is false will the standard
1975 RV behaviour be followed, namely that the upper bits be modified.
1976
1977 Ultimately if elwidth is default and isvec false on *all* source
1978 and destination registers, a SimpleV instruction defaults completely
1979 to standard RV scalar behaviour (this holds true for **all** operations,
1980 right across the board).
1981
The nice thing here is that ADD.S, ADD.D and ADD.Q are effectively all
the same when elwidth is set to a non-default value: they all still
perform multiple ADD operations, just at different widths. A future
extension to SimpleV may actually allow ADD.S to access the upper bits
of the register, effectively breaking down a 128-bit register into a
bank of 4 independently-accessible 32-bit registers.
1988
In the meantime, although when e.g. setting VL to 8 it would technically
make no difference to the ALU whether ADD.S, ADD.D or ADD.Q is used,
using ADD.Q may be an easy way to signal to the microarchitecture that
it is to receive a higher VL value. On a superscalar OoO architecture
there may be absolutely no difference; however, simpler SIMD-style
microarchitectures may not have the infrastructure in place to know
the difference, such that when VL=8 an ADD.D instruction completes in
2 cycles (or more), where an ADD.Q issued instead on such simpler
microarchitectures would complete in one.
1999
2000 ## Specific instruction walk-throughs
2001
2002 This section covers walk-throughs of the above-outlined procedure
2003 for converting standard RISC-V scalar arithmetic operations to
2004 polymorphic widths, to ensure that it is correct.
2005
2006 ### add
2007
2008 Standard Scalar RV32/RV64 (xlen):
2009
2010 * RS1 @ xlen bits
2011 * RS2 @ xlen bits
2012 * add @ xlen bits
2013 * RD @ xlen bits
2014
2015 Polymorphic variant:
2016
2017 * RS1 @ rs1 bits, zero-extended to max(rs1, rs2) bits
2018 * RS2 @ rs2 bits, zero-extended to max(rs1, rs2) bits
2019 * add @ max(rs1, rs2) bits
2020 * RD @ rd bits. zero-extend to rd if rd > max(rs1, rs2) otherwise truncate
2021
2022 Note here that polymorphic add zero-extends its source operands,
2023 where addw sign-extends.
2024
2025 ### addw
2026
2027 The RV Specification specifically states that "W" variants of arithmetic
2028 operations always produce 32-bit signed values. In a polymorphic
2029 environment it is reasonable to assume that the signed aspect is
2030 preserved, where it is the length of the operands and the result
2031 that may be changed.
2032
2033 Standard Scalar RV64 (xlen):
2034
2035 * RS1 @ xlen bits
2036 * RS2 @ xlen bits
2037 * add @ xlen bits
2038 * RD @ xlen bits, truncate add to 32-bit and sign-extend to xlen.
2039
2040 Polymorphic variant:
2041
2042 * RS1 @ rs1 bits, sign-extended to max(rs1, rs2) bits
2043 * RS2 @ rs2 bits, sign-extended to max(rs1, rs2) bits
2044 * add @ max(rs1, rs2) bits
2045 * RD @ rd bits. sign-extend to rd if rd > max(rs1, rs2) otherwise truncate
2046
2047 Note here that polymorphic addw sign-extends its source operands,
2048 where add zero-extends.
2049
This requires a little more in-depth analysis. Where the bitwidth of
rs1 equals the bitwidth of rs2, no sign-extension will occur. It is
only where the bitwidths of rs1 and rs2 differ that the lesser-width
operand will be sign-extended.
2054
2055 Effectively however, both rs1 and rs2 are being sign-extended (or truncated),
2056 where for add they are both zero-extended. This holds true for all arithmetic
2057 operations ending with "W".
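
The contrast can be sketched as follows (illustrative Python; the
final extension of the result out to rd when rd is wider than the
operation is omitted for brevity, and all names are invented):

```python
# Polymorphic add zero-extends its narrower source up to
# max(rs1, rs2) bits, whereas addw sign-extends it.
def zext(v, frm, to):
    return v & ((1 << frm) - 1)        # `to` kept for a uniform signature

def sext(v, frm, to):
    v &= (1 << frm) - 1
    if v & (1 << (frm - 1)):           # sign bit set: make negative
        v -= 1 << frm
    return v & ((1 << to) - 1)         # re-wrap at the operation width

def poly_op(rs1, rs2, w1, w2, wd, ext):
    opw = max(w1, w2)
    res = ext(rs1, w1, opw) + ext(rs2, w2, opw)
    return res & ((1 << wd) - 1)       # truncate into rd

# rs1 is 8-bit 0xFF: zero-extended (add) it is 255, sign-extended
# (addw) it is -1.
assert poly_op(0xFF, 0x0001, 8, 16, 16, zext) == 0x0100  # 255 + 1
assert poly_op(0xFF, 0x0001, 8, 16, 16, sext) == 0x0000  # -1 + 1
```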
2058
2059 ### addiw
2060
2061 Standard Scalar RV64I:
2062
2063 * RS1 @ xlen bits, truncated to 32-bit
2064 * immed @ 12 bits, sign-extended to 32-bit
2065 * add @ 32 bits
2066 * RD @ rd bits. sign-extend to rd if rd > 32, otherwise truncate.
2067
2068 Polymorphic variant:
2069
2070 * RS1 @ rs1 bits
2071 * immed @ 12 bits, sign-extend to max(rs1, 12) bits
2072 * add @ max(rs1, 12) bits
2073 * RD @ rd bits. sign-extend to rd if rd > max(rs1, 12) otherwise truncate
2074
2075 # Predication Element Zeroing
2076
The introduction of zeroing on traditional vector predication is usually
intended as an optimisation for lane-based microarchitectures with register
renaming, to be able to save power by avoiding a register read on elements
that are passed en-masse through the ALU. Simpler microarchitectures
do not have this issue: they simply do not pass the element through to
the ALU at all, and therefore do not store it back in the destination.
2083 More complex non-lane-based micro-architectures can, when zeroing is
2084 not set, use the predication bits to simply avoid sending element-based
2085 operations to the ALUs, entirely: thus, over the long term, potentially
2086 keeping all ALUs 100% occupied even when elements are predicated out.
2087
2088 SimpleV's design principle is not based on or influenced by
2089 microarchitectural design factors: it is a hardware-level API.
Therefore, looking purely at whether zeroing is *useful* or not
(i.e. whether fewer instructions are needed for certain scenarios),
2092 given that a case can be made for zeroing *and* non-zeroing, the
2093 decision was taken to add support for both.
2094
2095 ## Single-predication (based on destination register)
2096
2097 Zeroing on predication for arithmetic operations is taken from
2098 the destination register's predicate. i.e. the predication *and*
2099 zeroing settings to be applied to the whole operation come from the
2100 CSR Predication table entry for the destination register.
2101 Thus when zeroing is set on predication of a destination element,
2102 if the predication bit is clear, then the destination element is *set*
2103 to zero (twin-predication is slightly different, and will be covered
2104 next).
2105
Thus the pseudo-code loop for a predicated arithmetic operation
is modified as follows:
2108
    for (i = 0; i < VL; i++)
        if not zeroing: # an optimisation
            while (!(predval & 1<<i) && i < VL)
                if (int_vec[rd ].isvector)  { ird += 1; }
                if (int_vec[rs1].isvector)  { irs1 += 1; }
                if (int_vec[rs2].isvector)  { irs2 += 1; }
                i++ # skip to the next predicated-in element
            if i == VL:
                break
        if (predval & 1<<i)
            src1 = ....
            src2 = ...
            result = src1 + src2 # actual add (or other op) here
            set_polymorphed_reg(rd, destwid, ird, result)
            if (!int_vec[rd].isvector) break
        else if zeroing:
            result = 0
            set_polymorphed_reg(rd, destwid, ird, result)
        if (int_vec[rd ].isvector)  { ird += 1; }
        else if (predval & 1<<i) break;
        if (int_vec[rs1].isvector)  { irs1 += 1; }
        if (int_vec[rs2].isvector)  { irs2 += 1; }
2131
2132 The optimisation to skip elements entirely is only possible for certain
2133 micro-architectures when zeroing is not set. However for lane-based
2134 micro-architectures this optimisation may not be practical, as it
2135 implies that elements end up in different "lanes". Under these
2136 circumstances it is perfectly fine to simply have the lanes
2137 "inactive" for predicated elements, even though it results in
2138 less than 100% ALU utilisation.
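
The single-predicated zeroing semantics can be condensed into a small executable model (all operands assumed to be vectors of equal element width; names are illustrative, not normative):

```python
def pred_op(vl, src1, src2, pred, zeroing, op=lambda a, b: a + b):
    """Single-predication model: the predicate and the zeroing flag
    both come from the destination.  Masked-out elements are zeroed
    (zeroing=True) or left untouched, shown here as None."""
    dest = [None] * vl
    for i in range(vl):
        if pred & (1 << i):
            dest[i] = op(src1[i], src2[i])
        elif zeroing:
            dest[i] = 0
    return dest

print(pred_op(4, [1, 2, 3, 4], [10, 20, 30, 40], 0b0101, zeroing=True))
# [11, 0, 33, 0]
print(pred_op(4, [1, 2, 3, 4], [10, 20, 30, 40], 0b0101, zeroing=False))
# [11, None, 33, None]
```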
2139
2140 ## Twin-predication (based on source and destination register)
2141
Twin-predication is not that much different, except that
2143 the source is independently zero-predicated from the destination.
2144 This means that the source may be zero-predicated *or* the
2145 destination zero-predicated *or both*, or neither.
2146
When, with twin-predication, zeroing is set on the source and not
the destination, a predicate bit that is *not* set indicates that a zero
data element is passed through the operation (the exception being:
if the source data element is to be treated as an address - a LOAD -
then the data returned *from* the LOAD is zero, rather than looking up an
*address* of zero).
2153
2154 When zeroing is set on the destination and not the source, then just
2155 as with single-predicated operations, a zero is stored into the destination
2156 element (or target memory address for a STORE).
2157
Zeroing on both source and destination effectively results in a bitwise
AND of the source and destination predicates: only where both the source
predicate AND the destination predicate are set to 1 does source data
propagate; where either is 0,
a zero element will ultimately end up in the destination register.
2162
2163 However: this may not necessarily be the case for all operations;
2164 implementors, particularly of custom instructions, clearly need to
2165 think through the implications in each and every case.
2166
2167 Here is pseudo-code for a twin zero-predicated operation:
2168
    function op_mv(rd, rs) # MV not VMV!
        rd = int_csr[rd].active ? int_csr[rd].regidx : rd;
        rs = int_csr[rs].active ? int_csr[rs].regidx : rs;
        ps, zerosrc = get_pred_val(FALSE, rs); # predication on src
        pd, zerodst = get_pred_val(FALSE, rd); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL):
            if (int_csr[rs].isvec && !zerosrc) while (!(ps & 1<<i)) i++;
            if (int_csr[rd].isvec && !zerodst) while (!(pd & 1<<j)) j++;
            if ((pd & 1<<j))
                if ((ps & 1<<i))
                    sourcedata = ireg[rs+i];
                else
                    sourcedata = 0
                ireg[rd+j] <= sourcedata
            else if (zerodst)
                ireg[rd+j] <= 0
            if (int_csr[rs].isvec)
                i++;
            if (int_csr[rd].isvec)
                j++;
            else
                if ((pd & 1<<j))
                    break;
2192
2193 Note that in the instance where the destination is a scalar, the hardware
2194 loop is ended the moment a value *or a zero* is placed into the destination
2195 register/element. Also note that, for clarity, variable element widths
2196 have been left out of the above.
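
A runnable model of the twin-predicated MV above (vector source and destination, element widths omitted; function and variable names are invented for illustration):

```python
def twin_pred_mv(vl, src, ps, zerosrc, pd, zerodst):
    """Twin-predication model: without zeroing, masked-out source and
    destination elements are skipped over; with zeroing they take part,
    supplying or receiving zeroes.  None marks untouched elements."""
    dest = [None] * vl
    i = j = 0
    while i < vl and j < vl:
        if not zerosrc:                 # skip masked-out source elements
            while i < vl and not (ps & (1 << i)):
                i += 1
        if not zerodst:                 # skip masked-out dest elements
            while j < vl and not (pd & (1 << j)):
                j += 1
        if i >= vl or j >= vl:
            break
        if pd & (1 << j):
            # source-zeroing passes a zero element through the operation
            dest[j] = src[i] if (ps & (1 << i)) else 0
        elif zerodst:
            dest[j] = 0
        i += 1
        j += 1
    return dest

# no zeroing: active source elements "compact" into active dest slots
print(twin_pred_mv(4, [1, 2, 3, 4], 0b1010, False, 0b0110, False))
# [None, 2, 4, None]
```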
2197
2198 # Exceptions
2199
2200 TODO: expand. Exceptions may occur at any time, in any given underlying
2201 scalar operation. This implies that context-switching (traps) may
2202 occur, and operation must be returned to where it left off. That in
2203 turn implies that the full state - including the current parallel
2204 element being processed - has to be saved and restored. This is
2205 what the **STATE** CSR is for.
2206
2207 The implications are that all underlying individual scalar operations
2208 "issued" by the parallelisation have to appear to be executed sequentially.
2209 The further implications are that if two or more individual element
2210 operations are underway, and one with an earlier index causes an exception,
2211 it may be necessary for the microarchitecture to **discard** or terminate
2212 operations with higher indices.
2213
This being somewhat unsatisfactory, an "opaque predication" variant
of the STATE CSR is being considered.
2216
2217 # Hints
2218
2219 A "HINT" is an operation that has no effect on architectural state,
2220 where its use may, by agreed convention, give advance notification
2221 to the microarchitecture: branch prediction notification would be
2222 a good example. Usually HINTs are where rd=x0.
2223
2224 With Simple-V being capable of issuing *parallel* instructions where
2225 rd=x0, the space for possible HINTs is expanded considerably. VL
2226 could be used to indicate different hints. In addition, if predication
2227 is set, the predication register itself could hypothetically be passed
2228 in as a *parameter* to the HINT operation.
2229
No specific hints are yet defined in Simple-V.
2231
2232 # VLIW Format <a name="vliw-format"></a>
2233
2234 One issue with SV is the setup and teardown time of the CSRs. The cost
2235 of the use of a full CSRRW (requiring LI) is quite high. A VLIW format
2236 therefore makes sense.
2237
2238 A suitable prefix, which fits the Expanded Instruction-Length encoding
2239 for "(80 + 16 times instruction_length)", as defined in Section 1.5
2240 of the RISC-V ISA, is as follows:
2241
2242 | 15 | 14:12 | 11:10 | 9:8 | 7 | 6:0 |
2243 | - | ----- | ----- | ----- | --- | ------- |
2244 | vlset | 16xil | pplen | rplen | mode | 1111111 |
2245
An optional VL Block, optional register entries, optional predicate entries and finally some 16/32/48 bit standard RV or SVPrefix opcodes follow.
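
Extracting the prefix fields from the table above could look like the following (a non-normative sketch; field names follow the table):

```python
def decode_vliw_prefix(insn16):
    """Field extraction for the 16-bit SV VLIW prefix described above.
    Illustrative only: field positions follow the prefix table."""
    assert insn16 & 0x7F == 0x7F, "not a long-format/VLIW prefix"
    mode  = (insn16 >> 7) & 0x1   # 16-bit (1) or 8-bit (0) block formats
    rplen = (insn16 >> 8) & 0x3   # number of register-block entries
    pplen = (insn16 >> 10) & 0x3  # number of predicate-block entries
    il    = (insn16 >> 12) & 0x7  # 16xil: total length = 80 + 16*il bits
    vlset = (insn16 >> 15) & 0x1  # VL Block present
    return dict(mode=mode, rplen=rplen, pplen=pplen, il=il, vlset=vlset,
                total_bits=80 + 16 * il)

# vlset=1, 16xil=001, pplen=01, rplen=10, mode=1, opcode=1111111
print(decode_vliw_prefix(0b1001011011111111)["total_bits"])  # 96
```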
2247
2248 The variable-length format from Section 1.5 of the RISC-V ISA:
2249
2250 | base+4 ... base+2 | base | number of bits |
| -------------------------- | ---------------- | -------------------------- |
2252 | ..xxxx xxxxxxxxxxxxxxxx | xnnnxxxxx1111111 | (80+16\*nnn)-bit, nnn!=111 |
2253 | {ops}{Pred}{Reg}{VL Block} | SV Prefix | |
2254
2255 VL/MAXVL/SubVL Block:
2256
2257 | 31-30 | 29:28 | 27:22 | 21:17 | 16 |
2258 | - | ----- | ------ | ------ | - |
2259 | 0 | SubVL | VLdest | VLEN | vlt |
2260 | 1 | SubVL | VLdest | VLEN ||
2261
If vlt is 0, VLEN is a 5 bit immediate value. If vlt is 1, it specifies the scalar register from which VL is set by this VLIW instruction group. VL, whether set from the register or the immediate, is then modified (truncated) to be min(VL, MAXVL), and the result stored in the scalar register specified in VLdest. If VLdest is zero, no store in the regfile occurs.
2263
2264 This option will typically be used to start vectorised loops, where the VLIW instruction effectively embeds an optional "SETSUBVL, SETVL" sequence (in compact form).
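
A sketch of the vlt behaviour just described, treating "truncated" as clamping to MAXVL (i.e. a min); names are illustrative:

```python
def vl_from_block(vlt, vlen_field, regs, maxvl):
    """VL Block model: vlt=0 takes VL from the 5-bit immediate,
    vlt=1 reads it from the scalar register file; either way the
    result is truncated (clamped) to MAXVL."""
    vl = regs[vlen_field] if vlt else vlen_field
    return min(vl, maxvl)

print(vl_from_block(0, 17, [], 8))  # immediate 17 clamped to MAXVL=8: 8
```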
2265
When the top field of the VL Block (bits 31-30) is set to 1, MAXVL and VL are both set to the immediate, VLEN, which is 6 bits in length, and the same value stored in scalar register VLdest (if that register is nonzero).
2267
2268 This option will typically not be used so much for loops as it will be for one-off instructions such as saving the entire register file to the stack with a single one-off Vectorised LD/ST.
2269
2270 CSRs needed:
2271
2272 * mepcvliw
2273 * sepcvliw
2274 * uepcvliw
2275 * hepcvliw
2276
2277 Notes:
2278
2279 * Bit 7 specifies if the prefix block format is the full 16 bit format (1) or the compact less expressive format (0). In the 8 bit format, pplen is multiplied by 2.
2280 * 8 bit format predicate numbering is implicit and begins from x9. Thus it is critical to put blocks in the correct order as required.
2281 * Bit 7 also specifies if the register block format is 16 bit (1) or 8 bit (0). In the 8 bit format, rplen is multiplied by 2. If only an odd number of entries are needed the last may be set to 0x00, indicating "unused".
* Bit 15 specifies if the VL Block is present. If set to 1, the VL Block immediately follows the VLIW instruction Prefix.
* Bits 8 and 9 define how many RegCam entries (0 to 3 if bit 7 is 1, otherwise 0 to 6) follow the (optional) VL Block.
* Bits 10 and 11 define how many PredCam entries (0 to 3 if bit 7 is 1, otherwise 0 to 6) follow the (optional) RegCam entries.
2285 * Bits 14 to 12 (IL) define the actual length of the instruction: total
2286 number of bits is 80 + 16 times IL. Standard RV32, RVC and also
2287 SVPrefix (P48-\*-Type) instructions fit into this space, after the
2288 (optional) VL / RegCam / PredCam entries
2289 * Anything - any registers - within the VLIW-prefixed format *MUST* have the
2290 RegCam and PredCam entries applied to it.
2291 * At the end of the VLIW Group, the RegCam and PredCam entries *no longer apply*. VL, MAXVL and SUBVL on the other hand remain at the values set by the last instruction (whether a CSRRW or the VL Block header).
2292 * Although an inefficient use of resources, it is fine to set the MAXVL, VL and SUBVL CSRs with standard CSRRW instructions, within a VLIW block.
2293
All this would greatly reduce the amount of space utilised by Vectorised
instructions, given that a 64-bit CSRRW requires 3, even 4, 32-bit opcodes:
the CSRRW itself plus a LI / LUI sequence to set up the 32 bit value in
the RS register of the CSR. To get 64-bit data into the register in order to put
it into the CSR(s), LOAD operations from memory are needed!
2300
2301 Given that each 64-bit CSR can hold only 4x PredCAM entries (or 4 RegCAM
entries), that's potentially six to eight 32-bit instructions, just to
2303 establish the Vector State!
2304
2305 Not only that: even CSRRW on VL and MAXVL requires 64-bits (even more bits if
2306 VL needs to be set to greater than 32). Bear in mind that in SV, both MAXVL
2307 and VL need to be set.
2308
2309 By contrast, the VLIW prefix is only 16 bits, the VL/MAX/SubVL block is
2310 only 16 bits, and as long as not too many predicates and register vector
2311 qualifiers are specified, several 32-bit and 16-bit opcodes can fit into the
format. If the full flexibility of the 16 bit block formats is not needed, more space is saved by using the 8 bit formats.
2313
2314 In this light, embedding the VL/MAXVL, PredCam and RegCam CSR entries into
2315 a VLIW format makes a lot of sense.
2316
2317 Open Questions:
2318
2319 * Is it necessary to stick to the RISC-V 1.5 format? Why not go with
2320 using the 15th bit to allow 80 + 16\*0bnnnn bits? Perhaps to be sane,
2321 limit to 256 bits (16 times 0-11).
2322
2323 ## Limitations on instructions.
2324
2325 To greatly simplify implementations, it is required to treat the VLIW
2326 group as a separate sub-program with its own separate PC. The sub-pc
2327 advances separately whilst the main PC remains pointing at the beginning
2328 of the VLIW instruction (not to be confused with how VL works, which
2329 is exactly the same principle, except it is VStart in the STATE CSR
2330 that increments).
2331
2332 This has implications, namely that a new set of CSRs identical to xepc
(mepc, sepc, hepc and uepc) must be created and managed and respected
2334 as being a sub extension of the xepc set of CSRs. Thus, xepcvliw CSRs
2335 must be context switched and saved / restored in traps.
2336
2337 The VStart indices in the STATE CSR may be similarly regarded as another
2338 sub-execution context, giving in effect two sets of nested sub-levels
2339 of the RISCV Program Counter.
2340
2341 In addition, as xepcvliw CSRs are relative to the beginning of the VLIW
2342 block, branches MUST be restricted to within the block, i.e. addressing
is now restricted to the (very short) length of the block.
2344
Also: calling subroutines is inadvisable, unless they can be entirely
2346 accomplished within a block.
2347
2348 A normal jump and a normal function call may only be taken by letting
2349 the VLIW end, returning to "normal" standard RV mode, using RVC, 32 bit
2350 or P48-*-type opcodes.
2351
2352 ## Links
2353
2354 * <https://groups.google.com/d/msg/comp.arch/yIFmee-Cx-c/jRcf0evSAAAJ>
2355
2356 # Subsets of RV functionality
2357
2358 This section describes the differences when SV is implemented on top of
2359 different subsets of RV.
2360
2361 ## Common options
2362
It is permitted to limit the size of either (or both) the register files
down to the original size of the standard RV architecture. However,
reducing them below the mandatory limits set in the RV standard will
result in non-compliance with the SV Specification.
2367
2368 ## RV32 / RV32F
2369
2370 When RV32 or RV32F is implemented, XLEN is set to 32, and thus the
2371 maximum limit for predication is also restricted to 32 bits. Whilst not
2372 actually specifically an "option" it is worth noting.
2373
2374 ## RV32G
2375
Normally in standard RV32 it does not make much sense to have
RV32G. The critical instructions that are missing in standard RV32
are those for moving data between the double-width floating-point
registers and the integer ones, as well as the FCVT routines.
2380
2381 In an earlier draft of SV, it was possible to specify an elwidth
2382 of double the standard register size: this had to be dropped,
2383 and may be reintroduced in future revisions.
2384
2385 ## RV32 (not RV32F / RV32G) and RV64 (not RV64F / RV64G)
2386
2387 When floating-point is not implemented, the size of the User Register and
2388 Predication CSR tables may be halved, to only 4 2x16-bit CSRs (8 entries
2389 per table).
2390
2391 ## RV32E
2392
2393 In embedded scenarios the User Register and Predication CSRs may be
2394 dropped entirely, or optionally limited to 1 CSR, such that the combined
2395 number of entries from the M-Mode CSR Register table plus U-Mode
2396 CSR Register table is either 4 16-bit entries or (if the U-Mode is
2397 zero) only 2 16-bit entries (M-Mode CSR table only). Likewise for
2398 the Predication CSR tables.
2399
2400 RV32E is the most likely candidate for simply detecting that registers
2401 are marked as "vectorised", and generating an appropriate exception
2402 for the VL loop to be implemented in software.
2403
2404 ## RV128
2405
RV128 has not been especially considered here; however, it has some
extremely large possibilities: double the element width implies
2408 256-bit operands, spanning 2 128-bit registers each, and predication
2409 of total length 128 bit given that XLEN is now 128.
2410
2411 # Under consideration <a name="issues"></a>
2412
For element-grouping, if there is unused space within a register
(3 16-bit elements in a 64-bit register for example), the recommendation is:
2415
2416 * For the unused elements in an integer register, the used element
2417 closest to the MSB is sign-extended on write and the unused elements
2418 are ignored on read.
2419 * The unused elements in a floating-point register are treated as-if
2420 they are set to all ones on write and are ignored on read, matching the
2421 existing standard for storing smaller FP values in larger registers.
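
The FP rule matches the existing NaN-boxing convention; as an illustrative sketch (the layout and helper name are assumptions), storing one 16-bit element into a 64-bit FP register with the unused upper elements treated as all ones:

```python
def fp_store_element(val16):
    """Write a 16-bit FP bit-pattern into a 64-bit register, with the
    three unused 16-bit element slots treated as all ones on write
    (NaN-boxing style); readers ignore the upper bits."""
    return 0xFFFF_FFFF_FFFF_0000 | (val16 & 0xFFFF)

print(hex(fp_store_element(0x3C00)))  # 0xffffffffffff3c00
```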
2422
2423 ---
2424
2425 info register,
2426
> One solution is to just not support LR/SC wider than a fixed
> implementation-dependent size, which must be at least
> 1 XLEN word, which can be read from a read-only CSR
> that can also be used for info like the kind and width of
> hw parallelism supported (128-bit SIMD, minimal virtual
> parallelism, etc.) and other things (like maybe the number
> of registers supported).
2434
2435 > That CSR would have to have a flag to make a read trap so
2436 > a hypervisor can simulate different values.
2437
2438 ----
2439
2440 > And what about instructions like JALR? 
2441
2442 answer: they're not vectorised, so not a problem
2443
2444 ----
2445
2446 * if opcode is in the RV32 group, rd, rs1 and rs2 bitwidth are
2447 XLEN if elwidth==default
2448 * if opcode is in the RV32I group, rd, rs1 and rs2 bitwidth are
2449 *32* if elwidth == default
2450
2451 ---
2452
2453 TODO: update elwidth to be default / 8 / 16 / 32
2454
2455 ---
2456
2457 TODO: document different lengths for INT / FP regfiles, and provide
2458 as part of info register. 00=32, 01=64, 10=128, 11=reserved.
2459
2460 ---
2461
2462 push/pop of vector config state:
2463 <https://groups.google.com/d/msg/comp.arch/bGBeaNjAKvc/z2d_cST7AgAJ>
2464
2465 when Bank in CFG is altered, shift the "addressing" of Reg and
2466 Pred CSRs to match. i.e. treat the Reg and Pred CSRs as a
2467 "mini stack".