# Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal

Key insight: Simple-V is intended as an abstraction layer to provide
a consistent "API" to parallelisation of existing *and future* operations.
*Actual* internal hardware-level parallelism is *not* required, such
that Simple-V may be viewed as providing a "compact" or "consolidated"
means of issuing multiple near-identical arithmetic instructions to an
instruction queue (FIFO), pending execution.

*Actual* parallelism, if added independently of Simple-V in the form
of Out-of-order restructuring (including parallel ALU lanes) or VLIW
implementations, or SIMD, or anything else, would then benefit *if*
Simple-V was added on top.

[[!toc ]]

# Introduction

This proposal exists so as to be able to satisfy several disparate
requirements: power-conscious, area-conscious, and performance-conscious
designs all pull an ISA and its implementation in different conflicting
directions, as do the specific intended uses for any given implementation.

Additionally, the existing P (SIMD) proposal and the V (Vector) proposals,
whilst each extremely powerful in their own right and clearly desirable,
are also:

* Clearly independent in their origins (AndesStar v3 and Cray respectively)
so need work to adapt to the RISC-V ethos and paradigm
* Are sufficiently large so as to make adoption (and exploration for
analysis and review purposes) prohibitively expensive
* Both contain partial duplication of pre-existing RISC-V instructions
(an undesirable characteristic)
* Both have independent and disparate methods for introducing parallelism
at the instruction level.
* Both require that their respective parallelism paradigm be implemented
alongside and integral to their respective functionality *or not at all*.
* Both independently have methods for introducing parallelism that
could, if separated, benefit
*other areas of RISC-V not just DSP or Floating-point respectively*.

Therefore it makes a huge amount of sense to have a means and method
of introducing instruction parallelism in a flexible way that provides
implementors with the option to choose exactly where they wish to offer
performance improvements and where they wish to optimise for power
and/or area (and if that can be offered even on a per-operation basis that
would provide even more flexibility).

Additionally it makes sense to *split out* the parallelism inherent within
each of P and V, and to see if each of P and V then, in *combination* with
a "best-of-both" parallelism extension, could be added on *top* of
this proposal, to topologically provide the exact same functionality of
each of P and V. Each of P and V then can focus on providing the best
operations possible for their respective target areas, without being
hugely concerned about the actual parallelism.

Furthermore, an additional goal of this proposal is to reduce the number
of opcodes utilised by each of P and V as they currently stand, leveraging
existing RISC-V opcodes where possible, and also potentially allowing
P and V to make use of Compressed Instructions as a result.

**TODO**: propose that overflow registers actually be one of the integer regs
(flowing to multiple regs).

**TODO**: propose "mask" (predication) registers likewise. The combination with
standard RV instructions and overflow registers is extremely powerful: see
Aspex ASP.

# Analysis and discussion of Vector vs SIMD

There are five combined areas between the two proposals that help with
parallelism without over-burdening the ISA with a huge proliferation of
instructions:

* Fixed vs variable parallelism (fixed or variable "M" in SIMD)
* Implicit vs fixed instruction bit-width (integral to instruction or not)
* Implicit vs explicit type-conversion (compounded on bit-width)
* Implicit vs explicit inner loops.
* Masks / tagging (selecting/preventing certain indexed elements from execution)

The pros and cons of each are discussed and analysed below.

## Fixed vs variable parallelism length

David Patterson and Andrew Waterman's analysis of SIMD and Vector
ISAs comes out clearly in favour of (effectively) variable-length
SIMD. As SIMD is a fixed width, typically 4, 8 or in extreme cases
16 or 32 simultaneous operations, the setup, teardown and corner-cases of SIMD
are extremely burdensome except for applications whose requirements
*specifically* match the *precise and exact* depth of the SIMD engine.

Thus, SIMD, no matter what width is chosen, is never going to be acceptable
for general-purpose computation, and in the context of developing a
general-purpose ISA, is never going to satisfy 100 percent of implementors.

To explain this further: as performance requirements increase for new
target markets over time, implementors choose to extend the SIMD width
(so as to again avoid mixing parallelism into the instruction issue
phases: the primary "simplicity" benefit of SIMD in the first place),
with the result that the entire opcode space effectively doubles with
each new SIMD width that's added to the ISA.

That basically leaves "variable-length vector" as the clear *general-purpose*
winner, at least in terms of greatly simplifying the instruction set,
reducing the number of instructions required for any given task, and thus
reducing power consumption for the same.
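
To make the setup/teardown burden concrete, here is a minimal sketch
(illustrative Python, not taken from the Patterson/Waterman article): the
fixed-width version needs a separate cleanup loop whenever the array length
is not an exact multiple of the SIMD width, whereas the variable-length
version simply asks for however many elements remain.

    # Illustrative contrast: fixed-width SIMD stripmining vs variable-length
    # vectors. The 4-wide "ALU" is simulated with list slices.

    def simd_add(a, b, width=4):
        out, n = [], len(a)
        for i in range(0, n - n % width, width):   # main loop: whole groups only
            out += [x + y for x, y in zip(a[i:i+width], b[i:i+width])]
        for i in range(n - n % width, n):          # cleanup loop: the corner-case
            out.append(a[i] + b[i])
        return out

    def vector_add(a, b, maxvl=4):
        out, i = [], 0
        while i < len(a):
            vl = min(maxvl, len(a) - i)            # "vsetl"-style: hardware picks vl
            out += [x + y for x, y in zip(a[i:i+vl], b[i:i+vl])]
            i += vl
        return out

    assert simd_add([1]*7, [2]*7) == vector_add([1]*7, [2]*7) == [3]*7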

## Implicit vs fixed instruction bit-width

SIMD again has a severe disadvantage here, over Vector: huge proliferation
of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and
have to then have operations *for each and between each*. It gets very
messy, very quickly.

The V-Extension on the other hand proposes to set the bit-width of
future instructions on a per-register basis, such that subsequent instructions
involving that register are *implicitly* of that particular bit-width until
otherwise changed or reset.

This has some extremely useful properties, without being particularly
burdensome to implementations, given that instruction decode already has
to direct the operation to a correctly-sized width ALU engine, anyway.

Not least: in places where an ISA was previously constrained (for
whatever reason, including limitations of the available operand space),
implicit bit-width allows the meaning of certain operations to be
type-overloaded *without* pollution or alteration of frozen and immutable
instructions, in a fully backwards-compatible fashion.

## Implicit and explicit type-conversion

The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
deal with over-population of instructions, such that type-casting from
integer (and floating point) of various sizes is automatically inferred
due to "type tagging" that is set with a special instruction. A register
will be *specifically* marked as "16-bit Floating-Point" and, if added
to an operand that is specifically tagged as "32-bit Integer", an implicit
type-conversion will take place *without* requiring that type-conversion
to be explicitly done with its own separate instruction.

However, implicit type-conversion is not only quite burdensome to
implement (explosion of inferred type-to-type conversion) but also is
never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration. Each new type results in
an O(N^2) increase in the conversion space; as anyone who has examined
python's source code (which has built-in polymorphic type-conversion)
knows, the task is more complex than it first seems.

Overall, type-conversion is generally best left to explicit
type-conversion instructions, or in definite specific use-cases left to
be part of an actual instruction (DSP or FP).

## Zero-overhead loops vs explicit loops

The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
contains an extremely interesting feature: zero-overhead loops. This
proposal would basically allow an inner loop of instructions to be
repeated a fixed number of times.

Its specific advantage over explicit loops is that the pipeline in a DSP
can potentially be kept completely full *even in an in-order single-issue
implementation*. Normally, it requires a superscalar architecture and
out-of-order execution capabilities to "pre-process" instructions in
order to keep ALU pipelines 100% occupied.

By bringing that capability in, this proposal could offer a way to increase
pipeline activity even in simpler implementations in the one key area
which really matters: the inner loop.
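
As a rough sketch of the concept (illustrative Python model; the setup
operation's name and structure are assumptions, not taken from the Andes
draft):

    # Toy model of a zero-overhead loop: a single setup operation records the
    # loop count, after which hardware replays the loop body with *no*
    # per-iteration decrement/compare/branch instructions fetched or executed.
    def zero_overhead_loop(count, body, state):
        for _ in range(count):        # implicit in hardware: no branch issued
            for insn in body:
                state = insn(state)
        return state

    # usage: the "inner loop" is pure ALU work, keeping the pipeline full
    assert zero_overhead_loop(4, [lambda acc: acc + 1], 0) == 4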

However, when looking at much more comprehensive schemes such as
"A portable specification of zero-overhead loop control hardware
applied to embedded processors" (ZOLC), optimising only the single
inner loop seems inadequate, tending to suggest that ZOLC may be
better off being proposed as an entirely separate Extension.

## Mask and Tagging (Predication)

Tagging (aka Masks aka Predication) is a pseudo-method of implementing
simplistic branching in a parallel fashion, by allowing execution on
elements of a vector to be switched on or off depending on the results
of prior operations in the same array position.

The reason for considering this is simple: by *definition* it
is not possible to perform individual parallel branches in a SIMD
(Single-Instruction, **Multiple**-Data) context. Branches (modifying
of the Program Counter) will result in *all* parallel data having
a different instruction executed on it: that's just the definition of
SIMD, and it is simply unavoidable.

So these are the ways in which conditional execution may be implemented:

* explicit compare and branch: BNE x, y -> offs would jump offs
instructions if x was not equal to y
* explicit store of tag condition: CMP x, y -> tagbit
* implicit (condition-code): ADD results in a carry, and the carry bit implicitly
(or sometimes explicitly) goes into a "tag" (mask) register

The first of these is a "normal" branch method, which is flat-out impossible
to parallelise without look-ahead and effectively rewriting instructions.
This would defeat the purpose of RISC.

The latter two are where parallelism becomes easy to do without complexity:
every operation is modified to be "conditionally executed" (in an explicit
way directly in the instruction format *or* implicitly).

RVV (Vector-Extension) proposes to have *explicit* storing of the compare
in a tag/mask register, and to *explicitly* have every vector operation
*require* that its operation be "predicated" on the bits within an
explicitly-named tag/mask register.

SIMD (P-Extension) has not yet published precise documentation on what its
schema is to be; there is however verbal indication at the time of writing
that:

> The "compare" instructions in the DSP/SIMD ISA proposed by Andes will
> be executed using the same compare ALU logic for the base ISA with some
> minor modifications to handle smaller data types. The function will not
> be duplicated.

This is an *implicit* form of predication as the base RV ISA does not have
condition-codes or predication. By adding a CSR it becomes possible
to also tag certain registers as "predicated if referenced as a destination".
Example:

    // in future operations from now on, if r0 is the destination use r5 as
    // the PREDICATION register
    SET_IMPLICIT_CSRPREDICATE r0, r5
    // store the compares in r5 as the PREDICATION register
    CMPEQ8 r5, r1, r2
    // r0 is used here. ah ha! that means it's predicated using r5!
    ADD8 r0, r1, r3

With enough registers (and in RISC-V there are enough registers) some fairly
complex predication can be set up and yet still execute without significant
stalling, even in a simple non-superscalar architecture.
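
A sketch of the semantics of the sequence above (illustrative Python;
helper names are hypothetical, and the 8-bit sub-element aspect of
CMPEQ8/ADD8 is ignored for brevity):

    # Model of implicit predication: a CSR key-value store maps a destination
    # register to the register holding its predicate bitfield.
    csr_predicate = {}
    regs = [0] * 32
    VL = 4                              # assumed vector length

    def set_implicit_csrpredicate(rd, rp):
        csr_predicate[rd] = rp          # "if rd is the destination, mask with rp"

    def cmpeq(rd, rs1, rs2):
        bits = 0                        # per-element compare -> bitfield in rd
        for i in range(VL):
            if regs[rs1 + i] == regs[rs2 + i]:
                bits |= 1 << i
        regs[rd] = bits

    def add(rd, rs1, rs2):
        mask = regs[csr_predicate[rd]] if rd in csr_predicate else (1 << VL) - 1
        for i in range(VL):
            if (mask >> i) & 1:         # element skipped if its mask bit is clear
                regs[rd + i] = regs[rs1 + i] + regs[rs2 + i]

    set_implicit_csrpredicate(0, 5)     # SET_IMPLICIT_CSRPREDICATE r0, r5
    cmpeq(5, 1, 2)                      # CMPEQ8 r5, r1, r2 (bitwidth ignored)
    add(0, 1, 3)                        # ADD8   r0, r1, r3 -> masked by r5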

(For details on how Branch Instructions would be retro-fitted to indirectly
predicated equivalents, see Appendix)

## Conclusions

The above sections outlined the five different ways in which parallel
instruction execution has closely and loosely inter-related implications
for the ISA and for implementors. The pluses and minuses came out as
follows:

* Fixed vs variable parallelism: <b>variable</b>
* Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
* Implicit vs explicit type-conversion: <b>explicit</b>
* Implicit vs explicit inner loops: <b>implicit but best done separately</b>
* Tag or no-tag: <b>Complex but highly beneficial</b>

In particular:

* variable-length vectors came out on top because of the high setup, teardown
and corner-case costs associated with the fixed width of SIMD.
* Implicit bit-width helps to extend the ISA to escape from
former limitations and restrictions (in a backwards-compatible fashion),
whilst also leaving implementors free to simplify implementations
by using actual explicit internal parallelism.
* Implicit (zero-overhead) loops provide a means to keep pipelines
potentially 100% occupied in a single-issue in-order implementation
i.e. *without* requiring a super-scalar or out-of-order architecture,
but doing a proper, full job (ZOLC) is an entirely different matter.

Constructing a SIMD/Simple-Vector proposal based around four of these five
requirements would therefore seem to be a logical thing to do.

# Instruction Format

**TODO** *basically borrow from both P and V, which should be quite simple
to do, with the exception of Tag/no-tag, which needs a bit more
thought. Section 17.19 of the Draft V2.3 spec is reminiscent of B's BGS
gather-scatterer, and, if implemented, could actually be a really useful
way to span 8-bit up to 64-bit groups of data, where BGS as it stands,
as described by Clifford, covers **bits** of up to 16 in width. Lots to
look at and investigate.*

* For analysis of RVV see [[v_comparative_analysis]] which begins to
outline topologically-equivalent mappings of instructions
* Also see Appendix "Retro-fitting Predication into branch-explicit ISA"
for format of Branch opcodes.

**TODO**: *analyse and decide whether the implicit nature of predication
as proposed is or is not a lot of hassle, and if explicit prefixes are
a better idea instead. Parallelism therefore effectively may end up
as always being 64-bit opcodes (32 for the prefix, 32 for the instruction)
with some opportunities to use Compressed bringing it down to 48.
Also to consider is whether one or both of the last two remaining Compressed
instruction codes in Quadrant 1 could be used as a parallelism prefix,
bringing parallelised opcodes down to 32-bit and having the benefit of
being explicit.*

## Branch Instruction:

[[!table  data="""
31      | 30 .. 25 |24 ... 20 | 19 15 | 14  12 | 11 ..  8 | 7       | 6 ... 0 |
imm[12] | imm[10:5]| rs2      | rs1   | funct3 | imm[4:1] | imm[11] | opcode  |
1       | 6        | 5        | 5     | 3      | 4        | 1       | 7       |
I/F | reserved | src2 | src1 | BPR | predicate rs3 || BRANCH |
0   | reserved | src2 | src1 | 000 | predicate rs3 || BEQ  |
0   | reserved | src2 | src1 | 001 | predicate rs3 || BNE  |
0   | reserved | src2 | src1 | 010 | predicate rs3 || rsvd |
0   | reserved | src2 | src1 | 011 | predicate rs3 || rsvd |
0   | reserved | src2 | src1 | 100 | predicate rs3 || BLT  |
0   | reserved | src2 | src1 | 101 | predicate rs3 || BGE  |
0   | reserved | src2 | src1 | 110 | predicate rs3 || BLTU |
0   | reserved | src2 | src1 | 111 | predicate rs3 || BGEU |
1   | reserved | src2 | src1 | 000 | predicate rs3 || FEQ  |
1   | reserved | src2 | src1 | 001 | predicate rs3 || FNE  |
1   | reserved | src2 | src1 | 010 | predicate rs3 || rsvd |
1   | reserved | src2 | src1 | 011 | predicate rs3 || rsvd |
1   | reserved | src2 | src1 | 100 | predicate rs3 || FLT  |
1   | reserved | src2 | src1 | 101 | predicate rs3 || FLE  |
1   | reserved | src2 | src1 | 110 | predicate rs3 || rsvd |
1   | reserved | src2 | src1 | 111 | predicate rs3 || rsvd |
"""]]

In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
for predicated compare operations of function "cmp":

    for (int i=0; i<vl; ++i)
      if ([!]preg[p][i])
         preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
                           s2 ? vreg[rs2][i] : sreg[rs2]);

With associated predication, vector-length adjustments and so on,
and temporarily ignoring bitwidth (which makes the comparisons more
complex), this becomes:

    if I/F == INT: # integer type cmp
        pred_enabled = int_pred_enabled # TODO: exception if not set!
        preg = int_pred_reg[rd]
    else:
        pred_enabled = fp_pred_enabled # TODO: exception if not set!
        preg = fp_pred_reg[rd]

    s1 = CSRvectorlen[src1] > 1;
    s2 = CSRvectorlen[src2] > 1;
    for (int i=0; i<vl; ++i)
       preg[rs3][i] = cmp(s1 ? reg[src1+i] : reg[src1],
                          s2 ? reg[src2+i] : reg[src2]);

Notes:

* Predicated SIMD comparisons would break src1 and src2 further down
into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
Reordering"), setting Vector-Length * (number of SIMD elements) bits
in Predicate Register rs3 as opposed to just Vector-Length bits.
* Predicated Branches do not actually adjust the Program
Counter, so bits 25 through 30 are in every case not needed.
* There are plenty of reserved opcodes for which bits 25 through 30 could
be put to good use if there is a suitable use-case.
* FEQ and FNE (and BEQ and BNE) are included in order to save one
instruction having to invert the resultant predicate bitfield.
FLT and FLE may be inverted to FGT and FGE if needed by swapping
src1 and src2 (likewise the integer counterparts).

## Compressed Branch Instruction:

[[!table  data="""
15..13 | 12...10  | 9..7 | 6..5  | 4..2 | 1..0 | name |
funct3 | imm      | rs10 | imm   |      | op   |      |
3      | 3        | 3    | 2     | 3    | 2    |      |
C.BPR  | pred rs3 | src1 | I/F B | src2 | C1   |      |
110    | pred rs3 | src1 | I/F 0 | src2 | C1   | P.EQ |
111    | pred rs3 | src1 | I/F 0 | src2 | C1   | P.NE |
110    | pred rs3 | src1 | I/F 1 | src2 | C1   | P.LT |
111    | pred rs3 | src1 | I/F 1 | src2 | C1   | P.LE |
"""]]

Notes:

* Bits 5, 13, 14 and 15 make up the comparator type
* In both floating-point and integer cases there are four predication
comparators: EQ/NEQ/LT/LE (with GT and GE being synthesised by inverting
src1 and src2).

# LOAD / STORE Instructions

For full analysis of topological adaptation of RVV LOAD/STORE
see [[v_comparative_analysis]]. All three types (LD, LD.S and LD.X)
may be implicitly overloaded into the one base RV LOAD instruction.

Revised LOAD:

[[!table  data="""
31 | 30 | 29   25 | 24    20 | 19 15 | 14   12 | 11 7 | 6    0 |
imm[11:0]                |||| rs1    | funct3  | rd   | opcode |
1  | 1  | 5       | 5        | 5     | 3       | 5    | 7      |
?  | s  | rs2     | imm[4:0] | base  | width   | dest | LOAD   |
"""]]

Notes:

* LOAD remains functionally (topologically) identical to RVV LOAD
* Predication CSR-marking register is not explicitly shown in the instruction:
it is implicit, based on the CSR predicate state for the rd (destination) register
* rs2, the source, may *also be marked as a vector*, which implicitly
is taken to indicate "Indexed Load" (LD.X)
* Bit 30 indicates "element stride" or "constant-stride" (LD or LD.S)
* Bit 31 is reserved (ideas under consideration: auto-increment)
* **TODO**: include CSR SIMD bitwidth in the pseudo-code below.
* **TODO**: clarify where width maps to elsize

Pseudo-code (excludes CSR SIMD bitwidth):

    if (unit-strided) stride = elsize;
    else stride = areg[as2]; // constant-strided

    pred_enabled = int_pred_enabled
    preg = int_pred_reg[rd]

    for (int i=0; i<vl; ++i)
      if (pred_enabled[rd] && [!]preg[i])
        for (int j=0; j<seglen+1; j++)
        {
          if (CSRvectorised[rs2])
             offs = vreg[rs2][i]
          else
             offs = i*(seglen+1)*stride;
          vreg[rd+j][i] = mem[sreg[base] + offs + j*stride];
        }

Taking CSR (SIMD) bitwidth into account involves extending vl according
to the "Bitwidth Virtual Register Reordering" scheme shown in the Appendix.

A similar instruction exists for STORE, with identical topological
translation of all features.

# Note on implementation of parallelism

One extremely important aspect of this proposal is to respect and support
implementors' desire to focus on power, area or performance. In that regard,
it is proposed that implementors be free to choose whether to implement
the Vector (or variable-width SIMD) parallelism as sequential operations
with a single ALU, fully parallel (if practical) with multiple ALUs, or
a hybrid combination of both.

In Broadcom's Videocore-IV, they chose hybrid, and called it "Virtual
Parallelism". They achieve a 16-way SIMD at an **instruction** level
by providing a combination of a 4-way parallel ALU *and* an externally
transparent loop that feeds 4 sequential sets of data into each of the
4 ALUs.
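
A sketch of that hybrid scheme (illustrative Python; the 4-lane ALU and
16-element operation mirror the Videocore-IV description above):

    # "Virtual Parallelism": a 16-element operation executed on a 4-lane ALU
    # by a sequencing loop that is invisible at the instruction level.
    ALU_LANES = 4

    def alu_add4(a4, b4):                     # the physical 4-wide ALU
        return [x + y for x, y in zip(a4, b4)]

    def virtual_add16(a16, b16):
        out = []
        for i in range(0, 16, ALU_LANES):     # 4 transparent sequential passes
            out += alu_add4(a16[i:i+ALU_LANES], b16[i:i+ALU_LANES])
        return out

    assert virtual_add16(list(range(16)), [1]*16) == list(range(1, 17))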

Also in the same core, it is worth noting that particularly uncommon
but essential operations (Reciprocal-Square-Root for example) are
*not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) scheme, implementors would
be free to do precisely that: i.e. free to choose *on a per operation
basis* whether and how much "Virtual Parallelism" to deploy.

It is absolutely critical to note that it is proposed that such choices MUST
be **entirely transparent** to the end-user and the compiler. Whilst
a Vector (variable-width SIMD) may not precisely match the width of the
parallelism within the implementation, the end-user **should not care**
and in this way the performance benefits are gained but the ISA remains
straightforward. All that happens at the end of an instruction run is: some
parallel units (if there are any) would remain offline, completely
transparently to the ISA, the program, and the compiler.

The "SIMD considered harmful" trap of having huge complexity and extra
instructions to deal with corner-cases is thus avoided, and implementors
get to choose precisely where to focus and target the benefits of their
implementation efforts, without "extra baggage".

# CSRs <a name="csrs"></a>

There are a number of CSRs needed, which are used at the instruction
decode phase to re-interpret standard RV opcodes (a practice that has
precedent in the setting of MISA to enable / disable extensions).

* Integer Register N is Vector of length M: r(N) -> r(N..N+M-1)
* Integer Register N is of implicit bitwidth M (M=default,8,16,32,64)
* Floating-point Register N is Vector of length M: r(N) -> r(N..N+M-1)
* Floating-point Register N is of implicit bitwidth M (M=default,8,16,32,64)
* Integer Register N is a Predication Register (note: a key-value store)

Notes:

* for the purposes of LOAD / STORE, Integer Registers which are
marked as a Vector will result in a Vector LOAD / STORE.
* Vector Lengths are *not* the same as vsetl but are an integral part
of vsetl.
* Actual vector length is *multiplied* by how many blocks of length
"bitwidth" may fit into an XLEN-sized register file.
* Predication is a key-value store due to the implicit referencing,
as opposed to having the predicate register explicitly in the instruction.

## Predication CSR

The Predication CSR is a key-value store indicating whether, if a given
destination register (integer or floating-point) is referred to in an
instruction, it is to be predicated. The first entry is whether predication
is enabled. The second entry is whether the register index refers to a
floating-point or an integer register. The third entry is the index
of that register which is to be predicated (if referred to). The fourth entry
is the integer register that is treated as a bitfield, indexable by the
vector element index.

| RegNo | 6      | 5   | (4..0) | (4..0)  |
| ----- | ------ | --- | ------ | ------- |
| r0    | pren0  | i/f | regidx | predidx |
| r1    | pren1  | i/f | regidx | predidx |
| ..    | pren.. | i/f | regidx | predidx |
| r15   | pren15 | i/f | regidx | predidx |

The Predication CSR Table is a key-value store, so implementation-wise
it will be faster to turn the table around (maintain topologically
equivalent state):

    fp_pred_enabled[32];
    int_pred_enabled[32];
    for (i = 0; i < 16; i++)
       if CSRpred[i].pren:
          idx = CSRpred[i].regidx
          predidx = CSRpred[i].predidx
          if CSRpred[i].type == 0: # integer
             int_pred_enabled[idx] = 1
             int_pred_reg[idx] = predidx
          else:
             fp_pred_enabled[idx] = 1
             fp_pred_reg[idx] = predidx

So when an operation is to be predicated, it is the internal state that
is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
pseudo-code for operations is given, where p is the explicit (direct)
reference to the predication register to be used:

    for (int i=0; i<vl; ++i)
        if ([!]preg[p][i])
           (d ? vreg[rd][i] : sreg[rd]) =
            iop(s1 ? vreg[rs1][i] : sreg[rs1],
                s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

This instead becomes an *indirect* reference using the *internal* state
table generated from the Predication CSR key-value store:

    if type(iop) == INT:
        pred_enabled = int_pred_enabled
        preg = int_pred_reg[rd]
    else:
        pred_enabled = fp_pred_enabled
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
        if (pred_enabled[rd] && [!]preg[i])
           (d ? vreg[rd][i] : sreg[rd]) =
            iop(s1 ? vreg[rs1][i] : sreg[rs1],
                s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs

## MAXVECTORDEPTH

MAXVECTORDEPTH is the same concept as MVL in RVV. However in Simple-V,
given that its primary (base, unextended) purpose is for 3D, Video and
other purposes (not requiring supercomputing capability), it makes sense
to limit MAXVECTORDEPTH to the regfile bitwidth (32 for RV32, 64 for RV64
and so on).

The reason for setting this limit is so that predication registers, when
marked as such, may fit into a single register as opposed to fanning out
over several registers. This keeps the implementation a little simpler.
Note that RVV on top of Simple-V may choose to over-ride this decision.

## Vector-length CSRs

Vector lengths are interpreted as meaning "any instruction referring to
r(N) generates implicit identical instructions referring to registers
r(N) through r(N+M-1), where M is the Vector Length". Vector Lengths may
be set to use up to 16 registers in the register file.

One separate CSR table is needed for each of the integer and floating-point
register files:

| RegNo | (3..0) |
| ----- | ------ |
| r0    | vlen0  |
| r1    | vlen1  |
| ..    | vlen.. |
| r31   | vlen31 |

An array of 32 4-bit CSRs is needed (4 bits per register) to indicate
whether a register was, if referred to in any standard instructions,
implicitly to be treated as a vector. A vector length of 1 indicates
that it is to be treated as a scalar. Vector lengths of 0 are reserved.

Internally, implementations may choose to use the non-zero vector length
to set a bit-field per register, to be used in the instruction decode phase.
In this way any standard (current or future) operation involving
register operands may detect if the operation is to be vector-vector,
vector-scalar or scalar-scalar (standard) simply through a single
bit test.
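
A sketch of that decode-phase optimisation (illustrative Python; the
register numbers and lengths are examples only):

    # Fold the per-register vector-length CSRs down to one bit per register,
    # so decode detects vector-vector / vector-scalar / scalar-scalar with
    # a single bit test per operand.
    vlen = [1] * 32
    vlen[4] = 3                        # example: r4 is a vector of length 3

    vectorised = 0
    for r, l in enumerate(vlen):
        if l > 1:
            vectorised |= 1 << r

    def is_vector(regnum):
        return (vectorised >> regnum) & 1

    assert is_vector(4) and not is_vector(2)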

Note that when using the "vsetl rs1, rs2" instruction (caveat: when the
bitwidth is specifically not set) it becomes:

    CSRvlength = MIN(MIN(CSRvectorlen[rs1], MAXVECTORDEPTH), rs2)

This is in contrast to RVV:

    CSRvlength = MIN(MIN(rs1, MAXVECTORDEPTH), rs2)
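
A side-by-side sketch of the two semantics (illustrative Python;
MAXVECTORDEPTH as defined in the previous section):

    MAXVECTORDEPTH = 64                # RV64 example

    def vsetl_simple_v(csr_vectorlen_rs1, rs2):
        # Simple-V: the CSR-*configured* length of rs1 caps the result
        return min(min(csr_vectorlen_rs1, MAXVECTORDEPTH), rs2)

    def vsetl_rvv(rs1, rs2):
        # RVV: the *value held in* rs1 caps the result
        return min(min(rs1, MAXVECTORDEPTH), rs2)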

## Element (SIMD) bitwidth CSRs

Element bitwidths may be specified with a per-register CSR, and indicate
how a register (integer or floating-point) is to be subdivided.

| RegNo | (2..0) |
| ----- | ------ |
| r0    | vew0   |
| r1    | vew1   |
| ..    | vew..  |
| r31   | vew31  |

vew may be one of the following (giving a table "bytestable", used below):

| vew | bitwidth |
| --- | -------- |
| 000 | default  |
| 001 | 8        |
| 010 | 16       |
| 011 | 32       |
| 100 | 64       |
| 101 | 128      |
| 110 | rsvd     |
| 111 | rsvd     |

Extending this table (with extra bits) is covered in the section
"Implementing RVV on top of Simple-V".

Note that when using the "vsetl rs1, rs2" instruction, taking bitwidth
into account, it becomes:

    vew = CSRbitwidth[rs1]
    if (vew == 0)
        bytesperreg = (XLEN/8) # or FLEN as appropriate
    else:
        bytesperreg = bytestable[vew] # 1 2 4 8 16
    simdmult = (XLEN/8) / bytesperreg # or FLEN as appropriate
    vlen = CSRvectorlen[rs1] * simdmult
    CSRvlength = MIN(MIN(vlen, MAXVECTORDEPTH), rs2)

The reason for multiplying the vector length by the number of SIMD elements
(in each individual register) is so that each SIMD element may optionally be
predicated.

An example of how to subdivide the register file when bitwidth != default
is given in the section "Bitwidth Virtual Register Reordering".

# Exceptions

> What does an ADD of two different-sized vectors do in simple-V?

* if the two source operands are not the same length, throw an exception.
* if the destination operand is also a vector, and the source is longer
than the destination, throw an exception.
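
As a sketch (illustrative Python) of those two rules:

    def check_add_lengths(vlen_rd, vlen_rs1, vlen_rs2):
        # rule 1: the two source vector lengths must match
        if vlen_rs1 != vlen_rs2:
            raise Exception("illegal: source vector lengths differ")
        # rule 2: a vector source must not be longer than a vector destination
        if vlen_rd > 1 and vlen_rs1 > vlen_rd:
            raise Exception("illegal: source longer than vector destination")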

> And what about instructions like JALR?
> What does jumping to a vector do?

* Throw an exception. Whether that actually results in spawning threads
as part of the trap-handling remains to be seen.

# Comparison of "Traditional" SIMD, Alt-RVP, Simple-V and RVV Proposals <a name="parallelism_comparisons"></a>

This section compares the various parallelism proposals as they stand,
including traditional SIMD, in terms of features, ease of implementation,
complexity, flexibility, and die area.

## [[alt_rvp]]

Primary benefit of Alt-RVP is the simplicity with which parallelism
may be introduced (effective multiplication of regfiles and associated ALUs).

* plus: the simplicity of the lanes (combined with the regularity of
allocating identical opcodes to multiple independent registers) meaning
that SRAM or 2R1W can be used for the entire regfile (potentially).
* minus: a more complex instruction set where the parallelism is much
more explicitly and directly specified in the instruction and
* minus: if you *don't* have an explicit instruction (opcode) and you
need one, the only place it can be added is... in the vector unit and
* minus: opcode functions (and associated ALUs) duplicated in Alt-RVP are
not useable or accessible in other Extensions.
* plus-and-minus: Lanes may be utilised for high-speed context-switching
but with the down-side that they're an all-or-nothing part of the Extension.
No Alt-RVP: no fast register-bank switching.
* plus: Lane-switching would mean that complex operations not suited to
parallelisation can be carried out, followed by further parallel Lane-based
work, without moving register contents down to memory (and back)
* minus: Access to registers across multiple lanes is challenging. "Solution"
is to drop data into memory and immediately back in again (like MMX).

## Simple-V

Primary benefit of Simple-V is the OO abstraction of parallel principles
from actual (internal) parallel hardware. It's an API in effect that's
designed to be slotted in to an existing implementation (just after
instruction decode) with minimum disruption and effort.

* minus: the complexity of having to use register renames, OoO, VLIW,
register file cacheing, all of which has been done before but is a
pain
* plus: transparent re-use of existing opcodes as-is just indirectly
saying "this register's now a vector" which
* plus: means that future instructions also get to be inherently
parallelised because there's no "separate vector opcodes"
* plus: Compressed instructions may also be (indirectly) parallelised
* minus: the indirect nature of Simple-V means that setup (setting
a CSR register to indicate vector length, a separate one to indicate
that it is a predicate register and so on) means a little more setup
time than Alt-RVP or RVV's "direct and within the (longer) instruction"
approach.
* plus: shared register file meaning that, like Alt-RVP, complex
operations not suited to parallelisation may be carried out interleaved
between parallelised instructions *without* requiring data to be dropped
down to memory and back (into a separate vectorised register engine).
* plus-and-maybe-minus: re-use of integer and floating-point 32-wide register
files means that huge parallel workloads would use up considerable
chunks of the register file. However in the case of RV64 and 32-bit
operations, that effectively means 64 slots are available for parallel
operations.
* plus: inherent parallelism (actual parallel ALUs) doesn't actually need to
be added, yet the instruction opcodes remain unchanged (and still appear
to be parallel). Consistent "API" regardless of actual internal parallelism:
even an in-order single-issue implementation with a single ALU would still
appear to have parallel vectorisation.
* hard-to-judge: if actual inherent underlying ALU parallelism is added it's
hard to say if there would be pluses or minuses (on die area). At worst it
would be "no worse" than existing register renaming, OoO, VLIW and register
file cacheing schemes.

## RVV (as it stands, Draft 0.4 Section 17, RISC-V ISA V2.3-Draft)

RVV is extremely well-designed and has some amazing features, including
2D reorganisation of memory through LOAD/STORE "strides".

* plus: regular predictable workload means that implementations may
streamline effects on L1/L2 Cache.
* plus: regular and clear parallel workload also means that lanes
(similar to Alt-RVP) may be used as an implementation detail,
using either SRAM or 2R1W registers.
* plus: separate engine with no impact on the rest of an implementation
* minus: separate *complex* engine where no RTL (ALUs, Pipeline stages) reuse
is really feasible.
* minus: no ISA abstraction or re-use either: additions to other Extensions
do not gain parallelism, resulting in prolific duplication of functionality
inside RVV *and out*.
* minus: when operations require a different approach (scalar operations
using the standard integer or FP regfile) an entire vector must be
transferred out to memory, into standard regfiles, then back to memory,
then back to the vector unit, this to occur potentially multiple times.
* minus: will never fit into Compressed instruction space (as-is. May
be able to do so if "indirect" features of Simple-V are partially adopted).
* plus-and-slight-minus: extended variants may address up to 256
vectorised registers (requires 48/64-bit opcodes to do it).
* minus-and-partial-plus: separate engine plus complexity increases
implementation time and die area, meaning that adoption is likely only
to be in high-performance specialist supercomputing (where it will
be absolutely superb).

## Traditional SIMD

The only really good things about SIMD are how easy it is to implement and
get good performance. Unfortunately that makes it quite seductive...

* plus: really straightforward, ALU basically does several packed operations
at once. Parallelism is inherent at the ALU, making the addition of
SIMD-style parallelism an easy decision that has zero significant impact
on the rest of any given architectural design and layout.
* plus (continuation): SIMD in simple in-order single-issue designs can
therefore result in superb throughput, easily achieved even with a very
simple execution model.
* minus: ridiculously complex setup and corner-cases that disproportionately
increase instruction count on what would otherwise be a "simple loop",
should the number of elements in an array not happen to exactly match
the SIMD group width.
* minus: getting data usefully out of registers (if separate regfiles
are used) means outputting to memory and back.
* minus: quite a lot of supplementary instructions for bit-level manipulation
are needed in order to efficiently extract (or prepare) SIMD operands.
* minus: MASSIVE proliferation of ISA both in terms of opcodes in one
dimension and parallelism (width): an at least O(N^2) and quite probably
O(N^3) ISA proliferation that often results in several thousand
separate instructions, all requiring separate and distinct corner-case
algorithms!
* minus: EVEN BIGGER proliferation of SIMD ISA if the functionality of
8, 16, 32 or 64-bit reordering is built-in to the SIMD instruction.
For example: add (high|low) 16-bits of r1 to (low|high) of r2 requires
four separate and distinct instructions: one for (r1:low r2:high),
one for (r1:high r2:low), one for (r1:high r2:high) and one for
(r1:low r2:low) *per function*.
* minus: EVEN BIGGER proliferation of SIMD ISA if there is a mismatch
between operand and result bit-widths. In combination with high/low
proliferation the situation is made even worse.
* minor-saving-grace: some implementations *may* have predication masks
that allow control over individual elements within the SIMD block.

# Comparison *to* Traditional SIMD: Alt-RVP, Simple-V and RVV Proposals <a name="simd_comparison"></a>

This section compares the various parallelism proposals as they stand,
*against* traditional SIMD as opposed to *alongside* SIMD. In other words,
the question is asked "How can each of the proposals effectively implement
(or replace) SIMD, and how effective would they be"?

## [[alt_rvp]]

* Alt-RVP would not actually replace SIMD but would augment it: just as with
a SIMD architecture where the ALU becomes responsible for the parallelism,
Alt-RVP ALUs would likewise be so responsible... with *additional*
(lane-based) parallelism on top.
* Thus at least some of the downsides of SIMD ISA O(N^3) proliferation by
at least one dimension are avoided (architectural upgrades introducing
128-bit then 256-bit then 512-bit variants of the exact same 64-bit
SIMD block)
* Thus, unfortunately, Alt-RVP would suffer the same inherent proliferation
of instructions as SIMD, albeit not quite as badly (due to Lanes).
* In the same discussion for Alt-RVP, an additional proposal was made to
be able to subdivide the bits of each register lane (columns) down into
arbitrary bit-lengths (RGB 565 for example).
* A recommendation was given instead to make the subdivisions down to 32-bit,
16-bit or even 8-bit, effectively dividing the register file into
Lane0(H), Lane0(L), Lane1(H) ... LaneN(L) or further. If inter-lane
"swapping" instructions were then introduced, some of the disadvantages
of SIMD could be mitigated.

## RVV

* RVV is designed to replace SIMD with a better paradigm: arbitrary-length
parallelism.
* However whilst SIMD is usually designed for single-issue in-order simple
DSPs with a focus on Multimedia (Audio, Video and Image processing),
RVV's primary focus appears to be on Supercomputing: optimisation of
mathematical operations that fit into the OpenCL space.
* Adding functions (operations) that would normally fit (in parallel)
into a SIMD instruction requires an equivalent to be added to the
RVV Extension, if one does not exist. Given the specialist nature of
some SIMD instructions (8-bit or 16-bit saturated or halving add),
this possibility seems extremely unlikely to occur, even if the
implementation overhead of RVV were acceptable (compared to
normal SIMD/DSP-style single-issue in-order simplicity).

## Simple-V

* Simple-V borrows hugely from RVV as it is intended to be easy to
topologically transplant every single instruction from RVV (as
designed) into Simple-V equivalents, with *zero loss of functionality
or capability*.
* With the "parallelism" abstracted out, a hypothetical SIMD-less "DSP"
Extension which contained the basic primitives (non-parallelised
8, 16 or 32-bit SIMD operations) would have those primitives inherently
*become* parallel, automatically.
* Additionally, standard operations (ADD, MUL) that would normally have
to have special SIMD-parallel opcodes added need no longer have *any*
of the length-dependent variants (2 of 32-bit ADDs in a 64-bit register,
4 of 32-bit ADDs in a 128-bit register) because Simple-V takes the
*standard* RV opcodes (present and future) and automatically parallelises
them.
* By inheriting the RVV feature of arbitrary vector-length, then just as
with RVV the corner-cases and ISA proliferation of SIMD is avoided.
* Whilst not entirely finalised, registers are expected to be
capable of being subdivided down to an implementor-chosen bitwidth
in the underlying hardware (r1 becomes r1[31..24] r1[23..16] r1[15..8]
and r1[7..0], or just r1[31..16] r1[15..0]) where implementors can
choose to have separate independent 8-bit ALUs or dual-SIMD 16-bit
ALUs that perform twin 8-bit operations as they see fit, or anything
else including no subdivisions at all.
* Even though implementors have that choice even to have full 64-bit
(with RV64) SIMD, they *must* provide predication that transparently
switches off appropriate units on the last loop, thus neatly fitting
underlying SIMD ALU implementations *into* the arbitrary vector-length
RVV paradigm, keeping the uniform consistent API that is a key strategic
feature of Simple-V.
* With Simple-V fitting into the standard register files, certain classes
of SIMD operations such as High/Low arithmetic (r1[31..16] + r2[15..0])
can be done by applying *Parallelised* Bit-manipulation operations
followed by parallelised *straight* versions of element-to-element
arithmetic operations, even if the bit-manipulation operations require
changing the bitwidth of the "vectors" to do so. Predication can
be utilised to skip high words (or low words) in source or destination.
* In essence, the key downside of SIMD (massive duplication of
identical functions over time as an architecture evolves from 32-bit
wide SIMD all the way up to 512-bit) is avoided with Simple-V, through
vector-style parallelism being dropped on top of 8-bit or 16-bit
operations, all the while keeping a consistent ISA-level "API" irrespective
of implementor design choices (or indeed actual implementations).

# Implementing V on top of Simple-V

* Number of Offset CSRs extends from 2
* Extra register file: vector-file
* Setup of Vector length and bitwidth CSRs now can specify vector-file
as well as integer or float file.
* Extend CSR tables (bitwidth) with extra bits
* TODO

# Implementing P (renamed to DSP) on top of Simple-V

* Implementors indicate chosen bitwidth support in Vector-bitwidth CSR
(caveat: anything not specified drops through to software-emulation / traps)
* TODO

# Appendix

## V-Extension to Simple-V Comparative Analysis

This section has been moved to its own page [[v_comparative_analysis]]

## P-Ext ISA

This section has been moved to its own page [[p_comparative_analysis]]

## Example of vector / vector, vector / scalar, scalar / scalar => vector add

    register CSRvectorlen[XLEN][4]; # not quite decided yet about this one...
    register CSRpredicate[XLEN][4]; # 2^4 is max vector length
    register CSRreg_is_vectorised[XLEN]; # just for fun support scalars as well
    register x[32][XLEN];

    function op_add(rd, rs1, rs2, predr)
    {
       /* note that this is ADD, not PADD */
       int i, id, irs1, irs2;
       # checks CSRvectorlen[rd] == CSRvectorlen[rs] etc. ignored
       # also destination makes no sense as a scalar but what the hell...
       for (i = 0, id=0, irs1=0, irs2=0; i<CSRvectorlen[rd]; i++)
       {
          if (CSRpredicate[predr][i]) # i *think* this is right...
             x[rd+id] <= x[rs1+irs1] + x[rs2+irs2];
          # now increment the idxs
          if (CSRreg_is_vectorised[rd]) # bitfield check rd, scalar/vector?
             id += 1;
          if (CSRreg_is_vectorised[rs1]) # bitfield check rs1, scalar/vector?
             irs1 += 1;
          if (CSRreg_is_vectorised[rs2]) # bitfield check rs2, scalar/vector?
             irs2 += 1;
       }
    }

## Retro-fitting Predication into branch-explicit ISA

One of the goals of this parallelism proposal is to avoid instruction
duplication. However, with the base ISA having been designed explicitly
to *avoid* condition-codes entirely, shoe-horning predication into it
becomes quite challenging.

However what if all branch instructions, if referencing a vectorised
register, were instead given *completely new analogous meanings* that
resulted in a parallel bit-wise predication register being set? This
would have to be done for both C.BEQZ and C.BNEZ, as well as BEQ, BNE,
BLT and BGE.

We might imagine that FEQ, FLT and FLE would also need to be converted,
however these are effectively *already* in the precise form needed and
do not need to be converted *at all*! The difference is that FEQ, FLT
and FLE *specifically* write a 1 to an integer register if the condition
holds, and 0 if not. All that needs to be done here is to say, "if
the integer register is tagged with a bit that says it is a predication
register, the **bit** in the integer register is set based on the
current vector index" instead.
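
A sketch of that reinterpretation (illustrative Python, using FLT; the
"tagged as predication register" lookup is assumed to come from the
Predication CSR):

    # FLT rd, rs1, rs2 normally writes 1 or 0 into integer register rd.
    # If rd is tagged as a predication register, the result is instead
    # written as a single *bit* at the current vector element index.
    int_regs = [0] * 32
    is_pred_reg = [False] * 32
    is_pred_reg[5] = True               # example: r5 tagged via Predication CSR

    def flt(rd, a, b, element_index):
        result = 1 if a < b else 0
        if is_pred_reg[rd]:
            int_regs[rd] &= ~(1 << element_index)   # clear bit at index
            int_regs[rd] |= result << element_index # set it from the compare
        else:
            int_regs[rd] = result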

There is, in the standard Conditional Branch instruction, more than
adequate space to interpret it in a similar fashion:

[[!table  data="""
31      |30 ..... 25 |24 ... 20 | 19 ... 15 | 14 ...... 12 | 11 ....... 8 | 7       | 6 ....... 0 |
imm[12] | imm[10:5]  | rs2      | rs1       | funct3       | imm[4:1]     | imm[11] | opcode      |
1       | 6          | 5        | 5         | 3            | 4            | 1       | 7           |
offset[12,10:5]    || src2      | src1      | BEQ          | offset[11,4:1]        || BRANCH      |
"""]]

This would become:

[[!table  data="""
31      | 30 .. 25 |24 ... 20 | 19 15 | 14  12 | 11 .. 8 | 7       | 6 ... 0 |
imm[12] | imm[10:5]| rs2      | rs1   | funct3 | imm[4:1]| imm[11] | opcode  |
1       | 6        | 5        | 5     | 3      | 4       | 1       | 7       |
reserved         || src2      | src1  | BEQ    | predicate rs3    || BRANCH  |
"""]]

Similarly the C.BEQZ and C.BNEZ instruction format may be retro-fitted,
with the interesting side-effect that there is space within what is presently
the "immediate offset" field to reinterpret it so as not only to add in a bit
field distinguishing floating-point compare from integer compare, and a
second source register, but also to use some of the bits as a predication
target as well.

[[!table  data="""
15 ...... 13 | 12 ...........  10 | 9.....  7 | 6 ................. 2 | 1 .. 0 |
funct3       | imm                | rs10      | imm                   | op     |
3            | 3                  | 3         | 5                     | 2      |
C.BEQZ       | offset[8,4:3]      | src       | offset[7:6,2:1,5]     | C1     |
"""]]

Now uses the CS format:

[[!table  data="""
15 ...... 13 | 12 ...........  10 | 9.....  7 | 6 .. 5 | 4......... 2 | 1 .. 0 |
funct3       | imm                | rs10      | imm    |              | op     |
3            | 3                  | 3         | 2      | 3            | 2      |
C.BEQZ       | predicate rs3      | src1      | I/F B  | src2         | C1     |
"""]]

Bit 6 would be decoded as "operation refers to Integer or Float", including
interpreting src1 and src2 accordingly as outlined in Table 12.2 of the
"C" Standard, version 2.0,
whilst Bit 5 would allow the operation to be extended, in combination with
funct3 = 110 or 111: a combination of four distinct (predicated) comparison
operators. In both floating-point and integer cases those could be
EQ/NEQ/LT/LE (with GT and GE being synthesised by inverting src1 and src2).

## Register reordering <a name="register_reordering"></a>

### Register File

| Reg Num | Bits |
| ------- | ---- |
| r0 | (31..0) |
| r1 | (31..0) |
| r2 | (31..0) |
| r3 | (31..0) |
| r4 | (31..0) |
| r5 | (31..0) |
| r6 | (31..0) |
| r7 | (31..0) |
| .. | (31..0) |
| r31| (31..0) |

### Vectorised CSR

May not be an actual CSR: may be generated from the Vector Length CSR:
single-bit is less burdensome on the instruction decode phase.

| 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
| - | - | - | - | - | - | - | - |
| 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |

### Vector Length CSR

| Reg Num | (3..0) |
| ------- | ------ |
| r0 | 2 |
| r1 | 0 |
| r2 | 1 |
| r3 | 1 |
| r4 | 3 |
| r5 | 0 |
| r6 | 0 |
| r7 | 1 |

### Virtual Register Reordering

This example assumes the above Vector Length CSR table:

| Reg Num | Bits (0) | Bits (1) | Bits (2) |
| ------- | -------- | -------- | -------- |
| r0 | (31..0) | (31..0) |          |
| r2 | (31..0) |          |          |
| r3 | (31..0) |          |          |
| r4 | (31..0) | (31..0) | (31..0) |
| r7 | (31..0) |          |          |

### Bitwidth Virtual Register Reordering

This example goes a little further and illustrates the effect that a
bitwidth CSR has been set on a register. Preconditions:

* RV32 assumed
* CSRintbitwidth[2] = 010 # integer r2 is 16-bit
* CSRintvlength[2] = 3 # integer r2 is a vector of length 3
* vsetl rs1, 5 # set the vector length to 5

This is interpreted as follows:

* Given that the context is RV32, ELEN=32.
* With ELEN=32 and bitwidth=16, the number of SIMD elements is 2
* Therefore the actual vector length is up to *six* elements

So when using an operation that uses r2 as a source (or destination)
the operation is carried out as follows:

* 16-bit operation on r2(15..0) - vector element index 0
* 16-bit operation on r2(31..16) - vector element index 1
* 16-bit operation on r3(15..0) - vector element index 2
* 16-bit operation on r3(31..16) - vector element index 3
* 16-bit operation on r4(15..0) - vector element index 4
* 16-bit operation on r4(31..16) **NOT** carried out due to length being 5

Predication has been left out of the above example for simplicity, however
predication is ANDed with the latter stages (vsetl not equal to maximum
capacity).

Note also that it is entirely an implementor's choice as to whether to have
actual separate ALUs down to the minimum bitwidth, or whether to have something
more akin to traditional SIMD (at any level of subdivision: 8-bit SIMD
operations carried out 32-bits at a time is perfectly acceptable, as is
8-bit SIMD operations carried out 16-bits at a time requiring two ALUs).
Regardless of the internal parallelism choice, *predication must
still be respected*, making Simple-V in effect the "consistent public API".

vew may be one of the following (giving a table "bytestable", used below):

| vew | bitwidth |
| --- | -------- |
| 000 | default  |
| 001 | 8        |
| 010 | 16       |
| 011 | 32       |
| 100 | 64       |
| 101 | 128      |
| 110 | rsvd     |
| 111 | rsvd     |

Pseudocode for vector length taking CSR SIMD-bitwidth into account:

    vew = CSRbitwidth[rs1]
    if (vew == 0)
        bytesperreg = (XLEN/8) # or FLEN as appropriate
    else:
        bytesperreg = bytestable[vew] # 1 2 4 8 16
    simdmult = (XLEN/8) / bytesperreg # or FLEN as appropriate
    vlen = CSRvectorlen[rs1] * simdmult

To index an element in a register rnum where the vector element index is i:

    function regoffs(rnum, i):
        regidx = floor(i / simdmult)           # integer-div rounded down
        byteidx = i % simdmult                 # integer-remainder
        elwidth = bytesperreg * 8              # element width in bits
        return rnum + regidx,                  # actual real register
               byteidx * elwidth,              # low bit of the element
               byteidx * elwidth + elwidth - 1 # high bit of the element
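
A worked check of regoffs under the preconditions from the previous
section (RV32, 16-bit elements, so bytesperreg=2 and simdmult=2;
illustrative Python):

    bytesperreg = 2                     # 16-bit elements
    simdmult = 2                        # two elements per 32-bit register

    def regoffs(rnum, i):
        regidx = i // simdmult
        byteidx = i % simdmult
        elwidth = bytesperreg * 8
        return rnum + regidx, byteidx * elwidth, byteidx * elwidth + elwidth - 1

    assert regoffs(2, 0) == (2, 0, 15)     # r2(15..0),  element 0
    assert regoffs(2, 1) == (2, 16, 31)    # r2(31..16), element 1
    assert regoffs(2, 2) == (3, 0, 15)     # r3(15..0),  element 2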

### Example Instruction translation: <a name="example_translation"></a>

The instruction "ADD r2 r4 r4" would result in three instructions being
generated and placed into the FIFO:

* ADD r2 r4 r4
* ADD r2 r5 r5
* ADD r2 r6 r6
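
A sketch of that expansion (illustrative Python, following the earlier
Vector Length CSR table in which r2 is scalar and r4 is a vector of
length 3):

    # Consolidated issue: one vectorised ADD expands into scalar ADDs
    # pushed in sequence onto the issue FIFO.
    vlen = {2: 1, 4: 3}                 # from the Vector Length CSR table

    def expand(op, rd, rs1, rs2):
        fifo = []
        n = max(vlen.get(r, 1) for r in (rd, rs1, rs2))
        for i in range(n):
            fifo.append((op,
                         rd + (i if vlen.get(rd, 1) > 1 else 0),
                         rs1 + (i if vlen.get(rs1, 1) > 1 else 0),
                         rs2 + (i if vlen.get(rs2, 1) > 1 else 0)))
        return fifo

    assert expand("ADD", 2, 4, 4) == [
        ("ADD", 2, 4, 4), ("ADD", 2, 5, 5), ("ADD", 2, 6, 6)]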

### Insights

SIMD register file splitting still to consider. For RV64, the benefits of doubling
(quadrupling in the case of Half-Precision IEEE754 FP) the apparent
size of the floating point register file to 64 (128 in the case of HP)
seem pretty clear and worth the complexity.

64 virtual 32-bit F.P. registers, given that 32-bit FP operations are
done on 64-bit registers, are not so conceptually difficult. This may even
be achieved by *actually* splitting the regfile into 64 virtual 32-bit
registers such that a 64-bit FP scalar operation is dropped into (r0.H
r0.L) tuples. Implementation is therefore hidden through register renaming.

Implementations intending to introduce VLIW, OoO and parallelism
(even without Simple-V) would then find that the instructions are
generated quicker (or in a more compact fashion that is less heavy
on caches). Interestingly we observe then that Simple-V is about
"consolidation of instruction generation", where actual parallelism
of underlying hardware is an implementor-choice that could just as
equally be applied *without* Simple-V even being implemented.

## Analysis of CSR decoding on latency <a name="csr_decoding_analysis"></a>

It could indeed have been logically deduced (or expected) that there
would be additional decode latency in this proposal, because overloading
the opcodes to have different meanings guarantees that there is some
state, somewhere, directly related to registers.

There are several cases:

* All operands vector-length=1 (scalars), all operands
packed-bitwidth="default": instructions are passed through direct as if
Simple-V did not exist. Simple-V is, in effect, completely disabled.
* At least one operand vector-length > 1, all operands
packed-bitwidth="default": any parallel vector ALUs placed on "alert",
virtual parallelism looping may be activated.
* All operands vector-length=1 (scalars), at least one
operand packed-bitwidth != default: degenerate case of SIMD,
implementation-specific complexity here (packed decode before ALUs or
*IN* ALUs)
* At least one operand vector-length > 1, at least one operand
packed-bitwidth != default: parallel vector ALUs (if any)
placed on "alert", virtual parallelism looping may be activated,
implementation-specific SIMD complexity kicks in (packed decode before
ALUs or *IN* ALUs).
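
A sketch of the pass-through test for the first case (illustrative Python):

    # Decode fast-path: if every operand is scalar (vlen == 1) and uses the
    # default packed bitwidth (encoding 0), Simple-V is effectively disabled
    # and the instruction may bypass all vector decode logic.
    def simple_v_active(operands, vlen, bitwidth):
        return any(vlen[r] > 1 or bitwidth[r] != 0 for r in operands)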

Bear in mind that the proposal includes that the decision whether
to parallelise in hardware or whether to virtual-parallelise (to
dramatically simplify compilers and also not to run into the SIMD
instruction proliferation nightmare) *or* a transparent combination
of both, be done on a *per-operand basis*, so that implementors can
specifically choose to create an application-optimised implementation
that they believe (or know) will sell extremely well, without having
"Extra Standards-Mandated Baggage" that would otherwise blow their area
or power budget completely out the window.

Additionally, two possible CSR schemes have been proposed, in order to
greatly reduce CSR space:

* per-register CSRs (vector-length and packed-bitwidth)
* a smaller number of CSRs with the same information but with an *INDEX*
specifying WHICH register in one of three regfiles (vector, fp, int)
the length and bitwidth applies to.

(See "CSR vector-length and CSR SIMD packed-bitwidth" section for details)

In addition, LOAD/STORE has its own associated proposed CSRs that
mirror the STRIDE (but not yet STRIDE-SEGMENT?) functionality of
V (and Hwacha).

Also bear in mind that, for reasons of simplicity for implementors,
I was coming round to the idea of permitting implementors to choose
exactly which bitwidths they would like to support in hardware and which
to allow to fall through to software-trap emulation.

So the question boils down to:

* whether either (or both) of those two CSR schemes have significant
latency that could even potentially require an extra pipeline decode stage
* whether there are implementations that can be thought of which do *not*
introduce significant latency
* whether it is possible to explicitly (through quite simply
disabling Simple-V-Ext) or implicitly (detect the case all-vlens=1,
all-simd-bitwidths=default) switch OFF any decoding, perhaps even to
the extreme of skipping an entire pipeline stage (if one is needed)
* whether packed bitwidth and associated regfile splitting is so complex
that it should definitely, definitely be made mandatory that implementors
move regfile splitting into the ALU, and what are the implications of that
* whether, even if that *is* made mandatory, software-trapped
"unsupported bitwidths" are still desirable, on the basis that SIMD is such
a complete nightmare that *even* having a software implementation is
better, making Simple-V have more in common with a software API than
anything else.

Whilst the above may seem to be severe minuses, there are some strong
pluses:

* Significant reduction of V's opcode space: over 85%.
* Smaller reduction of P's opcode space: around 10%.
* The potential to use Compressed instructions in both Vector and SIMD
due to the overloading of register meaning (implicit vectorisation,
implicit packing)
* Not only present but also future extensions automatically gain parallelism.
* Already mentioned but worth emphasising: the simplification to compiler
writers and assembly-level writers of having the same consistent ISA
regardless of whether the internal level of parallelism (number of
parallel ALUs) is only equal to one ("virtual" parallelism), or is
greater than one, should not be underestimated.

## Reducing Register Bank porting

This looks quite reasonable:
<https://www.princeton.edu/~rblee/ELE572Papers/MultiBankRegFile_ISCA2000.pdf>

The main details are outlined on page 4. They propose a 2-level register
cache hierarchy, note that registers are typically only read once, that
you never write back from upper to lower cache level but always go in a
cycle lower -> upper -> ALU -> lower, and at the top of page 5 propose
a scheme where you look ahead by only 2 instructions to determine which
registers to bring into the cache.

The nice thing about a vector architecture is that you *know* that
*even more* registers are going to be pulled in: Hwacha uses this fact
to optimise L1/L2 cache-line usage (avoid thrashing), strangely enough
by *introducing* deliberate latency into the execution phase.

# References

* SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
* Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
* Recommendation by Jacob Bachmeyer to make zero-overhead loop an
"implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
* Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
* First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
* B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>
* Broadcom VideoCore-IV <https://docs.broadcom.com/docs/12358545>
Figure 2 P17 and Section 3 on P16.
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-262.html>
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-263.html>
* Vector Workshop <http://riscv.org/wp-content/uploads/2015/06/riscv-vector-workshop-june2015.pdf>
* Predication <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/XoP4BfYSLXA>
* Branch Divergence <https://jbush001.github.io/2014/12/07/branch-divergence-in-parallel-kernels.html>
* Life of Triangles (3D) <https://jbush001.github.io/2016/02/27/life-of-triangle.html>
* Videocore-IV <https://github.com/hermanhermitage/videocoreiv/wiki/VideoCore-IV-3d-Graphics-Pipeline>
* Discussion proposing CSRs that change ISA definition
<https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/InzQ1wr_3Ak>
* Zero-overhead loops <https://pdfs.semanticscholar.org/dbaa/66985cc730d4b44d79f519e96ec9c43ab5b7.pdf>
* Multi-ported VLIW Register File Implementation <https://ce-publications.et.tudelft.nl/publications/1517_multiple_contexts_in_a_multiported_vliw_register_file_impl.pdf>
* Fast context save/restore proposal <https://groups.google.com/a/groups.riscv.org/d/msgid/isa-dev/57F823FA.6030701%40gmail.com>
* Register File Bank Cacheing <https://www.princeton.edu/~rblee/ELE572Papers/MultiBankRegFile_ISCA2000.pdf>