1 # Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal
2
3 Key insight: Simple-V is intended as an abstraction layer to provide
4 a consistent "API" to parallelisation of existing *and future* operations.
5 *Actual* internal hardware-level parallelism is *not* required, such
6 that Simple-V may be viewed as providing a "compact" or "consolidated"
7 means of issuing multiple near-identical arithmetic instructions to an
instruction queue (FIFO), pending execution.
9
10 *Actual* parallelism, if added independently of Simple-V in the form
11 of Out-of-order restructuring (including parallel ALU lanes) or VLIW
12 implementations, or SIMD, or anything else, would then benefit *if*
13 Simple-V was added on top.
14
15 [[!toc ]]
16
17 # Introduction
18
19 This proposal exists so as to be able to satisfy several disparate
20 requirements: power-conscious, area-conscious, and performance-conscious
21 designs all pull an ISA and its implementation in different conflicting
22 directions, as do the specific intended uses for any given implementation.
23
24 Additionally, the existing P (SIMD) proposal and the V (Vector) proposals,
25 whilst each extremely powerful in their own right and clearly desirable,
26 are also:
27
* Clearly independent in their origins (AndesStar v3 and Cray respectively)
29 so need work to adapt to the RISC-V ethos and paradigm
30 * Are sufficiently large so as to make adoption (and exploration for
31 analysis and review purposes) prohibitively expensive
32 * Both contain partial duplication of pre-existing RISC-V instructions
33 (an undesirable characteristic)
34 * Both have independent and disparate methods for introducing parallelism
35 at the instruction level.
36 * Both require that their respective parallelism paradigm be implemented
37 along-side and integral to their respective functionality *or not at all*.
38 * Both independently have methods for introducing parallelism that
39 could, if separated, benefit
40 *other areas of RISC-V not just DSP or Floating-point respectively*.
41
42 Therefore it makes a huge amount of sense to have a means and method
43 of introducing instruction parallelism in a flexible way that provides
44 implementors with the option to choose exactly where they wish to offer
45 performance improvements and where they wish to optimise for power
46 and/or area (and if that can be offered even on a per-operation basis that
47 would provide even more flexibility).
48
49 Additionally it makes sense to *split out* the parallelism inherent within
50 each of P and V, and to see if each of P and V then, in *combination* with
51 a "best-of-both" parallelism extension, could be added on *on top* of
52 this proposal, to topologically provide the exact same functionality of
53 each of P and V. Each of P and V then can focus on providing the best
54 operations possible for their respective target areas, without being
55 hugely concerned about the actual parallelism.
56
57 Furthermore, an additional goal of this proposal is to reduce the number
58 of opcodes utilised by each of P and V as they currently stand, leveraging
59 existing RISC-V opcodes where possible, and also potentially allowing
60 P and V to make use of Compressed Instructions as a result.
61
62 **TODO**: propose overflow registers be actually one of the integer regs
63 (flowing to multiple regs).
64
**TODO**: propose "mask" (predication) registers likewise. The combination
with standard RV instructions and overflow registers is extremely powerful:
see Aspex ASP.
68
69 # Analysis and discussion of Vector vs SIMD
70
71 There are five combined areas between the two proposals that help with
72 parallelism without over-burdening the ISA with a huge proliferation of
73 instructions:
74
75 * Fixed vs variable parallelism (fixed or variable "M" in SIMD)
76 * Implicit vs fixed instruction bit-width (integral to instruction or not)
77 * Implicit vs explicit type-conversion (compounded on bit-width)
78 * Implicit vs explicit inner loops.
79 * Masks / tagging (selecting/preventing certain indexed elements from execution)
80
81 The pros and cons of each are discussed and analysed below.
82
83 ## Fixed vs variable parallelism length
84
85 In David Patterson and Andrew Waterman's analysis of SIMD and Vector
86 ISAs, the analysis comes out clearly in favour of (effectively) variable
87 length SIMD. As SIMD is a fixed width, typically 4, 8 or in extreme cases
88 16 or 32 simultaneous operations, the setup, teardown and corner-cases of SIMD
89 are extremely burdensome except for applications whose requirements
90 *specifically* match the *precise and exact* depth of the SIMD engine.
91
92 Thus, SIMD, no matter what width is chosen, is never going to be acceptable
93 for general-purpose computation, and in the context of developing a
94 general-purpose ISA, is never going to satisfy 100 percent of implementors.
95
96 To explain this further: for increased workloads over time, as the
97 performance requirements increase for new target markets, implementors
98 choose to extend the SIMD width (so as to again avoid mixing parallelism
99 into the instruction issue phases: the primary "simplicity" benefit of
100 SIMD in the first place), with the result that the entire opcode space
101 effectively doubles with each new SIMD width that's added to the ISA.
102
103 That basically leaves "variable-length vector" as the clear *general-purpose*
104 winner, at least in terms of greatly simplifying the instruction set,
105 reducing the number of instructions required for any given task, and thus
106 reducing power consumption for the same.
107
108 ## Implicit vs fixed instruction bit-width
109
110 SIMD again has a severe disadvantage here, over Vector: huge proliferation
111 of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and
112 have to then have operations *for each and between each*. It gets very
113 messy, very quickly.
114
115 The V-Extension on the other hand proposes to set the bit-width of
116 future instructions on a per-register basis, such that subsequent instructions
117 involving that register are *implicitly* of that particular bit-width until
118 otherwise changed or reset.
119
120 This has some extremely useful properties, without being particularly
121 burdensome to implementations, given that instruction decode already has
to direct the operation to a correctly-sized ALU engine, anyway.
123
Not least: in places where an ISA was previously constrained (for
whatever reason, including limitations of the available operand space),
126 implicit bit-width allows the meaning of certain operations to be
127 type-overloaded *without* pollution or alteration of frozen and immutable
128 instructions, in a fully backwards-compatible fashion.
129
130 ## Implicit and explicit type-conversion
131
132 The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
133 deal with over-population of instructions, such that type-casting from
134 integer (and floating point) of various sizes is automatically inferred
135 due to "type tagging" that is set with a special instruction. A register
136 will be *specifically* marked as "16-bit Floating-Point" and, if added
137 to an operand that is specifically tagged as "32-bit Integer" an implicit
type-conversion will take place *without* requiring that type-conversion
139 to be explicitly done with its own separate instruction.
140
However, implicit type-conversion is not only quite burdensome to
implement (explosion of inferred type-to-type conversions) but is also
never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration. Each new type increases the
conversion space by O(N^2) and, as anyone who has examined Python's
source code (which has built-in polymorphic type-conversion) knows,
the task is more complex than it first seems.
148
Overall, type-conversion is generally best left to explicit
type-conversion instructions, or in definite specific use-cases made
part of an actual instruction (DSP or FP).
152
153 ## Zero-overhead loops vs explicit loops
154
155 The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
156 contains an extremely interesting feature: zero-overhead loops. This
proposal would basically allow an inner loop of instructions to be
repeated indefinitely or a fixed number of times.
159
160 Its specific advantage over explicit loops is that the pipeline in a DSP
161 can potentially be kept completely full *even in an in-order single-issue
162 implementation*. Normally, it requires a superscalar architecture and
163 out-of-order execution capabilities to "pre-process" instructions in
164 order to keep ALU pipelines 100% occupied.
165
166 By bringing that capability in, this proposal could offer a way to increase
167 pipeline activity even in simpler implementations in the one key area
168 which really matters: the inner loop.
169
However, when looking at much more comprehensive schemes such as
171 "A portable specification of zero-overhead loop control hardware
172 applied to embedded processors" (ZOLC), optimising only the single
173 inner loop seems inadequate, tending to suggest that ZOLC may be
174 better off being proposed as an entirely separate Extension.
175
176 ## Mask and Tagging (Predication)
177
178 Tagging (aka Masks aka Predication) is a pseudo-method of implementing
179 simplistic branching in a parallel fashion, by allowing execution on
180 elements of a vector to be switched on or off depending on the results
181 of prior operations in the same array position.
182
183 The reason for considering this is simple: by *definition* it
184 is not possible to perform individual parallel branches in a SIMD
185 (Single-Instruction, **Multiple**-Data) context. Branches (modifying
186 of the Program Counter) will result in *all* parallel data having
187 a different instruction executed on it: that's just the definition of
188 SIMD, and it is simply unavoidable.
189
190 So these are the ways in which conditional execution may be implemented:
191
192 * explicit compare and branch: BNE x, y -> offs would jump offs
193 instructions if x was not equal to y
194 * explicit store of tag condition: CMP x, y -> tagbit
195 * implicit (condition-code) ADD results in a carry, carry bit implicitly
196 (or sometimes explicitly) goes into a "tag" (mask) register
197
198 The first of these is a "normal" branch method, which is flat-out impossible
199 to parallelise without look-ahead and effectively rewriting instructions.
200 This would defeat the purpose of RISC.
201
202 The latter two are where parallelism becomes easy to do without complexity:
203 every operation is modified to be "conditionally executed" (in an explicit
204 way directly in the instruction format *or* implicitly).
205
206 RVV (Vector-Extension) proposes to have *explicit* storing of the compare
207 in a tag/mask register, and to *explicitly* have every vector operation
208 *require* that its operation be "predicated" on the bits within an
209 explicitly-named tag/mask register.
210
211 SIMD (P-Extension) has not yet published precise documentation on what its
212 schema is to be: there is however verbal indication at the time of writing
213 that:
214
215 > The "compare" instructions in the DSP/SIMD ISA proposed by Andes will
216 > be executed using the same compare ALU logic for the base ISA with some
217 > minor modifications to handle smaller data types. The function will not
218 > be duplicated.
219
220 This is an *implicit* form of predication as the base RV ISA does not have
221 condition-codes or predication. By adding a CSR it becomes possible
222 to also tag certain registers as "predicated if referenced as a destination".
223 Example:
224
    // in future operations from now on, if r0 is the destination use r5 as
    // the PREDICATION register
    SET_IMPLICIT_CSRPREDICATE r0, r5
    // store the compares in r5 as the PREDICATION register
    CMPEQ8 r5, r1, r2
    // r0 is used here. ah ha! that means it's predicated using r5!
    ADD8 r0, r1, r3
232
233 With enough registers (and in RISC-V there are enough registers) some fairly
234 complex predication can be set up and yet still execute without significant
235 stalling, even in a simple non-superscalar architecture.
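
In illustrative pseudocode, the effect of the above sequence might look
as follows. This is a minimal sketch only: it assumes that CMPEQ8 deposits
one result bit per 8-bit element into r5, and names such as `regs`, `vl`,
`get_byte` and `set_byte` are hypothetical helpers, not part of the proposal:

    regs = [0] * 32          # integer register file (sketch)
    regs[5] = 0b0101         # pretend-result of the earlier CMPEQ8
    regs[1] = 0x04030201     # four 8-bit elements packed into r1
    regs[3] = 0x10101010     # four 8-bit elements packed into r3
    vl = 4                   # current vector length

    def get_byte(val, i):    # extract 8-bit element i of a register value
        return (val >> (8 * i)) & 0xFF

    def set_byte(val, i, b): # replace 8-bit element i of a register value
        return (val & ~(0xFF << (8 * i))) | ((b & 0xFF) << (8 * i))

    for i in range(vl):
        if (regs[5] >> i) & 1:   # bit i of r5 gates 8-bit lane i of the ADD8
            regs[0] = set_byte(regs[0], i,
                               get_byte(regs[1], i) + get_byte(regs[3], i))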
236
237 (For details on how Branch Instructions would be retro-fitted to indirectly
238 predicated equivalents, see Appendix)
239
240 ## Conclusions
241
In the above sections, five different areas where parallel instruction
execution has closely and loosely inter-related implications for the ISA and
for implementors were outlined. The pluses and minuses came out as
follows:
246
247 * Fixed vs variable parallelism: <b>variable</b>
248 * Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
249 * Implicit vs explicit type-conversion: <b>explicit</b>
250 * Implicit vs explicit inner loops: <b>implicit but best done separately</b>
251 * Tag or no-tag: <b>Complex but highly beneficial</b>
252
253 In particular:
254
255 * variable-length vectors came out on top because of the high setup, teardown
256 and corner-cases associated with the fixed width of SIMD.
257 * Implicit bit-width helps to extend the ISA to escape from
258 former limitations and restrictions (in a backwards-compatible fashion),
whilst also leaving implementors free to simplify implementations
260 by using actual explicit internal parallelism.
261 * Implicit (zero-overhead) loops provide a means to keep pipelines
262 potentially 100% occupied in a single-issue in-order implementation
263 i.e. *without* requiring a super-scalar or out-of-order architecture,
264 but doing a proper, full job (ZOLC) is an entirely different matter.
265
266 Constructing a SIMD/Simple-Vector proposal based around four of these five
267 requirements would therefore seem to be a logical thing to do.
268
269 # Instruction Format
270
271 **TODO** *basically borrow from both P and V, which should be quite simple
272 to do, with the exception of Tag/no-tag, which needs a bit more
273 thought. V's Section 17.19 of Draft V2.3 spec is reminiscent of B's BGS
274 gather-scatterer, and, if implemented, could actually be a really useful
way to span 8-bit up to 64-bit groups of data, where BGS, as it stands
and as described by Clifford, does **bits** of up to 16 width. Lots to
277 look at and investigate*
278
279 * For analysis of RVV see [[v_comparative_analysis]] which begins to
280 outline topologically-equivalent mappings of instructions
281 * Also see Appendix "Retro-fitting Predication into branch-explicit ISA"
282 for format of Branch opcodes.
283
284 **TODO**: *analyse and decide whether the implicit nature of predication
285 as proposed is or is not a lot of hassle, and if explicit prefixes are
286 a better idea instead. Parallelism therefore effectively may end up
as always being 64-bit opcodes (32 for the prefix, 32 for the instruction)
with some opportunities to use Compressed instructions, bringing it down to 48.
289 Also to consider is whether one or both of the last two remaining Compressed
290 instruction codes in Quadrant 1 could be used as a parallelism prefix,
291 bringing parallelised opcodes down to 32-bit and having the benefit of
292 being explicit.*
293
294 # Note on implementation of parallelism
295
One extremely important aspect of this proposal is to respect and support
implementors' desire to focus on power, area or performance. In that regard,
298 it is proposed that implementors be free to choose whether to implement
299 the Vector (or variable-width SIMD) parallelism as sequential operations
300 with a single ALU, fully parallel (if practical) with multiple ALUs, or
301 a hybrid combination of both.
302
In Broadcom's Videocore-IV the choice was a hybrid, called "Virtual
Parallelism": it achieves 16-way SIMD at an **instruction** level
305 by providing a combination of a 4-way parallel ALU *and* an externally
306 transparent loop that feeds 4 sequential sets of data into each of the
307 4 ALUs.
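
A rough sketch of that arrangement (function and variable names are
assumptions for illustration, not Videocore-IV's actual internals):

    # "Virtual Parallelism" sketch: 16-way SIMD built from a 4-wide ALU
    def virtual_parallel_op(alu_op, a, b):   # a, b: 16-element inputs
        result = [0] * 16
        for sequential_pass in range(4):     # externally-transparent loop
            for lane in range(4):            # 4 physically-parallel ALUs
                i = sequential_pass * 4 + lane
                result[i] = alu_op(a[i], b[i])
        return result

    # e.g. virtual_parallel_op(lambda p, q: p + q, list(range(16)), [1] * 16)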
308
Also in the same core, it is worth noting that particularly uncommon
but essential operations (Reciprocal-Square-Root for example) are
*not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) scheme, implementors would
be free to do precisely that: i.e. free to choose *on a per operation
basis* whether and how much "Virtual Parallelism" to deploy.
315
316 It is absolutely critical to note that it is proposed that such choices MUST
317 be **entirely transparent** to the end-user and the compiler. Whilst
a Vector (variable-width SIMD) may not precisely match the width of the
319 parallelism within the implementation, the end-user **should not care**
320 and in this way the performance benefits are gained but the ISA remains
321 straightforward. All that happens at the end of an instruction run is: some
322 parallel units (if there are any) would remain offline, completely
323 transparently to the ISA, the program, and the compiler.
324
325 The "SIMD considered harmful" trap of having huge complexity and extra
326 instructions to deal with corner-cases is thus avoided, and implementors
327 get to choose precisely where to focus and target the benefits of their
328 implementation efforts, without "extra baggage".
329
330 # CSRs <a name="csrs"></a>
331
332 There are a number of CSRs needed, which are used at the instruction
333 decode phase to re-interpret standard RV opcodes (a practice that has
334 precedent in the setting of MISA to enable / disable extensions).
335
336 * Integer Register N is Vector of length M: r(N) -> r(N..N+M-1)
337 * Integer Register N is of implicit bitwidth M (M=default,8,16,32,64)
338 * Floating-point Register N is Vector of length M: r(N) -> r(N..N+M-1)
339 * Floating-point Register N is of implicit bitwidth M (M=default,8,16,32,64)
340 * Integer Register N is a Predication Register (note: a key-value store)
341
342 Notes:
343
344 * for the purposes of LOAD / STORE, Integer Registers which are
345 marked as a Vector will result in a Vector LOAD / STORE.
346 * Vector Lengths are *not* the same as vsetl but are an integral part
347 of vsetl.
* Actual vector length is *multiplied* by how many blocks of length
"bitwidth" may fit into an XLEN-sized register file.
350 * Predication is a key-value store due to the implicit referencing,
351 as opposed to having the predicate register explicitly in the instruction.
352
353 ## Predication CSR
354
355 The Predication CSR is a key-value store indicating whether, if a given
356 destination register (integer or floating-point) is referred to in an
357 instruction, it is to be predicated. The first entry is whether predication
358 is enabled. The second entry is whether the register index refers to a
359 floating-point or an integer register. The third entry is the index
360 of that register which is to be predicated (if referred to). The fourth entry
361 is the integer register that is treated as a bitfield, indexable by the
362 vector element index.
363
| RegNo | 11 | 10 | (9..5) | (4..0) |
365 | ----- | - | - | ------- | ------- |
366 | r0 | pren0 | i/f | regidx | predidx |
367 | r1 | pren1 | i/f | regidx | predidx |
368 | .. | pren.. | i/f | regidx | predidx |
369 | r15 | pren15 | i/f | regidx | predidx |
370
371 The Predication CSR Table is a key-value store, so implementation-wise
372 it will be faster to turn the table around (maintain topologically
373 equivalent state):
374
    fp_pred_enabled = [0] * 32
    int_pred_enabled = [0] * 32
    for i in range(16):
        if CSRpred[i].pren:
            idx = CSRpred[i].regidx
            predidx = CSRpred[i].predidx
            if CSRpred[i].type == 0:  # integer
                int_pred_enabled[idx] = 1
                int_pred_reg[idx] = predidx
            else:
                fp_pred_enabled[idx] = 1
                fp_pred_reg[idx] = predidx
387
388 So when an operation is to be predicated, it is the internal state that
389 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
390 pseudo-code for operations is given, where p is the explicit (direct)
391 reference to the predication register to be used:
392
    for (int i=0; i<vl; ++i)
       if ([!]preg[p][i])
          (d ? vreg[rd][i] : sreg[rd]) =
             iop(s1 ? vreg[rs1][i] : sreg[rs1],
                 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
398
399 This instead becomes an *indirect* reference using the *internal* state
400 table generated from the Predication CSR key-value store:
401
    if type(iop) == INT:
        pred_enabled = int_pred_enabled
        preg = int_pred_reg[rd]
    else:
        pred_enabled = fp_pred_enabled
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
       if (pred_enabled[rd] && [!]preg[i])
          (d ? vreg[rd][i] : sreg[rd]) =
             iop(s1 ? vreg[rs1][i] : sreg[rs1],
                 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
414
415 ## MAXVECTORDEPTH
416
417 MAXVECTORDEPTH is the same concept as MVL in RVV. However in Simple-V,
418 given that its primary (base, unextended) purpose is for 3D, Video and
419 other purposes (not requiring supercomputing capability), it makes sense
420 to limit MAXVECTORDEPTH to the regfile bitwidth (32 for RV32, 64 for RV64
421 and so on).
422
423 The reason for setting this limit is so that predication registers, when
424 marked as such, may fit into a single register as opposed to fanning out
425 over several registers. This keeps the implementation a little simpler.
426 Note that RVV on top of Simple-V may choose to over-ride this decision.
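
The point can be shown in a couple of lines of illustrative Python
(assuming, as above, one predicate bit per vector element):

    # sketch: with MAXVECTORDEPTH limited to XLEN (64 on RV64), a full
    # one-bit-per-element predicate always fits in a single register,
    # so testing an element is a single shift-and-mask
    def element_active(predreg_value, element_idx):
        return (predreg_value >> element_idx) & 1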
427
428 ## Vector-length CSRs
429
Vector lengths are interpreted as meaning "any instruction referring to
r(N) generates implicit identical instructions referring to registers
r(N) through r(N+M-1), where M is the Vector Length". Vector Lengths may be set to
433 use up to 16 registers in the register file.
434
435 One separate CSR table is needed for each of the integer and floating-point
436 register files:
437
438 | RegNo | (3..0) |
439 | ----- | ------ |
440 | r0 | vlen0 |
441 | r1 | vlen1 |
442 | .. | vlen.. |
443 | r31 | vlen31 |
444
An array of 32 4-bit CSRs is needed (4 bits per register) to indicate
whether a register is, if referred to in any standard instruction,
implicitly to be treated as a vector. A vector length of 1 indicates
that it is to be treated as a scalar. Vector lengths of 0 are reserved.
449
450 Internally, implementations may choose to use the non-zero vector length
451 to set a bit-field per register, to be used in the instruction decode phase.
452 In this way any standard (current or future) operation involving
453 register operands may detect if the operation is to be vector-vector,
454 vector-scalar or scalar-scalar (standard) simply through a single
455 bit test.
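
A sketch of that decode-phase optimisation (all names are illustrative,
not part of the proposal):

    # derive a 32-bit "is-vectorised" bitfield whenever the CSRs change...
    def build_vector_bitfield(CSRvectorlen):
        bitfield = 0
        for r in range(32):
            if CSRvectorlen[r] > 1:      # a vector length of 1 means scalar
                bitfield |= 1 << r
        return bitfield

    # ...so that decode needs only a single-bit test per register operand
    def is_vector_op(bitfield, regnum):
        return (bitfield >> regnum) & 1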
456
457 Note that when using the "vsetl rs1, rs2" instruction (caveat: when the
458 bitwidth is specifically not set) it becomes:
459
    CSRvlength = MIN(MIN(CSRvectorlen[rs1], MAXVECTORDEPTH), rs2)
461
462 This is in contrast to RVV:
463
    CSRvlength = MIN(MIN(rs1, MAXVECTORDEPTH), rs2)
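
Side by side as Python sketches (names assumed from the formulas above),
the key difference is that in Simple-V rs1 *selects a register* whose
configured vector length is used, whereas in RVV the value of rs1 *is*
the requested length:

    MAXVECTORDEPTH = 64              # RV64 assumption
    CSRvectorlen = [1] * 32          # per-register vector-length CSRs

    def simplev_vsetl(rs1, rs2):     # rs1 is a register *number*
        return min(CSRvectorlen[rs1], MAXVECTORDEPTH, rs2)

    def rvv_vsetl(rs1, rs2):         # rs1 is the requested *length*
        return min(rs1, MAXVECTORDEPTH, rs2)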
465
466 ## Element (SIMD) bitwidth CSRs
467
468 Element bitwidths may be specified with a per-register CSR, and indicate
469 how a register (integer or floating-point) is to be subdivided.
470
471 | RegNo | (2..0) |
472 | ----- | ------ |
473 | r0 | vew0 |
474 | r1 | vew1 |
475 | .. | vew.. |
476 | r31 | vew31 |
477
478 vew may be one of the following (giving a table "bytestable", used below):
479
480 | vew | bitwidth |
481 | --- | -------- |
482 | 000 | default |
483 | 001 | 8 |
484 | 010 | 16 |
485 | 011 | 32 |
486 | 100 | 64 |
487 | 101 | 128 |
488 | 110 | rsvd |
489 | 111 | rsvd |
490
491 Extending this table (with extra bits) is covered in the section
492 "Implementing RVV on top of Simple-V".
493
494 Note that when using the "vsetl rs1, rs2" instruction, taking bitwidth
495 into account, it becomes:
496
    vew = CSRbitwidth[rs1]
    if vew == 0:
        bytesperreg = (XLEN/8) # or FLEN as appropriate
    else:
        bytesperreg = bytestable[vew] # 1 2 4 8 16
    simdmult = (XLEN/8) / bytesperreg # or FLEN as appropriate
    vlen = CSRvectorlen[rs1] * simdmult
    CSRvlength = MIN(MIN(vlen, MAXVECTORDEPTH), rs2)
505
506 The reason for multiplying the vector length by the number of SIMD elements
507 (in each individual register) is so that each SIMD element may optionally be
508 predicated.
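
As a runnable worked example (using the RV32 numbers from the "Bitwidth
Virtual Register Reordering" section below; all names assumed):

    XLEN = 32
    bytestable = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}  # vew encoding -> bytes
    CSRbitwidth = [0] * 32
    CSRvectorlen = [1] * 32
    MAXVECTORDEPTH = XLEN

    CSRbitwidth[2] = 2        # r2 subdivided into 16-bit elements
    CSRvectorlen[2] = 3       # r2 is a vector of length 3

    def vsetl(rs1, rs2):
        vew = CSRbitwidth[rs1]
        bytesperreg = (XLEN // 8) if vew == 0 else bytestable[vew]
        simdmult = (XLEN // 8) // bytesperreg
        vlen = CSRvectorlen[rs1] * simdmult   # 3 registers * 2 elements = 6
        return min(vlen, MAXVECTORDEPTH, rs2)

    print(vsetl(2, 5))   # -> 5 (six elements available, capped by the 5)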
509
510 An example of how to subdivide the register file when bitwidth != default
511 is given in the section "Bitwidth Virtual Register Reordering".
512
513 # Exceptions
514
515 > What does an ADD of two different-sized vectors do in simple-V?
516
* if the vector lengths of the two source operands are not the same, throw
an exception.
* if the destination operand is also a vector, and the source is longer
than the destination, throw an exception.
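
A sketch of the implied checks (exception and function names are purely
illustrative):

    class IllegalInstruction(Exception):
        pass

    def check_add_operands(vlen, is_vector, rd, rs1, rs2):
        if vlen[rs1] != vlen[rs2]:
            raise IllegalInstruction("source vector lengths differ")
        if is_vector[rd] and max(vlen[rs1], vlen[rs2]) > vlen[rd]:
            raise IllegalInstruction("source longer than vector destination")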
520
521 > And what about instructions like JALR? 
522 > What does jumping to a vector do?
523
524 * Throw an exception. Whether that actually results in spawning threads
525 as part of the trap-handling remains to be seen.
526
527 # Comparison of "Traditional" SIMD, Alt-RVP, Simple-V and RVV Proposals <a name="parallelism_comparisons"></a>
528
529 This section compares the various parallelism proposals as they stand,
530 including traditional SIMD, in terms of features, ease of implementation,
531 complexity, flexibility, and die area.
532
533 ## [[alt_rvp]]
534
535 Primary benefit of Alt-RVP is the simplicity with which parallelism
536 may be introduced (effective multiplication of regfiles and associated ALUs).
537
538 * plus: the simplicity of the lanes (combined with the regularity of
allocating identical opcodes to multiple independent registers) meaning
that SRAM or 2R1W can be used for the entire regfile (potentially).
* minus: a more complex instruction set where the parallelism is much
more explicitly specified, directly in the instruction, and
543 * minus: if you *don't* have an explicit instruction (opcode) and you
544 need one, the only place it can be added is... in the vector unit and
545 * minus: opcode functions (and associated ALUs) duplicated in Alt-RVP are
546 not useable or accessible in other Extensions.
547 * plus-and-minus: Lanes may be utilised for high-speed context-switching
548 but with the down-side that they're an all-or-nothing part of the Extension.
549 No Alt-RVP: no fast register-bank switching.
550 * plus: Lane-switching would mean that complex operations not suited to
551 parallelisation can be carried out, followed by further parallel Lane-based
552 work, without moving register contents down to memory (and back)
553 * minus: Access to registers across multiple lanes is challenging. "Solution"
554 is to drop data into memory and immediately back in again (like MMX).
555
556 ## Simple-V
557
558 Primary benefit of Simple-V is the OO abstraction of parallel principles
559 from actual (internal) parallel hardware. It's an API in effect that's
560 designed to be slotted in to an existing implementation (just after
561 instruction decode) with minimum disruption and effort.
562
563 * minus: the complexity of having to use register renames, OoO, VLIW,
564 register file cacheing, all of which has been done before but is a
565 pain
566 * plus: transparent re-use of existing opcodes as-is just indirectly
567 saying "this register's now a vector" which
568 * plus: means that future instructions also get to be inherently
569 parallelised because there's no "separate vector opcodes"
570 * plus: Compressed instructions may also be (indirectly) parallelised
* minus: the indirect nature of Simple-V means that setup (setting
a CSR register to indicate vector length, a separate one to indicate
that it is a predicate register and so on) takes a little more
time than Alt-RVP or RVV's "direct and within the (longer) instruction"
approach.
576 * plus: shared register file meaning that, like Alt-RVP, complex
577 operations not suited to parallelisation may be carried out interleaved
578 between parallelised instructions *without* requiring data to be dropped
579 down to memory and back (into a separate vectorised register engine).
580 * plus-and-maybe-minus: re-use of integer and floating-point 32-wide register
581 files means that huge parallel workloads would use up considerable
582 chunks of the register file. However in the case of RV64 and 32-bit
583 operations, that effectively means 64 slots are available for parallel
584 operations.
585 * plus: inherent parallelism (actual parallel ALUs) doesn't actually need to
586 be added, yet the instruction opcodes remain unchanged (and still appear
to be parallel). Consistent "API" regardless of actual internal parallelism:
even an in-order single-issue implementation with a single ALU would still
appear to have parallel vectorisation.
590 * hard-to-judge: if actual inherent underlying ALU parallelism is added it's
hard to say if there would be pluses or minuses (on die area). At worst it
592 would be "no worse" than existing register renaming, OoO, VLIW and register
593 file cacheing schemes.
594
595 ## RVV (as it stands, Draft 0.4 Section 17, RISC-V ISA V2.3-Draft)
596
597 RVV is extremely well-designed and has some amazing features, including
598 2D reorganisation of memory through LOAD/STORE "strides".
599
600 * plus: regular predictable workload means that implementations may
601 streamline effects on L1/L2 Cache.
602 * plus: regular and clear parallel workload also means that lanes
603 (similar to Alt-RVP) may be used as an implementation detail,
604 using either SRAM or 2R1W registers.
605 * plus: separate engine with no impact on the rest of an implementation
* minus: separate *complex* engine with no feasible reuse of RTL (ALUs,
pipeline stages).
608 * minus: no ISA abstraction or re-use either: additions to other Extensions
609 do not gain parallelism, resulting in prolific duplication of functionality
610 inside RVV *and out*.
* minus: when operations require a different approach (scalar operations
using the standard integer or FP regfile) an entire vector must be
transferred out to memory, into standard regfiles, then back to memory,
then back to the vector unit, potentially multiple times.
615 * minus: will never fit into Compressed instruction space (as-is. May
616 be able to do so if "indirect" features of Simple-V are partially adopted).
617 * plus-and-slight-minus: extended variants may address up to 256
618 vectorised registers (requires 48/64-bit opcodes to do it).
619 * minus-and-partial-plus: separate engine plus complexity increases
620 implementation time and die area, meaning that adoption is likely only
621 to be in high-performance specialist supercomputing (where it will
622 be absolutely superb).
623
624 ## Traditional SIMD
625
626 The only really good things about SIMD are how easy it is to implement and
627 get good performance. Unfortunately that makes it quite seductive...
628
629 * plus: really straightforward, ALU basically does several packed operations
630 at once. Parallelism is inherent at the ALU, making the addition of
631 SIMD-style parallelism an easy decision that has zero significant impact
632 on the rest of any given architectural design and layout.
633 * plus (continuation): SIMD in simple in-order single-issue designs can
634 therefore result in superb throughput, easily achieved even with a very
635 simple execution model.
636 * minus: ridiculously complex setup and corner-cases that disproportionately
637 increase instruction count on what would otherwise be a "simple loop",
638 should the number of elements in an array not happen to exactly match
639 the SIMD group width.
640 * minus: getting data usefully out of registers (if separate regfiles
641 are used) means outputting to memory and back.
642 * minus: quite a lot of supplementary instructions for bit-level manipulation
643 are needed in order to efficiently extract (or prepare) SIMD operands.
644 * minus: MASSIVE proliferation of ISA both in terms of opcodes in one
645 dimension and parallelism (width): an at least O(N^2) and quite probably
646 O(N^3) ISA proliferation that often results in several thousand
separate instructions, all requiring separate and distinct corner-case
648 algorithms!
649 * minus: EVEN BIGGER proliferation of SIMD ISA if the functionality of
650 8, 16, 32 or 64-bit reordering is built-in to the SIMD instruction.
651 For example: add (high|low) 16-bits of r1 to (low|high) of r2 requires
652 four separate and distinct instructions: one for (r1:low r2:high),
653 one for (r1:high r2:low), one for (r1:high r2:high) and one for
654 (r1:low r2:low) *per function*.
655 * minus: EVEN BIGGER proliferation of SIMD ISA if there is a mismatch
656 between operand and result bit-widths. In combination with high/low
657 proliferation the situation is made even worse.
658 * minor-saving-grace: some implementations *may* have predication masks
659 that allow control over individual elements within the SIMD block.
660
661 # Comparison *to* Traditional SIMD: Alt-RVP, Simple-V and RVV Proposals <a name="simd_comparison"></a>
662
663 This section compares the various parallelism proposals as they stand,
664 *against* traditional SIMD as opposed to *alongside* SIMD. In other words,
665 the question is asked "How can each of the proposals effectively implement
666 (or replace) SIMD, and how effective would they be"?
667
668 ## [[alt_rvp]]
669
670 * Alt-RVP would not actually replace SIMD but would augment it: just as with
671 a SIMD architecture where the ALU becomes responsible for the parallelism,
672 Alt-RVP ALUs would likewise be so responsible... with *additional*
673 (lane-based) parallelism on top.
* Thus at least some of the downsides of SIMD ISA O(N^3) proliferation are
avoided, by at least one dimension (architectural upgrades introducing
128-bit then 256-bit then 512-bit variants of the exact same 64-bit
SIMD block).
678 * Thus, unfortunately, Alt-RVP would suffer the same inherent proliferation
679 of instructions as SIMD, albeit not quite as badly (due to Lanes).
680 * In the same discussion for Alt-RVP, an additional proposal was made to
681 be able to subdivide the bits of each register lane (columns) down into
682 arbitrary bit-lengths (RGB 565 for example).
683 * A recommendation was given instead to make the subdivisions down to 32-bit,
16-bit or even 8-bit, effectively dividing the register file into
685 Lane0(H), Lane0(L), Lane1(H) ... LaneN(L) or further. If inter-lane
686 "swapping" instructions were then introduced, some of the disadvantages
687 of SIMD could be mitigated.
688
689 ## RVV
690
691 * RVV is designed to replace SIMD with a better paradigm: arbitrary-length
692 parallelism.
693 * However whilst SIMD is usually designed for single-issue in-order simple
694 DSPs with a focus on Multimedia (Audio, Video and Image processing),
695 RVV's primary focus appears to be on Supercomputing: optimisation of
696 mathematical operations that fit into the OpenCL space.
697 * Adding functions (operations) that would normally fit (in parallel)
698 into a SIMD instruction requires an equivalent to be added to the
699 RVV Extension, if one does not exist. Given the specialist nature of
700 some SIMD instructions (8-bit or 16-bit saturated or halving add),
701 this possibility seems extremely unlikely to occur, even if the
702 implementation overhead of RVV were acceptable (compared to
703 normal SIMD/DSP-style single-issue in-order simplicity).
704
705 ## Simple-V
706
707 * Simple-V borrows hugely from RVV as it is intended to be easy to
708 topologically transplant every single instruction from RVV (as
709 designed) into Simple-V equivalents, with *zero loss of functionality
710 or capability*.
* With the "parallelism" abstracted out, the basic primitives of a
hypothetical SIMD-less "DSP" Extension (non-parallelised
8, 16 or 32-bit SIMD operations) inherently *become* parallel,
automatically.
715 * Additionally, standard operations (ADD, MUL) that would normally have
716 to have special SIMD-parallel opcodes added need no longer have *any*
717 of the length-dependent variants (2of 32-bit ADDs in a 64-bit register,
718 4of 32-bit ADDs in a 128-bit register) because Simple-V takes the
719 *standard* RV opcodes (present and future) and automatically parallelises
720 them.
721 * By inheriting the RVV feature of arbitrary vector-length, then just as
722 with RVV the corner-cases and ISA proliferation of SIMD is avoided.
723 * Whilst not entirely finalised, registers are expected to be
724 capable of being subdivided down to an implementor-chosen bitwidth
725 in the underlying hardware (r1 becomes r1[31..24] r1[23..16] r1[15..8]
726 and r1[7..0], or just r1[31..16] r1[15..0]) where implementors can
727 choose to have separate independent 8-bit ALUs or dual-SIMD 16-bit
728 ALUs that perform twin 8-bit operations as they see fit, or anything
729 else including no subdivisions at all.
730 * Even though implementors have that choice even to have full 64-bit
731 (with RV64) SIMD, they *must* provide predication that transparently
732 switches off appropriate units on the last loop, thus neatly fitting
733 underlying SIMD ALU implementations *into* the arbitrary vector-length
734 RVV paradigm, keeping the uniform consistent API that is a key strategic
735 feature of Simple-V.
736 * With Simple-V fitting into the standard register files, certain classes
737 of SIMD operations such as High/Low arithmetic (r1[31..16] + r2[15..0])
738 can be done by applying *Parallelised* Bit-manipulation operations
739 followed by parallelised *straight* versions of element-to-element
740 arithmetic operations, even if the bit-manipulation operations require
741 changing the bitwidth of the "vectors" to do so. Predication can
742 be utilised to skip high words (or low words) in source or destination.
743 * In essence, the key downside of SIMD - massive duplication of
744 identical functions over time as an architecture evolves from 32-bit
745 wide SIMD all the way up to 512-bit, is avoided with Simple-V, through
746 vector-style parallelism being dropped on top of 8-bit or 16-bit
747 operations, all the while keeping a consistent ISA-level "API" irrespective
748 of implementor design choices (or indeed actual implementations).
749
# Implementing V on top of Simple-V
751
752 * Number of Offset CSRs extends from 2
753 * Extra register file: vector-file
754 * Setup of Vector length and bitwidth CSRs now can specify vector-file
755 as well as integer or float file.
756 * Extend CSR tables (bitwidth) with extra bits
757 * TODO
758
759 # Implementing P (renamed to DSP) on top of Simple-V
760
761 * Implementors indicate chosen bitwidth support in Vector-bitwidth CSR
762 (caveat: anything not specified drops through to software-emulation / traps)
763 * TODO
764
765 # Appendix
766
767 ## V-Extension to Simple-V Comparative Analysis
768
769 This section has been moved to its own page [[v_comparative_analysis]]
770
771 ## P-Ext ISA
772
773 This section has been moved to its own page [[p_comparative_analysis]]
774
775 ## Example of vector / vector, vector / scalar, scalar / scalar => vector add
776
    register CSRvectorlen[XLEN][4]; # not quite decided yet about this one...
    register CSRpredicate[XLEN][4]; # 2^4 is max vector length
    register CSRreg_is_vectorised[XLEN]; # just for fun support scalars as well
    register x[32][XLEN];

    function op_add(rd, rs1, rs2, predr)
    {
       /* note that this is ADD, not PADD */
       int i, id, irs1, irs2;
       # checks CSRvectorlen[rd] == CSRvectorlen[rs1] etc. ignored
       # also destination makes no sense as a scalar but what the hell...
       for (i = 0, id = 0, irs1 = 0, irs2 = 0; i < CSRvectorlen[rd]; i++)
       {
          if (CSRpredicate[predr][i]) # skip the element if its bit is clear
             x[rd+id] <= x[rs1+irs1] + x[rs2+irs2];
          # now increment the idxs: vectorised operands advance,
          # scalar operands stay put
          if (CSRreg_is_vectorised[rd]) # bitfield check rd, scalar/vector?
             id += 1;
          if (CSRreg_is_vectorised[rs1]) # bitfield check rs1, scalar/vector?
             irs1 += 1;
          if (CSRreg_is_vectorised[rs2]) # bitfield check rs2, scalar/vector?
             irs2 += 1;
       }
    }
799
800 ## Retro-fitting Predication into branch-explicit ISA
801
802 One of the goals of this parallelism proposal is to avoid instruction
duplication. However, with the base ISA having been designed explicitly
to *avoid* condition-codes entirely, shoe-horning predication into it
becomes quite challenging.
806
807 However what if all branch instructions, if referencing a vectorised
808 register, were instead given *completely new analogous meanings* that
809 resulted in a parallel bit-wise predication register being set? This
810 would have to be done for both C.BEQZ and C.BNEZ, as well as BEQ, BNE,
811 BLT and BGE.
812
We might imagine that FEQ, FLT and FLE would also need to be converted,
814 however these are effectively *already* in the precise form needed and
815 do not need to be converted *at all*! The difference is that FEQ, FLT
816 and FLE *specifically* write a 1 to an integer register if the condition
817 holds, and 0 if not. All that needs to be done here is to say, "if
818 the integer register is tagged with a bit that says it is a predication
819 register, the **bit** in the integer register is set based on the
820 current vector index" instead.
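
A sketch of that reinterpretation (names assumed, borrowing the internal
predication state from the "Predication CSR" section):

    # scalar FEQ writes 0/1 to x[rd]; if rd is tagged as a predication
    # register, the same 0/1 result instead sets *bit* vindex of x[rd]
    def feq(x, f, rd, rs1, rs2, tagged_as_predicate, vindex):
        bit = 1 if f[rs1] == f[rs2] else 0
        if tagged_as_predicate[rd]:
            x[rd] = (x[rd] & ~(1 << vindex)) | (bit << vindex)
        else:
            x[rd] = bit   # unchanged, standard scalar behaviour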
821
822 There is, in the standard Conditional Branch instruction, more than
823 adequate space to interpret it in a similar fashion:
824
825 [[!table data="""
826 31 |30 ..... 25 |24 ... 20 | 19 ... 15 | 14 ...... 12 | 11 ....... 8 | 7 | 6 ....... 0 |
827 imm[12] | imm[10:5] | rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode |
828 1 | 6 | 5 | 5 | 3 | 4 | 1 | 7 |
829 offset[12,10:5] || src2 | src1 | BEQ | offset[11,4:1] || BRANCH |
830 """]]
831
832 This would become:
833
834 [[!table data="""
835 31 | 30 .. 25 |24 ... 20 | 19 15 | 14 12 | 11 .. 8 | 7 | 6 ... 0 |
836 imm[12] | imm[10:5]| rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode |
837 1 | 6 | 5 | 5 | 3 | 4 | 1 | 7 |
838 reserved || src2 | src1 | BEQ | predicate rs3 || BRANCH |
839 """]]
840
Similarly the C.BEQZ and C.BNEZ instruction format may be retro-fitted,
with the interesting side-effect that there is space within what is presently
the "immediate offset" field to reinterpret it so as to add in not only
a bit field to distinguish between floating-point compare and integer
compare, and a second source register, but also to use some of the bits
as a predication target as well.
847
848 [[!table data="""
849 15 ...... 13 | 12 ........... 10 | 9..... 7 | 6 ................. 2 | 1 .. 0 |
850 funct3 | imm | rs10 | imm | op |
851 3 | 3 | 3 | 5 | 2 |
852 C.BEQZ | offset[8,4:3] | src | offset[7:6,2:1,5] | C1 |
853 """]]
854
855 Now uses the CS format:
856
857 [[!table data="""
858 15 ...... 13 | 12 ........... 10 | 9..... 7 | 6 .. 5 | 4......... 2 | 1 .. 0 |
859 funct3 | imm | rs10 | imm | | op |
860 3 | 3 | 3 | 2 | 3 | 2 |
861 C.BEQZ | predicate rs3 | src1 | I/F B | src2 | C1 |
862 """]]
863
864 Bit 6 would be decoded as "operation refers to Integer or Float" including
865 interpreting src1 and src2 accordingly as outlined in Table 12.2 of the
866 "C" Standard, version 2.0,
867 whilst Bit 5 would allow the operation to be extended, in combination with
868 funct3 = 110 or 111: a combination of four distinct (predicated) comparison
869 operators. In both floating-point and integer cases those could be
870 EQ/NEQ/LT/LE (with GT and GE being synthesised by inverting src1 and src2).
871
872 ## Register reordering <a name="register_reordering"></a>
873
874 ### Register File
875
| Reg Num | Bits |
| ------- | ---- |
| r0 | (31..0) |
| r1 | (31..0) |
| r2 | (31..0) |
| r3 | (31..0) |
| r4 | (31..0) |
| r5 | (31..0) |
| r6 | (31..0) |
| r7 | (31..0) |
| .. | (31..0) |
| r31 | (31..0) |
888
889 ### Vectorised CSR
890
891 May not be an actual CSR: may be generated from Vector Length CSR:
892 single-bit is less burdensome on instruction decode phase.
893
894 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
895 | - | - | - | - | - | - | - | - |
896 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
897
898 ### Vector Length CSR
899
900 | Reg Num | (3..0) |
901 | ------- | ---- |
902 | r0 | 2 |
903 | r1 | 0 |
904 | r2 | 1 |
905 | r3 | 1 |
906 | r4 | 3 |
907 | r5 | 0 |
908 | r6 | 0 |
909 | r7 | 1 |
910
911 ### Virtual Register Reordering
912
913 This example assumes the above Vector Length CSR table
914
| Reg Num | Bits (0) | Bits (1) | Bits (2) |
| ------- | -------- | -------- | -------- |
| r0 | (31..0) | (31..0) | |
| r2 | (31..0) | | |
| r3 | (31..0) | | |
| r4 | (31..0) | (31..0) | (31..0) |
| r7 | (31..0) | | |
922
923 ### Bitwidth Virtual Register Reordering
924
925 This example goes a little further and illustrates the effect that a
926 bitwidth CSR has been set on a register. Preconditions:
927
928 * RV32 assumed
929 * CSRintbitwidth[2] = 010 # integer r2 is 16-bit
930 * CSRintvlength[2] = 3 # integer r2 is a vector of length 3
931 * vsetl rs1, 5 # set the vector length to 5
932
933 This is interpreted as follows:
934
935 * Given that the context is RV32, ELEN=32.
936 * With ELEN=32 and bitwidth=16, the number of SIMD elements is 2
937 * Therefore the actual vector length is up to *six* elements
938
939 So when using an operation that uses r2 as a source (or destination)
940 the operation is carried out as follows:
941
942 * 16-bit operation on r2(15..0) - vector element index 0
943 * 16-bit operation on r2(31..16) - vector element index 1
944 * 16-bit operation on r3(15..0) - vector element index 2
945 * 16-bit operation on r3(31..16) - vector element index 3
946 * 16-bit operation on r4(15..0) - vector element index 4
947 * 16-bit operation on r4(31..16) **NOT** carried out due to length being 5
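
The mapping from element index to (register, bit-slice) in the list above
follows a simple rule, sketched here under the stated preconditions
(RV32, 16-bit elements, hence two elements per register; the function
name is illustrative only):

    def element_location(base_reg, idx, elems_per_reg=2, ew=16):
        reg = base_reg + idx // elems_per_reg    # which register
        lo = (idx % elems_per_reg) * ew          # low bit of the slice
        return reg, lo + ew - 1, lo              # register, high bit, low bit

    # element index 3 of the r2 vector lives in r3(31..16):
    print(element_location(2, 3))                # -> (3, 31, 16)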
948
949 Predication has been left out of the above example for simplicity, however
950 predication is ANDed with the latter stages (vsetl not equal to maximum
951 capacity).
952
953 Note also that it is entirely an implementor's choice as to whether to have
954 actual separate ALUs down to the minimum bitwidth, or whether to have something
955 more akin to traditional SIMD (at any level of subdivision: 8-bit SIMD
956 operations carried out 32-bits at a time is perfectly acceptable, as is
957 8-bit SIMD operations carried out 16-bits at a time requiring two ALUs).
958 Regardless of the internal parallelism choice, *predication must
959 still be respected*, making Simple-V in effect the "consistent public API".
960
961 ### Example Instruction translation: <a name="example_translation"></a>
962
The instruction "ADD r2 r4 r4" would result in three instructions being
generated and placed into the FIFO:
965
966 * ADD r2 r4 r4
967 * ADD r2 r5 r5
968 * ADD r2 r6 r6
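
A sketch of the decode-phase expansion that would produce the above, using
the Vector Length CSR example table (function name and representation
assumed):

    CSRvectorlen = {2: 1, 4: 3}   # from the Vector Length CSR example

    def expand(opcode, rd, rs1, rs2):
        vlen = max(CSRvectorlen.get(r, 1) for r in (rd, rs1, rs2))
        for i in range(vlen):     # scalar operands do not increment
            d = rd + i if CSRvectorlen.get(rd, 1) > 1 else rd
            s1 = rs1 + i if CSRvectorlen.get(rs1, 1) > 1 else rs1
            s2 = rs2 + i if CSRvectorlen.get(rs2, 1) > 1 else rs2
            print(opcode, "r%d r%d r%d" % (d, s1, s2))

    expand("ADD", 2, 4, 4)        # prints the three instructions above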
969
970 ### Insights
971
972 SIMD register file splitting still to consider. For RV64, benefits of doubling
973 (quadrupling in the case of Half-Precision IEEE754 FP) the apparent
974 size of the floating point register file to 64 (128 in the case of HP)
975 seem pretty clear and worth the complexity.
976
With 64 virtual 32-bit F.P. registers, and given that 32-bit FP operations
are done on 64-bit registers anyway, it's not so conceptually difficult. May even
be achieved by *actually* splitting the regfile into 64 virtual 32-bit
registers such that a 64-bit FP scalar operation is dropped into (r0.H
r0.L) tuples. Implementation therefore hidden through register renaming.
982
983 Implementations intending to introduce VLIW, OoO and parallelism
984 (even without Simple-V) would then find that the instructions are
985 generated quicker (or in a more compact fashion that is less heavy
986 on caches). Interestingly we observe then that Simple-V is about
987 "consolidation of instruction generation", where actual parallelism
988 of underlying hardware is an implementor-choice that could just as
989 equally be applied *without* Simple-V even being implemented.
990
991 ## Analysis of CSR decoding on latency <a name="csr_decoding_analysis"></a>
992
It could indeed have been logically deduced (or expected) that there
would be additional decode latency in this proposal, because when
overloading the opcodes to have different meanings, there is guaranteed
to be some state, somewhere, directly related to registers.
997
998 There are several cases:
999
1000 * All operands vector-length=1 (scalars), all operands
1001 packed-bitwidth="default": instructions are passed through direct as if
1002 Simple-V did not exist.  Simple-V is, in effect, completely disabled.
1003 * At least one operand vector-length > 1, all operands
1004 packed-bitwidth="default": any parallel vector ALUs placed on "alert",
1005 virtual parallelism looping may be activated.
1006 * All operands vector-length=1 (scalars), at least one
1007 operand packed-bitwidth != default: degenerate case of SIMD,
1008 implementation-specific complexity here (packed decode before ALUs or
1009 *IN* ALUs)
1010 * At least one operand vector-length > 1, at least one operand
1011 packed-bitwidth != default: parallel vector ALUs (if any)
placed on "alert", virtual parallelism looping may be activated,
1013 implementation-specific SIMD complexity kicks in (packed decode before
1014 ALUs or *IN* ALUs).
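
Expressed as a sketch of the decode-phase decision (names assumed):

    def decode_case(vlen, bitwidth, operands, DEFAULT=0):
        vectorised = any(vlen[r] > 1 for r in operands)
        packed = any(bitwidth[r] != DEFAULT for r in operands)
        if not vectorised and not packed:
            return "pass-through: Simple-V effectively disabled"
        if vectorised and not packed:
            return "vector ALUs on alert / virtual-parallelism looping"
        if not vectorised and packed:
            return "degenerate SIMD: packed decode before or in the ALUs"
        return "vector looping plus packed-SIMD decode"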
1015
1016 Bear in mind that the proposal includes that the decision whether
1017 to parallelise in hardware or whether to virtual-parallelise (to
1018 dramatically simplify compilers and also not to run into the SIMD
instruction proliferation nightmare) *or* a transparent combination
1020 of both, be done on a *per-operand basis*, so that implementors can
1021 specifically choose to create an application-optimised implementation
1022 that they believe (or know) will sell extremely well, without having
1023 "Extra Standards-Mandated Baggage" that would otherwise blow their area
1024 or power budget completely out the window.
1025
1026 Additionally, two possible CSR schemes have been proposed, in order to
1027 greatly reduce CSR space:
1028
1029 * per-register CSRs (vector-length and packed-bitwidth)
1030 * a smaller number of CSRs with the same information but with an *INDEX*
1031 specifying WHICH register in one of three regfiles (vector, fp, int)
1032 the length and bitwidth applies to.
1033
1034 (See "CSR vector-length and CSR SIMD packed-bitwidth" section for details)
1035
1036 In addition, LOAD/STORE has its own associated proposed CSRs that
1037 mirror the STRIDE (but not yet STRIDE-SEGMENT?) functionality of
1038 V (and Hwacha).
1039
1040 Also bear in mind that, for reasons of simplicity for implementors,
1041 I was coming round to the idea of permitting implementors to choose
1042 exactly which bitwidths they would like to support in hardware and which
1043 to allow to fall through to software-trap emulation.
1044
1045 So the question boils down to:
1046
1047 * whether either (or both) of those two CSR schemes have significant
1048 latency that could even potentially require an extra pipeline decode stage
1049 * whether there are implementations that can be thought of which do *not*
1050 introduce significant latency
1051 * whether it is possible to explicitly (through quite simply
1052 disabling Simple-V-Ext) or implicitly (detect the case all-vlens=1,
1053 all-simd-bitwidths=default) switch OFF any decoding, perhaps even to
1054 the extreme of skipping an entire pipeline stage (if one is needed)
1055 * whether packed bitwidth and associated regfile splitting is so complex
1056 that it should definitely, definitely be made mandatory that implementors
1057 move regfile splitting into the ALU, and what are the implications of that
1058 * whether even if that *is* made mandatory, is software-trapped
1059 "unsupported bitwidths" still desirable, on the basis that SIMD is such
1060 a complete nightmare that *even* having a software implementation is
1061 better, making Simple-V have more in common with a software API than
1062 anything else.
1063
1064 Whilst the above may seem to be severe minuses, there are some strong
1065 pluses:
1066
1067 * Significant reduction of V's opcode space: over 85%.
1068 * Smaller reduction of P's opcode space: around 10%.
1069 * The potential to use Compressed instructions in both Vector and SIMD
1070 due to the overloading of register meaning (implicit vectorisation,
1071 implicit packing)
1072 * Not only present but also future extensions automatically gain parallelism.
1073 * Already mentioned but worth emphasising: the simplification to compiler
1074 writers and assembly-level writers of having the same consistent ISA
1075 regardless of whether the internal level of parallelism (number of
1076 parallel ALUs) is only equal to one ("virtual" parallelism), or is
1077 greater than one, should not be underestimated.
1078
1079 ## Reducing Register Bank porting
1080
1081 This looks quite reasonable.
1082 <https://www.princeton.edu/~rblee/ELE572Papers/MultiBankRegFile_ISCA2000.pdf>
1083
1084 The main details are outlined on page 4.  They propose a 2-level register
1085 cache hierarchy, note that registers are typically only read once, that
1086 you never write back from upper to lower cache level but always go in a
1087 cycle lower -> upper -> ALU -> lower, and at the top of page 5 propose
1088 a scheme where you look ahead by only 2 instructions to determine which
1089 registers to bring into the cache.
1090
1091 The nice thing about a vector architecture is that you *know* that
1092 *even more* registers are going to be pulled in: Hwacha uses this fact
1093 to optimise L1/L2 cache-line usage (avoid thrashing), strangely enough
1094 by *introducing* deliberate latency into the execution phase.
1095
1096 # References
1097
1098 * SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
1099 * Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
1100 * Recommendation by Jacob Bachmeyer to make zero-overhead loop an
1101 "implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
1102 * Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
1103 * First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
1104 * B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>
1105 * Broadcom VideoCore-IV <https://docs.broadcom.com/docs/12358545>
1106 Figure 2 P17 and Section 3 on P16.
1107 * Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-262.html>
1108 * Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-263.html>
1109 * Vector Workshop <http://riscv.org/wp-content/uploads/2015/06/riscv-vector-workshop-june2015.pdf>
1110 * Predication <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/XoP4BfYSLXA>
1111 * Branch Divergence <https://jbush001.github.io/2014/12/07/branch-divergence-in-parallel-kernels.html>
1112 * Life of Triangles (3D) <https://jbush001.github.io/2016/02/27/life-of-triangle.html>
1113 * Videocore-IV <https://github.com/hermanhermitage/videocoreiv/wiki/VideoCore-IV-3d-Graphics-Pipeline>
1114 * Discussion proposing CSRs that change ISA definition
1115 <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/InzQ1wr_3Ak>
1116 * Zero-overhead loops <https://pdfs.semanticscholar.org/dbaa/66985cc730d4b44d79f519e96ec9c43ab5b7.pdf>
1117 * Multi-ported VLIW Register File Implementation <https://ce-publications.et.tudelft.nl/publications/1517_multiple_contexts_in_a_multiported_vliw_register_file_impl.pdf>
1118 * Fast context save/restore proposal <https://groups.google.com/a/groups.riscv.org/d/msgid/isa-dev/57F823FA.6030701%40gmail.com>
1119 * Register File Bank Cacheing <https://www.princeton.edu/~rblee/ELE572Papers/MultiBankRegFile_ISCA2000.pdf>