1 # Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal
2
3 Key insight: Simple-V is intended as an abstraction layer to provide
4 a consistent "API" to parallelisation of existing *and future* operations.
5 *Actual* internal hardware-level parallelism is *not* required, such
6 that Simple-V may be viewed as providing a "compact" or "consolidated"
7 means of issuing multiple near-identical arithmetic instructions to an
8 instruction queue (FIFO), pending execution.
9
10 *Actual* parallelism, if added independently of Simple-V in the form
11 of Out-of-order restructuring (including parallel ALU lanes) or VLIW
12 implementations, or SIMD, or anything else, would then benefit *if*
13 Simple-V was added on top.
14
15 [[!toc ]]
16
17 # Introduction
18
19 This proposal exists so as to be able to satisfy several disparate
20 requirements: power-conscious, area-conscious, and performance-conscious
21 designs all pull an ISA and its implementation in different conflicting
22 directions, as do the specific intended uses for any given implementation.
23
24 The existing P (SIMD) proposal and the V (Vector) proposals,
25 whilst each extremely powerful in their own right and clearly desirable,
26 are also:
27
* Clearly independent in their origins (AndesStar v3 and Cray respectively)
so need work to adapt to the RISC-V ethos and paradigm
30 * Are sufficiently large so as to make adoption (and exploration for
31 analysis and review purposes) prohibitively expensive
32 * Both contain partial duplication of pre-existing RISC-V instructions
33 (an undesirable characteristic)
34 * Both have independent, incompatible and disparate methods for introducing
35 parallelism at the instruction level
36 * Both require that their respective parallelism paradigm be implemented
37 along-side and integral to their respective functionality *or not at all*.
38 * Both independently have methods for introducing parallelism that
39 could, if separated, benefit
40 *other areas of RISC-V not just DSP or Floating-point respectively*.
41
42 There are also key differences between Vectorisation and SIMD (full
43 details outlined in the Appendix), the key points being:
44
45 * SIMD has an extremely seductively compelling ease of implementation argument:
46 each operation is passed to the ALU, which is where the parallelism
lies. There is *negligible* (if any) impact on the rest of the core
48 (with life instead being made hell for compiler writers and applications
49 writers due to extreme ISA proliferation).
50 * By contrast, Vectorisation has quite some complexity (for considerable
51 flexibility, reduction in opcode proliferation and much more).
52 * Vectorisation typically includes much more comprehensive memory load
53 and store schemes (unit stride, constant-stride and indexed), which
54 in turn have ramifications: virtual memory misses (TLB cache misses)
55 and even multiple page-faults... all caused by a *single instruction*,
56 yet with a clear benefit that the regularisation of LOAD/STOREs can
57 be optimised for minimal impact on caches and maximised throughput.
58 * By contrast, SIMD can use "standard" memory load/stores (32-bit aligned
59 to pages), and these load/stores have absolutely nothing to do with the
60 SIMD / ALU engine, no matter how wide the operand. Simplicity but with
61 more impact on instruction and data caches.
62
63 Overall it makes a huge amount of sense to have a means and method
64 of introducing instruction parallelism in a flexible way that provides
65 implementors with the option to choose exactly where they wish to offer
66 performance improvements and where they wish to optimise for power
67 and/or area (and if that can be offered even on a per-operation basis that
68 would provide even more flexibility).
69
70 Additionally it makes sense to *split out* the parallelism inherent within
71 each of P and V, and to see if each of P and V then, in *combination* with
72 a "best-of-both" parallelism extension, could be added on *on top* of
73 this proposal, to topologically provide the exact same functionality of
74 each of P and V. Each of P and V then can focus on providing the best
75 operations possible for their respective target areas, without being
76 hugely concerned about the actual parallelism.
77
78 Furthermore, an additional goal of this proposal is to reduce the number
79 of opcodes utilised by each of P and V as they currently stand, leveraging
80 existing RISC-V opcodes where possible, and also potentially allowing
81 P and V to make use of Compressed Instructions as a result.
82
83 # Analysis and discussion of Vector vs SIMD
84
85 There are six combined areas between the two proposals that help with
86 parallelism (increased performance, reduced power / area) without
87 over-burdening the ISA with a huge proliferation of
88 instructions:
89
90 * Fixed vs variable parallelism (fixed or variable "M" in SIMD)
91 * Implicit vs fixed instruction bit-width (integral to instruction or not)
92 * Implicit vs explicit type-conversion (compounded on bit-width)
93 * Implicit vs explicit inner loops.
94 * Single-instruction LOAD/STORE.
95 * Masks / tagging (selecting/preventing certain indexed elements from execution)
96
97 The pros and cons of each are discussed and analysed below.
98
99 ## Fixed vs variable parallelism length
100
101 In David Patterson and Andrew Waterman's analysis of SIMD and Vector
ISAs, the conclusion comes out clearly in favour of (effectively) variable
103 length SIMD. As SIMD is a fixed width, typically 4, 8 or in extreme cases
104 16 or 32 simultaneous operations, the setup, teardown and corner-cases of SIMD
105 are extremely burdensome except for applications whose requirements
106 *specifically* match the *precise and exact* depth of the SIMD engine.
107
108 Thus, SIMD, no matter what width is chosen, is never going to be acceptable
109 for general-purpose computation, and in the context of developing a
110 general-purpose ISA, is never going to satisfy 100 percent of implementors.
111
112 To explain this further: for increased workloads over time, as the
113 performance requirements increase for new target markets, implementors
114 choose to extend the SIMD width (so as to again avoid mixing parallelism
115 into the instruction issue phases: the primary "simplicity" benefit of
116 SIMD in the first place), with the result that the entire opcode space
117 effectively doubles with each new SIMD width that's added to the ISA.
118
119 That basically leaves "variable-length vector" as the clear *general-purpose*
120 winner, at least in terms of greatly simplifying the instruction set,
121 reducing the number of instructions required for any given task, and thus
122 reducing power consumption for the same.
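
To illustrate the difference in the simplest possible terms, a minimal
sketch in python-style pseudo-code is given below (the helper names and
the fixed width of 4 are purely illustrative assumptions): the fixed-width
version needs explicit setup/teardown and corner-case code, whereas the
variable-length version simply asks the hardware how many elements it may
process on each pass.

    # Fixed-width SIMD: a main loop of width 4 plus an explicit scalar tail.
    def simd_loop(a, b, out, n):
        i = 0
        while i + 4 <= n:
            for j in range(4):            # one 4-wide SIMD add
                out[i + j] = a[i + j] + b[i + j]
            i += 4
        while i < n:                      # setup/teardown corner-case code
            out[i] = a[i] + b[i]
            i += 1

    # Variable-length vector: the hardware clamps the element count, no tail.
    def vector_loop(a, b, out, n, maxvl=16):
        i = 0
        while i < n:
            vl = min(n - i, maxvl)        # "set vector length" for this pass
            for j in range(vl):           # element loop is implicit in hardware
                out[i + j] = a[i + j] + b[i + j]
            i += vl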
123
124 ## Implicit vs fixed instruction bit-width
125
126 SIMD again has a severe disadvantage here, over Vector: huge proliferation
127 of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and
128 have to then have operations *for each and between each*. It gets very
129 messy, very quickly.
130
131 The V-Extension on the other hand proposes to set the bit-width of
132 future instructions on a per-register basis, such that subsequent instructions
133 involving that register are *implicitly* of that particular bit-width until
134 otherwise changed or reset.
135
136 This has some extremely useful properties, without being particularly
137 burdensome to implementations, given that instruction decode already has
138 to direct the operation to a correctly-sized width ALU engine, anyway.
139
Not least: in places where an ISA was previously constrained (for
whatever reason, including limitations of the available operand space),
142 implicit bit-width allows the meaning of certain operations to be
143 type-overloaded *without* pollution or alteration of frozen and immutable
144 instructions, in a fully backwards-compatible fashion.
145
146 ## Implicit and explicit type-conversion
147
148 The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
149 deal with over-population of instructions, such that type-casting from
150 integer (and floating point) of various sizes is automatically inferred
151 due to "type tagging" that is set with a special instruction. A register
152 will be *specifically* marked as "16-bit Floating-Point" and, if added
to an operand that is specifically tagged as "32-bit Integer", an implicit
154 type-conversion will take place *without* requiring that type-conversion
155 to be explicitly done with its own separate instruction.
156
157 However, implicit type-conversion is not only quite burdensome to
158 implement (explosion of inferred type-to-type conversion) but also is
159 never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration. Each new type results in
an O(N^2) increase of the conversion space; as anyone who has examined
python's source code (which has built-in polymorphic type-conversion)
knows, the task is more complex than it first seems.
164
Overall, type-conversion is generally best left to explicit
type-conversion instructions, or in specific well-defined use-cases made
part of an actual instruction (DSP or FP).
168
169 ## Zero-overhead loops vs explicit loops
170
171 The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
172 contains an extremely interesting feature: zero-overhead loops. This
173 proposal would basically allow an inner loop of instructions to be
repeated a fixed number of times, without the usual branch overhead.
175
176 Its specific advantage over explicit loops is that the pipeline in a DSP
177 can potentially be kept completely full *even in an in-order single-issue
178 implementation*. Normally, it requires a superscalar architecture and
179 out-of-order execution capabilities to "pre-process" instructions in
180 order to keep ALU pipelines 100% occupied.
181
182 By bringing that capability in, this proposal could offer a way to increase
183 pipeline activity even in simpler implementations in the one key area
184 which really matters: the inner loop.
185
However, when looking at much more comprehensive schemes such as
"A portable specification of zero-overhead loop control hardware
applied to embedded processors" (ZOLC), optimising only the single
inner loop seems inadequate, tending to suggest that ZOLC may be
better off being proposed as an entirely separate Extension.
191
192 ## Single-instruction LOAD/STORE
193
In traditional Vector Architectures there are instructions which
cause multiple register-memory transfer operations to result
from a single instruction. They're complicated to implement in hardware,
197 yet the benefits are a huge consistent regularisation of memory accesses
198 that can be highly optimised with respect to both actual memory and any
L1, L2 or other caches. In Hwacha EECS-2015-263 the consequences of
getting this architecturally wrong are made explicitly clear:
L2 cache-thrashing at the very least.
202
203 Complications arise when Virtual Memory is involved: TLB cache misses
204 need to be dealt with, as do page faults. Some of the tradeoffs are
205 discussed in <http://people.eecs.berkeley.edu/~krste/thesis.pdf>, Section
206 4.6, and an article by Jeff Bush when faced with some of these issues
207 is particularly enlightening
208 <https://jbush001.github.io/2015/11/03/lost-in-translation.html>
209
210 Interestingly, none of this complexity is faced in SIMD architectures...
211 but then they do not get the opportunity to optimise for highly-streamlined
212 memory accesses either.
213
214 With the "bang-per-buck" ratio being so high and the indirect improvement
215 in L1 Instruction Cache usage (reduced instruction count), as well as
216 the opportunity to optimise L1 and L2 cache usage, the case for including
217 Vector LOAD/STORE is compelling.
218
219 ## Mask and Tagging (Predication)
220
221 Tagging (aka Masks aka Predication) is a pseudo-method of implementing
222 simplistic branching in a parallel fashion, by allowing execution on
223 elements of a vector to be switched on or off depending on the results
224 of prior operations in the same array position.
225
226 The reason for considering this is simple: by *definition* it
227 is not possible to perform individual parallel branches in a SIMD
228 (Single-Instruction, **Multiple**-Data) context. Branches (modifying
229 of the Program Counter) will result in *all* parallel data having
230 a different instruction executed on it: that's just the definition of
231 SIMD, and it is simply unavoidable.
232
233 So these are the ways in which conditional execution may be implemented:
234
235 * explicit compare and branch: BNE x, y -> offs would jump offs
236 instructions if x was not equal to y
237 * explicit store of tag condition: CMP x, y -> tagbit
* implicit (condition-code): for example ADD results in a carry, and the
carry bit implicitly (or sometimes explicitly) goes into a "tag" (mask) register
240
241 The first of these is a "normal" branch method, which is flat-out impossible
242 to parallelise without look-ahead and effectively rewriting instructions.
243 This would defeat the purpose of RISC.
244
245 The latter two are where parallelism becomes easy to do without complexity:
246 every operation is modified to be "conditionally executed" (in an explicit
247 way directly in the instruction format *or* implicitly).
248
249 RVV (Vector-Extension) proposes to have *explicit* storing of the compare
250 in a tag/mask register, and to *explicitly* have every vector operation
251 *require* that its operation be "predicated" on the bits within an
252 explicitly-named tag/mask register.
253
254 SIMD (P-Extension) has not yet published precise documentation on what its
255 schema is to be: there is however verbal indication at the time of writing
256 that:
257
258 > The "compare" instructions in the DSP/SIMD ISA proposed by Andes will
259 > be executed using the same compare ALU logic for the base ISA with some
260 > minor modifications to handle smaller data types. The function will not
261 > be duplicated.
262
263 This is an *implicit* form of predication as the base RV ISA does not have
264 condition-codes or predication. By adding a CSR it becomes possible
265 to also tag certain registers as "predicated if referenced as a destination".
266 Example:
267
268 // in future operations from now on, if r0 is the destination use r5 as
269 // the PREDICATION register
270 SET_IMPLICIT_CSRPREDICATE r0, r5
271 // store the compares in r5 as the PREDICATION register
272 CMPEQ8 r5, r1, r2
273 // r0 is used here. ah ha! that means it's predicated using r5!
274 ADD8 r0, r1, r3
275
276 With enough registers (and in RISC-V there are enough registers) some fairly
277 complex predication can be set up and yet still execute without significant
278 stalling, even in a simple non-superscalar architecture.
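
A minimal sketch of the effect of the three instructions above, in
python-style pseudo-code (the 8-bit SIMD sub-division of the registers is
deliberately ignored for simplicity, and the helper names are illustrative
assumptions only):

    # CMPEQ8 r5, r1, r2: element-wise compare, results land in r5 (the
    # register nominated as the predicate for r0).
    def cmpeq(a_vec, b_vec):
        return [1 if a == b else 0 for a, b in zip(a_vec, b_vec)]

    # ADD8 r0, r1, r3: because r0 was CSR-marked, the bits of r5 act as a mask.
    def predicated_add(pred, a_vec, b_vec, dest_vec):
        for i, bit in enumerate(pred):
            if bit:                       # element skipped when bit == 0
                dest_vec[i] = a_vec[i] + b_vec[i]
        return dest_vec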
279
280 (For details on how Branch Instructions would be retro-fitted to indirectly
281 predicated equivalents, see Appendix)
282
283 ## Conclusions
284
In the above sections, the six different ways in which parallel instruction
execution has closely and loosely inter-related implications for the ISA and
for implementors were outlined. The pluses and minuses came out as
follows:
289
290 * Fixed vs variable parallelism: <b>variable</b>
291 * Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
292 * Implicit vs explicit type-conversion: <b>explicit</b>
293 * Implicit vs explicit inner loops: <b>implicit but best done separately</b>
294 * Single-instruction Vector LOAD/STORE: <b>Complex but highly beneficial</b>
295 * Tag or no-tag: <b>Complex but highly beneficial</b>
296
297 In particular:
298
299 * variable-length vectors came out on top because of the high setup, teardown
300 and corner-cases associated with the fixed width of SIMD.
301 * Implicit bit-width helps to extend the ISA to escape from
302 former limitations and restrictions (in a backwards-compatible fashion),
whilst also leaving implementors free to simplify implementations
304 by using actual explicit internal parallelism.
305 * Implicit (zero-overhead) loops provide a means to keep pipelines
306 potentially 100% occupied in a single-issue in-order implementation
307 i.e. *without* requiring a super-scalar or out-of-order architecture,
308 but doing a proper, full job (ZOLC) is an entirely different matter.
309
310 Constructing a SIMD/Simple-Vector proposal based around four of these six
311 requirements would therefore seem to be a logical thing to do.
312
313 # Instructions
314
315 By being a topological remap of RVV concepts, the following RVV instructions
316 remain exactly the same: VMPOP, VMFIRST, VEXTRACT, VINSERT, VMERGE, VSELECT,
317 VSLIDE, VCLASS and VPOPC. Two instructions, VCLIP and VCLIPI, do not
318 have RV Standard equivalents, so are left out of Simple-V.
319 All other instructions from RVV are topologically re-mapped and retain
320 their complete functionality, intact.
321
322 ## Instruction Format
323
324 The instruction format for Simple-V does not actually have *any* explicit
325 compare operations, *any* arithmetic, floating point or *any*
326 memory instructions.
327 Instead it *overloads* pre-existing branch operations into predicated
328 variants, and implicitly overloads arithmetic operations and LOAD/STORE
329 depending on CSR configurations for vector length, bitwidth and
330 predication. *This includes Compressed instructions* as well as any
331 future instructions and Custom Extensions.
332
333 * For analysis of RVV see [[v_comparative_analysis]] which begins to
334 outline topologically-equivalent mappings of instructions
335 * Also see Appendix "Retro-fitting Predication into branch-explicit ISA"
336 for format of Branch opcodes.
337
338 **TODO**: *analyse and decide whether the implicit nature of predication
339 as proposed is or is not a lot of hassle, and if explicit prefixes are
340 a better idea instead. Parallelism therefore effectively may end up
341 as always being 64-bit opcodes (32 for the prefix, 32 for the instruction)
with some opportunities to use Compressed bringing it down to 48.
343 Also to consider is whether one or both of the last two remaining Compressed
344 instruction codes in Quadrant 1 could be used as a parallelism prefix,
345 bringing parallelised opcodes down to 32-bit (when combined with C)
346 and having the benefit of being explicit.*
347
348 ## Branch Instruction:
349
350 This is the overloaded table for Integer-base Branch operations. Opcode
351 (bits 6..0) is set in all cases to 1100011.
352
353 [[!table data="""
354 31 .. 25 |24 ... 20 | 19 15 | 14 12 | 11 .. 8 | 7 | 6 ... 0 |
355 imm[12|10:5]| rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode |
356 7 | 5 | 5 | 3 | 4 | 1 | 7 |
357 reserved | src2 | src1 | BPR | predicate rs3 || BRANCH |
358 reserved | src2 | src1 | 000 | predicate rs3 || BEQ |
359 reserved | src2 | src1 | 001 | predicate rs3 || BNE |
360 reserved | src2 | src1 | 010 | predicate rs3 || rsvd |
361 reserved | src2 | src1 | 011 | predicate rs3 || rsvd |
reserved | src2 | src1 | 100 | predicate rs3 || BLT |
363 reserved | src2 | src1 | 101 | predicate rs3 || BGE |
364 reserved | src2 | src1 | 110 | predicate rs3 || BLTU |
365 reserved | src2 | src1 | 111 | predicate rs3 || BGEU |
366 """]]
367
368 Below is the overloaded table for Floating-point Predication operations.
369 Interestingly no change is needed to the instruction format because
370 FP Compare already stores a 1 or a zero in its "rd" integer register
371 target, i.e. it's not actually a Branch at all: it's a compare.
372 The target needs to simply change to be a predication bitfield (done
373 implicitly).
374
As with Standard RVF/D/Q, Opcode (bits 6..0) is set in all cases to 1010011.
Likewise Single-precision fmt (bits 26..25) is still set to 00.
Double-precision is still set to 01, whilst Quad-precision
appears not to have a definition in V2.3-Draft (but should be unaffected).
380
381 It is however noted that an entry "FNE" (the opposite of FEQ) is missing,
382 and whilst in ordinary branch code this is fine because the standard
383 RVF compare can always be followed up with an integer BEQ or a BNE (or
384 a compressed comparison to zero or non-zero), in predication terms that
385 becomes more of an impact as an explicit (scalar) instruction is needed
386 to invert the predicate bitmask. An additional encoding funct3=011 is
387 therefore proposed to cater for this.
388
389 [[!table data="""
390 31 .. 27| 26 .. 25 |24 ... 20 | 19 15 | 14 12 | 11 .. 7 | 6 ... 0 |
391 funct5 | fmt | rs2 | rs1 | funct3 | rd | opcode |
5 | 2 | 5 | 5 | 3 | 5 | 7 |
393 10100 | 00/01/11 | src2 | src1 | 010 | pred rs3 | FEQ |
394 10100 | 00/01/11 | src2 | src1 | **011**| pred rs3 | FNE |
395 10100 | 00/01/11 | src2 | src1 | 001 | pred rs3 | FLT |
396 10100 | 00/01/11 | src2 | src1 | 000 | pred rs3 | FLE |
397 """]]
398
399 Note (**TBD**): floating-point exceptions will need to be extended
400 to cater for multiple exceptions (and statuses of the same). The
401 usual approach is to have an array of status codes and bit-fields,
402 and one exception, rather than throw separate exceptions for each
403 Vector element.
404
405 In Hwacha EECS-2015-262 Section 6.7.2 the following pseudocode is given
406 for predicated compare operations of function "cmp":
407
408 for (int i=0; i<vl; ++i)
409 if ([!]preg[p][i])
410 preg[pd][i] = cmp(s1 ? vreg[rs1][i] : sreg[rs1],
411 s2 ? vreg[rs2][i] : sreg[rs2]);
412
413 With associated predication, vector-length adjustments and so on,
414 and temporarily ignoring bitwidth (which makes the comparisons more
415 complex), this becomes:
416
417 if I/F == INT: # integer type cmp
418 pred_enabled = int_pred_enabled # TODO: exception if not set!
419 preg = int_pred_reg[rd]
420 reg = int_regfile
421 else:
422 pred_enabled = fp_pred_enabled # TODO: exception if not set!
423 preg = fp_pred_reg[rd]
424 reg = fp_regfile
425
426 s1 = CSRvectorlen[src1] > 1;
427 s2 = CSRvectorlen[src2] > 1;
428 for (int i=0; i<vl; ++i)
429 preg[rs3][i] = cmp(s1 ? reg[src1+i] : reg[src1],
430 s2 ? reg[src2+i] : reg[src2]);
431
432 Notes:
433
434 * Predicated SIMD comparisons would break src1 and src2 further down
435 into bitwidth-sized chunks (see Appendix "Bitwidth Virtual Register
436 Reordering") setting Vector-Length times (number of SIMD elements) bits
437 in Predicate Register rs3 as opposed to just Vector-Length bits.
* Predicated Branches do not actually have an adjustment to the Program
Counter, so bits 25 through 30 are not needed in any case.
440 * There are plenty of reserved opcodes for which bits 25 through 30 could
441 be put to good use if there is a suitable use-case.
442 * FEQ and FNE (and BEQ and BNE) are included in order to save one
443 instruction having to invert the resultant predicate bitfield.
444 FLT and FLE may be inverted to FGT and FGE if needed by swapping
445 src1 and src2 (likewise the integer counterparts).
446
447 ## Compressed Branch Instruction:
448
449 [[!table data="""
450 15..13 | 12...10 | 9..7 | 6..5 | 4..2 | 1..0 | name |
451 funct3 | imm | rs10 | imm | | op | |
452 3 | 3 | 3 | 2 | 3 | 2 | |
453 C.BPR | pred rs3 | src1 | I/F B | src2 | C1 | |
454 110 | pred rs3 | src1 | I/F 0 | src2 | C1 | P.EQ |
455 111 | pred rs3 | src1 | I/F 0 | src2 | C1 | P.NE |
456 110 | pred rs3 | src1 | I/F 1 | src2 | C1 | P.LT |
457 111 | pred rs3 | src1 | I/F 1 | src2 | C1 | P.LE |
458 """]]
459
460 Notes:
461
* Bits 5, 13, 14 and 15 make up the comparator type
463 * In both floating-point and integer cases there are four predication
464 comparators: EQ/NEQ/LT/LE (with GT and GE being synthesised by inverting
465 src1 and src2).
466
467 ## LOAD / STORE Instructions
468
469 For full analysis of topological adaptation of RVV LOAD/STORE
470 see [[v_comparative_analysis]]. All three types (LD, LD.S and LD.X)
471 may be implicitly overloaded into the one base RV LOAD instruction.
472
473 Revised LOAD:
474
475 [[!table data="""
476 31 | 30 | 29 25 | 24 20 | 19 15 | 14 12 | 11 7 | 6 0 |
477 imm[11:0] |||| rs1 | funct3 | rd | opcode |
478 1 | 1 | 5 | 5 | 5 | 3 | 5 | 7 |
479 ? | s | rs2 | imm[4:0] | base | width | dest | LOAD |
480 """]]
481
482 The exact same corresponding adaptation is also carried out on the single,
483 double and quad precision floating-point LOAD-FP and STORE-FP operations,
484 which fit the exact same instruction format. Thus all three types
485 (unit, stride and indexed) may be fitted into FLW, FLD and FLQ,
486 as well as FSW, FSD and FSQ.
487
488 Notes:
489
490 * LOAD remains functionally (topologically) identical to RVV LOAD
491 (for both integer and floating-point variants).
492 * Predication CSR-marking register is not explicitly shown in instruction, it's
493 implicit based on the CSR predicate state for the rd (destination) register
494 * rs2, the source, may *also be marked as a vector*, which implicitly
495 is taken to indicate "Indexed Load" (LD.X)
496 * Bit 30 indicates "element stride" or "constant-stride" (LD or LD.S)
497 * Bit 31 is reserved (ideas under consideration: auto-increment)
498 * **TODO**: include CSR SIMD bitwidth in the pseudo-code below.
499 * **TODO**: clarify where width maps to elsize
500
501 Pseudo-code (excludes CSR SIMD bitwidth for simplicity):
502
    if (unit-strided) stride = elsize;
    else stride = areg[as2]; // constant-strided

    pred_enabled = int_pred_enabled
    preg = int_pred_reg[rd]

    for (int i=0; i<vl; ++i)
      if (pred_enabled[rd] && [!]preg[i])
        for (int j=0; j<seglen+1; j++)
        {
          if (CSRvectorised[rs2])
            offs = vreg[rs2][i]
          else
            offs = i*(seglen+1)*stride;
          vreg[rd+j][i] = mem[sreg[base] + offs + j*stride];
        }
519
520 Taking CSR (SIMD) bitwidth into account involves using the vector
521 length and register encoding according to the "Bitwidth Virtual Register
522 Reordering" scheme shown in the Appendix (see function "regoffs").
523
524 A similar instruction exists for STORE, with identical topological
525 translation of all features. **TODO**
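
Purely as a sketch by direct analogy with the LOAD pseudo-code above (a
python-style rendering; every name and behaviour here is an assumption
carried over from LOAD, with only the direction of the final memory
access reversed):

    # python-style sketch of a vectorised STORE, mirroring the LOAD
    # pseudo-code: vreg is the register file indexed [reg][element],
    # mem and sreg are dict-like; all names are assumptions by analogy.
    def vstore(mem, sreg, vreg, rd, rs2, base, vl, seglen, stride,
               pred_enabled, preg, rs2_is_vector):
        for i in range(vl):
            if pred_enabled and preg[i]:
                for j in range(seglen + 1):
                    offs = vreg[rs2][i] if rs2_is_vector \
                           else i * (seglen + 1) * stride
                    mem[sreg[base] + offs + j * stride] = vreg[rd + j][i]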
526
527 ## Compressed LOAD / STORE Instructions
528
529 Compressed LOAD and STORE are of the same format, where bits 2-4 are
530 a src register instead of dest:
531
532 [[!table data="""
533 15 13 | 12 10 | 9 7 | 6 5 | 4 2 | 1 0 |
534 funct3 | imm | rs10 | imm | rd0 | op |
535 3 | 3 | 3 | 2 | 3 | 2 |
536 C.LW | offset[5:3] | base | offset[2|6] | dest | C0 |
537 """]]
538
539 Unfortunately it is not possible to fit the full functionality
540 of vectorised LOAD / STORE into C.LD / C.ST: the "X" variants (Indexed)
541 require another operand (rs2) in addition to the operand width
542 (which is also missing), offset, base, and src/dest.
543
544 However a close approximation may be achieved by taking the top bit
545 of the offset in each of the five types of LD (and ST), reducing the
546 offset to 4 bits and utilising the 5th bit to indicate whether "stride"
547 is to be enabled. In this way it is at least possible to introduce
548 that functionality.
549
550 (**TODO**: *assess whether the loss of one bit from offset is worth having
551 "stride" capability.*)
552
553 We also assume (including for the "stride" variant) that the "width"
554 parameter, which is missing, is derived and implicit, just as it is
555 with the standard Compressed LOAD/STORE instructions. For C.LW, C.LD
556 and C.LQ, the width is implicitly 4, 8 and 16 respectively, whilst for
557 C.FLW and C.FLD the width is implicitly 4 and 8 respectively.
558
559 Interestingly we note that the Vectorised Simple-V variant of
560 LOAD/STORE (Compressed and otherwise), due to it effectively using the
561 standard register file(s), is the direct functional equivalent of
562 standard load-multiple and store-multiple instructions found in other
563 processors.
564
In Section 12.3 of the riscv-isa manual V2.3-draft it is noted, in the
comments on page 76, that "For virtual memory systems some data accesses
could be resident in physical memory and some not". The interesting
question then arises: how does RVV deal with the exact same scenario?
569 Expired U.S. Patent 5895501 (Filing Date Sep 3 1996) describes a method
570 of detecting early page / segmentation faults and adjusting the TLB
571 in advance, accordingly: other strategies are explored in the Appendix
572 Section "Virtual Memory Page Faults".
573
574 # Note on implementation of parallelism
575
576 One extremely important aspect of this proposal is to respect and support
implementors' desire to focus on power, area or performance. In that regard,
578 it is proposed that implementors be free to choose whether to implement
579 the Vector (or variable-width SIMD) parallelism as sequential operations
580 with a single ALU, fully parallel (if practical) with multiple ALUs, or
581 a hybrid combination of both.
582
583 In Broadcom's Videocore-IV, they chose hybrid, and called it "Virtual
584 Parallelism". They achieve a 16-way SIMD at an **instruction** level
585 by providing a combination of a 4-way parallel ALU *and* an externally
586 transparent loop that feeds 4 sequential sets of data into each of the
587 4 ALUs.
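
A minimal sketch of that "Virtual Parallelism" idea in python-style
pseudo-code (the 4-wide ALU and the function names are illustrative
assumptions, not Videocore-IV RTL): a 16-element operation is issued once,
and a transparent external loop feeds it through the 4-wide ALU in four
sequential passes.

    ALU_WIDTH = 4                          # the only real parallel hardware

    def alu_pass(op, a_chunk, b_chunk):
        return [op(x, y) for x, y in zip(a_chunk, b_chunk)]

    def virtual_parallel_op(op, a16, b16):
        out = []
        for i in range(0, len(a16), ALU_WIDTH):   # transparent sequential loop
            out += alu_pass(op, a16[i:i+ALU_WIDTH], b16[i:i+ALU_WIDTH])
        return out                         # appears as one 16-wide operation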
588
589 Also in the same core, it is worth noting that particularly uncommon
590 but essential operations (Reciprocal-Square-Root for example) are
591 *not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) implementors would
593 be free to do precisely that: i.e. free to choose *on a per operation
594 basis* whether and how much "Virtual Parallelism" to deploy.
595
596 It is absolutely critical to note that it is proposed that such choices MUST
597 be **entirely transparent** to the end-user and the compiler. Whilst
a Vector (variable-width SIMD) may not precisely match the width of the
599 parallelism within the implementation, the end-user **should not care**
600 and in this way the performance benefits are gained but the ISA remains
601 straightforward. All that happens at the end of an instruction run is: some
602 parallel units (if there are any) would remain offline, completely
603 transparently to the ISA, the program, and the compiler.
604
605 The "SIMD considered harmful" trap of having huge complexity and extra
606 instructions to deal with corner-cases is thus avoided, and implementors
607 get to choose precisely where to focus and target the benefits of their
608 implementation efforts, without "extra baggage".
609
610 # CSRs <a name="csrs"></a>
611
612 There are a number of CSRs needed, which are used at the instruction
613 decode phase to re-interpret standard RV opcodes (a practice that has
614 precedent in the setting of MISA to enable / disable extensions).
615
616 * Integer Register N is Vector of length M: r(N) -> r(N..N+M-1)
617 * Integer Register N is of implicit bitwidth M (M=default,8,16,32,64)
618 * Floating-point Register N is Vector of length M: r(N) -> r(N..N+M-1)
619 * Floating-point Register N is of implicit bitwidth M (M=default,8,16,32,64)
620 * Integer Register N is a Predication Register (note: a key-value store)
621 * Vector Length CSR (VSETVL, VGETVL)
622
623 Notes:
624
625 * for the purposes of LOAD / STORE, Integer Registers which are
626 marked as a Vector will result in a Vector LOAD / STORE.
627 * Vector Lengths are *not* the same as vsetl but are an integral part
628 of vsetl.
* Actual vector length is *multiplied* by how many blocks of length
630 "bitwidth" may fit into an XLEN-sized register file.
631 * Predication is a key-value store due to the implicit referencing,
632 as opposed to having the predicate register explicitly in the instruction.
633
634 ## Predication CSR
635
636 The Predication CSR is a key-value store indicating whether, if a given
637 destination register (integer or floating-point) is referred to in an
638 instruction, it is to be predicated. The first entry is whether predication
639 is enabled. The second entry is whether the register index refers to a
640 floating-point or an integer register. The third entry is the index
641 of that register which is to be predicated (if referred to). The fourth entry
642 is the integer register that is treated as a bitfield, indexable by the
643 vector element index.
644
645 | RegNo | 6 | 5 | (4..0) | (4..0) |
646 | ----- | - | - | ------- | ------- |
647 | r0 | pren0 | i/f | regidx | predidx |
648 | r1 | pren1 | i/f | regidx | predidx |
649 | .. | pren.. | i/f | regidx | predidx |
650 | r15 | pren15 | i/f | regidx | predidx |
651
652 The Predication CSR Table is a key-value store, so implementation-wise
653 it will be faster to turn the table around (maintain topologically
654 equivalent state):
655
656 fp_pred_enabled[32];
657 int_pred_enabled[32];
658 for (i = 0; i < 16; i++)
659 if CSRpred[i].pren:
660 idx = CSRpred[i].regidx
661 predidx = CSRpred[i].predidx
662 if CSRpred[i].type == 0: # integer
663 int_pred_enabled[idx] = 1
664 int_pred_reg[idx] = predidx
665 else:
666 fp_pred_enabled[idx] = 1
667 fp_pred_reg[idx] = predidx
668
669 So when an operation is to be predicated, it is the internal state that
670 is used. In Section 6.4.2 of Hwacha's Manual (EECS-2015-262) the following
671 pseudo-code for operations is given, where p is the explicit (direct)
672 reference to the predication register to be used:
673
674 for (int i=0; i<vl; ++i)
675 if ([!]preg[p][i])
676 (d ? vreg[rd][i] : sreg[rd]) =
677 iop(s1 ? vreg[rs1][i] : sreg[rs1],
678 s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
679
680 This instead becomes an *indirect* reference using the *internal* state
681 table generated from the Predication CSR key-value store:
682
    if type(iop) == INT:
        pred_enabled = int_pred_enabled
        preg = int_pred_reg[rd]
    else:
        pred_enabled = fp_pred_enabled
        preg = fp_pred_reg[rd]

    for (int i=0; i<vl; ++i)
      if (pred_enabled[rd] && [!]preg[i])
        (d ? vreg[rd][i] : sreg[rd]) =
           iop(s1 ? vreg[rs1][i] : sreg[rs1],
               s2 ? vreg[rs2][i] : sreg[rs2]); // for insts with 2 inputs
695
696 ## MAXVECTORDEPTH
697
698 MAXVECTORDEPTH is the same concept as MVL in RVV. However in Simple-V,
699 given that its primary (base, unextended) purpose is for 3D, Video and
700 other purposes (not requiring supercomputing capability), it makes sense
701 to limit MAXVECTORDEPTH to the regfile bitwidth (32 for RV32, 64 for RV64
702 and so on).
703
704 The reason for setting this limit is so that predication registers, when
705 marked as such, may fit into a single register as opposed to fanning out
706 over several registers. This keeps the implementation a little simpler.
707 Note that RVV on top of Simple-V may choose to over-ride this decision.
708
709 ## Vector-length CSRs
710
Vector lengths are interpreted as meaning "any instruction referring to
r(N) generates implicit identical instructions referring to registers
r(N+1) through r(N+M-1), where M is the Vector Length". Vector Lengths
may be set to use up to 16 registers in the register file.
715
716 One separate CSR table is needed for each of the integer and floating-point
717 register files:
718
719 | RegNo | (3..0) |
720 | ----- | ------ |
721 | r0 | vlen0 |
722 | r1 | vlen1 |
723 | .. | vlen.. |
724 | r31 | vlen31 |
725
726 An array of 32 4-bit CSRs is needed (4 bits per register) to indicate
whether a register is, if referred to in any standard instruction,
728 implicitly to be treated as a vector. A vector length of 1 indicates
729 that it is to be treated as a scalar. Vector lengths of 0 are reserved.
730
731 Internally, implementations may choose to use the non-zero vector length
732 to set a bit-field per register, to be used in the instruction decode phase.
733 In this way any standard (current or future) operation involving
734 register operands may detect if the operation is to be vector-vector,
735 vector-scalar or scalar-scalar (standard) simply through a single
736 bit test.
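
A sketch of that decode-phase test in python-style pseudo-code, assuming
the implementation chooses to cache the non-zero vector lengths as a single
"is-vectorised" bit-field, one bit per register (names are illustrative):

    def classify_operands(rs1, rs2, is_vectorised):
        # is_vectorised: one bit per register, set when its vector length > 1
        v1 = (is_vectorised >> rs1) & 1
        v2 = (is_vectorised >> rs2) & 1
        if v1 and v2:
            return "vector-vector"
        if v1 or v2:
            return "vector-scalar"
        return "scalar-scalar"             # standard (unvectorised) operation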
737
738 Note that when using the "vsetl rs1, rs2" instruction (caveat: when the
739 bitwidth is specifically not set) it becomes:
740
741 CSRvlength = MIN(MIN(CSRvectorlen[rs1], MAXVECTORDEPTH), rs2)
742
743 This is in contrast to RVV:
744
745 CSRvlength = MIN(MIN(rs1, MAXVECTORDEPTH), rs2)
746
747 ## Element (SIMD) bitwidth CSRs
748
749 Element bitwidths may be specified with a per-register CSR, and indicate
750 how a register (integer or floating-point) is to be subdivided.
751
752 | RegNo | (2..0) |
753 | ----- | ------ |
754 | r0 | vew0 |
755 | r1 | vew1 |
756 | .. | vew.. |
757 | r31 | vew31 |
758
759 vew may be one of the following (giving a table "bytestable", used below):
760
761 | vew | bitwidth |
762 | --- | -------- |
763 | 000 | default |
764 | 001 | 8 |
765 | 010 | 16 |
766 | 011 | 32 |
767 | 100 | 64 |
768 | 101 | 128 |
769 | 110 | rsvd |
770 | 111 | rsvd |
771
772 Extending this table (with extra bits) is covered in the section
773 "Implementing RVV on top of Simple-V".
774
775 Note that when using the "vsetl rs1, rs2" instruction, taking bitwidth
776 into account, it becomes:
777
778 vew = CSRbitwidth[rs1]
779 if (vew == 0)
780 bytesperreg = (XLEN/8) # or FLEN as appropriate
781 else:
782 bytesperreg = bytestable[vew] # 1 2 4 8 16
783 simdmult = (XLEN/8) / bytesperreg # or FLEN as appropriate
784 vlen = CSRvectorlen[rs1] * simdmult
785 CSRvlength = MIN(MIN(vlen, MAXVECTORDEPTH), rs2)
786
787 The reason for multiplying the vector length by the number of SIMD elements
788 (in each individual register) is so that each SIMD element may optionally be
789 predicated.
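
A worked example of the calculation above, in python form (the concrete
values, namely XLEN=64, a 16-bit element width, a vector length of 2 and an
rs2 request of 10, are purely illustrative assumptions):

    XLEN = 64
    MAXVECTORDEPTH = 64
    bytestable = {0b001: 1, 0b010: 2, 0b011: 4, 0b100: 8, 0b101: 16}

    vew = 0b010                 # 16-bit elements
    vectorlen_rs1 = 2           # CSRvectorlen[rs1]
    rs2 = 10                    # requested length

    bytesperreg = (XLEN // 8) if vew == 0 else bytestable[vew]   # 2
    simdmult = (XLEN // 8) // bytesperreg                        # 4 elements/reg
    vlen = vectorlen_rs1 * simdmult                              # 8
    CSRvlength = min(min(vlen, MAXVECTORDEPTH), rs2)             # 8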
790
791 An example of how to subdivide the register file when bitwidth != default
792 is given in the section "Bitwidth Virtual Register Reordering".
793
794 # Exceptions
795
796 > What does an ADD of two different-sized vectors do in simple-V?
797
* if the two source operands do not have the same vector length, throw an exception.
799 * if the destination operand is also a vector, and the source is longer
800 than the destination, throw an exception.
801
802 > And what about instructions like JALR? 
803 > What does jumping to a vector do?
804
805 * Throw an exception. Whether that actually results in spawning threads
806 as part of the trap-handling remains to be seen.
807
# Implementing V on top of Simple-V
809
810 With Simple-V converting the original RVV draft concept-for-concept
811 from explicit opcodes to implicit overloading of existing RV Standard
812 Extensions, certain features were (deliberately) excluded that need
813 to be added back in for RVV to reach its full potential. This is
814 made slightly complicated by the fact that RVV itself has two
815 levels: Base and reserved future functionality.
816
817 * Representation Encoding is entirely left out of Simple-V in favour of
818 implicitly taking the exact (explicit) meaning from RV Standard Extensions.
819 * VCLIP and VCLIPI do not have corresponding RV Standard Extension
820 opcodes (and are the only such operations).
821 * Extended Element bitwidths (1 through to 24576 bits) were left out
822 of Simple-V as, again, there is no corresponding RV Standard Extension
823 that covers anything even below 32-bit operands.
824 * Polymorphism was entirely left out of Simple-V due to the inherent
825 complexity of automatic type-conversion.
826 * Vector Register files were specifically left out of Simple-V in favour
827 of fitting on top of the integer and floating-point files. An
828 "RVV re-retro-fit" needs to be able to mark (implicitly marked)
829 registers as being actually in a separate *vector* register file.
830 * Fortunately in RVV (Draft 0.4, V2.3-Draft), the "base" vector
831 register file size is 5 bits (32 registers), whilst the "Extended"
832 variant of RVV specifies 8 bits (256 registers) and has yet to
833 be published.
* One big difference: in Sections 17.12 and 17.17, there are only two possible
835 predication registers in RVV "Base". Through the "indirect" method,
836 Simple-V provides a key-value CSR table that allows (arbitrarily)
837 up to 16 (TBD) of either the floating-point or integer registers to
838 be marked as "predicated" (key), and if so, which integer register to
839 use as the predication mask (value).
840
841 **TODO**
842
843 # Implementing P (renamed to DSP) on top of Simple-V
844
845 * Implementors indicate chosen bitwidth support in Vector-bitwidth CSR
846 (caveat: anything not specified drops through to software-emulation / traps)
847 * TODO
848
849 # Appendix
850
851 ## V-Extension to Simple-V Comparative Analysis
852
853 This section has been moved to its own page [[v_comparative_analysis]]
854
855 ## P-Ext ISA
856
857 This section has been moved to its own page [[p_comparative_analysis]]
858
859 ## Comparison of "Traditional" SIMD, Alt-RVP, Simple-V and RVV Proposals <a name="parallelism_comparisons"></a>
860
861 This section compares the various parallelism proposals as they stand,
862 including traditional SIMD, in terms of features, ease of implementation,
863 complexity, flexibility, and die area.
864
865 ### [[alt_rvp]]
866
867 Primary benefit of Alt-RVP is the simplicity with which parallelism
868 may be introduced (effective multiplication of regfiles and associated ALUs).
869
870 * plus: the simplicity of the lanes (combined with the regularity of
allocating identical opcodes to multiple independent registers) meaning
872 that SRAM or 2R1W can be used for entire regfile (potentially).
873 * minus: a more complex instruction set where the parallelism is much
874 more explicitly directly specified in the instruction and
875 * minus: if you *don't* have an explicit instruction (opcode) and you
876 need one, the only place it can be added is... in the vector unit and
877 * minus: opcode functions (and associated ALUs) duplicated in Alt-RVP are
878 not useable or accessible in other Extensions.
879 * plus-and-minus: Lanes may be utilised for high-speed context-switching
880 but with the down-side that they're an all-or-nothing part of the Extension.
881 No Alt-RVP: no fast register-bank switching.
882 * plus: Lane-switching would mean that complex operations not suited to
883 parallelisation can be carried out, followed by further parallel Lane-based
884 work, without moving register contents down to memory (and back)
885 * minus: Access to registers across multiple lanes is challenging. "Solution"
886 is to drop data into memory and immediately back in again (like MMX).
887
888 ### Simple-V
889
890 Primary benefit of Simple-V is the OO abstraction of parallel principles
891 from actual (internal) parallel hardware. It's an API in effect that's
892 designed to be slotted in to an existing implementation (just after
893 instruction decode) with minimum disruption and effort.
894
895 * minus: the complexity of having to use register renames, OoO, VLIW,
896 register file cacheing, all of which has been done before but is a
897 pain
898 * plus: transparent re-use of existing opcodes as-is just indirectly
899 saying "this register's now a vector" which
900 * plus: means that future instructions also get to be inherently
901 parallelised because there's no "separate vector opcodes"
902 * plus: Compressed instructions may also be (indirectly) parallelised
903 * minus: the indirect nature of Simple-V means that setup (setting
904 a CSR register to indicate vector length, a separate one to indicate
905 that it is a predicate register and so on) means a little more setup
906 time than Alt-RVP or RVV's "direct and within the (longer) instruction"
907 approach.
908 * plus: shared register file meaning that, like Alt-RVP, complex
909 operations not suited to parallelisation may be carried out interleaved
910 between parallelised instructions *without* requiring data to be dropped
911 down to memory and back (into a separate vectorised register engine).
912 * plus-and-maybe-minus: re-use of integer and floating-point 32-wide register
913 files means that huge parallel workloads would use up considerable
914 chunks of the register file. However in the case of RV64 and 32-bit
915 operations, that effectively means 64 slots are available for parallel
916 operations.
917 * plus: inherent parallelism (actual parallel ALUs) doesn't actually need to
918 be added, yet the instruction opcodes remain unchanged (and still appear
919 to be parallel). consistent "API" regardless of actual internal parallelism:
920 even an in-order single-issue implementation with a single ALU would still
appear to have parallel vectorisation.
922 * hard-to-judge: if actual inherent underlying ALU parallelism is added it's
hard to say if there would be pluses or minuses (on die area). At worst it
924 would be "no worse" than existing register renaming, OoO, VLIW and register
925 file cacheing schemes.
926
927 ### RVV (as it stands, Draft 0.4 Section 17, RISC-V ISA V2.3-Draft)
928
929 RVV is extremely well-designed and has some amazing features, including
930 2D reorganisation of memory through LOAD/STORE "strides".
931
932 * plus: regular predictable workload means that implementations may
933 streamline effects on L1/L2 Cache.
934 * plus: regular and clear parallel workload also means that lanes
935 (similar to Alt-RVP) may be used as an implementation detail,
936 using either SRAM or 2R1W registers.
937 * plus: separate engine with no impact on the rest of an implementation
938 * minus: separate *complex* engine with no RTL (ALUs, Pipeline stages) reuse
939 really feasible.
940 * minus: no ISA abstraction or re-use either: additions to other Extensions
941 do not gain parallelism, resulting in prolific duplication of functionality
942 inside RVV *and out*.
943 * minus: when operations require a different approach (scalar operations
944 using the standard integer or FP regfile) an entire vector must be
945 transferred out to memory, into standard regfiles, then back to memory,
946 then back to the vector unit, this to occur potentially multiple times.
947 * minus: will never fit into Compressed instruction space (as-is. May
948 be able to do so if "indirect" features of Simple-V are partially adopted).
949 * plus-and-slight-minus: extended variants may address up to 256
950 vectorised registers (requires 48/64-bit opcodes to do it).
951 * minus-and-partial-plus: separate engine plus complexity increases
952 implementation time and die area, meaning that adoption is likely only
953 to be in high-performance specialist supercomputing (where it will
954 be absolutely superb).
955
956 ### Traditional SIMD
957
958 The only really good things about SIMD are how easy it is to implement and
959 get good performance. Unfortunately that makes it quite seductive...
960
961 * plus: really straightforward, ALU basically does several packed operations
962 at once. Parallelism is inherent at the ALU, making the addition of
963 SIMD-style parallelism an easy decision that has zero significant impact
964 on the rest of any given architectural design and layout.
965 * plus (continuation): SIMD in simple in-order single-issue designs can
966 therefore result in superb throughput, easily achieved even with a very
967 simple execution model.
968 * minus: ridiculously complex setup and corner-cases that disproportionately
969 increase instruction count on what would otherwise be a "simple loop",
970 should the number of elements in an array not happen to exactly match
971 the SIMD group width.
972 * minus: getting data usefully out of registers (if separate regfiles
973 are used) means outputting to memory and back.
974 * minus: quite a lot of supplementary instructions for bit-level manipulation
975 are needed in order to efficiently extract (or prepare) SIMD operands.
976 * minus: MASSIVE proliferation of ISA both in terms of opcodes in one
977 dimension and parallelism (width): an at least O(N^2) and quite probably
978 O(N^3) ISA proliferation that often results in several thousand
separate instructions, all requiring separate and distinct corner-case
980 algorithms!
981 * minus: EVEN BIGGER proliferation of SIMD ISA if the functionality of
982 8, 16, 32 or 64-bit reordering is built-in to the SIMD instruction.
983 For example: add (high|low) 16-bits of r1 to (low|high) of r2 requires
984 four separate and distinct instructions: one for (r1:low r2:high),
985 one for (r1:high r2:low), one for (r1:high r2:high) and one for
986 (r1:low r2:low) *per function*.
987 * minus: EVEN BIGGER proliferation of SIMD ISA if there is a mismatch
988 between operand and result bit-widths. In combination with high/low
989 proliferation the situation is made even worse.
990 * minor-saving-grace: some implementations *may* have predication masks
991 that allow control over individual elements within the SIMD block.
992
993 ## Comparison *to* Traditional SIMD: Alt-RVP, Simple-V and RVV Proposals <a name="simd_comparison"></a>
994
995 This section compares the various parallelism proposals as they stand,
996 *against* traditional SIMD as opposed to *alongside* SIMD. In other words,
997 the question is asked "How can each of the proposals effectively implement
998 (or replace) SIMD, and how effective would they be"?
999
1000 ### [[alt_rvp]]
1001
1002 * Alt-RVP would not actually replace SIMD but would augment it: just as with
1003 a SIMD architecture where the ALU becomes responsible for the parallelism,
1004 Alt-RVP ALUs would likewise be so responsible... with *additional*
1005 (lane-based) parallelism on top.
1006 * Thus at least some of the downsides of SIMD ISA O(N^3) proliferation by
1007 at least one dimension are avoided (architectural upgrades introducing
1008 128-bit then 256-bit then 512-bit variants of the exact same 64-bit
1009 SIMD block)
1010 * Thus, unfortunately, Alt-RVP would suffer the same inherent proliferation
1011 of instructions as SIMD, albeit not quite as badly (due to Lanes).
1012 * In the same discussion for Alt-RVP, an additional proposal was made to
1013 be able to subdivide the bits of each register lane (columns) down into
1014 arbitrary bit-lengths (RGB 565 for example).
1015 * A recommendation was given instead to make the subdivisions down to 32-bit,
1016 16-bit or even 8-bit, effectively dividing the registerfile into
1017 Lane0(H), Lane0(L), Lane1(H) ... LaneN(L) or further. If inter-lane
1018 "swapping" instructions were then introduced, some of the disadvantages
1019 of SIMD could be mitigated.
1020
1021 ### RVV
1022
1023 * RVV is designed to replace SIMD with a better paradigm: arbitrary-length
1024 parallelism.
1025 * However whilst SIMD is usually designed for single-issue in-order simple
1026 DSPs with a focus on Multimedia (Audio, Video and Image processing),
1027 RVV's primary focus appears to be on Supercomputing: optimisation of
1028 mathematical operations that fit into the OpenCL space.
1029 * Adding functions (operations) that would normally fit (in parallel)
1030 into a SIMD instruction requires an equivalent to be added to the
1031 RVV Extension, if one does not exist. Given the specialist nature of
1032 some SIMD instructions (8-bit or 16-bit saturated or halving add),
1033 this possibility seems extremely unlikely to occur, even if the
1034 implementation overhead of RVV were acceptable (compared to
1035 normal SIMD/DSP-style single-issue in-order simplicity).
1036
1037 ### Simple-V
1038
1039 * Simple-V borrows hugely from RVV as it is intended to be easy to
1040 topologically transplant every single instruction from RVV (as
1041 designed) into Simple-V equivalents, with *zero loss of functionality
1042 or capability*.
1043 * With the "parallelism" abstracted out, a hypothetical SIMD-less "DSP"
1044 Extension which contained the basic primitives (non-parallelised
1045 8, 16 or 32-bit SIMD operations) inherently *become* parallel,
1046 automatically.
1047 * Additionally, standard operations (ADD, MUL) that would normally have
1048 to have special SIMD-parallel opcodes added need no longer have *any*
1049 of the length-dependent variants (2of 32-bit ADDs in a 64-bit register,
1050 4of 32-bit ADDs in a 128-bit register) because Simple-V takes the
1051 *standard* RV opcodes (present and future) and automatically parallelises
1052 them.
1053 * By inheriting the RVV feature of arbitrary vector-length, then just as
1054 with RVV the corner-cases and ISA proliferation of SIMD is avoided.
1055 * Whilst not entirely finalised, registers are expected to be
1056 capable of being subdivided down to an implementor-chosen bitwidth
1057 in the underlying hardware (r1 becomes r1[31..24] r1[23..16] r1[15..8]
1058 and r1[7..0], or just r1[31..16] r1[15..0]) where implementors can
1059 choose to have separate independent 8-bit ALUs or dual-SIMD 16-bit
1060 ALUs that perform twin 8-bit operations as they see fit, or anything
1061 else including no subdivisions at all.
1062 * Even though implementors have that choice even to have full 64-bit
1063 (with RV64) SIMD, they *must* provide predication that transparently
1064 switches off appropriate units on the last loop, thus neatly fitting
1065 underlying SIMD ALU implementations *into* the arbitrary vector-length
1066 RVV paradigm, keeping the uniform consistent API that is a key strategic
1067 feature of Simple-V.
1068 * With Simple-V fitting into the standard register files, certain classes
1069 of SIMD operations such as High/Low arithmetic (r1[31..16] + r2[15..0])
1070 can be done by applying *Parallelised* Bit-manipulation operations
1071 followed by parallelised *straight* versions of element-to-element
1072 arithmetic operations, even if the bit-manipulation operations require
1073 changing the bitwidth of the "vectors" to do so. Predication can
1074 be utilised to skip high words (or low words) in source or destination.
1075 * In essence, the key downside of SIMD - massive duplication of
1076 identical functions over time as an architecture evolves from 32-bit
1077 wide SIMD all the way up to 512-bit, is avoided with Simple-V, through
1078 vector-style parallelism being dropped on top of 8-bit or 16-bit
1079 operations, all the while keeping a consistent ISA-level "API" irrespective
1080 of implementor design choices (or indeed actual implementations).
1081
1082 ### Example Instruction translation: <a name="example_translation"></a>
1083
1084 Instructions "ADD r2 r4 r4" would result in three instructions being
1085 generated and placed into the FIFO:
1086
1087 * ADD r2 r4 r4
1088 * ADD r2 r5 r5
1089 * ADD r2 r6 r6
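
A python-style sketch of the expansion (the assumption here, consistent
with the three instructions listed, is that r4 has been CSR-marked as a
vector of length 3 whilst r2 remains scalar, so only the vectorised
register indices step):

    def expand(opcode, rd, rs1, rs2, vectorlen):
        vl = max(vectorlen.get(r, 1) for r in (rd, rs1, rs2))
        fifo = []
        for i in range(vl):
            step = lambda r: r + i if vectorlen.get(r, 1) > 1 else r
            fifo.append((opcode, step(rd), step(rs1), step(rs2)))
        return fifo

    # expand("ADD", 2, 4, 4, {4: 3}) yields:
    #   [("ADD", 2, 4, 4), ("ADD", 2, 5, 5), ("ADD", 2, 6, 6)]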
1090
1091 ## Example of vector / vector, vector / scalar, scalar / scalar => vector add
1092
1093 register CSRvectorlen[XLEN][4]; # not quite decided yet about this one...
1094 register CSRpredicate[XLEN][4]; # 2^4 is max vector length
1095 register CSRreg_is_vectorised[XLEN]; # just for fun support scalars as well
1096 register x[32][XLEN];
1097
1098 function op_add(rd, rs1, rs2, predr)
1099 {
1100    /* note that this is ADD, not PADD */
1101    int i, id, irs1, irs2;
1102    # checks CSRvectorlen[rd] == CSRvectorlen[rs] etc. ignored
1103    # also destination makes no sense as a scalar but what the hell...
1104    for (i = 0, id=0, irs1=0, irs2=0; i<CSRvectorlen[rd]; i++)
1105       if (CSRpredicate[predr][i]) # i *think* this is right...
1106          x[rd+id] <= x[rs1+irs1] + x[rs2+irs2];
1107       # now increment the idxs
1108       if (CSRreg_is_vectorised[rd]) # bitfield check rd, scalar/vector?
1109          id += 1;
1110       if (CSRreg_is_vectorised[rs1]) # bitfield check rs1, scalar/vector?
1111          irs1 += 1;
1112       if (CSRreg_is_vectorised[rs2]) # bitfield check rs2, scalar/vector?
1113          irs2 += 1;
1114 }
1115
1116 ## Retro-fitting Predication into branch-explicit ISA <a name="predication_retrofit"></a>
1117
1118 One of the goals of this parallelism proposal is to avoid instruction
duplication. However, with the base ISA having been designed explicitly
to *avoid* condition-codes entirely, shoe-horning predication into it
becomes quite challenging.
1122
1123 However what if all branch instructions, if referencing a vectorised
1124 register, were instead given *completely new analogous meanings* that
1125 resulted in a parallel bit-wise predication register being set? This
1126 would have to be done for both C.BEQZ and C.BNEZ, as well as BEQ, BNE,
1127 BLT and BGE.
1128
We might imagine that FEQ, FLT and FLE would also need to be converted,
1130 however these are effectively *already* in the precise form needed and
1131 do not need to be converted *at all*! The difference is that FEQ, FLT
1132 and FLE *specifically* write a 1 to an integer register if the condition
1133 holds, and 0 if not. All that needs to be done here is to say, "if
1134 the integer register is tagged with a bit that says it is a predication
1135 register, the **bit** in the integer register is set based on the
1136 current vector index" instead.
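
A hedged sketch of what those re-interpreted semantics might look like,
covering both cases above: a vectorised BEQ that sets per-element bits of a
predication register instead of branching, and an FEQ-style compare whose
existing 0/1 result simply lands in the bit selected by the current vector
index. All names here are illustrative, not proposed mnemonics.

    def vec_beq_setpred(pred, src1, src2):
        # BEQ on vectorised operands: one predicate bit per element, no branch taken
        for i, (a, b) in enumerate(zip(src1, src2)):
            if a == b:
                pred |= 1 << i
            else:
                pred &= ~(1 << i)
        return pred

    def feq_element(pred, a, b, element_index):
        # FEQ already produces 0/1; with the destination tagged as a predication
        # register, that 0/1 is steered into the bit for the current vector index
        bit = 1 if a == b else 0
        return (pred & ~(1 << element_index)) | (bit << element_index)

    assert vec_beq_setpred(0, [1, 2, 3], [1, 0, 3]) == 0b101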
1137
1138 There is, in the standard Conditional Branch instruction, more than
1139 adequate space to interpret it in a similar fashion:
1140
1141 [[!table data="""
1142 31 |30 ..... 25 |24 ... 20 | 19 ... 15 | 14 ...... 12 | 11 ....... 8 | 7 | 6 ....... 0 |
1143 imm[12] | imm[10:5] | rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode |
1144 1 | 6 | 5 | 5 | 3 | 4 | 1 | 7 |
1145 offset[12,10:5] || src2 | src1 | BEQ | offset[11,4:1] || BRANCH |
1146 """]]
1147
1148 This would become:
1149
1150 [[!table data="""
1151 31 | 30 .. 25 |24 ... 20 | 19 15 | 14 12 | 11 .. 8 | 7 | 6 ... 0 |
1152 imm[12] | imm[10:5]| rs2 | rs1 | funct3 | imm[4:1] | imm[11] | opcode |
1153 1 | 6 | 5 | 5 | 3 | 4 | 1 | 7 |
1154 reserved || src2 | src1 | BEQ | predicate rs3 || BRANCH |
1155 """]]
1156
Similarly the C.BEQZ and C.BNEZ instruction format may be retro-fitted,
with the interesting side-effect that there is space within what is presently
the "immediate offset" field to add in not only a bit field distinguishing
floating-point compare from integer compare and a second source register,
but also to use some of the bits as a predication target.
1163
1164 [[!table data="""
1165 15 ...... 13 | 12 ........... 10 | 9..... 7 | 6 ................. 2 | 1 .. 0 |
1166 funct3 | imm | rs10 | imm | op |
1167 3 | 3 | 3 | 5 | 2 |
1168 C.BEQZ | offset[8,4:3] | src | offset[7:6,2:1,5] | C1 |
1169 """]]
1170
The retro-fitted version uses the CS format:
1172
1173 [[!table data="""
1174 15 ...... 13 | 12 ........... 10 | 9..... 7 | 6 .. 5 | 4......... 2 | 1 .. 0 |
1175 funct3 | imm | rs10 | imm | | op |
1176 3 | 3 | 3 | 2 | 3 | 2 |
1177 C.BEQZ | predicate rs3 | src1 | I/F B | src2 | C1 |
1178 """]]
1179
1180 Bit 6 would be decoded as "operation refers to Integer or Float" including
1181 interpreting src1 and src2 accordingly as outlined in Table 12.2 of the
1182 "C" Standard, version 2.0,
1183 whilst Bit 5 would allow the operation to be extended, in combination with
1184 funct3 = 110 or 111: a combination of four distinct (predicated) comparison
operators. In both floating-point and integer cases those could be
EQ/NEQ/LT/LE (with GT and GE being synthesised by swapping src1 and src2).
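
To make the bit-level layout above concrete, here is a hedged field-extraction
sketch in Python, using the field names and positions from the table (this is
the proposed, hypothetical format, not an existing RVC encoding):

    def decode_predicated_compare(insn16):
        op       = insn16 & 0x3          # bits 1..0  : C1 quadrant
        src2     = (insn16 >> 2) & 0x7   # bits 4..2  : second source (compressed reg)
        ext      = (insn16 >> 5) & 0x1   # bit 5      : extends funct3 -> EQ/NEQ/LT/LE
        int_flt  = (insn16 >> 6) & 0x1   # bit 6      : integer vs floating-point compare
        src1     = (insn16 >> 7) & 0x7   # bits 9..7  : first source (compressed reg)
        pred_rs3 = (insn16 >> 10) & 0x7  # bits 12..10: predication target register
        funct3   = (insn16 >> 13) & 0x7  # bits 15..13: 110 or 111 (C.BEQZ / C.BNEZ slots)
        return funct3, pred_rs3, src1, int_flt, ext, src2, op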
1187
1188 ## Register reordering <a name="register_reordering"></a>
1189
1190 ### Register File
1191
1192 | Reg Num | Bits |
1193 | ------- | ---- |
| r0 | (31..0) |
| r1 | (31..0) |
| r2 | (31..0) |
| r3 | (31..0) |
| r4 | (31..0) |
| r5 | (31..0) |
| r6 | (31..0) |
| r7 | (31..0) |
| .. | (31..0) |
| r31| (31..0) |
1204
1205 ### Vectorised CSR
1206
This may not be an actual CSR: it may be generated from the Vector Length CSR,
as a single bit per register is less burdensome on the instruction decode phase.
1209
1210 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0 |
1211 | - | - | - | - | - | - | - | - |
1212 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
1213
1214 ### Vector Length CSR
1215
1216 | Reg Num | (3..0) |
1217 | ------- | ---- |
1218 | r0 | 2 |
1219 | r1 | 0 |
1220 | r2 | 1 |
1221 | r3 | 1 |
1222 | r4 | 3 |
1223 | r5 | 0 |
1224 | r6 | 0 |
1225 | r7 | 1 |
1226
1227 ### Virtual Register Reordering
1228
1229 This example assumes the above Vector Length CSR table
1230
| Reg Num | Bits (0) | Bits (1) | Bits (2) |
| ------- | -------- | -------- | -------- |
| r0 | (31..0) | (31..0) | |
| r2 | (31..0) | | |
| r3 | (31..0) | | |
| r4 | (31..0) | (31..0) | (31..0) |
| r7 | (31..0) | | |
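
The relationship between the two tables can be expressed directly. A small
hedged Python sketch, using the example values above, lists which physical
registers each vectorised register actually occupies:

    # Vector Length CSR values from the earlier table (r1, r5, r6 have length 0)
    vectorlen = {0: 2, 1: 0, 2: 1, 3: 1, 4: 3, 5: 0, 6: 0, 7: 1}

    for reg in sorted(vectorlen):
        vlen = vectorlen[reg]
        if vlen:   # length-0 entries do not appear in the reordered view
            span = list(range(reg, reg + vlen))
            print("r%d occupies physical registers %s" % (reg, span))
    # r0 -> [0, 1], r2 -> [2], r3 -> [3], r4 -> [4, 5, 6], r7 -> [7]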
1238
1239 ### Bitwidth Virtual Register Reordering
1240
This example goes a little further and illustrates the effect of a
bitwidth CSR having been set on a register. Preconditions:
1243
1244 * RV32 assumed
1245 * CSRintbitwidth[2] = 010 # integer r2 is 16-bit
1246 * CSRintvlength[2] = 3 # integer r2 is a vector of length 3
1247 * vsetl rs1, 5 # set the vector length to 5
1248
1249 This is interpreted as follows:
1250
1251 * Given that the context is RV32, ELEN=32.
1252 * With ELEN=32 and bitwidth=16, the number of SIMD elements is 2
1253 * Therefore the actual vector length is up to *six* elements
1254 * However vsetl sets a length 5 therefore the last "element" is skipped
1255
1256 So when using an operation that uses r2 as a source (or destination)
1257 the operation is carried out as follows:
1258
1259 * 16-bit operation on r2(15..0) - vector element index 0
1260 * 16-bit operation on r2(31..16) - vector element index 1
1261 * 16-bit operation on r3(15..0) - vector element index 2
1262 * 16-bit operation on r3(31..16) - vector element index 3
1263 * 16-bit operation on r4(15..0) - vector element index 4
1264 * 16-bit operation on r4(31..16) **NOT** carried out due to length being 5
1265
1266 Predication has been left out of the above example for simplicity, however
1267 predication is ANDed with the latter stages (vsetl not equal to maximum
1268 capacity).
1269
1270 Note also that it is entirely an implementor's choice as to whether to have
1271 actual separate ALUs down to the minimum bitwidth, or whether to have something
more akin to traditional SIMD (at any level of subdivision: carrying out
8-bit SIMD operations 32-bits at a time is perfectly acceptable, as is
carrying them out 16-bits at a time, requiring two ALUs).
1275 Regardless of the internal parallelism choice, *predication must
1276 still be respected*, making Simple-V in effect the "consistent public API".
1277
1278 vew may be one of the following (giving a table "bytestable", used below):
1279
1280 | vew | bitwidth |
1281 | --- | -------- |
1282 | 000 | default |
1283 | 001 | 8 |
1284 | 010 | 16 |
1285 | 011 | 32 |
1286 | 100 | 64 |
1287 | 101 | 128 |
1288 | 110 | rsvd |
1289 | 111 | rsvd |
1290
1291 Pseudocode for vector length taking CSR SIMD-bitwidth into account:
1292
    vew = CSRbitwidth[rs1]
    if (vew == 0):
        bytesperreg = (XLEN/8) # or FLEN as appropriate
    else:
        bytesperreg = bytestable[vew] # 1 2 4 8 16
    simdmult = (XLEN/8) / bytesperreg # or FLEN as appropriate
    vlen = CSRvectorlen[rs1] * simdmult
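
Worked example, using the preconditions from the "Bitwidth Virtual Register
Reordering" section above (RV32, r2 set to 16-bit, vector length 3): vew = 010
gives bytesperreg = 2, so simdmult = (32/8)/2 = 2 and vlen = 3 * 2 = 6,
matching the "up to six elements" conclusion reached there.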
1300
1301 To index an element in a register rnum where the vector element index is i:
1302
    function regoffs(rnum, i):
        regidx = floor(i / simdmult)            # integer-div rounded down
        byteidx = (i % simdmult) * bytesperreg  # byte offset of the element within the register
        return rnum + regidx,                   # actual real register
               byteidx * 8,                     # low bit of the element
               byteidx * 8 + bytesperreg*8 - 1, # high bit of the element
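
Continuing the same worked example (bytesperreg = 2, simdmult = 2):
regoffs(2, 1) yields register r2, bits 31..16, and regoffs(2, 4) yields
register r4, bits 15..0, matching "16-bit operation on r2(31..16) - vector
element index 1" and "16-bit operation on r4(15..0) - vector element index 4"
in the earlier list.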
1309
1310 ### Insights
1311
1312 SIMD register file splitting still to consider. For RV64, benefits of doubling
1313 (quadrupling in the case of Half-Precision IEEE754 FP) the apparent
1314 size of the floating point register file to 64 (128 in the case of HP)
1315 seem pretty clear and worth the complexity.
1316
With 64 virtual 32-bit F.P. registers, and given that 32-bit FP operations are
done on 64-bit registers, it's not so conceptually difficult.  May even
1319 be achieved by *actually* splitting the regfile into 64 virtual 32-bit
1320 registers such that a 64-bit FP scalar operation is dropped into (r0.H
1321 r0.L) tuples.  Implementation therefore hidden through register renaming.
1322
1323 Implementations intending to introduce VLIW, OoO and parallelism
1324 (even without Simple-V) would then find that the instructions are
1325 generated quicker (or in a more compact fashion that is less heavy
1326 on caches). Interestingly we observe then that Simple-V is about
1327 "consolidation of instruction generation", where actual parallelism
1328 of underlying hardware is an implementor-choice that could just as
1329 equally be applied *without* Simple-V even being implemented.
1330
1331 ## Analysis of CSR decoding on latency <a name="csr_decoding_analysis"></a>
1332
It could indeed have been logically deduced (or expected) that there
would be additional decode latency in this proposal, because if the
opcodes are overloaded to have different meanings, there is guaranteed
to be some state, somewhere, directly related to the registers, which
has to be consulted during decode.
1337
1338 There are several cases:
1339
1340 * All operands vector-length=1 (scalars), all operands
1341 packed-bitwidth="default": instructions are passed through direct as if
1342 Simple-V did not exist.  Simple-V is, in effect, completely disabled.
1343 * At least one operand vector-length > 1, all operands
1344 packed-bitwidth="default": any parallel vector ALUs placed on "alert",
1345 virtual parallelism looping may be activated.
1346 * All operands vector-length=1 (scalars), at least one
1347 operand packed-bitwidth != default: degenerate case of SIMD,
1348 implementation-specific complexity here (packed decode before ALUs or
1349 *IN* ALUs)
* At least one operand vector-length > 1, at least one operand
packed-bitwidth != default: parallel vector ALUs (if any)
placed on "alert", virtual parallelism looping may be activated,
implementation-specific SIMD complexity kicks in (packed decode before
ALUs or *IN* ALUs).
1355
1356 Bear in mind that the proposal includes that the decision whether
1357 to parallelise in hardware or whether to virtual-parallelise (to
1358 dramatically simplify compilers and also not to run into the SIMD
instruction proliferation nightmare) *or* a transparent combination
1360 of both, be done on a *per-operand basis*, so that implementors can
1361 specifically choose to create an application-optimised implementation
1362 that they believe (or know) will sell extremely well, without having
1363 "Extra Standards-Mandated Baggage" that would otherwise blow their area
1364 or power budget completely out the window.
1365
1366 Additionally, two possible CSR schemes have been proposed, in order to
1367 greatly reduce CSR space:
1368
1369 * per-register CSRs (vector-length and packed-bitwidth)
1370 * a smaller number of CSRs with the same information but with an *INDEX*
1371 specifying WHICH register in one of three regfiles (vector, fp, int)
1372 the length and bitwidth applies to.
1373
1374 (See "CSR vector-length and CSR SIMD packed-bitwidth" section for details)
1375
1376 In addition, LOAD/STORE has its own associated proposed CSRs that
1377 mirror the STRIDE (but not yet STRIDE-SEGMENT?) functionality of
1378 V (and Hwacha).
1379
1380 Also bear in mind that, for reasons of simplicity for implementors,
1381 I was coming round to the idea of permitting implementors to choose
1382 exactly which bitwidths they would like to support in hardware and which
1383 to allow to fall through to software-trap emulation.
1384
1385 So the question boils down to:
1386
1387 * whether either (or both) of those two CSR schemes have significant
1388 latency that could even potentially require an extra pipeline decode stage
1389 * whether there are implementations that can be thought of which do *not*
1390 introduce significant latency
* whether it is possible to explicitly (through quite simply
disabling Simple-V-Ext) or implicitly (detect the case all-vlens=1,
all-simd-bitwidths=default) switch OFF any decoding, perhaps even to
the extreme of skipping an entire pipeline stage (if one is needed);
a sketch of the implicit-detection idea follows this list
1395 * whether packed bitwidth and associated regfile splitting is so complex
1396 that it should definitely, definitely be made mandatory that implementors
1397 move regfile splitting into the ALU, and what are the implications of that
1398 * whether even if that *is* made mandatory, is software-trapped
1399 "unsupported bitwidths" still desirable, on the basis that SIMD is such
1400 a complete nightmare that *even* having a software implementation is
1401 better, making Simple-V have more in common with a software API than
1402 anything else.
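
As an illustration of the "implicitly switch OFF any decoding" point in the
list above, here is a hedged software model (not a proposed hardware design):
a single cached "active" bit, recomputed only when the hypothetical
per-register CSRs are written, which the decode stage tests first so that the
all-scalar, default-bitwidth case pays essentially nothing.

    class SimpleVState:
        def __init__(self, nregs=32):
            self.vlen = [1] * nregs        # per-register vector length CSR
            self.bitwidth = [0] * nregs    # per-register packed-bitwidth CSR (0 = default)
            self.active = False            # cached single bit: is Simple-V in use at all?

        def write_csr(self, reg, vlen=None, bitwidth=None):
            if vlen is not None:
                self.vlen[reg] = vlen
            if bitwidth is not None:
                self.bitwidth[reg] = bitwidth
            # recompute the cached bit only on (rare) CSR writes, not on decode
            self.active = (any(l > 1 for l in self.vlen) or
                           any(b != 0 for b in self.bitwidth))

        def needs_vector_decode(self, rd, rs1, rs2):
            if not self.active:            # common case: one bit tested, no extra latency
                return False
            regs = (rd, rs1, rs2)
            return (any(self.vlen[r] > 1 for r in regs) or
                    any(self.bitwidth[r] != 0 for r in regs))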
1403
Whilst the above questions may seem to be severe minuses, there are some strong
pluses:
1406
1407 * Significant reduction of V's opcode space: over 85%.
1408 * Smaller reduction of P's opcode space: around 10%.
1409 * The potential to use Compressed instructions in both Vector and SIMD
1410 due to the overloading of register meaning (implicit vectorisation,
1411 implicit packing)
1412 * Not only present but also future extensions automatically gain parallelism.
1413 * Already mentioned but worth emphasising: the simplification to compiler
1414 writers and assembly-level writers of having the same consistent ISA
1415 regardless of whether the internal level of parallelism (number of
1416 parallel ALUs) is only equal to one ("virtual" parallelism), or is
1417 greater than one, should not be underestimated.
1418
1419 ## Reducing Register Bank porting
1420
1421 This looks quite reasonable.
1422 <https://www.princeton.edu/~rblee/ELE572Papers/MultiBankRegFile_ISCA2000.pdf>
1423
1424 The main details are outlined on page 4.  They propose a 2-level register
1425 cache hierarchy, note that registers are typically only read once, that
1426 you never write back from upper to lower cache level but always go in a
1427 cycle lower -> upper -> ALU -> lower, and at the top of page 5 propose
1428 a scheme where you look ahead by only 2 instructions to determine which
1429 registers to bring into the cache.
1430
1431 The nice thing about a vector architecture is that you *know* that
1432 *even more* registers are going to be pulled in: Hwacha uses this fact
1433 to optimise L1/L2 cache-line usage (avoid thrashing), strangely enough
1434 by *introducing* deliberate latency into the execution phase.
1435
1436 ## Overflow registers in combination with predication
1437
1438 **TODO**: propose overflow registers be actually one of the integer regs
1439 (flowing to multiple regs).
1440
1441 **TODO**: propose "mask" (predication) registers likewise. combination with
1442 standard RV instructions and overflow registers extremely powerful, see
1443 Aspex ASP.
1444
1445 When integer overflow is stored in an easily-accessible bit (or another
1446 register), parallelisation turns this into a group of bits which can
1447 potentially be interacted with in predication, in interesting and powerful
1448 ways. For example, by taking the integer-overflow result as a predication
1449 field and shifting it by one, a predicated vectorised "add one" can emulate
1450 "carry" on arbitrary (unlimited) length addition.
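
A hedged Python model of that trick (element order, limb width and function
name are purely illustrative): the per-element overflow bits, shifted up by
one element, act as the predicate for a vectorised "add one", repeated until
the carries stop rippling.

    MASK = 0xFFFFFFFF   # assume 32-bit elements ("limbs") for illustration

    def bignum_add(a, b):
        # element-wise vector add, with per-element overflow captured as bits
        res = [(x + y) & MASK for x, y in zip(a, b)]
        ovf = [(x + y) > MASK for x, y in zip(a, b)]
        # shift the overflow bits up by one element and use them as a predicate
        # for a vectorised "add one"; repeat until no further carries appear
        while any(ovf):
            pred = [False] + ovf[:-1]
            ovf = [p and r == MASK for p, r in zip(pred, res)]
            res = [(r + 1) & MASK if p else r for p, r in zip(pred, res)]
        return res   # carry out of the top element is dropped in this sketch

    # least-significant element first: 0xFFFFFFFF + 1 carries into the next element
    assert bignum_add([0xFFFFFFFF, 0], [1, 0]) == [0, 1]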
1451
1452 However despite RVV having made room for floating-point exceptions, neither
1453 RVV nor base RV have taken integer-overflow (carry) into account, which
1454 makes proposing it quite challenging given that the relevant (Base) RV
1455 sections are frozen. Consequently it makes sense to forgo this feature.
1456
1457 ## Virtual Memory page-faults on LOAD/STORE
1458
1459
1460 ### Notes from conversations
1461
1462 > I was going through the C.LOAD / C.STORE section 12.3 of V2.3-Draft
1463 > riscv-isa-manual in order to work out how to re-map RVV onto the standard
> ISA, and came across some interesting comments at the bottom of pages 75
1465 > and 76:
1466
1467 > " A common mechanism used in other ISAs to further reduce save/restore
1468 > code size is load- multiple and store-multiple instructions. "
1469
1470 > Fascinatingly, due to Simple-V proposing to use the *standard* register
1471 > file, both C.LOAD / C.STORE *and* LOAD / STORE would in effect be exactly
1472 > that: load-multiple and store-multiple instructions. Which brings us
1473 > on to this comment:
1474
1475 > "For virtual memory systems, some data accesses could be resident in
1476 > physical memory and
1477 > some could not, which requires a new restart mechanism for partially
1478 > executed instructions."
1479
1480 > Which then of course brings us to the interesting question: how does RVV
1481 > cope with the scenario when, particularly with LD.X (Indexed / indirect
1482 > loads), part-way through the loading a page fault occurs?
1483
1484 > Has this been noted or discussed before?
1485
1486 For applications-class platforms, the RVV exception model is
1487 element-precise (that is, if an exception occurs on element j of a
1488 vector instruction, elements 0..j-1 have completed execution and elements
1489 j+1..vl-1 have not executed).
1490
1491 Certain classes of embedded platforms where exceptions are always fatal
1492 might choose to offer resumable/swappable interrupts but not precise
1493 exceptions.
1494
1495
1496 > Is RVV designed in any way to be re-entrant?
1497
1498 Yes.
1499
1500
1501 > What would the implications be for instructions that were in a FIFO at
1502 > the time, in out-of-order and VLIW implementations, where partial decode
1503 > had taken place?
1504
1505 The usual bag of tricks for maintaining precise exceptions applies to
1506 vector machines as well. Register renaming makes the job easier, and
1507 it's relatively cheaper for vectors, since the control cost is amortized
1508 over longer registers.
1509
1510
1511 > Would it be reasonable at least to say *bypass* (and freeze) the
1512 > instruction FIFO (drop down to a single-issue execution model temporarily)
1513 > for the purposes of executing the instructions in the interrupt (whilst
1514 > setting up the VM page), then re-continue the instruction with all
1515 > state intact?
1516
1517 This approach has been done successfully, but it's desirable to be
1518 able to swap out the vector unit state to support context switches on
1519 exceptions that result in long-latency I/O.
1520
1521
1522 > Or would it be better to switch to an entirely separate secondary
1523 > hyperthread context?
1524
1525 > Does anyone have any ideas or know if there is any academic literature
1526 > on solutions to this problem?
1527
1528 The Vector VAX offered imprecise but restartable and swappable exceptions:
1529 http://mprc.pku.edu.cn/~liuxianhua/chn/corpus/Notes/articles/isca/1990/VAX%20vector%20architecture.pdf
1530
1531 Sec. 4.6 of Krste's dissertation assesses some of
1532 the tradeoffs and references a bunch of related work:
1533 http://people.eecs.berkeley.edu/~krste/thesis.pdf
1534
1535
1536 ----
1537
Started reading section 4.6 of Krste's thesis, noted the "IEEE754 F.P.
exceptions" and thought, "hmmm that could go into a CSR, must re-read
the section on FP state CSRs in RVV 0.4-Draft again", then I suddenly
thought, "ah ha! what if the memory exceptions, instead of having
an immediate exception thrown, were simply stored in a type of predication
bit-field with a flag saying "error: this element failed"?"
1544
1545 Then, *after* the vector load (or store, or even operation) was
1546 performed, you could *then* raise an exception, at which point it
1547 would be possible (yes in software... I know....) to go "hmmm, these
1548 indexed operations didn't work, let's get them into memory by triggering
1549 page-loads", then *re-run the entire instruction* but this time with a
1550 "memory-predication CSR" that stops the already-performed operations
1551 (whether they be loads, stores or an arithmetic / FP operation) from
1552 being carried out a second time.
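
A hedged software model of that idea (names and data structures are
illustrative only): the indexed load records failing elements in a bit-field
instead of trapping part-way, and a "done" mask stops already-completed
elements from being repeated when the instruction is re-run after the pages
have been brought in.

    def vec_load_indexed(mem, base, offsets, result, done_mask):
        failed = 0
        for i, off in enumerate(offsets):
            if done_mask & (1 << i):
                continue                 # memory-predication: already done, skip
            addr = base + off
            if addr not in mem:          # stands in for a virtual-memory page fault
                failed |= 1 << i         # record the failure, carry on with the rest
                continue
            result[i] = mem[addr]
            done_mask |= 1 << i
        return done_mask, failed         # failed != 0: fix up pages, re-run

    mem = {100: 7, 104: 8}               # address 108 not yet resident
    result = [None, None, None]
    done, failed = vec_load_indexed(mem, 100, [0, 8, 4], result, 0)
    mem[108] = 9                         # "page in" the missing data
    done, failed = vec_load_indexed(mem, 100, [0, 8, 4], result, done)
    assert result == [7, 9, 8] and failed == 0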
1553
1554 This theoretically could end up being done multiple times in an SMP
1555 environment, and also for LD.X there would be the remote outside annoying
1556 possibility that the indexed memory address could end up being modified.
1557
1558 The advantage would be that the order of execution need not be
1559 sequential, which potentially could have some big advantages.
1560 Am still thinking through the implications as any dependent operations
1561 (particularly ones already decoded and moved into the execution FIFO)
1562 would still be there (and stalled). hmmm.
1563
1564 ----
1565
1566 > > # assume internal parallelism of 8 and MAXVECTORLEN of 8
1567 > > VSETL r0, 8
1568 > > FADD x1, x2, x3
1569 >
1570 > > x3[0]: ok
1571 > > x3[1]: exception
1572 > > x3[2]: ok
1573 > > ...
1574 > > ...
1575 > > x3[7]: ok
1576 >
1577 > > what happens to result elements 2-7?  those may be *big* results
1578 > > (RV128)
1579 > > or in the RVV-Extended may be arbitrary bit-widths far greater.
1580 >
1581 >  (you replied:)
1582 >
1583 > Thrown away.
1584
1585 discussion then led to the question of OoO architectures
1586
1587 > The costs of the imprecise-exception model are greater than the benefit.
1588 > Software doesn't want to cope with it.  It's hard to debug.  You can't
1589 > migrate state between different microarchitectures--unless you force all
1590 > implementations to support the same imprecise-exception model, which would
1591 > greatly limit implementation flexibility.  (Less important, but still
1592 > relevant, is that the imprecise model increases the size of the context
1593 > structure, as the microarchitectural guts have to be spilled to memory.)
1594
1595
1596 ## Implementation Paradigms
1597
1598 TODO: assess various implementation paradigms. These are listed roughly
1599 in order of simplicity (minimum compliance, for ultra-light-weight
1600 embedded systems or to reduce design complexity and the burden of
1601 design implementation and compliance, in non-critical areas), right the
1602 way to high-performance systems.
1603
1604 * Full (or partial) software-emulated (via traps): full support for CSRs
1605 required, however when a register is used that is detected (in hardware)
1606 to be vectorised, an exception is thrown.
1607 * Single-issue In-order, reduced pipeline depth (traditional SIMD / DSP)
1608 * In-order 5+ stage pipelines with instruction FIFOs and mild register-renaming
1609 * Out-of-order with instruction FIFOs and aggressive register-renaming
1610 * VLIW
1611
1612 Also to be taken into consideration:
1613
1614 * "Virtual" vectorisation: single-issue loop, no internal ALU parallelism
* Comprehensive vectorisation: FIFOs and internal parallelism
1616 * Hybrid Parallelism
1617
1618 # TODO Research
1619
1620 > For great floating point DSPs check TI’s C3x, C4X, and C6xx DSPs
1621
Idea: basic simple butterfly swap on a few element indices, primarily targeted
at SIMD / DSP. High-byte low-byte swapping, high-word low-word swapping,
1624 perhaps allow reindexing of permutations up to 4 elements? 8? Reason:
1625 such operations are less costly than a full indexed-shuffle, which requires
1626 a separate instruction cycle.
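
For illustration only, a hedged sketch of the sort of fixed, cheap permutation
being suggested (a butterfly swap of adjacent element indices), as distinct
from a fully general indexed shuffle:

    def butterfly_swap_pairs(vec):
        # swap each adjacent pair of element indices: [a, b, c, d] -> [b, a, d, c]
        out = list(vec)
        for i in range(0, len(out) - 1, 2):
            out[i], out[i + 1] = out[i + 1], out[i]
        return out

    assert butterfly_swap_pairs([1, 2, 3, 4]) == [2, 1, 4, 3]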
1627
Predication "all zeros" needs to be "leave alone". Detection of
ADD r1, rs1, rs0 cases results in a nop on predication index 0, whereas
ADD r0, rs1, rs2 is actually a desirable copy from r2 into r0.
Destruction of destination indices requires a copy of the entire vector
in advance in order to avoid it.
1633
1634 # References
1635
1636 * SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
1637 * Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
1638 * Recommendation by Jacob Bachmeyer to make zero-overhead loop an
1639 "implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
1640 * Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
1641 * First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
1642 * B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>
1643 * Broadcom VideoCore-IV <https://docs.broadcom.com/docs/12358545>
1644 Figure 2 P17 and Section 3 on P16.
1645 * Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-262.html>
1646 * Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-263.html>
1647 * Vector Workshop <http://riscv.org/wp-content/uploads/2015/06/riscv-vector-workshop-june2015.pdf>
1648 * Predication <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/XoP4BfYSLXA>
1649 * Branch Divergence <https://jbush001.github.io/2014/12/07/branch-divergence-in-parallel-kernels.html>
1650 * Life of Triangles (3D) <https://jbush001.github.io/2016/02/27/life-of-triangle.html>
1651 * Videocore-IV <https://github.com/hermanhermitage/videocoreiv/wiki/VideoCore-IV-3d-Graphics-Pipeline>
1652 * Discussion proposing CSRs that change ISA definition
1653 <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/InzQ1wr_3Ak>
1654 * Zero-overhead loops <https://pdfs.semanticscholar.org/dbaa/66985cc730d4b44d79f519e96ec9c43ab5b7.pdf>
1655 * Multi-ported VLIW Register File Implementation <https://ce-publications.et.tudelft.nl/publications/1517_multiple_contexts_in_a_multiported_vliw_register_file_impl.pdf>
1656 * Fast context save/restore proposal <https://groups.google.com/a/groups.riscv.org/d/msgid/isa-dev/57F823FA.6030701%40gmail.com>
1657 * Register File Bank Cacheing <https://www.princeton.edu/~rblee/ELE572Papers/MultiBankRegFile_ISCA2000.pdf>
1658 * Expired Patent on Vector Virtual Memory solutions
1659 <https://patentimages.storage.googleapis.com/fc/f6/e2/2cbee92fcd8743/US5895501.pdf>
1660 * Discussion on RVV "re-entrant" capabilities allowing operations to be
1661 restarted if an exception occurs (VM page-table miss)
1662 <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/IuNFitTw9fM/CCKBUlzsAAAJ>
1663 * Dot Product Vector <https://people.eecs.berkeley.edu/~biancolin/papers/arith17.pdf>