1 # Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal
2
3 [[!toc levels=3]]
4
5 This proposal exists so as to be able to satisfy several disparate
6 requirements: power-conscious, area-conscious, and performance-conscious
7 designs all pull an ISA and its implementation in different conflicting
8 directions, as do the specific intended uses for any given implementation.
9
10 Additionally, the existing P (SIMD) proposal and the V (Vector) proposals,
11 whilst each extremely powerful in their own right and clearly desirable,
12 are also:
13
* Clearly independent in their origins (AndeStar V3 and Cray respectively), so need work to adapt to the RISC-V ethos and paradigm
* Sufficiently large as to make adoption (and exploration for analysis and review purposes) prohibitively expensive
18 * Both contain partial duplication of pre-existing RISC-V instructions
19 (an undesirable characteristic)
20 * Both have independent and disparate methods for introducing parallelism
21 at the instruction level.
22 * Both require that their respective parallelism paradigm be implemented
alongside and integral to their respective functionality *or not at all*.
24 * Both independently have methods for introducing parallelism that
25 could, if separated, benefit
26 *other areas of RISC-V not just DSP or Floating-point respectively*.
27
Therefore it makes a huge amount of sense to have a means and method of introducing instruction parallelism in a flexible way, one that provides implementors with the option to choose exactly where they wish to offer performance improvements and where they wish to optimise for power and/or area (and, if that choice can be offered on a per-operation basis, so much greater the flexibility).
34
35 Additionally it makes sense to *split out* the parallelism inherent within
36 each of P and V, and to see if each of P and V then, in *combination* with
37 a "best-of-both" parallelism extension, would work well.
38
39 **TODO**: reword this to better suit this document:
40
41 Having looked at both P and V as they stand, they're _both_ very much
42 "separate engines" that, despite both their respective merits and
43 extremely powerful features, don't really cleanly fit into the RV design
44 ethos (or the flexible extensibility) and, as such, are both in danger
45 of not being widely adopted. I'm inclined towards recommending:
46
47 * splitting out the DSP aspects of P-SIMD to create a single-issue DSP
48 * splitting out the polymorphism, esoteric data types (GF, complex
49 numbers) and unusual operations of V to create a single-issue "Esoteric
50 Floating-Point" extension
51 * splitting out the loop-aspects, vector aspects and data-width aspects
52 of both P and V to a *new* "P-SIMD / Simple-V" and requiring that they
53 apply across *all* Extensions, whether those be DSP, M, Base, V, P -
54 everything.
55
56 **TODO**: propose overflow registers be actually one of the integer regs
57 (flowing to multiple regs).
58
**TODO**: propose "mask" (predication) registers likewise. Combination with standard RV instructions and overflow registers would be extremely powerful.
61
62 ## CSR vector-length and CSR SIMD packed-bitwidth
63
64 **TODO** analyse each of these:
65
66 * splitting out the loop-aspects, vector aspects and data-width aspects
67 * integer reg 0 *and* fp reg0 share CSR vlen 0 *and* CSR packed-bitwidth 0
68 * integer reg 1 *and* fp reg1 share CSR vlen 1 *and* CSR packed-bitwidth 1
* ...
* ...
71
72 instead:
73
74 * CSR vlen 0 *and* CSR packed-bitwidth 0 register contain extra bits
75 specifying an *INDEX* of WHICH int/fp register they refer to
76 * CSR vlen 1 *and* CSR packed-bitwidth 1 register contain extra bits
77 specifying an *INDEX* of WHICH int/fp register they refer to
78 * ...
79 * ...
80
81 Have to be very *very* careful about not implementing too few of those
82 (or too many). Assess implementation impact on decode latency. Is it
83 worth it?
84
85 Implementation of the latter:
86
87 Operation involving (referring to) register M:
88
> bitwidth = default # default for opcode?
> vectorlen = 1 # scalar
>
> for (o = 0; o < 2; o++)
>     if (CSR-Vector_registernum[o] == M)
>         bitwidth = CSR-Vector_bitwidth[o]
>         vectorlen = CSR-Vector_len[o]
>         break
97
98 and for the former it would simply be:
99
100 > bitwidth = CSR-Vector_bitwidth[M]
101 > vectorlen = CSR-Vector_len[M]
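
To make the trade-off concrete, here is a minimal executable sketch (Python; the CSR and variable names are invented for illustration) of both lookup styles: the indexed scheme scans a short table for a matching register number, whereas the per-register scheme is a direct lookup that carries state for every register.

    # Sketch of the two CSR schemes (all names are invented for illustration).
    DEFAULT_BITWIDTH = 0          # "default" means: use the opcode's own width

    # Indexed scheme: a small number of CSRs, each carrying an *INDEX* of
    # WHICH int/fp register it refers to.
    csr_regnum   = [3, 7]         # register number each CSR entry refers to
    csr_bitwidth = [16, 8]        # packed bitwidth for that register
    csr_vlen     = [4, 2]         # vector length for that register

    def lookup_indexed(m):
        """Scan the (short) CSR table for register m; scalar default otherwise."""
        for o in range(len(csr_regnum)):
            if csr_regnum[o] == m:
                return csr_bitwidth[o], csr_vlen[o]
        return DEFAULT_BITWIDTH, 1

    # Per-register scheme: one CSR pair per register: direct lookup, more state.
    per_reg_bitwidth = [DEFAULT_BITWIDTH] * 32
    per_reg_vlen     = [1] * 32

    def lookup_per_register(m):
        return per_reg_bitwidth[m], per_reg_vlen[m]

    assert lookup_indexed(3) == (16, 4)   # register 3 is configured as a vector
    assert lookup_indexed(5) == (0, 1)    # register 5 falls back to scalar

The indexed scheme trades a short associative scan in decode for a much smaller CSR budget, which is precisely the latency-versus-area question raised above.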
102
103
104 ## Stride
105
106 **TODO**: propose two LOAD/STORE offset CSRs, which mark a particular
107 register as being "if you use this reg in LOAD/STORE, use the offset
108 amount CSRoffsN (N=0,1) instead of treating LOAD/STORE as contiguous".
This can be used for matrix spanning.
110
> For LOAD/STORE, could a better option be to interpret the offset in the
> opcode as a stride instead, so "LOAD t3, 12(t2)" would, if t3 is
> configured as a length-4 vector base, result in t3 = *t2, t4 = *(t2+12),
> t5 = *(t2+24), t6 = *(t2+36)?  Perhaps include a bit in the
> vector-control CSRs to select between offset-as-stride and unit-stride
> memory accesses?
117
118 So there would be an instruction like this:
119
| SETOFF | On=rN | OBank={float|int} | Smode={offs|unit} | OFFn=rM |
| ------ | ----- | ----------------- | ----------------- | ------- |
| opcode | 5 bit | 1 bit | 1 bit | 5 bit, OFFn=XLEN |
122
123
124 which would mean:
125
126 * CSR-Offset register n <= (float|int) register number N
127 * CSR-Offset Stride-mode = offset or unit
128 * CSR-Offset amount register n = contents of register M
129
LOAD rN, ldoffs(rM) would then be (assuming packed bit-width not set):

> offs = 0
> stride = 1
> vector-len = CSR-Vector-length register N
>
> for (o = 0; o < 2; o++)
>     if (CSR-Offset register o == M)
>         offs = CSR-Offset amount register o
>         if CSR-Offset Stride-mode == offset:
>             stride = ldoffs
>         break
>
> for (i = 0; i < vector-len; i++)
>     r[N+i] = mem[offs + r[M] + i*stride]
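
As a cross-check, below is a hedged Python model of the offset-as-stride interpretation from the quoted suggestion above (the function, the dict-based memory, and the ABI register numbers are illustrative only, not a proposed encoding):

    # Illustrative model of a vector LOAD with offset-as-stride (names invented).
    XLEN_BYTES = 4                          # RV32: one 4-byte word per element

    def vector_load(regs, mem, n, m, ldoffs, vlen, stride_mode):
        """Load vlen elements into regs[n..n+vlen-1] starting at address regs[m]."""
        stride = ldoffs if stride_mode == "offset" else XLEN_BYTES
        base = regs[m]
        for i in range(vlen):
            regs[n + i] = mem[base + i * stride]

    # "LOAD t3, 12(t2)" with t3 a length-4 vector base (t2 = x7, t3 = x28):
    mem = {100: 1, 112: 2, 124: 3, 136: 4}  # element addresses t2 + 0,12,24,36
    regs = [0] * 32
    regs[7] = 100
    vector_load(regs, mem, 28, 7, 12, 4, "offset")
    assert regs[28:32] == [1, 2, 3, 4]      # loaded into t3, t4, t5, t6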
145
146 # Analysis and discussion of Vector vs SIMD
147
There are five combined areas between the two proposals that help with parallelism without over-burdening the ISA with a huge proliferation of instructions:
151
152 * Fixed vs variable parallelism (fixed or variable "M" in SIMD)
153 * Implicit vs fixed instruction bit-width (integral to instruction or not)
154 * Implicit vs explicit type-conversion (compounded on bit-width)
155 * Implicit vs explicit inner loops.
156 * Masks / tagging (selecting/preventing certain indexed elements from execution)
157
158 The pros and cons of each are discussed and analysed below.
159
160 ## Fixed vs variable parallelism length
161
In David Patterson and Andrew Waterman's analysis of SIMD and Vector ISAs, the conclusion comes out clearly in favour of (effectively) variable-length SIMD. Because SIMD is of fixed width, typically 4, 8 or in extreme cases 16 or 32 simultaneous operations, the setup, teardown and corner-cases of SIMD are extremely burdensome except for applications whose requirements *specifically* match the *precise and exact* depth of the SIMD engine.
168
169 Thus, SIMD, no matter what width is chosen, is never going to be acceptable
170 for general-purpose computation, and in the context of developing a
171 general-purpose ISA, is never going to satisfy 100 percent of implementors.
172
173 That basically leaves "variable-length vector" as the clear *general-purpose*
174 winner, at least in terms of greatly simplifying the instruction set,
175 reducing the number of instructions required for any given task, and thus
176 reducing power consumption for the same.
177
178 ## Implicit vs fixed instruction bit-width
179
SIMD again has a severe disadvantage here over Vector: a huge proliferation of specialist instructions that target 8-bit, 16-bit, 32-bit and 64-bit data, and which then need operations *for each and between each* width. It gets very messy, very quickly.
184
185 The V-Extension on the other hand proposes to set the bit-width of
186 future instructions on a per-register basis, such that subsequent instructions
187 involving that register are *implicitly* of that particular bit-width until
188 otherwise changed or reset.
189
190 This has some extremely useful properties, without being particularly
191 burdensome to implementations, given that instruction decode already has
192 to direct the operation to a correctly-sized width ALU engine, anyway.
193
Not least: in places where an ISA was previously constrained (due, for whatever reason, to limitations of the available operand space), implicit bit-width allows the meaning of certain operations to be type-overloaded *without* pollution or alteration of frozen and immutable instructions, in a fully backwards-compatible fashion.
199
## Implicit vs explicit type-conversion
201
The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help deal with over-population of instructions, such that type-casting between integers (and floating-point) of various sizes is automatically inferred due to "type tagging" that is set with a special instruction. A register will be *specifically* marked as "16-bit Floating-Point" and, if added to an operand that is specifically tagged as "32-bit Integer", an implicit type-conversion will take place *without* requiring that type-conversion to be explicitly done with its own separate instruction.
210
211 However, implicit type-conversion is not only quite burdensome to
212 implement (explosion of inferred type-to-type conversion) but also is
213 never really going to be complete. It gets even worse when bit-widths
214 also have to be taken into consideration.
215
Overall, type-conversion is generally best left to explicit type-conversion instructions, or, in definite specific use-cases, made part of an actual instruction (DSP or FP).
219
220 ## Zero-overhead loops vs explicit loops
221
The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology contains an extremely interesting feature: zero-overhead loops. This proposal would basically allow an inner loop of instructions to be repeated a fixed number of times without any branching overhead.
226
227 Its specific advantage over explicit loops is that the pipeline in a
228 DSP can potentially be kept completely full *even in an in-order
229 implementation*. Normally, it requires a superscalar architecture and
230 out-of-order execution capabilities to "pre-process" instructions in order
231 to keep ALU pipelines 100% occupied.
232
233 This very simple proposal offers a way to increase pipeline activity in the
234 one key area which really matters: the inner loop.
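
A toy model of the idea (an assumption about the mechanism, not Andes' actual design): the loop count and the block boundaries live in the loop unit itself, so no branch or counter-decrement instructions appear in the repeated instruction stream.

    # Toy model: the loop unit replays a block of already-decoded operations
    # N times; no branches are fetched, so the pipeline never sees a bubble.
    def zero_overhead_loop(block_ops, count, state):
        for _ in range(count):      # count held in the loop unit, not in code
            for op in block_ops:    # the marked inner-loop instructions
                op(state)

    state = {"acc": 0}
    inner_loop = [lambda s: s.update(acc=s["acc"] + 1)]
    zero_overhead_loop(inner_loop, 16, state)
    assert state["acc"] == 16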
235
236 ## Mask and Tagging
237
238 *TODO: research masks as they can be superb and extremely powerful.
239 If B-Extension is implemented and provides Bit-Gather-Scatter it
240 becomes really cool and easy to switch out certain indexed values
241 from an array of data, but actually BGS **on its own** might be
242 sufficient. Bottom line, this is complex, and needs a proper analysis.
243 The other sections are pretty straightforward.*
244
245 ## Conclusions
246
The above sections outlined the five different ways in which parallel instruction execution has closely and loosely inter-related implications for the ISA and for implementors. The pluses and minuses came out as follows:
251
252 * Fixed vs variable parallelism: <b>variable</b>
253 * Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
254 * Implicit vs explicit type-conversion: <b>explicit</b>
255 * Implicit vs explicit inner loops: <b>implicit</b>
256 * Tag or no-tag: <b>TODO</b>
257
258 In particular: variable-length vectors came out on top because of the
259 high setup, teardown and corner-cases associated with the fixed width
260 of SIMD. Implicit bit-width helps to extend the ISA to escape from
261 former limitations and restrictions (in a backwards-compatible fashion),
262 and implicit (zero-overhead) loops provide a means to keep pipelines
263 potentially 100% occupied *without* requiring a super-scalar or out-of-order
264 architecture.
265
Constructing a SIMD/Simple-Vector proposal based around even only these five requirements (the fifth, masks/tagging, still to be decided) would therefore seem to be a logical thing to do.
268
269 # Instruction Format
270
271 **TODO** *basically borrow from both P and V, which should be quite simple
272 to do, with the exception of Tag/no-tag, which needs a bit more
273 thought. V's Section 17.19 of Draft V2.3 spec is reminiscent of B's BGS
274 gather-scatterer, and, if implemented, could actually be a really useful
275 way to span 8-bit up to 64-bit groups of data, where BGS as it stands
276 and described by Clifford does **bits** of up to 16 width. Lots to
277 look at and investigate!*
278
279 # Note on implementation of parallelism
280
281 One extremely important aspect of this proposal is to respect and support
282 implementors desire to focus on power, area or performance. In that regard,
283 it is proposed that implementors be free to choose whether to implement
284 the Vector (or variable-width SIMD) parallelism as sequential operations
285 with a single ALU, fully parallel (if practical) with multiple ALUs, or
286 a hybrid combination of both.
287
Broadcom's Videocore-IV chose the hybrid approach, calling it "Virtual Parallelism". It achieves 16-way SIMD at an **instruction** level by providing a combination of a 4-way parallel ALU *and* an externally transparent loop that feeds 4 sequential sets of data into each of the 4 ALUs.
293
294 Also in the same core, it is worth noting that particularly uncommon
295 but essential operations (Reciprocal-Square-Root for example) are
296 *not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) scheme, implementors would be free to do precisely that: i.e. free to choose *on a per-operation basis* whether and how much "Virtual Parallelism" to deploy.
300
It is absolutely critical to note that it is proposed that such choices MUST be **entirely transparent** to the end-user and the compiler. Whilst a Vector (variable-width SIMD) operation may not precisely match the width of the parallelism within the implementation, the end-user **should not care**, and in this way the performance benefits are gained but the ISA remains simple. All that happens at the end of an instruction run is: some parallel units (if there are any) would remain offline, completely transparently to the ISA, the program, and the compiler.
309
310 The "SIMD considered harmful" trap of having huge complexity and extra
311 instructions to deal with corner-cases is thus avoided, and implementors
312 get to choose precisely where to focus and target the benefits of their
313 implementationefforts..
314
315 # V-Extension to Simple-V Comparative Analysis
316
317 This section covers the ways in which Simple-V is comparable
318 to, or more flexible than, V-Extension (V2.3-draft). Also covered is
319 one major weak-point (register files are fixed size, where V is
320 arbitrary length), and how best to deal with that, should V be adapted
321 to be on top of Simple-V.
322
The first stages of this section go over each of the relevant sections of the V2.3-draft specification, where appropriate.
325
326 ## 17.3 Shape Encoding
327
328 Simple-V's proposed means of expressing whether a register (from the
329 standard integer or the standard floating-point file) is a scalar or
330 a vector is to simply set the vector length to 1. The instruction
331 would however have to specify which register file (integer or FP) that
332 the vector-length was to be applied to.
333
334 Extended shapes (2-D etc) would not be part of Simple-V at all.
335
336 ## 17.4 Representation Encoding
337
338 Simple-V would not have representation-encoding. This is part of
339 polymorphism, which is considered too complex to implement (TODO: confirm?)
340
341 ## 17.5 Element Bitwidth
342
This is directly equivalent to Simple-V's "Packed", and implies that integer (or floating-point) registers are divided down into vector-indexable chunks of size Bitwidth.
346
347 In this way it becomes possible to have ADD effectively and implicitly
348 turn into ADDb (8-bit add), ADDw (16-bit add) and so on, and where
349 vector-length has been set to greater than 1, it becomes a "Packed"
350 (SIMD) instruction.
351
352 It remains to be decided what should be done when RV32 / RV64 ADD (sized)
353 opcodes are used. One useful idea would be, on an RV64 system where
354 a 32-bit-sized ADD was performed, to simply use the least significant
355 32-bits of the register (exactly as is currently done) but at the same
356 time to *respect the packed bitwidth as well*.
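
As a sketch of what "respecting the packed bitwidth as well" could mean (an assumed behaviour, pending the decision above), a 32-bit-sized ADD with packed bitwidth set to 8 would slice the low 32 bits into four independent lanes:

    # Hedged sketch: a "Packed" (SIMD-style) add within a single register.
    def packed_add(a, b, opwidth=32, bitwidth=8):
        """Lane-wise add of the low opwidth bits, in bitwidth-sized lanes."""
        mask = (1 << bitwidth) - 1
        result = 0
        for lane in range(opwidth // bitwidth):
            shift = lane * bitwidth
            lane_sum = ((a >> shift) & mask) + ((b >> shift) & mask)
            result |= (lane_sum & mask) << shift   # carries never cross lanes
        return result

    assert packed_add(0x01020304, 0x01010101) == 0x02030405
    assert packed_add(0xFF, 0x01) == 0x00          # lane wraps, no ripple out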
357
358 The extended encoding (Table 17.6) would not be part of Simple-V.
359
360 ## 17.6 Base Vector Extension Supported Types
361
362 TODO: analyse. probably exactly the same.
363
364 ## 17.7 Maximum Vector Element Width
365
366 No equivalent in Simple-V
367
368 ## 17.8 Vector Configuration Registers
369
370 TODO: analyse.
371
372 ## 17.9 Legal Vector Unit Configurations
373
374 TODO: analyse
375
376 ## 17.10 Vector Unit CSRs
377
378 TODO: analyse
379
380 > Ok so this is an aspect of Simple-V that I hadn't thought through,
381 > yet (proposal / idea only a few days old!).  in V2.3-Draft ISA Section
382 > 17.10 the CSRs are listed.  I note that there's some general-purpose
383 > CSRs (including a global/active vector-length) and 16 vcfgN CSRs.  i
384 > don't precisely know what those are for.
385
386 >  In the Simple-V proposal, *every* register in both the integer
387 > register-file *and* the floating-point register-file would have at
388 > least a 2-bit "data-width" CSR and probably something like an 8-bit
389 > "vector-length" CSR (less in RV32E, by exactly one bit).
390
391 >  What I *don't* know is whether that would be considered perfectly
392 > reasonable or completely insane.  If it turns out that the proposed
393 > Simple-V CSRs can indeed be stored in SRAM then I would imagine that
394 > adding somewhere in the region of 10 bits per register would be... okay? 
395 > I really don't honestly know.
396
397 >  Would these proposed 10-or-so-bit per-register Simple-V CSRs need to
398 > be multi-ported? No I don't believe they would.
399
400 ## 17.11 Maximum Vector Length (MVL)
401
Implicitly, this is set to the number of registers in the register file multiplied by the number of 8-bit packed ints that can fit into a register (4 for RV32, 8 for RV64 and 16 for RV128).
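
Worked through (assuming the standard 32-entry register files):

    # Implicit MVL: number of registers times the 8-bit lanes per register.
    def max_vector_length(xlen, num_regs=32):
        return num_regs * (xlen // 8)

    assert max_vector_length(32)  == 32 * 4    # RV32:  128
    assert max_vector_length(64)  == 32 * 8    # RV64:  256
    assert max_vector_length(128) == 32 * 16   # RV128: 512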
405
## 17.12 Vector Instruction Formats
407
408 No equivalent in Simple-V because *all* instructions of *all* Extensions
409 are implicitly parallelised (and packed).
410
411 ## 17.13 Polymorphic Vector Instructions
412
413 Polymorphism (implicit type-casting) is deliberately not supported
414 in Simple-V.
415
416 ## 17.14 Rapid Configuration Instructions
417
418 TODO: analyse if this is useful to have an equivalent in Simple-V
419
420 ## 17.15 Vector-Type-Change Instructions
421
422 TODO: analyse if this is useful to have an equivalent in Simple-V
423
424 ## 17.16 Vector Length
425
426 Has a direct corresponding equivalent.
427
428 ## 17.17 Predicated Execution
429
Predicated Execution is another name for "masking" or "tagging". Masked (or tagged) implies that there is a bit field which is indexed, with each bit associated with the corresponding indexed offset register within the "Vector". If the tag / mask bit is 1 when a parallel operation is issued, the indexed element of the vector has the operation carried out. However if the tag / mask bit is *zero*, that particular indexed element of the vector does *not* have the requested operation carried out.
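
In code form (a sketch, not normative), the two possible semantics for masked-off elements differ by a single line, and the next paragraph explains why that line matters:

    # Sketch: predicated (masked) element-wise add, showing both possible
    # semantics for elements whose mask bit is zero.
    def masked_add(dest, a, b, mask, zero_masked=False):
        for i in range(len(dest)):
            if (mask >> i) & 1:
                dest[i] = a[i] + b[i]
            elif zero_masked:
                dest[i] = 0     # V2.3-draft behaviour: zero the masked element
            # else: leave dest[i] untouched (the behaviour argued for below)

    d = [9, 9, 9, 9]
    masked_add(d, [1, 2, 3, 4], [10, 20, 30, 40], mask=0b0101)
    assert d == [11, 9, 33, 9]  # masked-off elements keep their old values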
437
In V2.3-draft V, there is a significant difference (and, it is argued here, a mistake): the zero-tagged elements are *set to zero*. This loses a *significant* advantage of mask / tagging, particularly if the entire mask register is itself a general-purpose register, as that general-purpose register can be inverted, shifted, and'ed, or'ed and so on. In other words it becomes possible, especially if Carry/Overflow from each vector operation is also accessible, to do conditional (step-by-step) vector operations, including things like turning vectors into 1024-bit or greater operands with very few instructions, by treating the "carry" from one instruction as a way to do a "conditional add of 1 to the register next door". If V2.3-draft V sets zero-tagged elements to zero, such extremely powerful techniques are simply not possible.
450
451 It is noted that there is no mention of an equivalent to BEXT (element
452 skipping) which would be particularly fascinating and powerful to have.
453 In this mode, the "mask" would skip elements where its mask bit was zero
454 in either the source or the destination operand.
455
456 Lots to be discussed.
457
458 ## 17.18 Vector Load/Store Instructions
459
460 These may not have a direct equivalent in Simple-V, except if mask/tagging
461 is to be deployed.
462
463 To be discussed.
464
465 ## 17.19 Vector Register Gather
466
467 TODO
468
469 ## TODO, sort
470
471 > However, there are also several features that go beyond simply attaching VL
472 > to a scalar operation and are crucial to being able to vectorize a lot of
473 > code. To name a few:
474 > - Conditional execution (i.e., predicated operations)
475 > - Inter-lane data movement (e.g. SLIDE, SELECT)
476 > - Reductions (e.g., VADD with a scalar destination)
477
Ok so the Conditionals and also the Reductions are among the reasons why, as part of SimpleV / variable-SIMD / parallelism (gah, gotta think of a decent name), i proposed that it be implemented as "if you say r0 is to be a vector / SIMD that means operations actually take place on r0,r1,r2... r(N-1)".
483
484 Consequently any parallel operation could be paused (or... more
485 specifically: vectors disabled by resetting it back to a default /
486 scalar / vector-length=1) yet the results would actually be in the
487 *main register file* (integer or float) and so anything that wasn't
488 possible to easily do in "simple" parallel terms could be done *out*
489 of parallel "mode" instead.
490
491 I do appreciate that the above does imply that there is a limit to the
492 length that SimpleV (whatever) can be parallelised, namely that you
493 run out of registers! my thought there was, "leave space for the main
494 V-Ext proposal to extend it to the length that V currently supports".
495 Honestly i had not thought through precisely how that would work.
496
497 Inter-lane (SELECT) i saw 17.19 in V2.3-Draft p117, I liked that,
498 it reminds me of the discussion with Clifford on bit-manipulation
499 (gather-scatter except not Bit Gather Scatter, *data* gather scatter): if
500 applied "globally and outside of V and P" SLIDE and SELECT might become
an extremely powerful way to do fast memory copy and reordering [2].
502
503 However I haven't quite got my head round how that would work: i am
504 used to the concept of register "tags" (the modern term is "masks")
505 and i *think* if "masks" were applied to a Simple-V-enhanced LOAD /
506 STORE you would get the exact same thing as SELECT.
507
508 SLIDE you could do simply by setting say r0 vector-length to say 16
509 (meaning that if referred to in any operation it would be an implicit
510 parallel operation on *all* registers r0 through r15), and temporarily
511 set say.... r7 vector-length to say... 5. Do a LOAD on r7 and it would
512 implicitly mean "load from memory into r7 through r11". Then you go
513 back and do an operation on r0 and ta-daa, you're actually doing an
operation on a SLID (SLIDED?) vector.
515
516 The advantage of Simple-V (whatever) over V would be that you could
517 actually do *operations* in the middle of vectors (not just SLIDEs)
518 simply by (as above) setting r0 vector-length to 16 and r7 vector-length
519 to 5. There would be nothing preventing you from doing an ADD on r0
520 (which meant do an ADD on r0 through r15) followed *immediately in the
521 next instruction with no setup cost* a MUL on r7 (which actually meant
522 "do a parallel MUL on r7 through r11").
523
524 btw it's worth mentioning that you'd get scalar-vector and vector-scalar
implicitly by having one of the source registers be vector-length 1
526 (the default) and one being N > 1. but without having special opcodes
527 to do it. i *believe* (or more like "logically infer or deduce" as
528 i haven't got access to the spec) that that would result in a further
529 opcode reduction when comparing [draft] V-Ext to [proposed] Simple-V.
530
531 Also, Reduction *might* be possible by specifying that the destination be
532 a scalar (vector-length=1) whilst the source be a vector. However... it
533 would be an awful lot of work to go through *every single instruction*
534 in *every* Extension, working out which ones could be parallelised (ADD,
535 MUL, XOR) and those that definitely could not (DIV, SUB). Is that worth
536 the effort? maybe. Would it result in huge complexity? probably.
537 Could an implementor just go "I ain't doing *that* as parallel!
538 let's make it virtual-parallelism (sequential reduction) instead"?
539 absolutely. So, now that I think it through, Simple-V (whatever)
540 covers Reduction as well. huh, that's a surprise.
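
In sketch form (scalar destination, vector source; whether hardware parallelises or sequences the accumulation is left to the implementor):

    # Sketch: reduction falls out of a scalar (vector-length=1) destination
    # and a vector source; hardware may sequence it ("virtual parallelism").
    def vadd_reduce(regs, rd, rs, src_vlen):
        acc = 0
        for i in range(src_vlen):
            acc += regs[rs + i]
        regs[rd] = acc              # destination is a single scalar register

    regs = list(range(32))
    vadd_reduce(regs, 1, 8, 4)      # r1 = r8 + r9 + r10 + r11
    assert regs[1] == 8 + 9 + 10 + 11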
541
542
543 > - Vector-length speculation (making it possible to vectorize some loops with
544 > unknown trip count) - I don't think this part of the proposal is written
545 > down yet.
546
547 Now that _is_ an interesting concept. A little scary, i imagine, with
548 the possibility of putting a processor into a hard infinite execution
549 loop... :)
550
551
552 > Also, note the vector ISA consumes relatively little opcode space (all the
553 > arithmetic fits in 7/8ths of a major opcode). This is mainly because data
554 > type and size is a function of runtime configuration, rather than of opcode.
555
556 yes. i love that aspect of V, i am a huge fan of polymorphism [1]
557 which is why i am keen to advocate that the same runtime principle be
extended to the rest of the RISC-V ISA [3].
559
560 Yikes that's a lot. I'm going to need to pull this into the wiki to
561 make sure it's not lost.
562
563 [1] inherent data type conversion: 25 years ago i designed a hypothetical
564 hyper-hyper-hyper-escape-code-sequencing ISA based around 2-bit
565 (escape-extended) opcodes and 2-bit (escape-extended) operands that
566 only required a fixed 8-bit instruction length. that relied heavily
567 on polymorphism and runtime size configurations as well. At the time
568 I thought it would have meant one HELL of a lot of CSRs... but then I
569 met RISC-V and was cured instantly of that delusion^Wmisapprehension :)
570
571 [2] Interestingly if you then also add in the other aspect of Simple-V
572 (the data-size, which is effectively functionally orthogonal / identical
573 to "Packed" of Packed-SIMD), masked and packed *and* vectored LOAD / STORE
574 operations become byte / half-word / word augmenters of B-Ext's proposed
575 "BGS" i.e. where B-Ext's BGS dealt with bits, masked-packed-vectored
576 LOAD / STORE would deal with 8 / 16 / 32 bits at a time. Where it
577 would get really REALLY interesting would be masked-packed-vectored
578 B-Ext BGS instructions. I can't even get my head fully round that,
579 which is a good sign that the combination would be *really* powerful :)
580
581 [3] ok sadly maybe not the polymorphism, it's too complicated and I
582 think would be much too hard for implementors to easily "slide in" to an
583 existing non-Simple-V implementation.  i say that despite really *really*
wanting IEEE 754 FP Half-precision to end up somewhere in RISC-V in some
585 fashion, for optimising 3D Graphics.  *sigh*.
586
587 ## TODO: instructions (based on Hwacha) V-Ext duplication analysis
588
This is partly speculative due to lack of access to an up-to-date V-Ext Spec (V2.3-draft RVV 0.4-Draft at the time of writing). However, basing the analysis instead on Hwacha, a cursory examination shows over an **85%** duplication of V-Ext operand-related instructions when compared to Simple-V on a standard RV64G base. Even Vector Fetch is analogous to "zero-overhead loop".
595
596 Exceptions are:
597
598 * Vector Indexed Memory Instructions (non-contiguous)
599 * Vector Atomic Memory Instructions.
600 * Some of the Vector Arithmetic ops: MADD, MSUB,
601 VSRL, VSRA, VEIDX, VFIRST, VSGNJN, VFSGNJX and potentially more.
602 * Consensual Jump
603
604 Table of RV32V Instructions
605
606 | RV32V | |
607 | ----- | --- |
608 | VADD | |
609 | VSUB | |
610 | VSL | |
611 | VSR | |
612 | VAND | |
613 | VOR | |
614 | VXOR | |
615 | VSEQ | |
616 | VSNE | |
617 | VSLT | |
618 | VSGE | |
619 | VCLIP | |
620 | VCVT | |
621 | VMPOP | |
622 | VMFIRST | |
623 | VEXTRACT | |
624 | VINSERT | |
625 | VMERGE | |
626 | VSELECT | |
627 | VSLIDE | |
628 | VDIV | |
629 | VREM | |
630 | VMUL | |
631 | VMULH | |
632 | VMIN | |
633 | VMAX | |
634 | VSGNJ | |
635 | VSGNJN | |
636 | VSGNJX | |
637 | VSQRT | |
638 | VCLASS | |
639 | VPOPC | |
640 | VADDI | |
641 | VSLI | |
642 | VSRI | |
643 | VANDI | |
644 | VORI | |
645 | VXORI | |
646 | VCLIPI | |
647 | VMADD | |
648 | VMSUB | |
649 | VNMADD | |
650 | VNMSUB | |
651 | VLD | |
652 | VLDS | |
653 | VLDX | |
654 | VST | |
655 | VSTS | |
656 | VSTX | |
657 | VAMOSWAP | |
658 | VAMOADD | |
659 | VAMOAND | |
660 | VAMOOR | |
661 | VAMOXOR | |
662 | VAMOMIN | |
663 | VAMOMAX | |
664
665 ## TODO: sort
666
667 > I suspect that the "hardware loop" in question is actually a zero-overhead
668 > loop unit that diverts execution from address X to address Y if a certain
669 > condition is met.
670
Not quite. The zero-overhead loop unit would, interestingly, sit at an [independent] level above vector-length. The distinctions are as follows:
674
675 * Vector-length issues *virtual* instructions where the register
676 operands are *specifically* altered (to cover a range of registers),
677 whereas zero-overhead loops *specifically* do *NOT* alter the operands
678 in *ANY* way.
679
680 * Vector-length-driven "virtual" instructions are driven by *one*
681 and *only* one instruction (whether it be a LOAD, STORE, or pure
682 one/two/three-operand opcode) whereas zero-overhead loop units
683 specifically apply to *multiple* instructions.
684
685 Where vector-length-driven "virtual" instructions might get conceptually
686 blurred with zero-overhead loops is LOAD / STORE.  In the case of LOAD /
687 STORE, to actually be useful, vector-length-driven LOAD / STORE should
688 increment the LOAD / STORE memory address to correspondingly match the
increment in the register bank. Example:
690
691 * set vector-length for r0 to 4
692 * issue RV32 LOAD from addr 0x1230 to r0
693
694 translates effectively to:
695
696 * RV32 LOAD from addr 0x1230 to r0
697 * ...
698 * ...
* RV32 LOAD from addr 0x123C to r3
700
701 # P-Ext ISA
702
703 ## 16-bit Arithmetic
704
705 | Mnemonic | 16-bit Instruction | Simple-V Equivalent |
706 | ------------------ | ------------------------- | ------------------- |
707 | ADD16 rt, ra, rb | add | RV ADD (bitwidth=16) |
708 | RADD16 rt, ra, rb | Signed Halving add | |
709 | URADD16 rt, ra, rb | Unsigned Halving add | |
710 | KADD16 rt, ra, rb | Signed Saturating add | |
711 | UKADD16 rt, ra, rb | Unsigned Saturating add | |
712 | SUB16 rt, ra, rb | sub | RV SUB (bitwidth=16) |
713 | RSUB16 rt, ra, rb | Signed Halving sub | |
714 | URSUB16 rt, ra, rb | Unsigned Halving sub | |
715 | KSUB16 rt, ra, rb | Signed Saturating sub | |
716 | UKSUB16 rt, ra, rb | Unsigned Saturating sub | |
717 | CRAS16 rt, ra, rb | Cross Add & Sub | |
718 | RCRAS16 rt, ra, rb | Signed Halving Cross Add & Sub | |
719 | URCRAS16 rt, ra, rb| Unsigned Halving Cross Add & Sub | |
720 | KCRAS16 rt, ra, rb | Signed Saturating Cross Add & Sub | |
721 | UKCRAS16 rt, ra, rb| Unsigned Saturating Cross Add & Sub | |
722 | CRSA16 rt, ra, rb | Cross Sub & Add | |
723 | RCRSA16 rt, ra, rb | Signed Halving Cross Sub & Add | |
724 | URCRSA16 rt, ra, rb| Unsigned Halving Cross Sub & Add | |
725 | KCRSA16 rt, ra, rb | Signed Saturating Cross Sub & Add | |
726 | UKCRSA16 rt, ra, rb| Unsigned Saturating Cross Sub & Add | |
727
728 ## 8-bit Arithmetic
729
| Mnemonic | 8-bit Instruction | Simple-V Equivalent |
731 | ------------------ | ------------------------- | ------------------- |
732 | ADD8 rt, ra, rb | add | RV ADD (bitwidth=8)|
733 | RADD8 rt, ra, rb | Signed Halving add | |
734 | URADD8 rt, ra, rb | Unsigned Halving add | |
735 | KADD8 rt, ra, rb | Signed Saturating add | |
736 | UKADD8 rt, ra, rb | Unsigned Saturating add | |
737 | SUB8 rt, ra, rb | sub | RV SUB (bitwidth=8)|
738 | RSUB8 rt, ra, rb | Signed Halving sub | |
739 | URSUB8 rt, ra, rb | Unsigned Halving sub | |
740
741 # Exceptions
742
743 > What does an ADD of two different-sized vectors do in simple-V?
744
* if the two source operands are not the same length, throw an exception.
* if the destination operand is also a vector, and the source is longer
than the destination, throw an exception (a sketch of both checks follows below).
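
A sketch of those two checks (the trap mechanism itself is hypothetical):

    # Sketch: vector-length legality checks before issuing an ADD.
    class VectorLengthTrap(Exception):
        pass

    def check_vector_add(vlen_dest, vlen_src1, vlen_src2):
        if vlen_src1 != vlen_src2:
            raise VectorLengthTrap("source vector lengths differ")
        if vlen_dest > 1 and max(vlen_src1, vlen_src2) > vlen_dest:
            raise VectorLengthTrap("source vector longer than destination")

    check_vector_add(4, 4, 4)       # legal: all lengths match
    try:
        check_vector_add(2, 4, 4)   # source longer than vector destination
    except VectorLengthTrap:
        pass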
748
749 > And what about instructions like JALR? 
750 > What does jumping to a vector do?
751
752 * Throw an exception. Whether that actually results in spawning threads
753 as part of the trap-handling remains to be seen.
754
# Implementing V on top of Simple-V
756
757 * Number of Offset CSRs extends from 2
758 * Extra register file: vector-file
759 * Setup of Vector length and bitwidth CSRs now can specify vector-file
760 as well as integer or float file.
761 * TODO
762
763 # Implementing P (renamed to DSP) on top of Simple-V
764
765 * Implementors indicate chosen bitwidth support in Vector-bitwidth CSR
766 (caveat: anything not specified drops through to software-emulation / traps)
767 * TODO
768
769 # Analysis of CSR decoding on latency
770
It could indeed have been logically deduced (or expected) that there would be additional decode latency in this proposal, because if the opcodes are overloaded to have different meanings, there is guaranteed to be some state, somewhere, directly related to the registers.
775
776 There are several cases:
777
778 * All operands vector-length=1 (scalars), all operands
779 packed-bitwidth="default": instructions are passed through direct as if
780 Simple-V did not exist.  Simple-V is, in effect, completely disabled.
781 * At least one operand vector-length > 1, all operands
782 packed-bitwidth="default": any parallel vector ALUs placed on "alert",
783 virtual parallelism looping may be activated.
784 * All operands vector-length=1 (scalars), at least one
785 operand packed-bitwidth != default: degenerate case of SIMD,
786 implementation-specific complexity here (packed decode before ALUs or
787 *IN* ALUs)
* At least one operand vector-length > 1, at least one operand
packed-bitwidth != default: parallel vector ALUs (if any)
placed on "alert", virtual parallelism looping may be activated,
and implementation-specific SIMD complexity kicks in (packed decode before
ALUs or *IN* ALUs). A decision sketch follows below.
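
Expressed as a decision sketch (the CSR state names are invented), the four cases reduce to two independent predicates:

    # Sketch: classifying an operation by the CSR state of its operands.
    def classify(vlens, bitwidths, default_bw="default"):
        any_vector = any(v > 1 for v in vlens)
        any_packed = any(b != default_bw for b in bitwidths)
        if not any_vector and not any_packed:
            return "pass-through: Simple-V effectively disabled"
        if any_vector and not any_packed:
            return "vector: ALUs alerted, virtual-parallelism looping"
        if not any_vector and any_packed:
            return "degenerate SIMD: packed decode before or in the ALUs"
        return "vector and packed: both mechanisms active"

    assert classify([1, 1], ["default", "default"]).startswith("pass-through")
    assert classify([4, 1], ["default", "8"]).startswith("vector and packed")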
793
794 Bear in mind that the proposal includes that the decision whether
795 to parallelise in hardware or whether to virtual-parallelise (to
796 dramatically simplify compilers and also not to run into the SIMD
instruction proliferation nightmare) *or* a transparent combination
798 of both, be done on a *per-operand basis*, so that implementors can
799 specifically choose to create an application-optimised implementation
800 that they believe (or know) will sell extremely well, without having
801 "Extra Standards-Mandated Baggage" that would otherwise blow their area
802 or power budget completely out the window.
803
804 Additionally, two possible CSR schemes have been proposed, in order to
805 greatly reduce CSR space:
806
807 * per-register CSRs (vector-length and packed-bitwidth)
808 * a smaller number of CSRs with the same information but with an *INDEX*
809 specifying WHICH register in one of three regfiles (vector, fp, int)
810 the length and bitwidth applies to.
811
812 (See "CSR vector-length and CSR SIMD packed-bitwidth" section for details)
813
814 Also bear in mind that, for reasons of simplicity for implementors,
815 I was coming round to the idea of permitting implementors to choose
816 exactly which bitwidths they would like to support in hardware and which
817 to allow to fall through to software-trap emulation.
818
819 So the question boils down to:
820
821 * whether either (or both) of those two CSR schemes have significant
822 latency that could even potentially require an extra pipeline decode stage
823 * whether there are implementations that can be thought of which do *not*
824 introduce significant latency
825 * whether it is possible to explicitly (through quite simply
826 disabling Simple-V-Ext) or implicitly (detect the case all-vlens=1,
827 all-simd-bitwidths=default) switch OFF any decoding, perhaps even to
828 the extreme of skipping an entire pipeline stage (if one is needed)
829 * whether packed bitwidth and associated regfile splitting is so complex
830 that it should definitely, definitely be made mandatory that implementors
831 move regfile splitting into the ALU, and what are the implications of that
832 * whether even if that *is* made mandatory, is software-trapped
833 "unsupported bitwidths" still desirable, on the basis that SIMD is such
834 a complete nightmare that *even* having a software implementation is
835 better, making Simple-V have more in common with a software API than
836 anything else.
837
838
839
840 # References
841
842 * SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
843 * Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
844 * Recommendation by Jacob Bachmeyer to make zero-overhead loop an
845 "implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
846 * Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
847 * First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
848 * B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>
849 * Broadcom VideoCore-IV <https://docs.broadcom.com/docs/12358545>
850 Figure 2 P17 and Section 3 on P16.
851 * Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-262.html>
852 * Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-263.html>
853 * Vector Workshop <http://riscv.org/wp-content/uploads/2015/06/riscv-vector-workshop-june2015.pdf>
854