# Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal

[[!toc ]]

This proposal exists so as to be able to satisfy several disparate
requirements: power-conscious, area-conscious, and performance-conscious
designs all pull an ISA and its implementation in different conflicting
directions, as do the specific intended uses for any given implementation.

Additionally, the existing P (SIMD) and V (Vector) proposals,
whilst each extremely powerful in their own right and clearly desirable,
are also:

* Clearly independent in their origins (AndeStar v3 and Cray respectively),
and so need work to adapt to the RISC-V ethos and paradigm
* Sufficiently large so as to make adoption (and exploration for
analysis and review purposes) prohibitively expensive
* Both contain partial duplication of pre-existing RISC-V instructions
(an undesirable characteristic)
* Both have independent and disparate methods for introducing parallelism
at the instruction level.
* Both require that their respective parallelism paradigm be implemented
along-side and integral to their respective functionality *or not at all*.
* Both independently have methods for introducing parallelism that
could, if separated, benefit
*other areas of RISC-V, not just DSP or Floating-point respectively*.
Therefore it makes a huge amount of sense to have a means and method
of introducing instruction parallelism in a flexible way that provides
implementors with the option to choose exactly where they wish to offer
performance improvements and where they wish to optimise for power
and/or area (and if that can be offered even on a per-operation basis that
would provide even more flexibility).

Additionally it makes sense to *split out* the parallelism inherent within
each of P and V, and to see if each of P and V then, in *combination* with
a "best-of-both" parallelism extension, would work well.

Furthermore, an additional goal of this proposal is to reduce the number
of opcodes utilised by each of P and V as they currently stand, leveraging
existing RISC-V opcodes where possible, and also potentially allowing
P and V to make use of Compressed Instructions as a result.
**TODO**: reword this to better suit this document:

Having looked at both P and V as they stand, they're _both_ very much
"separate engines" that, despite both their respective merits and
extremely powerful features, don't really cleanly fit into the RV design
ethos (or the flexible extensibility) and, as such, are both in danger
of not being widely adopted. I'm inclined towards recommending:

* splitting out the DSP aspects of P-SIMD to create a single-issue DSP
* splitting out the polymorphism, esoteric data types (GF, complex
numbers) and unusual operations of V to create a single-issue "Esoteric
Floating-Point" extension
* splitting out the loop-aspects, vector aspects and data-width aspects
of both P and V to a *new* "P-SIMD / Simple-V" and requiring that they
apply across *all* Extensions, whether those be DSP, M, Base, V, P -
everything.

**TODO**: propose that overflow registers actually be one of the integer regs
(flowing to multiple regs).

**TODO**: propose "mask" (predication) registers likewise. The combination with
standard RV instructions and overflow registers would be extremely powerful.
## CSR vector-length and CSR SIMD packed-bitwidth

**TODO** analyse each of these:

* splitting out the loop-aspects, vector aspects and data-width aspects
* integer reg 0 *and* fp reg0 share CSR vlen 0 *and* CSR packed-bitwidth 0
* integer reg 1 *and* fp reg1 share CSR vlen 1 *and* CSR packed-bitwidth 1
* ....
* ....

instead:

* CSR vlen 0 *and* CSR packed-bitwidth 0 register contain extra bits
specifying an *INDEX* of WHICH int/fp register they refer to
* CSR vlen 1 *and* CSR packed-bitwidth 1 register contain extra bits
specifying an *INDEX* of WHICH int/fp register they refer to
* ...
* ...

Have to be very *very* careful about not implementing too few of those
(or too many). Assess implementation impact on decode latency. Is it
worth it?

Implementation of the latter:

Operation involving (referring to) register M:

> bitwidth = default # default for opcode?
> vectorlen = 1 # scalar
>
> for (o = 0; o < 2; o++)
>     if (CSR-Vector_registernum[o] == M)
>         bitwidth = CSR-Vector_bitwidth[o]
>         vectorlen = CSR-Vector_len[o]
>         break

and for the former it would simply be:

> bitwidth = CSR-Vector_bitwidth[M]
> vectorlen = CSR-Vector_len[M]

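To make the difference concrete, here is a minimal C sketch of both lookup
schemes; it is illustrative only, and every name in it (`csr_vec_regnum`,
`NUM_VEC_CSRS` and so on) is hypothetical:

    #include <stdint.h>

    #define NUM_VEC_CSRS     2  /* assumed number of indexed CSR pairs */
    #define DEFAULT_BITWIDTH 0  /* 0 = "default for opcode" */

    /* indexed scheme: each CSR pair names WHICH register it applies to */
    static uint8_t csr_vec_regnum[NUM_VEC_CSRS];
    static uint8_t csr_vec_bitwidth[NUM_VEC_CSRS];
    static uint8_t csr_vec_len[NUM_VEC_CSRS];

    /* per-register scheme: one entry per architectural register */
    static uint8_t csr_reg_bitwidth[32];
    static uint8_t csr_reg_len[32];

    /* indexed scheme: an associative search over the CSR pairs */
    static void lookup_indexed(int m, int *bitwidth, int *vectorlen)
    {
        *bitwidth = DEFAULT_BITWIDTH;
        *vectorlen = 1;  /* scalar unless a CSR entry matches */
        for (int o = 0; o < NUM_VEC_CSRS; o++) {
            if (csr_vec_regnum[o] == m) {
                *bitwidth = csr_vec_bitwidth[o];
                *vectorlen = csr_vec_len[o];
                break;
            }
        }
    }

    /* per-register scheme: a plain table lookup */
    static void lookup_per_reg(int m, int *bitwidth, int *vectorlen)
    {
        *bitwidth = csr_reg_bitwidth[m];
        *vectorlen = csr_reg_len[m];
    }

The trade-off is then directly visible: the indexed scheme needs far fewer
CSRs but inserts an associative search into decode, whereas the per-register
scheme is a simple table lookup at the cost of extra state on every register.
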
## Stride

**TODO**: propose two LOAD/STORE offset CSRs, which mark a particular
register as being "if you use this reg in LOAD/STORE, use the offset
amount CSRoffsN (N=0,1) instead of treating LOAD/STORE as contiguous".
Can be used for matrix spanning.

> For LOAD/STORE, could a better option be to interpret the offset in the
> opcode as a stride instead, so "LOAD t3, 12(t2)" would, if t3 is
> configured as a length-4 vector base, result in t3 = *t2, t4 = *(t2+12),
> t5 = *(t2+24), t6 = *(t2+36)?  Perhaps include a bit in the
> vector-control CSRs to select between offset-as-stride and unit-stride
> memory accesses?

So there would be an instruction like this:

| SETOFF | On=rN | OBank={float|int} | Smode={offs|unit} | OFFn=rM |
| ------ | ----- | ----------------- | ----------------- | ---------------- |
| opcode | 5 bit | 1 bit | 1 bit | 5 bit, OFFn=XLEN |

which would mean:

* CSR-Offset register n <= (float|int) register number N
* CSR-Offset Stride-mode = offset or unit
* CSR-Offset amount register n = contents of register M

LOAD rN, ldoffs(rM) would then be (assuming packed bit-width not set):

> offs = 0
> stride = 1
> vector-len = CSR-Vector-length register N
>
> for (o = 0; o < 2; o++)
>     if (CSR-Offset register o == M)
>         offs = CSR-Offset amount register o
>         if CSR-Offset Stride-mode == offset:
>             stride = ldoffs
>         break
>
> for (i = 0; i < vector-len; i++)
>     r[N+i] = mem[(offs*i + r[M+i])*stride]

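As a behavioural illustration, here is a minimal C model of a strided vector
LOAD, under the assumption (not settled by the draft pseudocode above) that
the effective address simply advances by `stride` bytes per element; all
names are hypothetical:

    #include <stdint.h>
    #include <string.h>

    static uint64_t regs[32];      /* hypothetical register file */
    static uint8_t  mem[1 << 20];  /* hypothetical flat memory   */

    /* load vlen 64-bit elements into regs[rd .. rd+vlen-1], starting
     * at address regs[rs1] and stepping by 'stride' bytes per element
     * (stride == 8 would be the unit-stride, contiguous case) */
    static void vload_strided(int rd, int rs1, int vlen, int64_t stride)
    {
        uint64_t addr = regs[rs1];
        for (int i = 0; i < vlen; i++) {
            memcpy(&regs[rd + i], &mem[addr], sizeof(uint64_t));
            addr += stride;
        }
    }

With rd = t3, vlen = 4 and stride = 12, this reproduces the quoted example:
t3 = \*t2, t4 = \*(t2+12), t5 = \*(t2+24), t6 = \*(t2+36).
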
# Analysis and discussion of Vector vs SIMD

There are five combined areas between the two proposals that help with
parallelism without over-burdening the ISA with a huge proliferation of
instructions:

* Fixed vs variable parallelism (fixed or variable "M" in SIMD)
* Implicit vs fixed instruction bit-width (integral to instruction or not)
* Implicit vs explicit type-conversion (compounded on bit-width)
* Implicit vs explicit inner loops.
* Masks / tagging (selecting/preventing certain indexed elements from execution)

The pros and cons of each are discussed and analysed below.

## Fixed vs variable parallelism length

In David Patterson and Andrew Waterman's analysis of SIMD and Vector
ISAs, the analysis comes out clearly in favour of (effectively) variable
length SIMD. As SIMD is a fixed width, typically 4, 8 or in extreme cases
16 or 32 simultaneous operations, the setup, teardown and corner-cases of SIMD
are extremely burdensome except for applications whose requirements
*specifically* match the *precise and exact* depth of the SIMD engine.

Thus, SIMD, no matter what width is chosen, is never going to be acceptable
for general-purpose computation, and in the context of developing a
general-purpose ISA, is never going to satisfy 100 percent of implementors.

That basically leaves "variable-length vector" as the clear *general-purpose*
winner, at least in terms of greatly simplifying the instruction set,
reducing the number of instructions required for any given task, and thus
reducing power consumption for the same.

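The corner-case burden is easiest to see in code. Below is a hedged C sketch
(the element type, the 4-wide width and the `setvl`/`MVL` names are all
invented for illustration) of the same element-wise add, first for a fixed
4-wide SIMD engine and then strip-mined for a variable-length vector unit:

    /* fixed 4-wide SIMD: any length not a multiple of 4 needs a
     * separate scalar clean-up loop (the setup/teardown burden) */
    void add_simd4(int *dst, const int *a, const int *b, int n)
    {
        int i = 0;
        for (; i + 4 <= n; i += 4)        /* one 4-wide SIMD op */
            for (int lane = 0; lane < 4; lane++)
                dst[i + lane] = a[i + lane] + b[i + lane];
        for (; i < n; i++)                /* scalar remainder loop */
            dst[i] = a[i] + b[i];
    }

    enum { MVL = 64 };  /* hypothetical maximum vector length */
    static int setvl(int n) { return n < MVL ? n : MVL; }

    /* variable-length vector: hardware picks vl <= n on each pass,
     * so one loop handles every n with no corner cases at all */
    void add_vector(int *dst, const int *a, const int *b, int n)
    {
        while (n > 0) {
            int vl = setvl(n);            /* "set vector length" op */
            for (int i = 0; i < vl; i++)  /* one vectorised add */
                dst[i] = a[i] + b[i];
            dst += vl; a += vl; b += vl; n -= vl;
        }
    }
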
## Implicit vs fixed instruction bit-width

SIMD again has a severe disadvantage here, over Vector: huge proliferation
of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and
have to then have operations *for each and between each*. It gets very
messy, very quickly.

The V-Extension on the other hand proposes to set the bit-width of
future instructions on a per-register basis, such that subsequent instructions
involving that register are *implicitly* of that particular bit-width until
otherwise changed or reset.

This has some extremely useful properties, without being particularly
burdensome to implementations, given that instruction decode already has
to direct the operation to a correctly-sized width ALU engine, anyway.

Not least: in places where an ISA was previously constrained (for
whatever reason, including limitations of the available operand space),
implicit bit-width allows the meaning of certain operations to be
type-overloaded *without* pollution or alteration of frozen and immutable
instructions, in a fully backwards-compatible fashion.

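A sketch of the idea in C (hypothetical state and names throughout): a single
ADD opcode whose element width is taken from a per-register CSR rather than
from the instruction encoding, so the same frozen opcode gains new meanings
without new encodings:

    #include <stdint.h>

    static uint64_t regs[32];
    static int      reg_bitwidth[32];  /* hypothetical per-register CSR:
                                          0 = default (XLEN), else 8/16/32 */

    /* one ADD opcode; the *register's* CSR decides the element width.
     * With bitwidth 8 on a 64-bit register this degenerates into a
     * packed (SIMD-like) add of eight independent bytes. */
    void add(int rd, int rs1, int rs2)
    {
        int bw = reg_bitwidth[rd] ? reg_bitwidth[rd] : 64;
        uint64_t mask = (bw == 64) ? ~0ULL : (1ULL << bw) - 1;
        uint64_t result = 0;
        for (int lane = 0; lane < 64 / bw; lane++) {
            uint64_t x = (regs[rs1] >> (lane * bw)) & mask;
            uint64_t y = (regs[rs2] >> (lane * bw)) & mask;
            result |= ((x + y) & mask) << (lane * bw);
        }
        regs[rd] = result;
    }
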
## Implicit and explicit type-conversion

The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
deal with over-population of instructions, such that type-casting from
integer (and floating point) of various sizes is automatically inferred
due to "type tagging" that is set with a special instruction. A register
will be *specifically* marked as "16-bit Floating-Point" and, if added
to an operand that is specifically tagged as "32-bit Integer", an implicit
type-conversion will take place *without* requiring that type-conversion
to be explicitly done with its own separate instruction.

However, implicit type-conversion is not only quite burdensome to
implement (explosion of inferred type-to-type conversion) but also is
never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration.

Overall, type-conversion is generally best left to explicit
type-conversion instructions, or in definite specific use-cases left to
be part of an actual instruction (DSP or FP).

## Zero-overhead loops vs explicit loops

The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
contains an extremely interesting feature: zero-overhead loops. This
proposal would basically allow an inner loop of instructions to be
repeated a fixed number of times without incurring any branch or
loop-counter overhead.

Its specific advantage over explicit loops is that the pipeline in a
DSP can potentially be kept completely full *even in an in-order
implementation*. Normally, it requires a superscalar architecture and
out-of-order execution capabilities to "pre-process" instructions in order
to keep ALU pipelines 100% occupied.

This very simple proposal offers a way to increase pipeline activity in the
one key area which really matters: the inner loop.

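As a rough behavioural model only (the encoding and all names here are
invented), a zero-overhead loop unit can be thought of as a front-end counter
that re-issues a window of instructions, with no branch instruction ever
entering the pipeline:

    /* toy fetch-stage model of a zero-overhead loop: repeat the
     * body_len instructions starting at loop_start, count times,
     * with no per-iteration branch to fetch, predict or execute */
    typedef struct { int count, body_len; } zol_t;

    int next_pc(int pc, zol_t *z, int loop_start)
    {
        if (z->count > 0 && pc == loop_start + z->body_len - 1) {
            z->count--;              /* one full pass of the body done */
            if (z->count > 0)
                return loop_start;   /* wrap around: no bubble */
        }
        return pc + 1;               /* fall through / exit the loop */
    }
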
## Mask and Tagging

*TODO: research masks as they can be superb and extremely powerful.
If B-Extension is implemented and provides Bit-Gather-Scatter it
becomes really cool and easy to switch out certain indexed values
from an array of data, but actually BGS **on its own** might be
sufficient. Bottom line, this is complex, and needs a proper analysis.
The other sections are pretty straightforward.*

## Conclusions

The above sections outlined five areas where parallel instruction
execution has closely and loosely inter-related implications for the ISA
and for implementors. The pluses and minuses came out as
follows:

* Fixed vs variable parallelism: <b>variable</b>
* Implicit (indirect) vs fixed (integral) instruction bit-width: <b>indirect</b>
* Implicit vs explicit type-conversion: <b>explicit</b>
* Implicit vs explicit inner loops: <b>implicit</b>
* Tag or no-tag: <b>TODO</b>

In particular: variable-length vectors came out on top because of the
high setup, teardown and corner-case costs associated with the fixed width
of SIMD. Implicit bit-width helps to extend the ISA to escape from
former limitations and restrictions (in a backwards-compatible fashion),
and implicit (zero-overhead) loops provide a means to keep pipelines
potentially 100% occupied *without* requiring a super-scalar or out-of-order
architecture.

Constructing a SIMD/Simple-Vector proposal based around even only these four
(five, if tagging is included) requirements would therefore seem to be a
logical thing to do.

# Instruction Format

**TODO** *basically borrow from both P and V, which should be quite simple
to do, with the exception of Tag/no-tag, which needs a bit more
thought. V's Section 17.19 of Draft V2.3 spec is reminiscent of B's BGS
gather-scatterer, and, if implemented, could actually be a really useful
way to span 8-bit up to 64-bit groups of data, where BGS as it stands
and described by Clifford does **bits** of up to 16 width. Lots to
look at and investigate!*

# Note on implementation of parallelism

One extremely important aspect of this proposal is to respect and support
implementors' desire to focus on power, area or performance. In that regard,
it is proposed that implementors be free to choose whether to implement
the Vector (or variable-width SIMD) parallelism as sequential operations
with a single ALU, fully parallel (if practical) with multiple ALUs, or
a hybrid combination of both.

In Broadcom's Videocore-IV, they chose hybrid, and called it "Virtual
Parallelism". They achieve 16-way SIMD at an **instruction** level
by providing a combination of a 4-way parallel ALU *and* an externally
transparent loop that feeds 4 sequential sets of data into each of the
4 ALUs.

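Schematically (this is an illustration of the concept, not the actual
Videocore-IV micro-architecture), a 16-way instruction on a 4-wide ALU bank
looks like this:

    /* "Virtual Parallelism": one 16-way SIMD instruction executed as
     * four sequential passes over a 4-wide physical ALU bank */
    #define PHYS_ALUS  4
    #define VIRT_WIDTH 16

    void virtual_parallel_add(int *dst, const int *a, const int *b)
    {
        for (int pass = 0; pass < VIRT_WIDTH / PHYS_ALUS; pass++)
            for (int alu = 0; alu < PHYS_ALUS; alu++) {
                /* in hardware these four lanes run concurrently */
                int i = pass * PHYS_ALUS + alu;
                dst[i] = a[i] + b[i];
            }
    }
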
Also in the same core, it is worth noting that particularly uncommon
but essential operations (Reciprocal-Square-Root for example) are
*not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) scheme, implementors would
be free to do precisely that: i.e. free to choose *on a per operation
basis* whether and how much "Virtual Parallelism" to deploy.

It is absolutely critical to note that it is proposed that such choices MUST
be **entirely transparent** to the end-user and the compiler. Whilst
a Vector (variable-width SIMD) may not precisely match the width of the
parallelism within the implementation, the end-user **should not care**,
and in this way the performance benefits are gained but the ISA remains
simple. All that happens at the end of an instruction run is: some
parallel units (if there are any) would remain offline, completely
transparently to the ISA, the program, and the compiler.

The "SIMD considered harmful" trap of having huge complexity and extra
instructions to deal with corner-cases is thus avoided, and implementors
get to choose precisely where to focus and target the benefits of their
implementation efforts.

# V-Extension to Simple-V Comparative Analysis

This section covers the ways in which Simple-V is comparable
to, or more flexible than, V-Extension (V2.3-draft). Also covered is
one major weak-point (register files are fixed size, where V is
arbitrary length), and how best to deal with that, should V be adapted
to be on top of Simple-V.

The first stages of this section go over each of the sections of the
V2.3-draft spec, where appropriate.

## 17.3 Shape Encoding

Simple-V's proposed means of expressing whether a register (from the
standard integer or the standard floating-point file) is a scalar or
a vector is to simply set the vector length to 1. The instruction
would however have to specify which register file (integer or FP) that
the vector-length was to be applied to.

Extended shapes (2-D etc) would not be part of Simple-V at all.

## 17.4 Representation Encoding

Simple-V would not have representation-encoding. This is part of
polymorphism, which is considered too complex to implement (TODO: confirm?).

## 17.5 Element Bitwidth

This is directly equivalent to Simple-V's "Packed", and implies that
integer (or floating-point) registers are divided down into vector-indexable
chunks of size Bitwidth.

In this way it becomes possible to have ADD effectively and implicitly
turn into ADDb (8-bit add), ADDw (16-bit add) and so on, and where
vector-length has been set to greater than 1, it becomes a "Packed"
(SIMD) instruction.

It remains to be decided what should be done when RV32 / RV64 ADD (sized)
opcodes are used. One useful idea would be, on an RV64 system where
a 32-bit-sized ADD was performed, to simply use the least significant
32 bits of the register (exactly as is currently done) but at the same
time to *respect the packed bitwidth as well*.

The extended encoding (Table 17.6) would not be part of Simple-V.

## 17.6 Base Vector Extension Supported Types

TODO: analyse. Probably exactly the same.

## 17.7 Maximum Vector Element Width

No equivalent in Simple-V.

## 17.8 Vector Configuration Registers

TODO: analyse.

## 17.9 Legal Vector Unit Configurations

TODO: analyse.

## 17.10 Vector Unit CSRs

TODO: analyse.

> Ok so this is an aspect of Simple-V that I hadn't thought through,
> yet (proposal / idea only a few days old!).  In V2.3-Draft ISA Section
> 17.10 the CSRs are listed.  I note that there's some general-purpose
> CSRs (including a global/active vector-length) and 16 vcfgN CSRs.  I
> don't precisely know what those are for.

>  In the Simple-V proposal, *every* register in both the integer
> register-file *and* the floating-point register-file would have at
> least a 2-bit "data-width" CSR and probably something like an 8-bit
> "vector-length" CSR (less in RV32E, by exactly one bit).

>  What I *don't* know is whether that would be considered perfectly
> reasonable or completely insane.  If it turns out that the proposed
> Simple-V CSRs can indeed be stored in SRAM then I would imagine that
> adding somewhere in the region of 10 bits per register would be... okay?
> I really don't honestly know.

>  Would these proposed 10-or-so-bit per-register Simple-V CSRs need to
> be multi-ported? No, I don't believe they would.

## 17.11 Maximum Vector Length (MVL)

Basically, implicitly this is set to the maximum size of the register
file multiplied by the number of 8-bit packed ints that can fit into
a register (4 for RV32, 8 for RV64 and 16 for RV128). For example, a
32-entry register file on RV64 would give an implicit MVL of 32 x 8 = 256
(8-bit) elements.

## 17.12 Vector Instruction Formats

No equivalent in Simple-V because *all* instructions of *all* Extensions
are implicitly parallelised (and packed).

## 17.13 Polymorphic Vector Instructions

Polymorphism (implicit type-casting) is deliberately not supported
in Simple-V.

## 17.14 Rapid Configuration Instructions

TODO: analyse whether it is useful to have an equivalent in Simple-V.

## 17.15 Vector-Type-Change Instructions

TODO: analyse whether it is useful to have an equivalent in Simple-V.

## 17.16 Vector Length

Has a direct corresponding equivalent.

## 17.17 Predicated Execution

Predicated Execution is another name for "masking" or "tagging". Masked
(or tagged) implies that there is a bit field which is indexed, and each
bit is associated with the corresponding indexed offset register within
the "Vector". If the tag / mask bit is 1, when a parallel operation is
issued, the indexed element of the vector has the operation carried out.
However if the tag / mask bit is *zero*, that particular indexed element
of the vector does *not* have the requested operation carried out.

In V2.3-draft V, there is a significant (not recommended) difference:
the zero-tagged elements are *set to zero*. This loses a *significant*
advantage of mask / tagging, particularly if the entire mask register
is itself a general-purpose register, as that general-purpose register
can be inverted, shifted, and'ed, or'ed and so on. In other words
it becomes possible, especially if Carry/Overflow from each vector
operation is also accessible, to do conditional (step-by-step) vector
operations, including things like turning vectors into 1024-bit or greater
operands with very few instructions, by treating the "carry" from
one instruction as a way to do "Conditional add of 1 to the register
next door". If V2.3-draft V sets zero-tagged elements to zero, such
extremely powerful techniques are simply not possible.

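A minimal C sketch of the difference (all state and names hypothetical): a
masked vector add that *preserves* masked-off destination elements, against
the V2.3-draft behaviour of zeroing them:

    #include <stdint.h>

    /* masked element-wise add over vlen registers; 'mask' is an
     * ordinary general-purpose register used as a bit-field, one
     * bit per element, so it can be inverted/shifted/and'ed freely */
    void vadd_masked(uint64_t *regs, int rd, int rs1, int rs2,
                     int vlen, uint64_t mask, int zero_masked)
    {
        for (int i = 0; i < vlen; i++) {
            if (mask & (1ULL << i))
                regs[rd + i] = regs[rs1 + i] + regs[rs2 + i];
            else if (zero_masked)
                regs[rd + i] = 0;  /* V2.3-draft behaviour: destructive */
            /* else: element left untouched, so inverting the mask and
             * issuing a second pass fills in the complementary elements */
        }
    }
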
It is noted that there is no mention of an equivalent to BEXT (element
skipping), which would be particularly fascinating and powerful to have.
In this mode, the "mask" would skip elements where its mask bit was zero
in either the source or the destination operand.

Lots to be discussed.

## 17.18 Vector Load/Store Instructions

These may not have a direct equivalent in Simple-V, except if mask/tagging
is to be deployed.

To be discussed.

## 17.19 Vector Register Gather

TODO

## TODO, sort

> However, there are also several features that go beyond simply attaching VL
> to a scalar operation and are crucial to being able to vectorize a lot of
> code. To name a few:
> - Conditional execution (i.e., predicated operations)
> - Inter-lane data movement (e.g. SLIDE, SELECT)
> - Reductions (e.g., VADD with a scalar destination)

Ok, so the Conditional and also the Reductions are among the reasons
why, as part of SimpleV / variable-SIMD / parallelism (gah, gotta think
of a decent name), I proposed that it be implemented as "if you say r0
is to be a vector / SIMD that means operations actually take place on
r0,r1,r2... r(N-1)".

Consequently any parallel operation could be paused (or... more
specifically: vectors disabled by resetting it back to a default /
scalar / vector-length=1) yet the results would actually be in the
*main register file* (integer or float) and so anything that wasn't
possible to easily do in "simple" parallel terms could be done *out*
of parallel "mode" instead.

I do appreciate that the above does imply that there is a limit to the
length that SimpleV (whatever) can be parallelised, namely that you
run out of registers! My thought there was, "leave space for the main
V-Ext proposal to extend it to the length that V currently supports".
Honestly I had not thought through precisely how that would work.

Inter-lane (SELECT): I saw 17.19 in V2.3-Draft p117, and liked it;
it reminds me of the discussion with Clifford on bit-manipulation
(gather-scatter except not Bit Gather Scatter, *data* gather scatter): if
applied "globally and outside of V and P", SLIDE and SELECT might become
an extremely powerful way to do fast memory copy and reordering [2].

However I haven't quite got my head round how that would work: I am
used to the concept of register "tags" (the modern term is "masks")
and I *think* if "masks" were applied to a Simple-V-enhanced LOAD /
STORE you would get the exact same thing as SELECT.

SLIDE you could do simply by setting say r0's vector-length to say 16
(meaning that if referred to in any operation it would be an implicit
parallel operation on *all* registers r0 through r15), and temporarily
set say.... r7's vector-length to say... 5. Do a LOAD on r7 and it would
implicitly mean "load from memory into r7 through r11". Then you go
back and do an operation on r0 and ta-daa, you're actually doing an
operation on a SLID (SLIDED?) vector.

The advantage of Simple-V (whatever) over V would be that you could
actually do *operations* in the middle of vectors (not just SLIDEs)
simply by (as above) setting r0's vector-length to 16 and r7's
vector-length to 5. There would be nothing preventing you from doing an
ADD on r0 (which meant do an ADD on r0 through r15) followed *immediately
in the next instruction with no setup cost* by a MUL on r7 (which actually
meant "do a parallel MUL on r7 through r11").

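A toy register-file model in C of that overlapping-vector trick (the `vlen`
array and all function names are hypothetical):

    #include <stdint.h>

    static uint64_t regs[32];
    static int      vlen[32];  /* hypothetical per-register vector-length CSR */

    /* any op naming register r implicitly covers regs[r .. r+vlen[r]-1] */
    static void vload(int rd, const uint64_t *src)
    {
        for (int i = 0; i < vlen[rd]; i++)
            regs[rd + i] = src[i];
    }

    static void demo(const uint64_t *src)
    {
        vlen[0] = 16;   /* r0 now names the 16-element vector r0..r15 */
        vlen[7] = 5;    /* r7 names the overlapping slice r7..r11 */
        vload(7, src);  /* lands mid-vector: effectively a SLIDE */
        /* a subsequent op on r0 sees the slid-in data, in place */
    }
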
By the way, it's worth mentioning that you'd get scalar-vector and
vector-scalar implicitly by having one of the source registers be
vector-length 1 (the default) and the other being N > 1, but without
having special opcodes to do it. I *believe* (or more like "logically
infer or deduce", as I haven't got access to the spec) that that would
result in a further opcode reduction when comparing [draft] V-Ext to
[proposed] Simple-V.

Also, Reduction *might* be possible by specifying that the destination be
a scalar (vector-length=1) whilst the source be a vector. However... it
would be an awful lot of work to go through *every single instruction*
in *every* Extension, working out which ones could be parallelised (ADD,
MUL, XOR) and those that definitely could not (DIV, SUB). Is that worth
the effort? Maybe. Would it result in huge complexity? Probably.
Could an implementor just go "I ain't doing *that* as parallel!
Let's make it virtual-parallelism (sequential reduction) instead"?
Absolutely. So, now that I think it through, Simple-V (whatever)
covers Reduction as well. Huh, that's a surprise.

> - Vector-length speculation (making it possible to vectorize some loops with
> unknown trip count) - I don't think this part of the proposal is written
> down yet.

Now that _is_ an interesting concept. A little scary, I imagine, with
the possibility of putting a processor into a hard infinite execution
loop... :)

> Also, note the vector ISA consumes relatively little opcode space (all the
> arithmetic fits in 7/8ths of a major opcode). This is mainly because data
> type and size is a function of runtime configuration, rather than of opcode.

Yes. I love that aspect of V; I am a huge fan of polymorphism [1],
which is why I am keen to advocate that the same runtime principle be
extended to the rest of the RISC-V ISA [3].

Yikes, that's a lot. I'm going to need to pull this into the wiki to
make sure it's not lost.

[1] Inherent data type conversion: 25 years ago I designed a hypothetical
hyper-hyper-hyper-escape-code-sequencing ISA based around 2-bit
(escape-extended) opcodes and 2-bit (escape-extended) operands that
only required a fixed 8-bit instruction length. That relied heavily
on polymorphism and runtime size configurations as well. At the time
I thought it would have meant one HELL of a lot of CSRs... but then I
met RISC-V and was cured instantly of that delusion^Wmisapprehension :)

[2] Interestingly, if you then also add in the other aspect of Simple-V
(the data-size, which is effectively functionally orthogonal / identical
to "Packed" of Packed-SIMD), masked and packed *and* vectored LOAD / STORE
operations become byte / half-word / word augmenters of B-Ext's proposed
"BGS", i.e. where B-Ext's BGS dealt with bits, masked-packed-vectored
LOAD / STORE would deal with 8 / 16 / 32 bits at a time. Where it
would get really REALLY interesting would be masked-packed-vectored
B-Ext BGS instructions. I can't even get my head fully round that,
which is a good sign that the combination would be *really* powerful :)

[3] Ok, sadly, maybe not the polymorphism: it's too complicated, and I
think it would be much too hard for implementors to easily "slide in" to an
existing non-Simple-V implementation.  I say that despite really *really*
wanting IEEE 754 FP Half-precision to end up somewhere in RISC-V in some
fashion, for optimising 3D Graphics.  *sigh*.

## TODO: instructions (based on Hwacha) V-Ext duplication analysis

This is partly speculative due to lack of access to an up-to-date
V-Ext Spec (V2.3-draft RVV 0.4-Draft at the time of writing). However,
basing the analysis instead on Hwacha, a cursory examination shows over
**85%** duplication of V-Ext operand-related instructions when
compared to Simple-V on a standard RV64G base. Even Vector Fetch
is analogous to "zero-overhead loop".

Exceptions are:

* Vector Indexed Memory Instructions (non-contiguous)
* Vector Atomic Memory Instructions
* Some of the Vector Arithmetic ops: MADD, MSUB,
VSRL, VSRA, VEIDX, VFIRST, VSGNJN, VFSGNJX and potentially more
* Consensual Jump

Table of RV32V Instructions

| RV32V | Notes |
| ----- | ----- |
| VADD | |
| VSUB | |
| VSL | |
| VSR | |
| VAND | |
| VOR | |
| VXOR | |
| VSEQ | |
| VSNE | |
| VSLT | |
| VSGE | |
| VCLIP | |
| VCVT | |
| VMPOP | |
| VMFIRST | |
| VEXTRACT | |
| VINSERT | |
| VMERGE | |
| VSELECT | |
| VSLIDE | |
| VDIV | |
| VREM | |
| VMUL | |
| VMULH | |
| VMIN | |
| VMAX | |
| VSGNJ | |
| VSGNJN | |
| VSGNJX | |
| VSQRT | |
| VCLASS | |
| VPOPC | |
| VADDI | |
| VSLI | |
| VSRI | |
| VANDI | |
| VORI | |
| VXORI | |
| VCLIPI | |
| VMADD | |
| VMSUB | |
| VNMADD | |
| VNMSUB | |
| VLD | |
| VLDS | |
| VLDX | |
| VST | |
| VSTS | |
| VSTX | |
| VAMOSWAP | |
| VAMOADD | |
| VAMOAND | |
| VAMOOR | |
| VAMOXOR | |
| VAMOMIN | |
| VAMOMAX | |

## TODO: sort

> I suspect that the "hardware loop" in question is actually a zero-overhead
> loop unit that diverts execution from address X to address Y if a certain
> condition is met.

Not quite. The zero-overhead loop unit interestingly would be at
an [independent] level above vector-length. The distinctions are
as follows:

* Vector-length issues *virtual* instructions where the register
operands are *specifically* altered (to cover a range of registers),
whereas zero-overhead loops *specifically* do *NOT* alter the operands
in *ANY* way.

* Vector-length-driven "virtual" instructions are driven by *one*
and *only* one instruction (whether it be a LOAD, STORE, or pure
one/two/three-operand opcode) whereas zero-overhead loop units
specifically apply to *multiple* instructions.

Where vector-length-driven "virtual" instructions might get conceptually
blurred with zero-overhead loops is LOAD / STORE. In the case of LOAD /
STORE, to actually be useful, vector-length-driven LOAD / STORE should
increment the LOAD / STORE memory address to correspondingly match the
increment in the register bank. Example:

* set vector-length for r0 to 4
* issue RV32 LOAD from addr 0x1230 to r0

translates effectively to:

* RV32 LOAD from addr 0x1230 to r0
* RV32 LOAD from addr 0x1234 to r1
* RV32 LOAD from addr 0x1238 to r2
* RV32 LOAD from addr 0x123C to r3

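In C-model terms (names hypothetical), the expansion looks like this: the
single LOAD is cracked into vlen micro-ops, and both the register number and
the effective address are incremented per micro-op, which is exactly what a
zero-overhead loop never does:

    #include <stdint.h>
    #include <string.h>

    static uint32_t regs[32];

    /* expansion of one RV32 LOAD when rd has vector-length vlen */
    void vload_expand(const uint8_t *membase, uint32_t addr, int rd, int vlen)
    {
        for (int i = 0; i < vlen; i++) {
            uint32_t word;                       /* micro-op i:          */
            memcpy(&word, membase + addr + 4 * i, sizeof word);
            regs[rd + i] = word;                 /* r[rd+i] <- [addr+4i] */
        }
    }
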
# P-Ext ISA

## 16-bit Arithmetic

| Mnemonic | 16-bit Instruction | Simple-V Equivalent |
| ------------------ | ------------------------- | ------------------- |
| ADD16 rt, ra, rb | add | RV ADD (bitwidth=16) |
| RADD16 rt, ra, rb | Signed Halving add | |
| URADD16 rt, ra, rb | Unsigned Halving add | |
| KADD16 rt, ra, rb | Signed Saturating add | |
| UKADD16 rt, ra, rb | Unsigned Saturating add | |
| SUB16 rt, ra, rb | sub | RV SUB (bitwidth=16) |
| RSUB16 rt, ra, rb | Signed Halving sub | |
| URSUB16 rt, ra, rb | Unsigned Halving sub | |
| KSUB16 rt, ra, rb | Signed Saturating sub | |
| UKSUB16 rt, ra, rb | Unsigned Saturating sub | |
| CRAS16 rt, ra, rb | Cross Add & Sub | |
| RCRAS16 rt, ra, rb | Signed Halving Cross Add & Sub | |
| URCRAS16 rt, ra, rb| Unsigned Halving Cross Add & Sub | |
| KCRAS16 rt, ra, rb | Signed Saturating Cross Add & Sub | |
| UKCRAS16 rt, ra, rb| Unsigned Saturating Cross Add & Sub | |
| CRSA16 rt, ra, rb | Cross Sub & Add | |
| RCRSA16 rt, ra, rb | Signed Halving Cross Sub & Add | |
| URCRSA16 rt, ra, rb| Unsigned Halving Cross Sub & Add | |
| KCRSA16 rt, ra, rb | Signed Saturating Cross Sub & Add | |
| UKCRSA16 rt, ra, rb| Unsigned Saturating Cross Sub & Add | |

## 8-bit Arithmetic

| Mnemonic | 8-bit Instruction | Simple-V Equivalent |
| ------------------ | ------------------------- | ------------------- |
| ADD8 rt, ra, rb | add | RV ADD (bitwidth=8) |
| RADD8 rt, ra, rb | Signed Halving add | |
| URADD8 rt, ra, rb | Unsigned Halving add | |
| KADD8 rt, ra, rb | Signed Saturating add | |
| UKADD8 rt, ra, rb | Unsigned Saturating add | |
| SUB8 rt, ra, rb | sub | RV SUB (bitwidth=8) |
| RSUB8 rt, ra, rb | Signed Halving sub | |
| URSUB8 rt, ra, rb | Unsigned Halving sub | |

# Exceptions

> What does an ADD of two different-sized vectors do in simple-V?

* If the vector lengths of the two source operands are not the same, throw
an exception.
* If the destination operand is also a vector, and the source is longer
than the destination, throw an exception.

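Expressed as a hedged C sketch of the decode-time checks (the `vlen` state
and all names are hypothetical):

    /* exception checks for a two-source, one-destination vector op;
     * vlen[] is the hypothetical per-register vector-length CSR state */
    int check_vector_op(const int vlen[32], int rd, int rs1, int rs2)
    {
        if (vlen[rs1] != vlen[rs2])
            return -1;  /* trap: source vector lengths differ */
        if (vlen[rd] > 1 && vlen[rs1] > vlen[rd])
            return -1;  /* trap: source longer than vector destination */
        return 0;       /* legal: issue the operation */
    }
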
> And what about instructions like JALR?
> What does jumping to a vector do?

* Throw an exception. Whether that actually results in spawning threads
as part of the trap-handling remains to be seen.

# Implementing V on top of Simple-V

* Number of Offset CSRs extends from 2
* Extra register file: vector-file
* Setup of Vector length and bitwidth CSRs now can specify vector-file
as well as integer or float file.
* TODO

# Implementing P (renamed to DSP) on top of Simple-V

* Implementors indicate chosen bitwidth support in Vector-bitwidth CSR
(caveat: anything not specified drops through to software-emulation / traps)
* TODO

# Analysis of CSR decoding on latency

<a name="csr_decoding_analysis"></a>

It could indeed have been logically deduced (or expected) that there
would be additional decode latency in this proposal, because if the
opcodes are overloaded to have different meanings, there is guaranteed
to be some state, somewhere, directly related to registers.

There are several cases:

* All operands vector-length=1 (scalars), all operands
packed-bitwidth="default": instructions are passed through direct as if
Simple-V did not exist.  Simple-V is, in effect, completely disabled.
* At least one operand vector-length > 1, all operands
packed-bitwidth="default": any parallel vector ALUs placed on "alert",
virtual parallelism looping may be activated.
* All operands vector-length=1 (scalars), at least one
operand packed-bitwidth != default: degenerate case of SIMD,
implementation-specific complexity here (packed decode before ALUs or
*IN* ALUs).
* At least one operand vector-length > 1, at least one operand
packed-bitwidth != default: parallel vector ALUs (if any)
placed on "alert", virtual parallelism looping may be activated,
implementation-specific SIMD complexity kicks in (packed decode before
ALUs or *IN* ALUs).

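The first (fully scalar, default-width) case suggests an obvious decode
fast-path. A sketch of the check in C, re-using the hypothetical per-register
CSR state from earlier:

    /* decide in decode whether the Simple-V machinery can be bypassed
     * entirely for an instruction's three register operands */
    int simple_v_active(const int vlen[32], const int bitwidth[32],
                        int rd, int rs1, int rs2)
    {
        return vlen[rd]  > 1 || bitwidth[rd]  != 0 ||  /* 0 = default */
               vlen[rs1] > 1 || bitwidth[rs1] != 0 ||
               vlen[rs2] > 1 || bitwidth[rs2] != 0;
        /* 0 means: pass the instruction straight through as plain RV,
         * as if Simple-V did not exist */
    }
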
Bear in mind that the proposal includes that the decision whether
to parallelise in hardware or whether to virtual-parallelise (to
dramatically simplify compilers and also not to run into the SIMD
instruction proliferation nightmare) *or* a transparent combination
of both, be done on a *per-operand basis*, so that implementors can
specifically choose to create an application-optimised implementation
that they believe (or know) will sell extremely well, without having
"Extra Standards-Mandated Baggage" that would otherwise blow their area
or power budget completely out the window.

Additionally, two possible CSR schemes have been proposed, in order to
greatly reduce CSR space:

* per-register CSRs (vector-length and packed-bitwidth)
* a smaller number of CSRs with the same information but with an *INDEX*
specifying WHICH register in one of three regfiles (vector, fp, int)
the length and bitwidth applies to.

(See "CSR vector-length and CSR SIMD packed-bitwidth" section for details.)

In addition, LOAD/STORE has its own associated proposed CSRs that
mirror the STRIDE (but not yet STRIDE-SEGMENT?) functionality of
V (and Hwacha).

Also bear in mind that, for reasons of simplicity for implementors,
I was coming round to the idea of permitting implementors to choose
exactly which bitwidths they would like to support in hardware and which
to allow to fall through to software-trap emulation.

So the question boils down to:

* whether either (or both) of those two CSR schemes have significant
latency that could even potentially require an extra pipeline decode stage
* whether there are implementations that can be thought of which do *not*
introduce significant latency
* whether it is possible to explicitly (through quite simply
disabling Simple-V-Ext) or implicitly (detect the case all-vlens=1,
all-simd-bitwidths=default) switch OFF any decoding, perhaps even to
the extreme of skipping an entire pipeline stage (if one is needed)
* whether packed bitwidth and associated regfile splitting is so complex
that it should definitely, definitely be made mandatory that implementors
move regfile splitting into the ALU, and what the implications of that are
* whether, even if that *is* made mandatory, software-trapped
"unsupported bitwidths" are still desirable, on the basis that SIMD is such
a complete nightmare that *even* having a software implementation is
better, making Simple-V have more in common with a software API than
anything else.

Whilst the above may seem to be severe minuses, there are some strong
pluses:

* Significant reduction of V's opcode space: over 85%.
* Smaller reduction of P's opcode space: around 10%.
* The potential to use Compressed instructions in both Vector and SIMD
due to the overloading of register meaning (implicit vectorisation,
implicit packing)
* Not only present but also future extensions automatically gain parallelism.
* Already mentioned but worth emphasising: the simplification to compiler
writers and assembly-level writers of having the same consistent ISA
regardless of whether the internal level of parallelism (number of
parallel ALUs) is only equal to one ("virtual" parallelism), or is
greater than one, should not be underestimated.

# References

* SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
* Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
* Recommendation by Jacob Bachmeyer to make zero-overhead loop an
"implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
* Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
* First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
* B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>
* Broadcom VideoCore-IV <https://docs.broadcom.com/docs/12358545>
Figure 2 P17 and Section 3 on P16.
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-262.html>
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-263.html>
* Vector Workshop <http://riscv.org/wp-content/uploads/2015/06/riscv-vector-workshop-june2015.pdf>