# Variable-width Variable-packed SIMD / Simple-V / Parallelism Extension Proposal

This proposal exists so as to be able to satisfy several disparate
requirements: power-conscious, area-conscious, and performance-conscious
designs all pull an ISA and its implementation in different conflicting
directions, as do the specific intended uses for any given implementation.

Additionally, the existing P (SIMD) proposal and the V (Vector) proposals,
whilst each extremely powerful in their own right and clearly desirable,
are also:

* Clearly independent in their origins (Cray and AndeStar v3 respectively)
and so needing work to adapt to the RISC-V ethos and paradigm
* Sufficiently large as to make adoption (and exploration for
analysis and review purposes) prohibitively expensive
* Both contain partial duplication of pre-existing RISC-V instructions
(an undesirable characteristic)
* Both have independent and disparate methods for introducing parallelism
at the instruction level
* Both require that their respective parallelism paradigm be implemented
alongside and integral to their respective functionality *or not at all*
* Both independently have methods for introducing parallelism that
could, if separated, benefit
*other areas of RISC-V, not just DSP or Floating-point respectively*

Therefore it makes a huge amount of sense to have a means and method
of introducing instruction parallelism in a flexible way that provides
implementors with the option to choose exactly where they wish to offer
performance improvements and where they wish to optimise for power
and/or area (and if that can be offered even on a per-operation basis that
would provide even more flexibility).

Additionally it makes sense to *split out* the parallelism inherent within
each of P and V, and to see if each of P and V then, in *combination* with
a "best-of-both" parallelism extension, would work well.

**TODO**: reword this to better suit this document:

Having looked at both P and V as they stand, they're _both_ very much
"separate engines" that, despite both their respective merits and
extremely powerful features, don't really cleanly fit into the RV design
ethos (or the flexible extensibility) and, as such, are both in danger
of not being widely adopted. I'm inclined towards recommending:

* splitting out the DSP aspects of P-SIMD to create a single-issue DSP
* splitting out the polymorphism, esoteric data types (GF, complex
numbers) and unusual operations of V to create a single-issue "Esoteric
Floating-Point" extension
* splitting out the loop-aspects, vector aspects and data-width aspects
of both P and V to a *new* "P-SIMD / Simple-V" and requiring that they
apply across *all* Extensions, whether those be DSP, M, Base, V, P -
everything.

**TODO**: propose overflow registers be actually one of the integer regs
(flowing to multiple regs).

**TODO**: propose "mask" (predication) registers likewise. In combination
with standard RV instructions and overflow registers these would be
extremely powerful.

## Stride

**TODO**: propose two LOAD/STORE offset CSRs, which mark a particular
register as being "if you use this reg in LOAD/STORE, use the offset
amount CSRoffsN (N=0,1) instead of treating LOAD/STORE as contiguous".
Can be used for matrix spanning.

> For LOAD/STORE, could a better option be to interpret the offset in the
> opcode as a stride instead, so "LOAD t3, 12(t2)" would, if t3 is
> configured as a length-4 vector base, result in t3 = *t2, t4 = *(t2+12),
> t5 = *(t2+24), t6 = *(t2+36)?  Perhaps include a bit in the
> vector-control CSRs to select between offset-as-stride and unit-stride
> memory accesses?

So there would be an instruction like this:

| SETOFF | On=rN | OBank={float\|int} | Smode={offs\|unit} | OFFn=rM |
| ------ | ----- | ------------------ | ------------------ | ------------------ |
| opcode | 5 bit | 1 bit | 1 bit | 5 bit, OFFn=XLEN |

which would mean:

* CSR-Offset register n <= (float|int) register number N
* CSR-Offset Stride-mode = offset or unit
* CSR-Offset amount register n = contents of register M

LOAD rN, ldoffs(rM) would then be (assuming packed bit-width not set):

> offs = 0
> stride = 1
> vector-len = CSR-Vector-length register N
>
> for (o = 0; o < 2; o++)
>     if (CSR-Offset register o == M)
>         offs = CSR-Offset amount register o
>         if (CSR-Offset Stride-mode == offset)
>             stride = ldoffs
>         break
>
> for (i = 0; i < vector-len; i++)
>     r[N+i] = mem[(offs*i + r[M+i])*stride]

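Below is a minimal executable sketch (Python, purely illustrative) of the
SETOFF / strided-LOAD behaviour described above. It follows the quoted
worked example (address = base + i * stride) rather than the final line of
the pseudocode, whose exact indexing is still to be pinned down; the
function names, the two-entry offset table and the simplified register
numbering are assumptions of this draft, not an encoding.

    # Illustrative model only: two CSR-Offset entries, per-register
    # vector-length CSRs, packed bit-width not set.
    regs = [0] * 32                  # integer register file
    vlen = [1] * 32                  # per-register vector-length CSRs
    csr_off = [None, None]           # CSRoffs0 / CSRoffs1: (reg, mode, amount)

    def setoff(n, reg, mode, amount):
        # SETOFF: mark `reg` so that LOAD/STOREs using it as a base
        # apply a stride instead of being treated as contiguous.
        csr_off[n] = (reg, mode, amount)

    def load(mem, rd, ldoffs, base):
        # LOAD rd, ldoffs(base)
        offs, stride = 0, 1
        for entry in csr_off:
            if entry is not None and entry[0] == base:
                offs = entry[2]
                if entry[1] == "offset":   # offset-as-stride mode
                    stride = ldoffs
                break
        for i in range(vlen[rd]):
            regs[rd + i] = mem[offs + regs[base] + i * stride]

    # The quoted example: LOAD t3, 12(t2) with t3 a length-4 vector
    mem = {0x1230 + 12 * i: 100 + i for i in range(4)}
    regs[2] = 0x1230                 # "t2": base address
    vlen[3] = 4                      # "t3": configured as length-4 vector
    setoff(0, 2, "offset", 0)        # mark the base reg: offset-as-stride
    load(mem, 3, 12, 2)              # regs[3..6] <- mem[0x1230 + 12*i]
    assert regs[3:7] == [100, 101, 102, 103]
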
# Analysis and discussion of Vector vs SIMD

There are four (arguably five) combined areas between the two proposals
that help with parallelism without over-burdening the ISA with a huge
proliferation of instructions:

* Fixed vs variable parallelism (fixed or variable "M" in SIMD)
* Implicit vs fixed instruction bit-width (integral to instruction or not)
* Implicit vs explicit type-conversion (compounded on bit-width)
* Implicit vs explicit inner loops
* Masks / tagging (selecting/preventing certain indexed elements from execution)

The pros and cons of each are discussed and analysed below.

## Fixed vs variable parallelism length

In David Patterson and Andrew Waterman's analysis of SIMD and Vector
ISAs, the conclusion comes out clearly in favour of (effectively) variable
length SIMD. As SIMD is a fixed width, typically 4, 8 or in extreme cases
16 or 32 simultaneous operations, the setup, teardown and corner-cases of SIMD
are extremely burdensome except for applications whose requirements
*specifically* match the *precise and exact* depth of the SIMD engine.

Thus, SIMD, no matter what width is chosen, is never going to be acceptable
for general-purpose computation, and in the context of developing a
general-purpose ISA, is never going to satisfy 100 percent of implementors.

That basically leaves "variable-length vector" as the clear *general-purpose*
winner, at least in terms of greatly simplifying the instruction set,
reducing the number of instructions required for any given task, and thus
reducing power consumption for the same.

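To make the setup/teardown argument concrete, here is a sketch (not taken
from either paper or spec) contrasting a fixed 4-wide SIMD loop, with its
obligatory scalar tail, against a variable-length vector loop that simply
requests however many elements remain:

    def simd_add(a, b, n):
        # Fixed 4-wide SIMD: main loop plus a scalar tail for n % 4 elements.
        out = [0] * n
        i = 0
        while i + 4 <= n:
            out[i:i+4] = [a[i+j] + b[i+j] for j in range(4)]   # one SIMD op
            i += 4
        while i < n:                                           # corner-case tail
            out[i] = a[i] + b[i]
            i += 1
        return out

    def vector_add(a, b, n, MVL=16):
        # Variable-length vector: hardware grants vl = min(n, MVL) each pass;
        # no tail loop and no corner cases.
        out = [0] * n
        i = 0
        while i < n:
            vl = min(n - i, MVL)                               # "set vector length"
            out[i:i+vl] = [a[i+j] + b[i+j] for j in range(vl)] # one vector op
            i += vl
        return out
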
## Implicit vs fixed instruction bit-width

SIMD again has a severe disadvantage here, over Vector: huge proliferation
of specialist instructions that target 8-bit, 16-bit, 32-bit, 64-bit, and
have to then have operations *for each and between each*. It gets very
messy, very quickly.

The V-Extension on the other hand proposes to set the bit-width of
future instructions on a per-register basis, such that subsequent instructions
involving that register are *implicitly* of that particular bit-width until
otherwise changed or reset.

This has some extremely useful properties, without being particularly
burdensome to implementations, given that instruction decode already has
to direct the operation to a correctly-sized width ALU engine, anyway.

Not least: in places where an ISA was previously constrained (due, for
whatever reason, to limitations such as the available operand space),
implicit bit-width allows the meaning of certain operations to be
type-overloaded *without* pollution or alteration of frozen and immutable
instructions, in a fully backwards-compatible fashion.

## Implicit and explicit type-conversion

The Draft 2.3 V-extension proposal has (deprecated) polymorphism to help
deal with over-population of instructions, such that type-casting from
integer (and floating point) of various sizes is automatically inferred
due to "type tagging" that is set with a special instruction. A register
will be *specifically* marked as "16-bit Floating-Point" and, if added
to an operand that is specifically tagged as "32-bit Integer", an implicit
type-conversion will take place *without* requiring that type-conversion
to be explicitly done with its own separate instruction.

However, implicit type-conversion is not only quite burdensome to
implement (explosion of inferred type-to-type conversion) but also is
never really going to be complete. It gets even worse when bit-widths
also have to be taken into consideration.

Overall, type-conversion is generally best left to explicit
type-conversion instructions, or, in definite specific use-cases, made
part of an actual instruction (DSP or FP).

## Zero-overhead loops vs explicit loops

The initial Draft P-SIMD Proposal by Chuanhua Chang of Andes Technology
contains an extremely interesting feature: zero-overhead loops. This
proposal would basically allow an inner loop of instructions to be
repeated a fixed number of times.

Its specific advantage over explicit loops is that the pipeline in a
DSP can potentially be kept completely full *even in an in-order
implementation*. Normally, it requires a superscalar architecture and
out-of-order execution capabilities to "pre-process" instructions in order
to keep ALU pipelines 100% occupied.

This very simple proposal offers a way to increase pipeline activity in the
one key area which really matters: the inner loop.

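As a sketch of the concept only (the actual P-SIMD encoding is not
reproduced here): the fetch stage replays the marked instruction window,
so no branch, compare or counter-decrement instructions ever occupy
issue slots.

    # Conceptual model: the loop unit replays the instruction window
    # [loop_start, loop_end) `count` times, entirely in fetch hardware.
    def zero_overhead_loop(program, loop_start, loop_end, count):
        issued = []
        for _ in range(count):
            issued.extend(program[loop_start:loop_end])
        return issued

    inner = ["lw", "mul", "add", "sw"]
    trace = zero_overhead_loop(inner, 0, len(inner), 8)
    assert len(trace) == 32   # 32 useful ops, zero loop-control ops; an
                              # explicit loop would add ~2 ops per iteration
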
## Mask and Tagging

*TODO: research masks as they can be superb and extremely powerful.
If B-Extension is implemented and provides Bit-Gather-Scatter it
becomes really cool and easy to switch out certain indexed values
from an array of data, but actually BGS **on its own** might be
sufficient. Bottom line, this is complex, and needs a proper analysis.
The other sections are pretty straightforward.*

## Conclusions

In the above sections, the four different areas in which parallel
instruction execution has closely and loosely inter-related implications
for the ISA and for implementors were outlined. The pluses and minuses
came out as follows:

* Fixed vs variable parallelism: **variable**
* Implicit (indirect) vs fixed (integral) instruction bit-width: **indirect**
* Implicit vs explicit type-conversion: **explicit**
* Implicit vs explicit inner loops: **implicit**
* Tag or no-tag: **TODO**

In particular: variable-length vectors came out on top because of the
high setup, teardown and corner-cases associated with the fixed width
of SIMD. Implicit bit-width helps to extend the ISA to escape from
former limitations and restrictions (in a backwards-compatible fashion),
and implicit (zero-overhead) loops provide a means to keep pipelines
potentially 100% occupied *without* requiring a super-scalar or out-of-order
architecture.

Constructing a SIMD/Simple-Vector proposal based around even only these four
(five?) requirements would therefore seem to be a logical thing to do.

# Instruction Format

**TODO** *basically borrow from both P and V, which should be quite simple
to do, with the exception of Tag/no-tag, which needs a bit more
thought. V's Section 17.19 of Draft V2.3 spec is reminiscent of B's BGS
gather-scatterer, and, if implemented, could actually be a really useful
way to span 8-bit up to 64-bit groups of data, where BGS as it stands
and described by Clifford does **bits** of up to 16 width. Lots to
look at and investigate!*

# Note on implementation of parallelism

One extremely important aspect of this proposal is to respect and support
implementors' desire to focus on power, area or performance. In that regard,
it is proposed that implementors be free to choose whether to implement
the Vector (or variable-width SIMD) parallelism as sequential operations
with a single ALU, fully parallel (if practical) with multiple ALUs, or
a hybrid combination of both.

In Broadcom's Videocore-IV, they chose hybrid, and called it "Virtual
Parallelism". They achieve a 16-way SIMD at an **instruction** level
by providing a combination of a 4-way parallel ALU *and* an externally
transparent loop that feeds 4 sequential sets of data into each of the
4 ALUs.

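That arrangement can be sketched as follows (the 4x4 numbers come from
the description above; the scheduling detail is an assumption for
illustration only):

    def virtual_parallel_op(op, lanes, n_alus=4, passes=4):
        # 16-way SIMD at the *instruction* level, realised as 4 sequential
        # passes through a 4-wide ALU: one instruction, 16 results.
        results = []
        for p in range(passes):
            chunk = lanes[p * n_alus:(p + 1) * n_alus]
            results.extend(op(x) for x in chunk)   # one hardware-parallel pass
        return results

    # one "16-way" increment issued as a single instruction:
    assert virtual_parallel_op(lambda x: x + 1, list(range(16))) == list(range(1, 17))
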
Also in the same core, it is worth noting that particularly uncommon
but essential operations (Reciprocal-Square-Root for example) are
*not* part of the 4-way parallel ALU but instead have a *single* ALU.
Under the proposed Vector (variable-width SIMD) scheme, implementors would
be free to do precisely that: i.e. free to choose *on a per operation
basis* whether and how much "Virtual Parallelism" to deploy.

It is absolutely critical to note that it is proposed that such choices MUST
be **entirely transparent** to the end-user and the compiler. Whilst
a Vector (variable-width SIMD) may not precisely match the width of the
parallelism within the implementation, the end-user **should not care**
and in this way the performance benefits are gained but the ISA remains
simple. All that happens at the end of an instruction run is: some
parallel units (if there are any) would remain offline, completely
transparently to the ISA, the program, and the compiler.

The "SIMD considered harmful" trap of having huge complexity and extra
instructions to deal with corner-cases is thus avoided, and implementors
get to choose precisely where to focus and target the benefits of their
implementation efforts.

# V-Extension to Simple-V Comparative Analysis

This section covers the ways in which Simple-V is comparable
to, or more flexible than, V-Extension (V2.3-draft). Also covered is
one major weak-point (register files are fixed size, where V is
arbitrary length), and how best to deal with that, should V be adapted
to be on top of Simple-V.

The first stages of this section go over each of the sections of the
V2.3-draft V specification, where appropriate.

## 17.3 Shape Encoding

Simple-V's proposed means of expressing whether a register (from the
standard integer or the standard floating-point file) is a scalar or
a vector is to simply set the vector length to 1. The instruction
would however have to specify which register file (integer or FP) that
the vector-length was to be applied to.

Extended shapes (2-D etc) would not be part of Simple-V at all.

## 17.4 Representation Encoding

Simple-V would not have representation-encoding. This is part of
polymorphism, which is considered too complex to implement (TODO: confirm?)

## 17.5 Element Bitwidth

This is directly equivalent to Simple-V's "Packed", and implies that
integer (or floating-point) registers are divided down into vector-indexable
chunks of size Bitwidth.

In this way it becomes possible to have ADD effectively and implicitly
turn into ADDb (8-bit add), ADDw (16-bit add) and so on, and where
vector-length has been set to greater than 1, it becomes a "Packed"
(SIMD) instruction.

It remains to be decided what should be done when RV32 / RV64 ADD (sized)
opcodes are used. One useful idea would be, on an RV64 system where
a 32-bit-sized ADD was performed, to simply use the least significant
32-bits of the register (exactly as is currently done) but at the same
time to *respect the packed bitwidth as well*.

The extended encoding (Table 17.6) would not be part of Simple-V.

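A sketch of how a single RV ADD would implicitly become a packed add once
the element bitwidth is set (the lane arithmetic below is illustrative
Python, not a proposed implementation):

    MASK = {8: 0xFF, 16: 0xFFFF, 32: 0xFFFFFFFF}

    def packed_add(ra, rb, bitwidth, xlen=64):
        # Split each XLEN-wide register into lanes of `bitwidth` bits, add
        # lane-wise with wraparound, repack: ADD implicitly becomes ADDb/ADDw.
        lanes = xlen // bitwidth
        m = MASK[bitwidth]
        rd = 0
        for i in range(lanes):
            a = (ra >> (i * bitwidth)) & m
            b = (rb >> (i * bitwidth)) & m
            rd |= ((a + b) & m) << (i * bitwidth)
        return rd

    # four implicit 16-bit adds from one ADD:
    assert packed_add(0x0001000200030004, 0x0010002000300040, 16) \
           == 0x0011002200330044
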
## 17.6 Base Vector Extension Supported Types

TODO: analyse. Probably exactly the same.

## 17.7 Maximum Vector Element Width

No equivalent in Simple-V.

## 17.8 Vector Configuration Registers

TODO: analyse.

## 17.9 Legal Vector Unit Configurations

TODO: analyse.

## 17.10 Vector Unit CSRs

TODO: analyse.

> Ok so this is an aspect of Simple-V that I hadn't thought through,
> yet (proposal / idea only a few days old!).  in V2.3-Draft ISA Section
> 17.10 the CSRs are listed.  I note that there's some general-purpose
> CSRs (including a global/active vector-length) and 16 vcfgN CSRs.  i
> don't precisely know what those are for.

> In the Simple-V proposal, *every* register in both the integer
> register-file *and* the floating-point register-file would have at
> least a 2-bit "data-width" CSR and probably something like an 8-bit
> "vector-length" CSR (less in RV32E, by exactly one bit).

> What I *don't* know is whether that would be considered perfectly
> reasonable or completely insane.  If it turns out that the proposed
> Simple-V CSRs can indeed be stored in SRAM then I would imagine that
> adding somewhere in the region of 10 bits per register would be... okay?
> I really don't honestly know.

> Would these proposed 10-or-so-bit per-register Simple-V CSRs need to
> be multi-ported? No, I don't believe they would.

## 17.11 Maximum Vector Length (MVL)

Basically implicitly this is set to the maximum size of the register
file multiplied by the number of 8-bit packed ints that can fit into
a register (4 for RV32, 8 for RV64 and 16 for RV128).

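For a standard 32-entry register file that works out as follows (a
trivial sketch):

    def max_vector_length(n_regs=32, xlen=64):
        # MVL = register-file entries x 8-bit packed ints per register:
        # 32 x 4 = 128 (RV32), 32 x 8 = 256 (RV64), 32 x 16 = 512 (RV128)
        return n_regs * (xlen // 8)

    assert max_vector_length(xlen=32) == 128
    assert max_vector_length(xlen=64) == 256
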
## 17.12 Vector Instruction Formats

No equivalent in Simple-V because *all* instructions of *all* Extensions
are implicitly parallelised (and packed).

## 17.13 Polymorphic Vector Instructions

Polymorphism (implicit type-casting) is deliberately not supported
in Simple-V.

## 17.14 Rapid Configuration Instructions

TODO: analyse if this is useful to have an equivalent in Simple-V

## 17.15 Vector-Type-Change Instructions

TODO: analyse if this is useful to have an equivalent in Simple-V

## 17.16 Vector Length

Has a direct corresponding equivalent.

## 17.17 Predicated Execution

Predicated Execution is another name for "masking" or "tagging". Masked
(or tagged) implies that there is a bit field, indexed such that each
bit is associated with the corresponding indexed element (register offset)
within the "Vector". If the tag / mask bit is 1, when a parallel operation is
issued, the indexed element of the vector has the operation carried out.
However if the tag / mask bit is *zero*, that particular indexed element
of the vector does *not* have the requested operation carried out.

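In sketch form (the semantics as described above, with masked-out
elements left untouched; nothing here is lifted from the V2.3 draft
itself):

    def masked_add(rd, ra, rb, mask, vl, regs):
        # Bit i of `mask` gates element i; masked-out elements are left
        # untouched here (NOT zeroed - see the discussion that follows).
        for i in range(vl):
            if (mask >> i) & 1:
                regs[rd + i] = regs[ra + i] + regs[rb + i]

    regs = list(range(32))
    masked_add(0, 8, 16, 0b0101, 4, regs)   # only elements 0 and 2 execute
    assert regs[0:4] == [8 + 16, 1, 10 + 18, 3]
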
In V2.3-draft V, there is a significant (not recommended) difference:
the zero-tagged elements are *set to zero*. This loses a *significant*
advantage of mask / tagging, particularly if the entire mask register
is itself a general-purpose register, as that general-purpose register
can be inverted, shifted, and'ed, or'ed and so on. In other words
it becomes possible, especially if Carry/Overflow from each vector
operation is also accessible, to do conditional (step-by-step) vector
operations, including things like turning vectors into 1024-bit or greater
operands with very few instructions, by treating the "carry" from
one instruction as a way to do "Conditional add of 1 to the register
next door". If V2.3-draft V sets zero-tagged elements to zero, such
extremely powerful techniques are simply not possible.

It is noted that there is no mention of an equivalent to BEXT (element
skipping), which would be particularly fascinating and powerful to have.
In this mode, the "mask" would skip elements where its mask bit was zero
in either the source or the destination operand.

Lots to be discussed.

## 17.18 Vector Load/Store Instructions

These may not have a direct equivalent in Simple-V, except if mask/tagging
is to be deployed.

To be discussed.

## 17.19 Vector Register Gather

TODO

## TODO, sort

> However, there are also several features that go beyond simply attaching VL
> to a scalar operation and are crucial to being able to vectorize a lot of
> code. To name a few:
> - Conditional execution (i.e., predicated operations)
> - Inter-lane data movement (e.g. SLIDE, SELECT)
> - Reductions (e.g., VADD with a scalar destination)

Ok so the Conditional and also the Reductions are some of the reasons
why as part of SimpleV / variable-SIMD / parallelism (gah, gotta think
of a decent name) i proposed that it be implemented as "if you say r0
is to be a vector / SIMD that means operations actually take place on
r0,r1,r2... r(N-1)".

Consequently any parallel operation could be paused (or... more
specifically: vectors disabled by resetting it back to a default /
scalar / vector-length=1) yet the results would actually be in the
*main register file* (integer or float) and so anything that wasn't
possible to easily do in "simple" parallel terms could be done *out*
of parallel "mode" instead.

I do appreciate that the above does imply that there is a limit to the
length that SimpleV (whatever) can be parallelised, namely that you
run out of registers! my thought there was, "leave space for the main
V-Ext proposal to extend it to the length that V currently supports".
Honestly i had not thought through precisely how that would work.

Inter-lane (SELECT): i saw 17.19 in V2.3-Draft p117, I liked that,
it reminds me of the discussion with Clifford on bit-manipulation
(gather-scatter except not Bit Gather Scatter, *data* gather scatter): if
applied "globally and outside of V and P" SLIDE and SELECT might become
an extremely powerful way to do fast memory copy and reordering [2].

However I haven't quite got my head round how that would work: i am
used to the concept of register "tags" (the modern term is "masks")
and i *think* if "masks" were applied to a Simple-V-enhanced LOAD /
STORE you would get the exact same thing as SELECT.

SLIDE you could do simply by setting say r0 vector-length to say 16
(meaning that if referred to in any operation it would be an implicit
parallel operation on *all* registers r0 through r15), and temporarily
set say.... r7 vector-length to say... 5. Do a LOAD on r7 and it would
implicitly mean "load from memory into r7 through r11". Then you go
back and do an operation on r0 and ta-daa, you're actually doing an
operation on a SLID (SLIDED?) vector.

The advantage of Simple-V (whatever) over V would be that you could
actually do *operations* in the middle of vectors (not just SLIDEs)
simply by (as above) setting r0 vector-length to 16 and r7 vector-length
to 5. There would be nothing preventing you from doing an ADD on r0
(which meant do an ADD on r0 through r15) followed *immediately in the
next instruction with no setup cost* a MUL on r7 (which actually meant
"do a parallel MUL on r7 through r11").

btw it's worth mentioning that you'd get scalar-vector and vector-scalar
implicitly by having one of the source registers be vector-length 1
(the default) and one being N > 1, but without having special opcodes
to do it. i *believe* (or more like "logically infer or deduce" as
i haven't got access to the spec) that that would result in a further
opcode reduction when comparing [draft] V-Ext to [proposed] Simple-V.

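A sketch of that implicit scalar-vector behaviour (per-register
vector-length CSRs as proposed; the broadcast reading of a length-1
source is my assumption of the obvious interpretation):

    def vadd(rd, ra, rb, vlen, regs):
        # Element count from the destination's vector-length CSR; a source
        # whose vector-length is 1 (the default) acts as a broadcast scalar,
        # so vector-scalar / scalar-vector need no dedicated opcodes.
        for i in range(vlen[rd]):
            a = regs[ra + (i if vlen[ra] > 1 else 0)]
            b = regs[rb + (i if vlen[rb] > 1 else 0)]
            regs[rd + i] = a + b

    regs = list(range(32))
    vlen = [1] * 32
    vlen[0] = 4; vlen[8] = 4                 # r0 and r8 are 4-long vectors
    vadd(0, 8, 20, vlen, regs)               # r20 stays scalar: broadcast
    assert regs[0:4] == [8 + 20, 9 + 20, 10 + 20, 11 + 20]
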
Also, Reduction *might* be possible by specifying that the destination be
a scalar (vector-length=1) whilst the source is a vector. However... it
would be an awful lot of work to go through *every single instruction*
in *every* Extension, working out which ones could be parallelised (ADD,
MUL, XOR) and those that definitely could not (DIV, SUB). Is that worth
the effort? maybe. Would it result in huge complexity? probably.
Could an implementor just go "I ain't doing *that* as parallel!
let's make it virtual-parallelism (sequential reduction) instead"?
absolutely. So, now that I think it through, Simple-V (whatever)
covers Reduction as well. huh, that's a surprise.

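In sketch form, that reading makes reduction fall out of the same rule
(shown sequentially here; whether an implementor parallelises it as a
tree is, as above, their choice):

    def vredadd(rd, rs, vlen, regs):
        # Destination vector-length 1, source vector-length N: the
        # "virtual parallelism" (sequential) form of the reduction.
        acc = 0
        for i in range(vlen[rs]):
            acc += regs[rs + i]
        regs[rd] = acc

    regs = [0] * 32
    regs[8:12] = [1, 2, 3, 4]
    vlen = [1] * 32
    vlen[8] = 4
    vredadd(0, 8, vlen, regs)    # r0 (scalar) <- sum of vector r8..r11
    assert regs[0] == 10
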
> - Vector-length speculation (making it possible to vectorize some loops with
> unknown trip count) - I don't think this part of the proposal is written
> down yet.

Now that _is_ an interesting concept. A little scary, i imagine, with
the possibility of putting a processor into a hard infinite execution
loop... :)

> Also, note the vector ISA consumes relatively little opcode space (all the
> arithmetic fits in 7/8ths of a major opcode). This is mainly because data
> type and size is a function of runtime configuration, rather than of opcode.

yes. i love that aspect of V, i am a huge fan of polymorphism [1]
which is why i am keen to advocate that the same runtime principle be
extended to the rest of the RISC-V ISA [3].

Yikes that's a lot. I'm going to need to pull this into the wiki to
make sure it's not lost.

[1] inherent data type conversion: 25 years ago i designed a hypothetical
hyper-hyper-hyper-escape-code-sequencing ISA based around 2-bit
(escape-extended) opcodes and 2-bit (escape-extended) operands that
only required a fixed 8-bit instruction length. that relied heavily
on polymorphism and runtime size configurations as well. At the time
I thought it would have meant one HELL of a lot of CSRs... but then I
met RISC-V and was cured instantly of that delusion^Wmisapprehension :)

[2] Interestingly if you then also add in the other aspect of Simple-V
(the data-size, which is effectively functionally orthogonal / identical
to "Packed" of Packed-SIMD), masked and packed *and* vectored LOAD / STORE
operations become byte / half-word / word augmenters of B-Ext's proposed
"BGS" i.e. where B-Ext's BGS dealt with bits, masked-packed-vectored
LOAD / STORE would deal with 8 / 16 / 32 bits at a time. Where it
would get really REALLY interesting would be masked-packed-vectored
B-Ext BGS instructions. I can't even get my head fully round that,
which is a good sign that the combination would be *really* powerful :)

[3] ok sadly maybe not the polymorphism, it's too complicated and I
think would be much too hard for implementors to easily "slide in" to an
existing non-Simple-V implementation.  i say that despite really *really*
wanting IEEE 754 FP Half-precision to end up somewhere in RISC-V in some
fashion, for optimising 3D Graphics.  *sigh*.

## TODO: instructions (based on Hwacha) V-Ext duplication analysis

This is partly speculative due to lack of access to an up-to-date
V-Ext Spec (V2.3-draft RVV 0.4-Draft at the time of writing). However,
basing an analysis instead on Hwacha, a cursory examination shows over
an **85%** duplication of V-Ext operand-related instructions when
compared to Simple-V on a standard RV64G base. Even Vector Fetch
is analogous to "zero-overhead loop".

Exceptions are:

* Vector Indexed Memory Instructions (non-contiguous)
* Vector Atomic Memory Instructions
* Some of the Vector Arithmetic ops: MADD, MSUB,
VSRL, VSRA, VEIDX, VFIRST, VSGNJN, VFSGNJX and potentially more
* Consensual Jump

Table of RV32V Instructions

| RV32V | |
| ----- | --- |
| VADD | |
| VSUB | |
| VSL | |
| VSR | |
| VAND | |
| VOR | |
| VXOR | |
| VSEQ | |
| VSNE | |
| VSLT | |
| VSGE | |
| VCLIP | |
| VCVT | |
| VMPOP | |
| VMFIRST | |
| VEXTRACT | |
| VINSERT | |
| VMERGE | |
| VSELECT | |
| VSLIDE | |
| VDIV | |
| VREM | |
| VMUL | |
| VMULH | |
| VMIN | |
| VMAX | |
| VSGNJ | |
| VSGNJN | |
| VSGNJX | |
| VSQRT | |
| VCLASS | |
| VPOPC | |
| VADDI | |
| VSLI | |
| VSRI | |
| VANDI | |
| VORI | |
| VXORI | |
| VCLIPI | |
| VMADD | |
| VMSUB | |
| VNMADD | |
| VNMSUB | |
| VLD | |
| VLDS | |
| VLDX | |
| VST | |
| VSTS | |
| VSTX | |
| VAMOSWAP | |
| VAMOADD | |
| VAMOAND | |
| VAMOOR | |
| VAMOXOR | |
| VAMOMIN | |
| VAMOMAX | |

## TODO: sort

> I suspect that the "hardware loop" in question is actually a zero-overhead
> loop unit that diverts execution from address X to address Y if a certain
> condition is met.

Not quite. The zero-overhead loop unit interestingly would be at
an [independent] level above vector-length. The distinctions are
as follows:

* Vector-length issues *virtual* instructions where the register
operands are *specifically* altered (to cover a range of registers),
whereas zero-overhead loops *specifically* do *NOT* alter the operands
in *ANY* way.

* Vector-length-driven "virtual" instructions are driven by *one*
and *only* one instruction (whether it be a LOAD, STORE, or pure
one/two/three-operand opcode) whereas zero-overhead loop units
specifically apply to *multiple* instructions.

Where vector-length-driven "virtual" instructions might get conceptually
blurred with zero-overhead loops is LOAD / STORE. In the case of LOAD /
STORE, to actually be useful, vector-length-driven LOAD / STORE should
increment the LOAD / STORE memory address to correspondingly match the
increment in the register bank. Example:

* set vector-length for r0 to 4
* issue RV32 LOAD from addr 0x1230 to r0

translates effectively to:

* RV32 LOAD from addr 0x1230 to r0
* ...
* ...
* RV32 LOAD from addr 0x123C to r3

# P-Ext ISA

## 16-bit Arithmetic

| Mnemonic | 16-bit Instruction | Simple-V Equivalent |
| ------------------ | ------------------------- | ------------------- |
| ADD16 rt, ra, rb | add | RV ADD (bitwidth=16) |
| RADD16 rt, ra, rb | Signed Halving add | |
| URADD16 rt, ra, rb | Unsigned Halving add | |
| KADD16 rt, ra, rb | Signed Saturating add | |
| UKADD16 rt, ra, rb | Unsigned Saturating add | |
| SUB16 rt, ra, rb | sub | RV SUB (bitwidth=16) |
| RSUB16 rt, ra, rb | Signed Halving sub | |
| URSUB16 rt, ra, rb | Unsigned Halving sub | |
| KSUB16 rt, ra, rb | Signed Saturating sub | |
| UKSUB16 rt, ra, rb | Unsigned Saturating sub | |
| CRAS16 rt, ra, rb | Cross Add & Sub | |
| RCRAS16 rt, ra, rb | Signed Halving Cross Add & Sub | |
| URCRAS16 rt, ra, rb| Unsigned Halving Cross Add & Sub | |
| KCRAS16 rt, ra, rb | Signed Saturating Cross Add & Sub | |
| UKCRAS16 rt, ra, rb| Unsigned Saturating Cross Add & Sub | |
| CRSA16 rt, ra, rb | Cross Sub & Add | |
| RCRSA16 rt, ra, rb | Signed Halving Cross Sub & Add | |
| URCRSA16 rt, ra, rb| Unsigned Halving Cross Sub & Add | |
| KCRSA16 rt, ra, rb | Signed Saturating Cross Sub & Add | |
| UKCRSA16 rt, ra, rb| Unsigned Saturating Cross Sub & Add | |

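The first row's claimed equivalence, as a sketch reusing the packed_add()
example from the Element Bitwidth section above (the CSR-setting step is
implied, not encoded here):

    # ADD16 rt, ra, rb  ~=  (element-bitwidth CSR := 16)  +  plain RV ADD
    ra = 0x0001000200030004
    rb = 0x0010002000300040
    rt = packed_add(ra, rb, bitwidth=16)   # four 16-bit lanes in one ADD
    assert rt == 0x0011002200330044
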
## 8-bit Arithmetic

| Mnemonic | 8-bit Instruction | Simple-V Equivalent |
| ------------------ | ------------------------- | ------------------- |
| ADD8 rt, ra, rb | add | RV ADD (bitwidth=8)|
| RADD8 rt, ra, rb | Signed Halving add | |
| URADD8 rt, ra, rb | Unsigned Halving add | |
| KADD8 rt, ra, rb | Signed Saturating add | |
| UKADD8 rt, ra, rb | Unsigned Saturating add | |
| SUB8 rt, ra, rb | sub | RV SUB (bitwidth=8)|
| RSUB8 rt, ra, rb | Signed Halving sub | |
| URSUB8 rt, ra, rb | Unsigned Halving sub | |

# References

* SIMD considered harmful <https://www.sigarch.org/simd-instructions-considered-harmful/>
* Link to first proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/GuukrSjgBH8>
* Recommendation by Jacob Bachmeyer to make zero-overhead loop an
  "implicit program-counter" <https://groups.google.com/a/groups.riscv.org/d/msg/isa-dev/vYVi95gF2Mo/SHz6a4_lAgAJ>
* Re-continuing P-Extension proposal <https://groups.google.com/a/groups.riscv.org/forum/#!msg/isa-dev/IkLkQn3HvXQ/SEMyC9IlAgAJ>
* First Draft P-SIMD (DSP) proposal <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/vYVi95gF2Mo>
* B-Extension discussion <https://groups.google.com/a/groups.riscv.org/forum/#!topic/isa-dev/zi_7B15kj6s>
* Broadcom VideoCore-IV <https://docs.broadcom.com/docs/12358545>
  Figure 2 P17 and Section 3 on P16
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-262.html>
* Hwacha <https://www2.eecs.berkeley.edu/Pubs/TechRpts/2015/EECS-2015-263.html>
* Vector Workshop <http://riscv.org/wp-content/uploads/2015/06/riscv-vector-workshop-june2015.pdf>