\documentclass[slidestop]{beamer}
\usepackage{beamerthemesplit}
\usepackage{graphicx}
\usepackage{pstricks}
\providecommand{\red}{\color{red}} % ensure \red exists (pstricks versions vary)

\title{Simple-V RISC-V Extension for Vectorisation and SIMD}
\author{Luke Kenneth Casson Leighton}


\begin{document}

\frame{
\begin{center}
\huge{Simple-V RISC-V Extension for Vectors and SIMD}\\
\vspace{32pt}
\Large{Flexible Vectorisation}\\
\Large{(aka not so Simple-V?)}\\
\Large{(aka How to Parallelise the RISC-V ISA)}\\
\vspace{24pt}
\Large{[proposed for] Chennai 9th RISC-V Workshop}\\
\vspace{16pt}
\large{\today}
\end{center}
}


\frame{\frametitle{Credits and Acknowledgements}

\begin{itemize}
\item The Designers of RISC-V\vspace{15pt}
\item The RVV Working Group and contributors\vspace{15pt}
\item Allen Baum, Jacob Bachmeyer, Xan Phung, Chuanhua Chang,\\
      Guy Lemurieux, Jonathan Neuschafer, Roger Brussee,
      and others\vspace{15pt}
\item ISA-Dev Group Members\vspace{10pt}
\end{itemize}
}


\frame{\frametitle{Quick refresher on SIMD}

\begin{itemize}
\item SIMD very easy to implement (and very seductive)\vspace{8pt}
\item Parallelism is in the ALU\vspace{8pt}
\item Zero-to-negligible impact for rest of core\vspace{8pt}
\end{itemize}
Where SIMD Goes Wrong:\vspace{10pt}
\begin{itemize}
\item See "SIMD instructions considered harmful"
      https://sigarch.org/simd-instructions-considered-harmful
\item Setup and corner-cases alone are extremely complex.\\
      Hardware is easy, but software is hell.
\item O($N^{6}$) ISA opcode proliferation!\\
      opcode, elwidth, veclen, src1-src2-dest hi/lo
\end{itemize}
}

\frame{\frametitle{Quick refresher on RVV}

\begin{itemize}
\item Extremely powerful (extensible to 256 registers)\vspace{10pt}
\item Supports polymorphism, several datatypes (inc. FP16)\vspace{10pt}
\item Requires a separate Register File (32 w/ext to 256)\vspace{10pt}
\item Implemented as a separate pipeline (no impact on scalar)\vspace{10pt}
\end{itemize}
However...\vspace{10pt}
\begin{itemize}
\item 98 percent opcode duplication with rest of RV (main exception: CLIP)
\item Extending RVV requires customisation not just of h/w:\\
      gcc, binutils also need customisation (and maintenance)
\end{itemize}
}


\frame{\frametitle{The Simon Sinek lowdown (Why, How, What)}

\begin{itemize}
\item Why?
      Implementors need flexibility in vectorisation to optimise for
      area or performance depending on the scope:
      embedded DSP, Mobile GPUs, Server CPUs and more.\\
      Compilers also need flexibility in vectorisation to optimise for
      cost of pipeline setup, amount of state to context switch
      and software portability
\item How?
      By marking INT/FP regs as "Vectorised" and
      adding a level of indirection,
      SV expresses how existing instructions should act
      on [contiguous] blocks of registers, in parallel, WITHOUT
      needing any new actual arithmetic opcodes.
\item What?
      Simple-V is an "API" that implicitly extends
      existing (scalar) instructions with explicit parallelisation\\
      i.e. SV is actually about parallelism NOT vectors per se.\\
      Has a lot in common with VLIW (without the actual VLIW).
\end{itemize}
}


\frame{\frametitle{What's the value of SV? Why adopt it even in non-V?}

\begin{itemize}
\item memcpy becomes much smaller (higher bang-per-buck)
\item context-switch (LOAD/STORE multiple): 1-2 instructions
\item Compressed instrs further reduce I-cache (etc.)
\item Greatly-reduced I-cache load (and fewer reads)
\item Amazingly, SIMD becomes (more) tolerable (no corner-cases)
\item Modularity/Abstraction in both the h/w and the toolchain.
\item "Reach" of registers accessible by Compressed is enhanced
\item Future: double the standard INT/FP register file sizes.
\end{itemize}
Note:
\begin{itemize}
\item It's not just about Vectors: it's about instruction effectiveness
\item Anything an implementor is not interested in HW-optimising,\\
      let it fall through to exceptions (implement as a trap).
\end{itemize}
}


\frame{\frametitle{How does Simple-V relate to RVV? What's different?}

\begin{itemize}
\item RVV very heavy-duty (excellent for supercomputing)\vspace{8pt}
\item Simple-V abstracts parallelism (based on best of RVV)\vspace{8pt}
\item Graded levels: hardware, hybrid or traps (fit impl. need)\vspace{8pt}
\item Even Compressed instructions become vectorised (RVV can't)\vspace{8pt}
\item No polymorphism in SV (too complex)\vspace{8pt}
\end{itemize}
What Simple-V is not:\vspace{4pt}
\begin{itemize}
\item A full supercomputer-level Vector Proposal
\item A replacement for RVV (SV is designed to be overridden\\
      by - or augmented to become - RVV)
\end{itemize}
}


\frame{\frametitle{How is Parallelism abstracted in Simple-V?}

\begin{itemize}
\item Register "typing" turns any op into an implicit Vector op:\\
      registers are reinterpreted through a level of indirection
\item Primarily at the Instruction issue phase (except SIMD)\\
      Note: it's ok to pass predication through to ALU (like SIMD)
\item Standard (and future, and custom) opcodes now parallel\vspace{10pt}
\end{itemize}
Note: EVERYTHING is parallelised:
\begin{itemize}
\item All LOAD/STORE (inc. Compressed, Int/FP versions)
\item All ALU ops (Int, FP, SIMD, DSP, everything)
\item All branches become predication targets (C.FNE added?)
\item C.MV of particular interest (s/v, v/v, v/s)
\item FCVT, FMV, FSGNJ etc. very similar to C.MV
\end{itemize}
}


\frame{\frametitle{Implementation Options}

\begin{itemize}
\item Absolute minimum: Exceptions: if CSRs indicate "V", trap.\\
      (Requires as absolute minimum that CSRs be in H/W;\\
      trap-handler sketch on the next slide)
\item Hardware loop, single-instruction issue\\
      (Do / Don't send through predication to ALU)
\item Hardware loop, parallel (multi-instruction) issue\\
      (Do / Don't send through predication to ALU)
\item Hardware loop, full parallel ALU (not recommended)
\end{itemize}
Notes:\vspace{4pt}
\begin{itemize}
\item 4 (or more?) options above may be deployed on per-op basis
\item SIMD always sends predication bits through to ALU
\item Minimum MVL MUST be sufficient to cover regfile LD/ST
\item Instr. FIFO may repeatedly split off N scalar ops at a time
\end{itemize}
}
% Instr. FIFO may need its own slide. Basically, the vectorised op
% gets pushed into the FIFO, where it is then "processed". Processing
% will remove the first set of ops from its vector numbering (taking
% predication into account) and shoving them **BACK** into the FIFO,
% but MODIFYING the remaining "vectorised" op, subtracting the now
% scalar ops from it.
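
\begin{frame}[fragile]
\frametitle{Sketch: trap-only (absolute minimum) option}

A minimal sketch (not spec text) of the trap-only option: hardware
implements just the CSRs, and the illegal-instruction handler
re-executes the faulting scalar op VL times in software.
Function names such as emulate\_scalar are illustrative only.

\begin{semiverbatim}
function trap_handler(pc, instr)       # illegal-instruction trap
  (rd, rs1, rs2) = decode(instr)
  if (!int_vec[rd].isvector &&
      !int_vec[rs1].isvector && !int_vec[rs2].isvector)
     raise_illegal(pc)                 # genuinely illegal op
  predval = get_pred_val(FALSE, rd)    # same lookup h/w would do
  for (i = 0; i < VL; i++)             # software version of h/w loop
     if (predval & 1<<i)
        emulate_scalar(instr, rd+i, rs1+i, rs2+i)
  return pc + 4                        # skip the emulated op
\end{semiverbatim}

\begin{itemize}
\item Scalar/vector operand mixing omitted here for clarity
\end{itemize}
\end{frame}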

\frame{\frametitle{Predicated 8-parallel ADD: 1-wide ALU}
\begin{center}
\includegraphics[height=2.5in]{padd9_alu1.png}\\
{\bf \red Predicated adds are shuffled down: 6 cycles in total}
\end{center}
}


\frame{\frametitle{Predicated 8-parallel ADD: 4-wide ALU}
\begin{center}
\includegraphics[height=2.5in]{padd9_alu4.png}\\
{\bf \red Predicated adds are shuffled down: 4 in 1st cycle, 2 in 2nd}
\end{center}
}


\frame{\frametitle{Predicated 8-parallel ADD: 3 phase FIFO expansion}
\begin{center}
\includegraphics[height=2.5in]{padd9_fifo.png}\\
{\bf \red First cycle takes first four 1s; second takes the rest}
\end{center}
}


\frame{\frametitle{How are SIMD Instructions Vectorised?}

\begin{itemize}
\item SIMD ALU(s) primarily unchanged
\item Predication is added down each SIMD element (if requested,
      otherwise the entire block will be predicated)
\item Predication bits sent in groups to the ALU (if requested,
      otherwise just one bit for the entire packed block;
      sketch on the next slide)
\item End of Vector enables (additional) predication:
      completely nullifies end-case code (but only in group
      predication mode)
\end{itemize}
Considerations:\vspace{4pt}
\begin{itemize}
\item Many SIMD ALUs possible (parallel execution)
\item Implementor free to choose (API remains the same)
\item Unused ALU units wasted, but s/w DRASTICALLY simpler
\item Very long SIMD ALUs could waste significant die area
\end{itemize}
}
% With multiple SIMD ALUs at for example 32-bit wide they can be used
% to either issue 64-bit or 128-bit or 256-bit wide SIMD operations
% or they can be used to cover several operations on totally different
% vectors / registers.
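
\begin{frame}[fragile]
\frametitle{Sketch: predicate bits grouped per packed SIMD block}

A rough sketch (assumptions, not spec text) of how groups of
predicate bits could be peeled off and sent alongside each packed
SIMD operation; simdwidth and issue\_simd are illustrative names.

\begin{semiverbatim}
# elements per packed register, e.g. 4 lanes of 8-bit in a 32-bit reg
lanes = simdwidth / elwidth
for (i = 0; i < VL; i += lanes)
    predbits = (predval >> i) & ((1<<lanes)-1) # one bit per lane
    if (predbits == 0)
        continue          # whole packed block predicated out
    # one packed op issued, lane predicate bits sent to the ALU
    issue_simd(op, rd+i/lanes, rs1+i/lanes, rs2+i/lanes, predbits)
\end{semiverbatim}

\end{frame}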

\frame{\frametitle{Predicated 9-parallel SIMD ADD}
\begin{center}
\includegraphics[height=2.5in]{padd9_simd.png}\\
{\bf \red 4-wide 8-bit SIMD, 4 bits of predicate passed to ALU}
\end{center}
}


\frame{\frametitle{What's the deal / juice / score?}

\begin{itemize}
\item Standard Register File(s) overloaded with CSR "reg is vector"\\
      (see pseudocode slides for examples)
\item "2nd FP\&INT register bank" possibility, reserved for future\\
      (would allow standard regfiles to remain unmodified)
\item Element width concept remains the same as RVV\\
      (CSRs give new size to elements in registers)
\item CSRs are key-value tables (overlaps allowed: v. important)
\end{itemize}
Key differences from RVV:
\begin{itemize}
\item Predication in INT regs as a BIT field (max VL=XLEN)
\item Minimum VL must be Num Regs - 1 (all regs single LD/ST)
\item SV may condense sparse Vecs: RVV lets ALU do predication
\item Choice to Zero or skip non-predicated elements
\end{itemize}
}


\begin{frame}[fragile]
\frametitle{ADD pseudocode (or trap, or actual hardware loop)}

\begin{semiverbatim}
function op\_add(rd, rs1, rs2, predr) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  for (i = 0; i < VL; i++)
    if (ireg[predr] & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (reg\_is\_vectorised[rd])  \{ id += 1; \}
    if (reg\_is\_vectorised[rs1]) \{ irs1 += 1; \}
    if (reg\_is\_vectorised[rs2]) \{ irs2 += 1; \}
\end{semiverbatim}

\begin{itemize}
\item Above is oversimplified: Reg. indirection left out (for clarity).
\item SIMD slightly more complex (case above is elwidth = default)
\item Scalar-scalar and scalar-vector and vector-vector now all in one
\item OoO may choose to push ADDs into instr. queue (v. busy!)
\end{itemize}
\end{frame}

% yes it really *is* ADD not VADD. that's the entire point of
% this proposal, that *standard* operations are overloaded to
% become vectorised-on-demand


\begin{frame}[fragile]
\frametitle{Predication-Branch (or trap, or actual hardware loop)}

\begin{semiverbatim}
s1 = reg\_is\_vectorised(src1);
s2 = reg\_is\_vectorised(src2);
if (!s2 && !s1) goto branch;
for (int i = 0; i < VL; ++i)
   if (cmp(s1 ? reg[src1+i]:reg[src1],
           s2 ? reg[src2+i]:reg[src2]))
      ireg[rs3] |= 1<<i;
\end{semiverbatim}

\begin{itemize}
\item SIMD slightly more complex (case above is elwidth = default)
\item If s1 and s2 both scalars, Standard branch occurs
\item Predication stored in integer regfile as a bitfield
\item Scalar-vector and vector-vector supported
\item Overload Branch immediate to be predication target rs3
\end{itemize}
\end{frame}

\begin{frame}[fragile]
\frametitle{VLD/VLD.S/VLD.X (or trap, or actual hardware loop)}

\begin{semiverbatim}
if (unit-strided) stride = elsize;
else stride = areg[as2]; // constant-strided
for (int i = 0; i < VL; ++i)
  if (preg\_enabled[rd] && ([!]preg[rd] & 1<<i))
    for (int j = 0; j < seglen+1; j++)
      if (reg\_is\_vectorised[rs2]) offs = vreg[rs2+i]
      else offs = i*(seglen+1)*stride;
      vreg[rd+j][i] = mem[sreg[base] + offs + j*stride]
\end{semiverbatim}

\begin{itemize}
\item Again: elwidth != default slightly more complex
\item rs2 vectorised taken to implicitly indicate VLD.X
\end{itemize}
\end{frame}


\frame{\frametitle{Predication key-value CSR store}

\begin{itemize}
\item key is int regfile number or FP regfile number (1 bit)\vspace{6pt}
\item register to be predicated if referred to (5 bits, key)\vspace{6pt}
\item register to store actual predication in (5 bits, value)\vspace{6pt}
\item predication is inverted Y/N (1 bit)\vspace{6pt}
\item non-predicated elements are to be zero'd Y/N (1 bit)\vspace{6pt}
\end{itemize}
Notes:\vspace{10pt}
\begin{itemize}
\item Table should be expanded out for high-speed implementations
\item Multiple "keys" (and values) theoretically permitted
\item RVV rules about deleting higher-indexed CSRs followed
\item (Illustrative bit-layout sketch on the next slide)
\end{itemize}
}
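
\begin{frame}[fragile]
\frametitle{Sketch: one predication CSR entry as a packed struct}

Illustrative only (field order and packing are an assumption, not
taken from the spec): the five fields on the previous slide total
13 bits, so one entry fits comfortably in 16 bits.

\begin{semiverbatim}
struct pred_csr_entry \{      // 13 bits used of 16
    unsigned type    : 1;    // 0 = INT regfile, 1 = FP regfile
    unsigned regidx  : 5;    // key: register to be predicated
    unsigned predidx : 5;    // value: int reg holding the bitmask
    unsigned inv     : 1;    // invert the predicate Y/N
    unsigned zero    : 1;    // zero non-predicated elements Y/N
\};
\end{semiverbatim}

\begin{itemize}
\item Two such entries would fit in one 32-bit CSR (assumption)
\end{itemize}
\end{frame}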


\begin{frame}[fragile]
\frametitle{Predication key-value CSR table decoding pseudocode}

\begin{semiverbatim}
struct pred fp\_pred[32];
struct pred int\_pred[32];

for (i = 0; i < 16; i++) // 16 CSRs?
   tb = int\_pred if CSRpred[i].type == 0 else fp\_pred
   idx = CSRpred[i].regidx
   tb[idx].zero = CSRpred[i].zero
   tb[idx].inv = CSRpred[i].inv
   tb[idx].predidx = CSRpred[i].predidx
   tb[idx].enabled = true
\end{semiverbatim}

\begin{itemize}
\item All 64 (int and FP) Entries zero'd before setting
\item Might be a bit complex to set up (TBD)
\end{itemize}

\end{frame}


\begin{frame}[fragile]
\frametitle{Get Predication value pseudocode}

\begin{semiverbatim}
def get\_pred\_val(bool is\_fp\_op, int reg):
   tb = fp\_pred if is\_fp\_op else int\_pred
   if (!tb[reg].enabled):
      return ~0x0 // all ops enabled
   predidx = tb[reg].predidx // redirection occurs HERE
   predicate = intreg[predidx] // actual predicate HERE
   if (tb[reg].inv):
      predicate = ~predicate // invert ALL bits
   return predicate
\end{semiverbatim}

\begin{itemize}
\item References different (internal) mapping table for INT or FP
\item Actual predicate bitmask ALWAYS from the INT regfile
\item Hard-limit on MVL of XLEN (predication only 1 intreg)
\end{itemize}

\end{frame}


\frame{\frametitle{To Zero or not to place zeros in non-predicated elements?}

\begin{itemize}
\item Zeroing is an implementation optimisation favouring OoO
\item Simple implementations may skip non-predicated operations
\item Simple implementations explicitly have to destroy data
\item Complex implementations may use reg-renames to save power\\
      Zeroing on predication chains makes optimisation harder
\item Compromise: REQUIRE both (specified in predication CSRs);\\
      loop sketch on the next slide
\end{itemize}
Considerations:
\begin{itemize}
\item Complex not really impacted, simple impacted a LOT\\
      with Zeroing... however it's useful (memzero)
\item Non-zero'd overlapping "Vectors" may issue overlapping ops\\
      (2nd op's predicated elements slot in 1st's non-predicated ops)
\item Please don't use Vectors for "security" (use Sec-Ext)
\end{itemize}
}
% with overlapping "vectors" - bearing in mind that "vectors" are
% just a remap onto the standard register file, if the top bits of
% predication are zero, and there happens to be a second vector
% that uses some of the same register file that happens to be
% predicated out, the second vector op may be issued *at the same time*
% if there are available parallel ALUs to do so.
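
\begin{frame}[fragile]
\frametitle{Sketch: zeroing vs skipping in the element loop}

A minimal sketch (based on the ADD pseudocode slides, not spec text)
of what the zero/skip choice in the predication CSR means for the
hardware loop; the zero-flag lookup shown here is illustrative.

\begin{semiverbatim}
for (i = 0; i < VL; i++)
    if (predval & 1<<i)               # element is enabled
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    else if (int\_pred[rd].zero)       # zeroing mode chosen in CSR
       ireg[rd+id] <= 0;              # explicitly destroy element
    # else: skip mode - element simply left untouched
    if (int\_vec[rd ].isvector)  \{ id += 1; \}
    if (int\_vec[rs1].isvector)  \{ irs1 += 1; \}
    if (int\_vec[rs2].isvector)  \{ irs2 += 1; \}
\end{semiverbatim}

\begin{itemize}
\item Skip mode leaves non-predicated elements free for overlapping ops
\end{itemize}
\end{frame}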


\frame{\frametitle{Register key-value CSR store}

\begin{itemize}
\item key is int regfile number or FP regfile number (1 bit)
\item treated as vector if referred to in op (5 bits, key)
\item starting register to actually be used (5 bits, value)
\item element bitwidth: default, dflt/2, 8, 16 (2 bits)
\item is vector: Y/N (1 bit)
\item is packed SIMD: Y/N (1 bit)
\item register bank: 0/reserved for future ext. (1 bit)
\end{itemize}
Notes:
\begin{itemize}
\item References different (internal) mapping table for INT or FP
\item Level of indirection has implications for pipeline latency
\item (future) bank bit, no need to extend opcodes: set bank=1,
      just use normal 5-bit regs, indirection takes care of the rest.
\item (Illustrative bit-layout sketch on the next slide)
\end{itemize}
}
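
\begin{frame}[fragile]
\frametitle{Sketch: one register CSR entry as a packed struct}

Illustrative only (field order and packing are an assumption): the
seven fields on the previous slide total 16 bits, so one entry fits
exactly in a halfword.

\begin{semiverbatim}
struct reg_csr_entry \{        // 16 bits total
    unsigned type     : 1;    // 0 = INT regfile, 1 = FP regfile
    unsigned regkey   : 5;    // key: register as named in the opcode
    unsigned regidx   : 5;    // value: starting register actually used
    unsigned elwidth  : 2;    // default, default/2, 8, 16
    unsigned isvector : 1;    // scalar or vector
    unsigned packed   : 1;    // packed SIMD Y/N
    unsigned bank     : 1;    // 0 (1 reserved for future extension)
\};
\end{semiverbatim}

\end{frame}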


\frame{\frametitle{Register element width and packed SIMD}

Packed SIMD = N:
\begin{itemize}
\item default: RV32/64/128 opcodes define elwidth = 32/64/128
\item default/2: RV32/64/128 opcodes, elwidth = 16/32/64 with
      top half of register ignored (src), zero'd/s-ext (dest)
\item 8 or 16: elwidth = 8 (or 16), similar to default/2
\end{itemize}
Packed SIMD = Y (default is moot, packing is 1:1):
\begin{itemize}
\item default/2: 2 elements per register @ opcode-defined bitwidth
\item 8 or 16: standard 8 (or 16) packed SIMD
\end{itemize}
Notes:
\begin{itemize}
\item Different src/dest widths (and packs) PERMITTED
\item RV* already allows (and defines) how RV32 ops work in RV64\\
      so just logically follow that lead/example.
\item (Decode sketch on the next slide)
\end{itemize}
}
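
\begin{frame}[fragile]
\frametitle{Sketch: decoding the 2-bit elwidth field}

A possible decode of the 2-bit elwidth field into an element width in
bits, following the rules on the previous slide; the encoding values
(0..3) and the function name are assumptions, not spec text.

\begin{semiverbatim}
def decode_elwidth(elwidth, opwidth):  # opwidth: 32, 64 or 128
    if elwidth == 0: return opwidth      # default (opcode-defined)
    if elwidth == 1: return opwidth / 2  # default/2
    if elwidth == 2: return 8            # explicit 8-bit elements
    return 16                            # explicit 16-bit elements
\end{semiverbatim}

\begin{itemize}
\item With packed SIMD = Y the same width gives several elements
      per register instead of one
\end{itemize}
\end{frame}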


\begin{frame}[fragile]
\frametitle{Register key-value CSR table decoding pseudocode}

\begin{semiverbatim}
struct vectorised fp\_vec[32], int\_vec[32]; // 64 in future

for (i = 0; i < 16; i++) // 16 CSRs?
   tb = int\_vec if CSRvectortb[i].type == 0 else fp\_vec
   idx = CSRvectortb[i].regkey // reg as named in opcode (key)
   tb[idx].elwidth = CSRvectortb[i].elwidth
   tb[idx].regidx = CSRvectortb[i].regidx // indirection (value)
   tb[idx].isvector = CSRvectortb[i].isvector
   tb[idx].packed = CSRvectortb[i].packed // SIMD or not
   tb[idx].bank = CSRvectortb[i].bank // 0 (1=rsvd)
\end{semiverbatim}

\begin{itemize}
\item All 32 int (and 32 FP) entries zero'd before setup
\item Might be a bit complex to set up (TBD)
\end{itemize}

\end{frame}


\begin{frame}[fragile]
\frametitle{ADD pseudocode with redirection, this time}

\begin{semiverbatim}
function op\_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  rd  = int\_vec[rd ].isvector ? int\_vec[rd ].regidx : rd;
  rs1 = int\_vec[rs1].isvector ? int\_vec[rs1].regidx : rs1;
  rs2 = int\_vec[rs2].isvector ? int\_vec[rs2].regidx : rs2;
  predval = get\_pred\_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    if (predval \& 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (int\_vec[rd ].isvector)  \{ id += 1; \}
    if (int\_vec[rs1].isvector)  \{ irs1 += 1; \}
    if (int\_vec[rs2].isvector)  \{ irs2 += 1; \}
\end{semiverbatim}

\begin{itemize}
\item SIMD (elwidth != default) not covered above
\end{itemize}
\end{frame}


\frame{\frametitle{Why are overlaps allowed in Regfiles?}

\begin{itemize}
\item Same register(s) can have multiple "interpretations"
\item Set "real" register (scalar) without needing to set/unset CSRs.
\item xBitManip plus SIMD plus xBitManip = Hi/Lo bitops
\item (32-bit GREV plus 4x8-bit SIMD plus 32-bit GREV:\\
      GREV @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8;\\
      table-setup sketch on the next slide)
\item RGB 565 (video): BEXTW plus 4x8-bit SIMD plus BDEPW\\
      (BEXT/BDEP @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
\item Same register(s) can be offset (no need for VSLIDE)\vspace{6pt}
\end{itemize}
Note:
\begin{itemize}
\item xBitManip reduces O($N^{6}$) SIMD down to O($N^{3}$)
\item Hi-Performance: Macro-op fusion (more pipeline stages?)
\end{itemize}
}
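
\begin{frame}[fragile]
\frametitle{Sketch: two overlapping interpretations of one reg block}

Illustrative only, using the int\_vec table from the decoding slide:
register choices (t0, t1) are assumptions, and in practice the
entries would be set via the CSRs rather than assigned directly.

\begin{semiverbatim}
# same physical block of registers, two key-value entries
int_vec[t0].regidx = t0;   int_vec[t0].isvector = 1;
int_vec[t0].elwidth = dflt; int_vec[t0].packed = N; # 32-bit GREV view
int_vec[t1].regidx = t0;   int_vec[t1].isvector = 1;
int_vec[t1].elwidth = 8;   int_vec[t1].packed = Y;  # 4x8-bit SIMD view
# ops naming t0 see N x 32-bit elements;
# ops naming t1 see N x 4 x 8-bit elements of the SAME registers
\end{semiverbatim}

\end{frame}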


\frame{\frametitle{C.MV extremely flexible!}

\begin{itemize}
\item scalar-to-vector (w/ no pred): VSPLAT
\item scalar-to-vector (w/ dest-pred): Sparse VSPLAT
\item scalar-to-vector (w/ 1-bit dest-pred): VINSERT
\item vector-to-scalar (w/ [1-bit?] src-pred): VEXTRACT
\item vector-to-vector (w/ no pred): Vector Copy
\item vector-to-vector (w/ src pred): Vector Gather
\item vector-to-vector (w/ dest pred): Vector Scatter
\item vector-to-vector (w/ src \& dest pred): Vector Gather/Scatter
\end{itemize}
\vspace{4pt}
Notes:
\begin{itemize}
\item Surprisingly powerful! Zero-predication even more so
\item Same arrangement for FCVT, FMV, FSGNJ etc.
\end{itemize}
}


\begin{frame}[fragile]
\frametitle{MV pseudocode with predication}

\begin{semiverbatim}
function op\_mv(rd, rs) # MV not VMV!
  rd = int\_vec[rd].isvector ? int\_vec[rd].regidx : rd;
  rs = int\_vec[rs].isvector ? int\_vec[rs].regidx : rs;
  ps = get\_pred\_val(FALSE, rs); # predication on src
  pd = get\_pred\_val(FALSE, rd); # ... AND on dest
  for (int i = 0, j = 0; i < VL && j < VL;)
    if (int\_vec[rs].isvector) while (!(ps \& 1<<i)) i++;
    if (int\_vec[rd].isvector) while (!(pd \& 1<<j)) j++;
    ireg[rd+j] <= ireg[rs+i];
    if (int\_vec[rs].isvector) i++;
    if (int\_vec[rd].isvector) j++;
\end{semiverbatim}

\begin{itemize}
\item elwidth != default not covered above (might be a bit hairy)
\item Ending early with 1-bit predication not included (VINSERT)
\end{itemize}
\end{frame}


\begin{frame}[fragile]
\frametitle{VSELECT: stays or goes? Stays if MV.X exists...}

\begin{semiverbatim}
def op_mv_x(rd, rs):            # (hypothetical) RV MV.X
    rs = regfile[rs]            # level of indirection (MV.X)
    regfile[rd] = regfile[rs]   # straight regcopy
\end{semiverbatim}

Vectorised version aka "VSELECT":

\begin{semiverbatim}
def op_mv_x(rd, rs):            # SV version of MV.X
    for i in range(VL):
        rs1 = regfile[rs+i]          # indirection
        regfile[rd+i] = regfile[rs1] # straight regcopy
\end{semiverbatim}

\begin{itemize}
\item However MV.X does not exist in RV, so neither can VSELECT
\item \red SV is not about adding new functionality, only parallelism
\end{itemize}


\end{frame}


\frame{\frametitle{Opcodes, compared to RVV}

\begin{itemize}
\item All integer and FP opcodes removed (no CLIP, FNE)
\item VMPOP, VFIRST etc. all removed (use xBitManip)
\item VSLIDE removed (use regfile overlaps)
\item C.MV covers VEXTRACT VINSERT and VSPLAT (and more)
\item Vector (or scalar-vector) copy: use C.MV (MV is a pseudo-op)
\item VMERGE: twin predicated C.MVs (one inverted, macro-op'd;\\
      sketch on the next slide)
\item VSETVL, VGETVL stay (the only ops that do!)
\end{itemize}
Issues:
\begin{itemize}
\item VSELECT stays? no MV.X, so no (add with custom ext?)
\item VSNE exists, but no FNE (use predication inversion?)
\item VCLIP is not in RV* (add with custom ext?)
\end{itemize}
}
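
\begin{frame}[fragile]
\frametitle{Sketch: VMERGE as two predicated C.MVs}

A sketch (not spec text) of the net effect of the twin predicated
C.MVs on the previous slide: the first MV uses predicate p, the
second the same predicate with the CSR "inv" bit set.

\begin{semiverbatim}
# effect of: C.MV rd,rs1 (pred p), then C.MV rd,rs2 (pred p, inv=1)
for (i = 0; i < VL; i++)
    if (p & 1<<i)
       ireg[rd+i] <= ireg[rs1+i];   # first  C.MV: predicate p
    else
       ireg[rd+i] <= ireg[rs2+i];   # second C.MV: inverted p
# net result matches RVV VMERGE vd, vs1, vs2 under mask p
\end{semiverbatim}

\end{frame}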


\begin{frame}[fragile]
\frametitle{Example C code: DAXPY}

\begin{semiverbatim}
void daxpy(size_t n, double a,
           const double x[], double y[])
\{
    for (size_t i = 0; i < n; i++) \{
        y[i] = a*x[i] + y[i];
    \}
\}
\end{semiverbatim}

\begin{itemize}
\item See "SIMD Considered Harmful" for SIMD/RVV analysis\\
      https://sigarch.org/simd-instructions-considered-harmful/
\end{itemize}


\end{frame}



\begin{frame}[fragile]
\frametitle{RVV DAXPY assembly (RV32V)}

\begin{semiverbatim}
# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
    li t0, 2<<25
    vsetdcfg t0             # enable 2 64b Fl.Pt. registers
loop:
    setvl t0, a0            # vl = t0 = min(mvl, n)
    vld v0, a1              # load vector x
    slli t1, t0, 3          # t1 = vl * 8 (in bytes)
    vld v1, a2              # load vector y
    add a1, a1, t1          # increment pointer to x by vl*8
    vfmadd v1, v0, fa0, v1  # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0          # n -= vl (t0)
    vst v1, a2              # store Y
    add a2, a2, t1          # increment pointer to y by vl*8
    bnez a0, loop           # repeat if n != 0
\end{semiverbatim}
\end{frame}



\begin{frame}[fragile]
\frametitle{SV DAXPY assembly (RV64D)}

\begin{semiverbatim}
# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
    CSRvect1 = \{type: F, key: a3, val: a3, elwidth: dflt\}
    CSRvect2 = \{type: F, key: a7, val: a7, elwidth: dflt\}
loop:
    setvl t0, a0, 4         # vl = t0 = min(4, n)
    ld a3, a1               # load 4 registers a3-6 from x
    slli t1, t0, 3          # t1 = vl * 8 (in bytes)
    ld a7, a2               # load 4 registers a7-10 from y
    add a1, a1, t1          # increment pointer to x by vl*8
    fmadd a7, a3, fa0, a7   # v1 += v0 * fa0 (y = a * x + y)
    sub a0, a0, t0          # n -= vl (t0)
    st a7, a2               # store 4 registers a7-10 to y
    add a2, a2, t1          # increment pointer to y by vl*8
    bnez a0, loop           # repeat if n != 0
\end{semiverbatim}
\end{frame}


\frame{\frametitle{Under consideration}

\begin{itemize}
\item Is C.FNE actually needed? Should it be added if it is?
\item Element type implies polymorphism. Should it be in SV?
\item Should use of registers be allowed to "wrap" (x30 x31 x1 x2)?
\item Is detection of all-scalar ops ok (without slowing pipeline)?
\item Can VSELECT be removed? (it's really complex)
\item Can CLIP be done as a CSR (mode, like elwidth)?
\item SIMD saturation (etc.) also set as a mode?
\item Include src1/src2 predication on Comparison Ops?\\
      (same arrangement as C.MV, with same flexibility/power)
\item For 8/16-bit ops, is it worthwhile adding a "start offset"?\\
      (a bit like misaligned addressing... for registers)\\
      or just use predication to skip the start?
\end{itemize}
}


\frame{\frametitle{What's the downside(s) of SV?}
\begin{itemize}
\item EVERY register operation is inherently parallelised\\
      (scalar ops are just vectors of length 1)\vspace{4pt}
\item Tightly coupled with the core (instruction issue)\\
      could be disabled through MISA switch\vspace{4pt}
\item An extra pipeline phase almost certainly essential\\
      for fast low-latency implementations\vspace{4pt}
\item With zeroing off, skipping non-predicated elements is hard:\\
      it is however an optimisation (and may be omitted).\vspace{4pt}
\item Setting up the Register/Predication tables (interpreting the\\
      CSR key-value stores) might be a bit complex to optimise
      (any change to a CSR key-value entry needs to redo the table)
\end{itemize}
}


\frame{\frametitle{Summary}

\begin{itemize}
\item Actually about parallelism, not Vectors (or SIMD) per se\\
      and NOT about adding new ALU/logic/functionality.
\item Only needs 2 actual instructions (plus the CSRs).\\
      RVV - and "standard" SIMD - require ISA duplication
\item Designed for flexibility (graded levels of complexity)
\item Huge range of implementor freedom
\item Fits RISC-V ethos: achieve more with less
\item Reduces SIMD ISA proliferation by 3-4 orders of magnitude\\
      (without the SIMD downsides and without sacrificing speed)
\item Covers 98\% of RVV, allows RVV to fit "on top"
\item Byproduct of SV is a reduction in code size, power usage
      etc. (increased efficiency, just like Compressed)
\end{itemize}
}


\frame{
\begin{center}
{\Huge The end\vspace{20pt}\\
Thank you\vspace{20pt}\\
Questions?\vspace{20pt}
}
\end{center}

\begin{itemize}
\item Discussion: ISA-DEV mailing list
\item http://libre-riscv.org/simple\_v\_extension/
\end{itemize}
}


\end{document}