1 \documentclass[slidestop]{beamer}
2 \usepackage{beamerthemesplit}
3 \usepackage{graphics}
4 \usepackage{pstricks}
5
6 \title{Simple-V RISC-V Extension for Vectorisation and SIMD}
7 \author{Luke Kenneth Casson Leighton}
8
9
10 \begin{document}
11
12 \frame{
13 \begin{center}
14 \huge{Simple-V RISC-V Parallelism Abstraction Extension}\\
15 \vspace{32pt}
16 \Large{Flexible Vectorisation}\\
17 \Large{(aka not so Simple-V?)}\\
18 \Large{(aka A Parallelism API for the RISC-V ISA)}\\
19 \vspace{24pt}
20 \Large{[proposed for] Chennai 9th RISC-V Workshop}\\
21 \vspace{16pt}
22 \large{\today}
23 \end{center}
24 }
25
26
27 \frame{\frametitle{Credits and Acknowledgements}
28
29 \begin{itemize}
30 \item The Designers of RISC-V\vspace{15pt}
31 \item The RVV Working Group and contributors\vspace{15pt}
32 \item Allen Baum, Jacob Bachmeyer, Xan Phung, Chuanhua Chang,\\
33 Guy Lemurieux, Jonathan Neuschafer, Roger Brussee,
34 and others\vspace{15pt}
35 \item ISA-Dev Group Members\vspace{10pt}
36 \end{itemize}
37 }
38
39
40 \frame{\frametitle{Quick refresher on SIMD}
41
42 \begin{itemize}
43 \item SIMD very easy to implement (and very seductive)\vspace{8pt}
44 \item Parallelism is in the ALU\vspace{8pt}
\item Zero-to-negligible impact on the rest of the core\vspace{8pt}
46 \end{itemize}
47 Where SIMD Goes Wrong:\vspace{10pt}
48 \begin{itemize}
49 \item See "SIMD instructions considered harmful"
50 https://sigarch.org/simd-instructions-considered-harmful
51 \item Setup and corner-cases alone are extremely complex.\\
52 Hardware is easy, but software is hell.
\item O($N^{6}$) ISA opcode proliferation (1000s of instructions):\\
opcode, elwidth, veclen, src1-src2-dest hi/lo (rough count: next slide)
55 \end{itemize}
56 }
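
\begin{frame}[fragile]
\frametitle{SIMD opcode explosion: rough worked example}

A back-of-envelope count of how the dimensions on the previous slide
multiply up. The numbers below are purely illustrative assumptions,
not taken from any particular ISA:

\begin{semiverbatim}
# Illustrative numbers only (assumptions, not any real ISA)
ops      = 16       # add, sub, mul, ...
elwidths = 4        # 8/16/32/64-bit elements
veclens  = 4        # 64/128/256/512-bit registers
hi_lo    = 2 ** 3   # hi/lo variants for src1, src2, dest

print(ops * elwidths * veclens * hi_lo)   # 2048 distinct opcodes
\end{semiverbatim}

\begin{itemize}
\item Even with modest assumptions, the encodings run into the 1000s
\end{itemize}
\end{frame}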
57
58 \frame{\frametitle{Quick refresher on RVV}
59
60 \begin{itemize}
61 \item Effectively a variant of SIMD / SIMT (arbitrary length)\vspace{4pt}
62 \item Fascinatingly, despite being a SIMD-variant, RVV only has
63 O(N) opcode proliferation! (extremely well designed)
64 \item Extremely powerful (extensible to 256 registers)\vspace{4pt}
65 \item Supports polymorphism, several datatypes (inc. FP16)\vspace{4pt}
66 \item Requires a separate Register File (32 w/ext to 256)\vspace{4pt}
67 \item Implemented as a separate pipeline (no impact on scalar)
68 \end{itemize}
69 However...
70 \begin{itemize}
71 \item 98 percent opcode duplication with rest of RV
72 \item Extending RVV requires customisation not just of h/w:\\
73 gcc, binutils also need customisation (and maintenance)
74 \end{itemize}
75 }
76
77
78 \frame{\frametitle{The Simon Sinek lowdown (Why, How, What)}
79
80 \begin{itemize}
81 \item Why?
Implementors need flexibility in vectorisation to optimise for
area or performance depending on the target:
embedded DSP, mobile GPUs, server CPUs and more.\\
Compilers also need flexibility in vectorisation to optimise for the cost
of pipeline setup, the amount of state to context-switch,
and software portability
88 \item How?
89 By marking INT/FP regs as "Vectorised" and
90 adding a level of indirection,
91 SV expresses how existing instructions should act
92 on [contiguous] blocks of registers, in parallel, WITHOUT
93 needing any new extra arithmetic opcodes.
94 \item What?
95 Simple-V is an "API" that implicitly extends
96 existing (scalar) instructions with explicit parallelisation\\
97 i.e. SV is actually about parallelism NOT vectors per se.\\
98 Has a lot in common with VLIW (without the actual VLIW).
99 \end{itemize}
100 }
101
102
103 \frame{\frametitle{What's the value of SV? Why adopt it even in non-V?}
104
105 \begin{itemize}
106 \item memcpy has a much higher bang-per-buck ratio
\item context-switch (LOAD/STORE multiple): 1-2 instructions (sketch follows)
\item Compressed instrs further reduce I-cache usage (etc.)
\item Reduced I-cache load (and fewer instruction reads)
110 \item Amazingly, SIMD becomes tolerable (no corner-cases)
111 \item Modularity/Abstraction in both the h/w and the toolchain.
112 \item "Reach" of registers accessible by Compressed is enhanced
113 \item Future: double the standard INT/FP register file sizes.
114 \end{itemize}
115 Note:
116 \begin{itemize}
117 \item It's not just about Vectors: it's about instruction effectiveness
\item Anything an implementor is not interested in HW-optimising,\\
let it fall through to exceptions (implement as a trap).
120 \end{itemize}
121 }
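
\begin{frame}[fragile]
\frametitle{Context-switch sketch: one vectorised LD covers the regfile}

A minimal Python model of the context-switch bullet above. Everything
here is an assumption for illustration only: regfile and mem are plain
lists, and one register marked as a vector with VL=31 makes a single
load cover x1..x31.

\begin{semiverbatim}
# Hypothetical model (not the spec): one vectorised load restores the
# whole integer regfile when rd=x1 is marked "vector" and VL=31.
def vectorised_load(regfile, mem, base_addr, rd, vl):
    for i in range(vl):                  # one element per register
        regfile[rd + i] = mem[base_addr + i]

regfile = [0] * 32                       # x0..x31
mem = list(range(100, 164))              # saved context sits here
vectorised_load(regfile, mem, 0, 1, 31)  # x1..x31 in ONE instruction
print(regfile[1], regfile[31])           # 100 130
\end{semiverbatim}

\begin{itemize}
\item Plain model: no elwidth, predication or x0 special-casing
\end{itemize}
\end{frame}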
122
123
124 \frame{\frametitle{How does Simple-V relate to RVV? What's different?}
125
126 \begin{itemize}
127 \item RVV very heavy-duty (excellent for supercomputing)\vspace{4pt}
128 \item Simple-V abstracts parallelism (based on best of RVV)\vspace{4pt}
129 \item Graded levels: hardware, hybrid or traps (fit impl. need)\vspace{4pt}
\item Even Compressed instructions become vectorised (RVV can't)\vspace{4pt}
131 \item No polymorphism in SV (too complex)\vspace{4pt}
132 \end{itemize}
133 What Simple-V is not:\vspace{4pt}
134 \begin{itemize}
135 \item A full supercomputer-level Vector Proposal\\
136 (it's not actually a Vector Proposal at all!)
137 \item A replacement for RVV (SV is designed to be over-ridden\\
138 by - or augmented to become - RVV)
139 \end{itemize}
140 }
141
142
143 \frame{\frametitle{How is Parallelism abstracted in Simple-V?}
144
145 \begin{itemize}
146 \item Register "typing" turns any op into an implicit Vector op:\\
147 registers are reinterpreted through a level of indirection
148 \item Primarily at the Instruction issue phase (except SIMD)\\
149 Note: it's ok to pass predication through to ALU (like SIMD)
150 \item Standard and future and custom opcodes now parallel\\
151 (crucially: with NO extra instructions needing to be added)
152 \end{itemize}
Note: EVERY scalar op is now parallelisable
154 \begin{itemize}
155 \item All LOAD/STORE (inc. Compressed, Int/FP versions)
156 \item All ALU ops (Int, FP, SIMD, DSP, everything)
157 \item All branches become predication targets (note: no FNE)
158 \item C.MV of particular interest (s/v, v/v, v/s)
159 \item FCVT, FMV, FSGNJ etc. very similar to C.MV
160 \end{itemize}
161 }
162
163
164 \frame{\frametitle{What's the deal / juice / score?}
165
166 \begin{itemize}
167 \item Standard Register File(s) overloaded with CSR "reg is vector"\\
168 (see pseudocode slides for examples)
169 \item "2nd FP\&INT register bank" possibility, reserved for future\\
170 (would allow standard regfiles to remain unmodified)
\item Element width concept remains the same as in RVV\\
(CSRs give new size: overrides opcode-defined meaning)
173 \item CSRs are key-value tables (overlaps allowed: v. important)
174 \end{itemize}
175 Key differences from RVV:
176 \begin{itemize}
177 \item Predication in INT reg as a BIT field (max VL=XLEN)
178 \item Minimum VL must be Num Regs - 1 (all regs single LD/ST)
179 \item SV may condense sparse Vecs: RVV cannot (SIMD-like):\\
180 SV gives choice to Zero or skip non-predicated elements\\
181 (no such choice in RVV: zeroing-only)
182 \end{itemize}
183 }
184
185
186 \begin{frame}[fragile]
187 \frametitle{ADD pseudocode (or trap, or actual hardware loop)}
188
189 \begin{semiverbatim}
function op\_add(rd, rs1, rs2, predr) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  for (i = 0; i < VL; i++)
    if (ireg[predr] & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (reg\_is\_vectorised[rd] )  \{ id += 1; \}
    if (reg\_is\_vectorised[rs1])  \{ irs1 += 1; \}
    if (reg\_is\_vectorised[rs2])  \{ irs2 += 1; \}
198 \end{semiverbatim}
199
200 \begin{itemize}
201 \item Above is oversimplified: Reg. indirection left out (for clarity).
202 \item SIMD slightly more complex (case above is elwidth = default)
203 \item Scalar-scalar and scalar-vector and vector-vector now all in one
204 \item OoO may choose to push ADDs into instr. queue (v. busy!)
205 \end{itemize}
206 \end{frame}
207
208 % yes it really *is* ADD not VADD. that's the entire point of
209 % this proposal, that *standard* operations are overloaded to
210 % become vectorised-on-demand
211
212
213 \begin{frame}[fragile]
214 \frametitle{Predication-Branch (or trap, or actual hardware loop)}
215
216 \begin{semiverbatim}
s1 = reg\_is\_vectorised(src1);
s2 = reg\_is\_vectorised(src2);
if (!s2 && !s1) goto branch;
for (int i = 0; i < VL; ++i)
   if (cmp(s1 ? reg[src1+i] : reg[src1],
           s2 ? reg[src2+i] : reg[src2]))
      ireg[rs3] |= 1<<i;
224 \end{semiverbatim}
225
226 \begin{itemize}
227 \item SIMD slightly more complex (case above is elwidth = default)
228 \item If s1 and s2 both scalars, Standard branch occurs
229 \item Predication stored in integer regfile as a bitfield
230 \item Scalar-vector and vector-vector supported
231 \item Overload Branch immediate to be predication target rs3
232 \end{itemize}
233 \end{frame}
234
235 \begin{frame}[fragile]
236 \frametitle{VLD/VLD.S/VLD.X (or trap, or actual hardware loop)}
237
238 \begin{semiverbatim}
if (unit-strided) stride = elsize;
else stride = areg[as2]; // constant-strided
for (int i = 0; i < VL; ++i)
  if ([!]preg[rd] & 1<<i)
    for (int j = 0; j < seglen+1; j++)
      if (reg\_is\_vectorised[rs2]) offs = vreg[rs2+i]
      else offs = i*(seglen+1)*stride;
      vreg[rd+j][i] = mem[sreg[base] + offs + j*stride]
247 \end{semiverbatim}
248
249 \begin{itemize}
250 \item Again: elwidth != default slightly more complex
251 \item rs2 vectorised taken to implicitly indicate VLD.X
252 \end{itemize}
253 \end{frame}
254
255
256 \frame{\frametitle{Register key-value CSR store (lookup table / CAM)}
257
258 \begin{itemize}
259 \item key is int regfile number or FP regfile number (1 bit)
260 \item treated as vector if referred to in op (5 bits, key)
261 \item starting register to actually be used (5 bits, value)
262 \item element bitwidth: default, dflt/2, 8, 16 (2 bits)
263 \item is vector: Y/N (1 bit)
264 \item is packed SIMD: Y/N (1 bit)
265 \item register bank: 0/reserved for future ext. (1 bit)
266 \end{itemize}
Notes (one possible bit-packing is sketched on the next slide):
268 \begin{itemize}
269 \item References different (internal) mapping table for INT or FP
270 \item Level of indirection has implications for pipeline latency
271 \item (future) bank bit, no need to extend opcodes: set bank=1,
272 just use normal 5-bit regs, indirection takes care of the rest.
273 \end{itemize}
274 }
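
\begin{frame}[fragile]
\frametitle{Register key-value CSR entry: possible bit-packing (sketch)}

The field list on the previous slide adds up to 16 bits
(1+5+5+2+1+1+1). The packing below is only an assumption for
illustration; no actual CSR layout is being specified here.

\begin{semiverbatim}
# Assumed 16-bit packing (illustrative only, not the spec)
def pack_reg_csr(fptype, regkey, regidx, elwidth, isvec, packed, bank):
    return (fptype | (regkey << 1) | (regidx << 6) | (elwidth << 11) |
            (isvec << 13) | (packed << 14) | (bank << 15))

def unpack_reg_csr(v):
    return (v & 1, (v >> 1) & 0x1f, (v >> 6) & 0x1f,
            (v >> 11) & 3, (v >> 13) & 1, (v >> 14) & 1, (v >> 15) & 1)

entry = pack_reg_csr(0, 13, 16, 0, 1, 0, 0)  # int reg 13 (a3) -> vec @ x16
print(unpack_reg_csr(entry))                 # (0, 13, 16, 0, 1, 0, 0)
\end{semiverbatim}

\begin{itemize}
\item 16 bits per entry means two entries fit per 32-bit CSR
\end{itemize}
\end{frame}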
275
276
277 \frame{\frametitle{Register element width and packed SIMD}
278
279 Packed SIMD = N:
280 \begin{itemize}
281 \item default: RV32/64/128 opcodes define elwidth = 32/64/128
282 \item default/2: RV32/64/128 opcodes, elwidth = 16/32/64 with
283 top half of register ignored (src), zero'd/s-ext (dest)
284 \item 8 or 16: elwidth = 8 (or 16), similar to default/2
285 \end{itemize}
286 Packed SIMD = Y (default is moot, packing is 1:1)
287 \begin{itemize}
288 \item default/2: 2 elements per register @ opcode-defined bitwidth
289 \item 8 or 16: standard 8 (or 16) packed SIMD
290 \end{itemize}
291 Notes:
292 \begin{itemize}
\item Different src/dest widths (and packs) PERMITTED (sketch on next slide)
294 \item RV* already allows (and defines) how RV32 ops work in RV64\\
295 so just logically follow that lead/example.
296 \end{itemize}
297 }
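
\begin{frame}[fragile]
\frametitle{Element width remap: illustrative byte-level sketch}

A sketch of the Packed SIMD = Y case only, under the assumption (for
illustration, not the spec) that the register file can be viewed as a
byte array: the elwidth override decides which bytes element i occupies.

\begin{semiverbatim}
XLEN_BYTES = 8
regfile_bytes = bytearray(32 * XLEN_BYTES)   # RV64: 32 x 8 bytes

def read_element(start_reg, elwidth_bytes, i):
    offs = start_reg * XLEN_BYTES + i * elwidth_bytes
    return bytes(regfile_bytes[offs:offs + elwidth_bytes])

def write_element(start_reg, elwidth_bytes, i, value_bytes):
    offs = start_reg * XLEN_BYTES + i * elwidth_bytes
    regfile_bytes[offs:offs + elwidth_bytes] = value_bytes

write_element(5, 2, 3, (1234).to_bytes(2, "little"))  # 16-bit element 3
print(int.from_bytes(read_element(5, 2, 3), "little"))  # 1234
# element 3 sits in the top half of reg 5; element 4 would start reg 6
\end{semiverbatim}

\begin{itemize}
\item Packed SIMD = N (one element per register, top half ignored)
is not modelled above
\end{itemize}
\end{frame}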
298
299
300 \begin{frame}[fragile]
301 \frametitle{Register key-value CSR table decoding pseudocode}
302
303 \begin{semiverbatim}
struct vectorised fp\_vec[32], int\_vec[32];
for (i = 0; i < 16; i++) // 16 CSRs?
   tb = int\_vec if CSRvec[i].type == 0 else fp\_vec
   idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
   tb[idx].elwidth  = CSRvec[i].elwidth
   tb[idx].regidx   = CSRvec[i].regidx   // indirection
   tb[idx].regidx  += CSRvec[i].bank << 5 // 0 (1=rsvd)
   tb[idx].isvector = CSRvec[i].isvector
   tb[idx].packed   = CSRvec[i].packed   // SIMD or not
   tb[idx].enabled  = true
314 \end{semiverbatim}
315
316 \begin{itemize}
317 \item All 32 int (and 32 FP) entries zero'd before setup
318 \item Might be a bit complex to set up in hardware (keep as CAM?)
319 \end{itemize}
320
321 \end{frame}
322
323
324 \frame{\frametitle{Predication key-value CSR store}
325
326 \begin{itemize}
327 \item key is int regfile number or FP regfile number (1 bit)
328 \item register to be predicated if referred to (5 bits, key)
329 \item INT reg with actual predication mask (5 bits, value)
330 \item predication is inverted Y/N (1 bit)
331 \item non-predicated elements are to be zero'd Y/N (1 bit)
332 \item register bank: 0/reserved for future ext. (1 bit)
333 \end{itemize}
334 Notes:\vspace{10pt}
335 \begin{itemize}
336 \item Table should be expanded out for high-speed implementations
337 \item Key-value overlaps permitted, but (key+type) must be unique
338 \item RVV rules about deleting higher-indexed CSRs followed
339 \end{itemize}
340 }
341
342
343 \begin{frame}[fragile]
344 \frametitle{Predication key-value CSR table decoding pseudocode}
345
346 \begin{semiverbatim}
struct pred fp\_pred[32], int\_pred[32];
for (i = 0; i < 16; i++) // 16 CSRs?
   tb = int\_pred if CSRpred[i].type == 0 else fp\_pred
   idx = CSRpred[i].regkey
   tb[idx].zero     = CSRpred[i].zero    // zeroing
   tb[idx].inv      = CSRpred[i].inv     // inverted
   tb[idx].predidx  = CSRpred[i].predidx // actual reg
   tb[idx].predidx += CSRpred[i].bank << 5 // 0 (1=rsvd)
   tb[idx].enabled  = true
356 \end{semiverbatim}
357
358 \begin{itemize}
359 \item All 32 int and 32 FP entries zero'd before setting\\
360 (predication disabled)
361 \item Might be a bit complex to set up in hardware (keep as CAM?)
362 \end{itemize}
363
364 \end{frame}
365
366
367 \begin{frame}[fragile]
368 \frametitle{Get Predication value pseudocode}
369
370 \begin{semiverbatim}
def get\_pred\_val(bool is\_fp\_op, int reg):
   tb = fp\_pred if is\_fp\_op else int\_pred
   if (!tb[reg].enabled): return ~0x0 // all ops enabled
   predidx = tb[reg].predidx    // redirection occurs HERE
   predidx += tb[reg].bank << 5 // 0 (1=rsvd)
   predicate = intreg[predidx]  // actual predicate HERE
   if (tb[reg].inv):
      predicate = ~predicate    // invert ALL bits
   return predicate
380 \end{semiverbatim}
381
382 \begin{itemize}
383 \item References different (internal) mapping table for INT or FP
384 \item Actual predicate bitmask ALWAYS from the INT regfile
385 \item Hard-limit on MVL of XLEN (predication only 1 intreg)
386 \end{itemize}
387
388 \end{frame}
389
390
391 \frame{\frametitle{To Zero or not to place zeros in non-predicated elements?}
392
393 \begin{itemize}
\item Zeroing is an implementation optimisation favouring OoO
\item Without zeroing, simple implementations may skip non-predicated ops
\item With zeroing, simple implementations must explicitly overwrite data
397 \item Complex implementations may use reg-renames to save power\\
398 Zeroing on predication chains makes optimisation harder
\item Compromise: REQUIRE both (specified in predication CSRs;
contrast sketched on the next slide).
400 \end{itemize}
401 Considerations:
402 \begin{itemize}
403 \item Complex not really impacted, simple impacted a LOT\\
404 with Zeroing... however it's useful (memzero)
405 \item Non-zero'd overlapping "Vectors" may issue overlapping ops\\
406 (2nd op's predicated elements slot in 1st's non-predicated ops)
407 \item Please don't use Vectors for "security" (use Sec-Ext)
408 \end{itemize}
409 }
410 % with overlapping "vectors" - bearing in mind that "vectors" are
411 % just a remap onto the standard register file, if the top bits of
412 % predication are zero, and there happens to be a second vector
413 % that uses some of the same register file that happens to be
414 % predicated out, the second vector op may be issued *at the same time*
415 % if there are available parallel ALUs to do so.
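
\begin{frame}[fragile]
\frametitle{Zeroing vs skipping: illustrative sketch}

A contrast of the two predication modes selectable in the predication
CSRs, using a deliberately simplified element-at-a-time Python model
(an assumption for illustration, not an implementation):

\begin{semiverbatim}
def pred_op_zeroing(dest, src1, src2, pred, vl):
    for i in range(vl):
        if (pred >> i) & 1:
            dest[i] = src1[i] + src2[i]
        else:
            dest[i] = 0            # explicitly overwrite (zero)

def pred_op_skipping(dest, src1, src2, pred, vl):
    for i in range(vl):
        if (pred >> i) & 1:
            dest[i] = src1[i] + src2[i]
        # else: element not written (may be skipped entirely)

d = [9] * 4
pred_op_zeroing(d, [1, 2, 3, 4], [10, 20, 30, 40], 0b0101, 4)
print(d)   # [11, 0, 33, 0]
d = [9] * 4
pred_op_skipping(d, [1, 2, 3, 4], [10, 20, 30, 40], 0b0101, 4)
print(d)   # [11, 9, 33, 9]
\end{semiverbatim}
\end{frame}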
416
417
418 \frame{\frametitle{Implementation Options}
419
420 \begin{itemize}
421 \item Absolute minimum: Exceptions: if CSRs indicate "V", trap.\\
422 (Requires as absolute minimum that CSRs be in Hardware)
423 \item Hardware loop, single-instruction issue\\
424 (Do / Don't send through predication to ALU)
425 \item Hardware loop, parallel (multi-instruction) issue\\
426 (Do / Don't send through predication to ALU)
427 \item Hardware loop, full parallel ALU (not recommended)
428 \end{itemize}
429 Notes:\vspace{4pt}
430 \begin{itemize}
431 \item 4 (or more?) options above may be deployed on per-op basis
432 \item SIMD always sends predication bits to ALU (if requested)
433 \item Minimum MVL MUST be sufficient to cover regfile LD/ST
\item Instr. FIFO may repeatedly split off N scalar ops at a time\\
(sketch on the next slide)
435 \end{itemize}
436 }
437 % Instr. FIFO may need its own slide. Basically, the vectorised op
438 % gets pushed into the FIFO, where it is then "processed". Processing
439 % will remove the first set of ops from its vector numbering (taking
440 % predication into account) and shoving them **BACK** into the FIFO,
441 % but MODIFYING the remaining "vectorised" op, subtracting the now
442 % scalar ops from it.
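
\begin{frame}[fragile]
\frametitle{Instr. FIFO splitting: illustrative sketch}

A Python sketch of the FIFO idea mentioned above. The data structure
and names (VecOp, issue\_pass) are assumptions for illustration only:
each pass peels off up to N predicated element ops from the head
vector op and pushes the shrunk remainder back.

\begin{semiverbatim}
from collections import deque, namedtuple

VecOp = namedtuple("VecOp", "opcode rd rs1 rs2 start vl pred")

def issue_pass(fifo, n):
    op = fifo.popleft()
    issued = []
    i = op.start
    while i < op.vl and len(issued) < n:
        if (op.pred >> i) & 1:          # predication honoured
            issued.append((op.opcode, op.rd + i, op.rs1 + i, op.rs2 + i))
        i += 1
    if i < op.vl:                       # remainder goes back
        fifo.appendleft(op._replace(start=i))
    return issued

fifo = deque([VecOp("add", 8, 16, 24, 0, 8, 0b11011011)])
while fifo:
    print(issue_pass(fifo, 4))          # up to 4 scalar adds per pass
\end{semiverbatim}

\begin{itemize}
\item The note above re-enqueues the scalar ops; here they are
returned directly for brevity
\end{itemize}
\end{frame}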
443
444 \frame{\frametitle{Predicated 8-parallel ADD: 1-wide ALU (no zeroing)}
445 \begin{center}
446 \includegraphics[height=2.5in]{padd9_alu1.png}\\
447 {\bf \red Predicated adds are shuffled down: 6 cycles in total}
448 \end{center}
449 }
450
451
452 \frame{\frametitle{Predicated 8-parallel ADD: 4-wide ALU (no zeroing)}
453 \begin{center}
454 \includegraphics[height=2.5in]{padd9_alu4.png}\\
455 {\bf \red Predicated adds are shuffled down: 4 in 1st cycle, 2 in 2nd}
456 \end{center}
457 }
458
459
460 \frame{\frametitle{Predicated 8-parallel ADD: 3 phase FIFO expansion}
461 \begin{center}
462 \includegraphics[height=2.5in]{padd9_fifo.png}\\
463 {\bf \red First cycle takes first four 1s; second takes the rest}
464 \end{center}
465 }
466
467
468 \begin{frame}[fragile]
469 \frametitle{ADD pseudocode with redirection (and proper predication)}
470
471 \begin{semiverbatim}
function op\_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  predval = get\_pred\_val(FALSE, rd);
  rd  = int\_vec[rd ].isvector ? int\_vec[rd ].regidx : rd;
  rs1 = int\_vec[rs1].isvector ? int\_vec[rs1].regidx : rs1;
  rs2 = int\_vec[rs2].isvector ? int\_vec[rs2].regidx : rs2;
  for (i = 0; i < VL; i++)
    if (predval \& 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (int\_vec[rd ].isvector)  \{ id += 1; \}
    if (int\_vec[rs1].isvector)  \{ irs1 += 1; \}
    if (int\_vec[rs2].isvector)  \{ irs2 += 1; \}
484 \end{semiverbatim}
485
486 \begin{itemize}
487 \item SIMD (elwidth != default) not covered above
488 \end{itemize}
489 \end{frame}
490
491
492 \frame{\frametitle{How are SIMD Instructions Vectorised?}
493
494 \begin{itemize}
495 \item SIMD ALU(s) primarily unchanged
496 \item Predication added down to each SIMD element (if requested,
497 otherwise entire block will be predicated as a whole)
\item Predication bits sent in groups to the ALU (if requested,
otherwise just one bit for the entire packed block); sketch on next slide
500 \item End of Vector enables (additional) predication:
501 completely nullifies end-case code (ONLY in multi-bit
502 predication mode)
503 \end{itemize}
504 Considerations:
505 \begin{itemize}
506 \item Many SIMD ALUs possible (parallel execution)
507 \item Implementor free to choose (API remains the same)
508 \item Unused ALU units wasted, but s/w DRASTICALLY simpler
509 \item Very long SIMD ALUs could waste significant die area
510 \end{itemize}
511 }
512 % With multiple SIMD ALUs at for example 32-bit wide they can be used
513 % to either issue 64-bit or 128-bit or 256-bit wide SIMD operations
514 % or they can be used to cover several operations on totally different
515 % vectors / registers.
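
\begin{frame}[fragile]
\frametitle{Predication bits grouped per SIMD block (sketch)}

A sketch (illustrative assumption: 4-wide SIMD blocks, one mask bit
per lane) of how per-element predicate bits group into per-block
masks, with lanes beyond VL masked off. This corresponds to the
9-element example pictured on the next slide.

\begin{semiverbatim}
def simd_masks(pred, vl, simd_width=4):
    masks = []
    for block_start in range(0, vl, simd_width):
        mask = 0
        for lane in range(simd_width):
            i = block_start + lane
            if i < vl and (pred >> i) & 1:   # beyond-VL lanes stay 0
                mask |= 1 << lane
        masks.append(mask)
    return masks

print(simd_masks(0b111111111, 9))  # [15, 15, 1] = 1111, 1111, 0001
\end{semiverbatim}

\begin{itemize}
\item The masked-off lanes in the final block are what remove
SIMD end-case code
\end{itemize}
\end{frame}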
516
517 \frame{\frametitle{Predicated 9-parallel SIMD ADD (Packed=Y)}
518 \begin{center}
519 \includegraphics[height=2.5in]{padd9_simd.png}\\
520 {\bf \red 4-wide 8-bit SIMD, 4 bits of predicate passed to ALU}
521 \end{center}
522 }
523
524
525 \frame{\frametitle{Why are overlaps allowed in Regfiles?}
526
527 \begin{itemize}
528 \item Same target register(s) can have multiple "interpretations"
529 \item CSRs are costly to write to (do it once)
530 \item Set "real" register (scalar) without needing to set/unset CSRs.
531 \item xBitManip plus SIMD plus xBitManip = Hi/Lo bitops
532 \item (32-bit GREV plus 4x8-bit SIMD plus 32-bit GREV:\\
533 GREV @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
534 \item RGB 565 (video): BEXTW plus 4x8-bit SIMD plus BDEPW\\
535 (BEXT/BDEP @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
\item Same register(s) can be offset (no need for VSLIDE): sketch on next slide\vspace{6pt}
537 \end{itemize}
538 Note:
539 \begin{itemize}
540 \item xBitManip reduces O($N^{6}$) SIMD down to O($N^{3}$) on its own.
541 \item Hi-Performance: Macro-op fusion (more pipeline stages?)
542 \end{itemize}
543 }
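
\begin{frame}[fragile]
\frametitle{Overlapping register CSR entries: illustrative sketch}

An illustrative sketch (the key-value table below is an assumption,
simplified to a Python list) of two overlapping entries: keys a3 and
a7 map onto overlapping physical registers, offset by two, giving a
VSLIDE-like view with no extra opcode.

\begin{semiverbatim}
csr_entries = [
    # (key, start_reg, is_vector)
    ("a3", 16, True),    # vector starting at x16
    ("a7", 18, True),    # overlapping vector starting at x18 (offset 2)
]

def element_reg(key, i):
    for k, start, isvec in csr_entries:
        if k == key:
            return start + i if isvec else start
    return None   # key not in table: plain scalar register

print([element_reg("a3", i) for i in range(4)])   # [16, 17, 18, 19]
print([element_reg("a7", i) for i in range(4)])   # [18, 19, 20, 21]
\end{semiverbatim}
\end{frame}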
544
545
546 \frame{\frametitle{C.MV extremely flexible!}
547
548 \begin{itemize}
549 \item scalar-to-vector (w/ no pred): VSPLAT
550 \item scalar-to-vector (w/ dest-pred): Sparse VSPLAT
551 \item scalar-to-vector (w/ 1-bit dest-pred): VINSERT
552 \item vector-to-scalar (w/ [1-bit?] src-pred): VEXTRACT
553 \item vector-to-vector (w/ no pred): Vector Copy
554 \item vector-to-vector (w/ src pred): Vector Gather (inc VSLIDE)
555 \item vector-to-vector (w/ dest pred): Vector Scatter (inc. VSLIDE)
556 \item vector-to-vector (w/ src \& dest pred): Vector Gather/Scatter
557 \end{itemize}
558 \vspace{4pt}
559 Notes:
560 \begin{itemize}
561 \item Surprisingly powerful! Zero-predication even more so
562 \item Same arrangement for FCVT, FMV, FSGNJ etc.
563 \end{itemize}
564 }
565
566
567 \begin{frame}[fragile]
568 \frametitle{MV pseudocode with predication}
569
570 \begin{semiverbatim}
function op\_mv(rd, rs) # MV not VMV!
  rd = int\_vec[rd].isvector ? int\_vec[rd].regidx : rd;
  rs = int\_vec[rs].isvector ? int\_vec[rs].regidx : rs;
  ps = get\_pred\_val(FALSE, rs); # predication on src
  pd = get\_pred\_val(FALSE, rd); # ... AND on dest
  for (int i = 0, int j = 0; i < VL && j < VL;)
    if (int\_vec[rs].isvec) while (!(ps \& 1<<i)) i++;
    if (int\_vec[rd].isvec) while (!(pd \& 1<<j)) j++;
    ireg[rd+j] <= ireg[rs+i];
    if (int\_vec[rs].isvec) i++;
    if (int\_vec[rd].isvec) j++;
582 \end{semiverbatim}
583
584 \begin{itemize}
585 \item elwidth != default not covered above (might be a bit hairy)
586 \item Ending early with 1-bit predication not included (VINSERT)
587 \end{itemize}
588 \end{frame}
589
590
591 \begin{frame}[fragile]
592 \frametitle{VSELECT: stays or goes? Stays if MV.X exists...}
593
594 \begin{semiverbatim}
def op_mv_x(rd, rs):          # (hypothetical) RV MV.X
    rs = regfile[rs]          # level of indirection (MV.X)
    regfile[rd] = regfile[rs] # straight regcopy
598 \end{semiverbatim}
599
600 Vectorised version aka "VSELECT":
601
602 \begin{semiverbatim}
def op_mv_x(rd, rs):                 # SV version of MV.X
    for i in range(VL):
        rs1 = regfile[rs+i]          # indirection
        regfile[rd+i] = regfile[rs1] # straight regcopy
607 \end{semiverbatim}
608
609 \begin{itemize}
610 \item However MV.X does not exist in RV, so neither can VSELECT
611 \item \red SV is not about adding new functionality, only parallelism
612 \end{itemize}
613
614
615 \end{frame}
616
617
618 \frame{\frametitle{Opcodes, compared to RVV}
619
620 \begin{itemize}
\item All integer and FP opcodes removed (no CLIP, FNE)
622 \item VMPOP, VFIRST etc. all removed (use xBitManip)
623 \item VSLIDE removed (use regfile overlaps)
624 \item C.MV covers VEXTRACT VINSERT and VSPLAT (and more)
625 \item Vector (or scalar-vector) copy: use C.MV (MV is a pseudo-op)
\item VMERGE: twin predicated C.MVs (one inverted, macro-op fused); sketch follows
627 \item VSETVL, VGETVL stay (the only ops that do!)
628 \end{itemize}
629 Issues:
630 \begin{itemize}
631 \item VSELECT stays? no MV.X, so no (add with custom ext?)
632 \item VSNE exists, but no FNE (use predication inversion?)
633 \item VCLIP is not in RV* (add with custom ext? or CSR?)
634 \end{itemize}
635 }
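
\begin{frame}[fragile]
\frametitle{VMERGE from twin predicated C.MVs (sketch)}

A sketch of the VMERGE bullet above, using a simplified
element-at-a-time model (an assumption for illustration): two
predicated MVs, the second with the predicate inverted, merge two
sources into one destination with no dedicated VMERGE opcode.

\begin{semiverbatim}
def pred_mv(dest, src, pred, vl):
    for i in range(vl):
        if (pred >> i) & 1:
            dest[i] = src[i]

def vmerge(dest, src1, src2, pred, vl):
    pred_mv(dest, src1, pred, vl)                     # C.MV, predicate
    pred_mv(dest, src2, ~pred & ((1 << vl) - 1), vl)  # C.MV, inverted

d = [0] * 4
vmerge(d, [1, 2, 3, 4], [10, 20, 30, 40], 0b0011, 4)
print(d)   # [1, 2, 30, 40]
\end{semiverbatim}
\end{frame}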
636
637
638 \begin{frame}[fragile]
639 \frametitle{Example c code: DAXPY}
640
641 \begin{semiverbatim}
642 void daxpy(size_t n, double a,
643 const double x[], double y[])
644 \{
645 for (size_t i = 0; i < n; i++) \{
646 y[i] = a*x[i] + y[i];
647 \}
648 \}
649 \end{semiverbatim}
650
651 \begin{itemize}
652 \item See "SIMD Considered Harmful" for SIMD/RVV analysis\\
653 https://sigarch.org/simd-instructions-considered-harmful/
654 \end{itemize}
655
656
657 \end{frame}
658
659
660 \begin{frame}[fragile]
661 \frametitle{RVV DAXPY assembly (RV32V)}
662
663 \begin{semiverbatim}
664 # a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
665 li t0, 2<<25
666 vsetdcfg t0 # enable 2 64b Fl.Pt. registers
667 loop:
668 setvl t0, a0 # vl = t0 = min(mvl, n)
669 vld v0, a1 # load vector x
670 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
671 vld v1, a2 # load vector y
672 add a1, a1, t1 # increment pointer to x by vl*8
673 vfmadd v1, v0, fa0, v1 # v1 += v0 * fa0 (y = a * x + y)
674 sub a0, a0, t0 # n -= vl (t0)
675 vst v1, a2 # store Y
676 add a2, a2, t1 # increment pointer to y by vl*8
677 bnez a0, loop # repeat if n != 0
678 \end{semiverbatim}
679 \end{frame}
680
681
682 \begin{frame}[fragile]
683 \frametitle{SV DAXPY assembly (RV64D)}
684
685 \begin{semiverbatim}
686 # a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
687 CSRvect1 = \{type: F, key: a3, val: a3, elwidth: dflt\}
688 CSRvect2 = \{type: F, key: a7, val: a7, elwidth: dflt\}
689 loop:
setvl t0, a0, 4 # vl = t0 = min(min(63, 4), a0)
691 ld a3, a1 # load 4 registers a3-6 from x
692 slli t1, t0, 3 # t1 = vl * 8 (in bytes)
693 ld a7, a2 # load 4 registers a7-10 from y
694 add a1, a1, t1 # increment pointer to x by vl*8
695 fmadd a7, a3, fa0, a7 # v1 += v0 * fa0 (y = a * x + y)
696 sub a0, a0, t0 # n -= vl (t0)
697 st a7, a2 # store 4 registers a7-10 to y
698 add a2, a2, t1 # increment pointer to y by vl*8
699 bnez a0, loop # repeat if n != 0
700 \end{semiverbatim}
701 \end{frame}
702
703
704 \frame{\frametitle{Under consideration (some answers documented)}
705
706 \begin{itemize}
707 \item Should future extra bank be included now?
708 \item How many Register and Predication CSRs should there be?\\
709 (and how many in RV32E)
710 \item How many in M-Mode (for doing context-switch)?
711 \item Should use of registers be allowed to "wrap" (x30 x31 x1 x2)?
\item Can CLIP be done as a CSR (mode, like elwidth)?
713 \item SIMD saturation (etc.) also set as a mode?
714 \item Include src1/src2 predication on Comparison Ops?\\
715 (same arrangement as C.MV, with same flexibility/power)
\item For 8/16-bit ops, is it worthwhile adding a "start offset"?\\
(a bit like misaligned addressing... for registers)\\
or just use predication to skip the start?
719 \item see http://libre-riscv.org/simple\_v\_extension/\#issues
720 \end{itemize}
721 }
722
723
724 \frame{\frametitle{What's the downside(s) of SV?}
725 \begin{itemize}
726 \item EVERY register operation is inherently parallelised\\
727 (scalar ops are just vectors of length 1)\vspace{4pt}
728 \item Tightly coupled with the core (instruction issue)\\
729 could be disabled through MISA switch\vspace{4pt}
730 \item An extra pipeline phase almost certainly essential\\
731 for fast low-latency implementations\vspace{4pt}
732 \item With zeroing off, skipping non-predicated elements is hard:\\
733 it is however an optimisation (and need not be done).\vspace{4pt}
734 \item Setting up the Register/Predication tables (interpreting the\\
735 CSR key-value stores) might be a bit complex to optimise
736 (any change to a CSR key-value entry needs to redo the table)
737 \end{itemize}
738 }
739
740
741 \frame{\frametitle{Summary}
742
743 \begin{itemize}
744 \item Actually about parallelism, not Vectors (or SIMD) per se\\
745 and NOT about adding new ALU/logic/functionality.
746 \item Only needs 2 actual instructions (plus the CSRs).\\
747 RVV - and "standard" SIMD - require ISA duplication
748 \item Designed for flexibility (graded levels of complexity)
749 \item Huge range of implementor freedom
750 \item Fits RISC-V ethos: achieve more with less
\item Reduces SIMD ISA proliferation by 3-4 orders of magnitude \\
(without SIMD downsides and without sacrificing speed)
753 \item Covers 98\% of RVV, allows RVV to fit "on top"
754 \item Byproduct of SV is a reduction in code size, power usage
755 etc. (increase efficiency, just like Compressed)
756 \end{itemize}
757 }
758
759
760 \frame{
761 \begin{center}
762 {\Huge The end\vspace{20pt}\\
763 Thank you\vspace{20pt}\\
764 Questions?\vspace{20pt}
765 }
766 \end{center}
767
768 \begin{itemize}
769 \item Discussion: ISA-DEV mailing list
770 \item http://libre-riscv.org/simple\_v\_extension/
771 \end{itemize}
772 }
773
774
775 \end{document}