\documentclass[slidestop]{beamer}
\usepackage{beamerthemesplit}
\usepackage{graphics}
\usepackage{pstricks}

\title{Simple-V RISC-V Extension for Vectorisation and SIMD}
\author{Luke Kenneth Casson Leighton}


\begin{document}

\frame{
\begin{center}
\huge{Simple-V RISC-V Parallelism Abstraction Extension}\\
\vspace{32pt}
\Large{Flexible Vectorisation}\\
\Large{(aka not so Simple-V?)}\\
\Large{(aka A Parallelism API for the RISC-V ISA)}\\
\vspace{24pt}
\Large{[proposed for] Chennai 9th RISC-V Workshop}\\
\vspace{16pt}
\large{\today}
\end{center}
}


\frame{\frametitle{Credits and Acknowledgements}

\begin{itemize}
\item The Designers of RISC-V\vspace{15pt}
\item The RVV Working Group and contributors\vspace{15pt}
\item Allen Baum, Jacob Bachmeyer, Xan Phung, Chuanhua Chang,\\
Guy Lemurieux, Jonathan Neuschafer, Roger Brussee,
and others\vspace{15pt}
\item ISA-Dev Group Members\vspace{10pt}
\end{itemize}
}


\frame{\frametitle{Quick refresher on SIMD}

\begin{itemize}
\item SIMD very easy to implement (and very seductive)\vspace{8pt}
\item Parallelism is in the ALU\vspace{8pt}
\item Zero-to-Negligible impact for rest of core\vspace{8pt}
\end{itemize}
Where SIMD Goes Wrong:\vspace{10pt}
\begin{itemize}
\item See "SIMD instructions considered harmful"
https://sigarch.org/simd-instructions-considered-harmful
\item Setup and corner-cases alone are extremely complex.\\
Hardware is easy, but software is hell.
\item O($N^{6}$) ISA opcode proliferation!\\
opcode, elwidth, veclen, src1-src2-dest hi/lo
\end{itemize}
}

\frame{\frametitle{Quick refresher on RVV}

\begin{itemize}
\item Effectively a variant of SIMD / SIMT (arbitrary length)\vspace{4pt}
\item Extremely powerful (extensible to 256 registers)\vspace{4pt}
\item Supports polymorphism, several datatypes (inc. FP16)\vspace{4pt}
\item Requires a separate Register File (32 w/ext to 256)\vspace{4pt}
\item Implemented as a separate pipeline (no impact on scalar)
\end{itemize}
However...
\begin{itemize}
\item 98 percent opcode duplication with rest of RV (CLIP)
\item Extending RVV requires customisation not just of h/w:\\
gcc, binutils also need customisation (and maintenance)
\item Fascinatingly, despite being a SIMD-variant, RVV only has
O(N) opcode proliferation! (extremely well designed)
\end{itemize}
}


\frame{\frametitle{The Simon Sinek lowdown (Why, How, What)}

\begin{itemize}
\item Why?
Implementors need flexibility in vectorisation to optimise for
area or performance depending on the scope:
embedded DSP, Mobile GPUs, Server CPUs and more.\\
Compilers also need flexibility in vectorisation to optimise for the cost
of pipeline setup, the amount of state to context-switch,
and software portability
\item How?
By marking INT/FP regs as "Vectorised" and
adding a level of indirection,
SV expresses how existing instructions should act
on [contiguous] blocks of registers, in parallel, WITHOUT
needing any new extra arithmetic opcodes.
\item What?
Simple-V is an "API" that implicitly extends
existing (scalar) instructions with explicit parallelisation\\
i.e. SV is actually about parallelism NOT vectors per se.\\
Has a lot in common with VLIW (without the actual VLIW).
\end{itemize}
}


\frame{\frametitle{What's the value of SV? Why adopt it even in non-V?}

\begin{itemize}
\item memcpy becomes much smaller (higher bang-per-buck)
\item context-switch (LOAD/STORE multiple): 1-2 instructions
\item Compressed instrs further reduce I-cache usage (etc.)
\item Reduced I-cache load (and fewer I-reads)
\item Amazingly, SIMD becomes tolerable (no corner-cases)
\item Modularity/Abstraction in both the h/w and the toolchain.
\item "Reach" of registers accessible by Compressed is enhanced
\item Future: double the standard INT/FP register file sizes.
\end{itemize}
Note:
\begin{itemize}
\item It's not just about Vectors: it's about instruction effectiveness
\item Anything the implementor is not interested in HW-optimising,\\
let it fall through to exceptions (implement as a trap).
\end{itemize}
}


\frame{\frametitle{How does Simple-V relate to RVV? What's different?}

\begin{itemize}
\item RVV very heavy-duty (excellent for supercomputing)\vspace{8pt}
\item Simple-V abstracts parallelism (based on best of RVV)\vspace{8pt}
\item Graded levels: hardware, hybrid or traps (fit impl. need)\vspace{8pt}
\item Even Compressed instructions become vectorised (RVV can't)\vspace{8pt}
\item No polymorphism in SV (too complex)\vspace{8pt}
\end{itemize}
What Simple-V is not:\vspace{4pt}
\begin{itemize}
\item A full supercomputer-level Vector Proposal
\item A replacement for RVV (SV is designed to be overridden\\
by - or augmented to become - RVV)
\end{itemize}
}


\frame{\frametitle{How is Parallelism abstracted in Simple-V?}

\begin{itemize}
\item Register "typing" turns any op into an implicit Vector op:\\
registers are reinterpreted through a level of indirection
\item Primarily at the Instruction issue phase (except SIMD)\\
Note: it's ok to pass predication through to ALU (like SIMD)
\item Standard, future and custom opcodes now parallel\\
(crucially: with NO extra instructions needing to be added)
\end{itemize}
Note: EVERYTHING is parallelised:
\begin{itemize}
\item All LOAD/STORE (inc. Compressed, Int/FP versions)
\item All ALU ops (Int, FP, SIMD, DSP, everything)
\item All branches become predication targets (C.FNE added?)
\item C.MV of particular interest (s/v, v/v, v/s)
\item FCVT, FMV, FSGNJ etc. very similar to C.MV
\end{itemize}
}


\frame{\frametitle{What's the deal / juice / score?}

\begin{itemize}
\item Standard Register File(s) overloaded with CSR "reg is vector"\\
(see pseudocode slides for examples)
\item "2nd FP\&INT register bank" possibility, reserved for future\\
(would allow standard regfiles to remain unmodified)
\item Element width concept remains the same as RVV\\
(CSRs give new size: overrides opcode-defined meaning)
\item CSRs are key-value tables (overlaps allowed: v. important)
\end{itemize}
Key differences from RVV:
\begin{itemize}
\item Predication in INT reg as a BIT field (max VL=XLEN)
\item Minimum VL must be Num Regs - 1 (all regs in a single LD/ST)
\item SV may condense sparse Vecs; RVV (being SIMD-like here) cannot:\\
SV gives a choice to zero or skip non-predicated elements\\
(no such choice in RVV: zeroing only)
\end{itemize}
}


\begin{frame}[fragile]
\frametitle{ADD pseudocode (or trap, or actual hardware loop)}

\begin{semiverbatim}
function op\_add(rd, rs1, rs2, predr) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  for (i = 0; i < VL; i++)
    if (ireg[predr] & 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (reg\_is\_vectorised[rd] )  \{ id += 1; \}
    if (reg\_is\_vectorised[rs1])  \{ irs1 += 1; \}
    if (reg\_is\_vectorised[rs2])  \{ irs2 += 1; \}
\end{semiverbatim}

\begin{itemize}
\item Above is oversimplified: Reg. indirection left out (for clarity).
\item SIMD slightly more complex (case above is elwidth = default)
\item Scalar-scalar and scalar-vector and vector-vector now all in one
\item OoO may choose to push ADDs into instr. queue (v. busy!)
\end{itemize}
\end{frame}

% yes it really *is* ADD not VADD. that's the entire point of
% this proposal, that *standard* operations are overloaded to
% become vectorised-on-demand


\begin{frame}[fragile]
\frametitle{Predication-Branch (or trap, or actual hardware loop)}

\begin{semiverbatim}
s1 = reg\_is\_vectorised(src1);
s2 = reg\_is\_vectorised(src2);
if (!s2 && !s1) goto branch;
for (int i = 0; i < VL; ++i)
  if (cmp(s1 ? reg[src1+i] : reg[src1],
          s2 ? reg[src2+i] : reg[src2]))
    ireg[rs3] |= 1<<i;
\end{semiverbatim}

\begin{itemize}
\item SIMD slightly more complex (case above is elwidth = default)
\item If s1 and s2 both scalars, Standard branch occurs
\item Predication stored in integer regfile as a bitfield
\item Scalar-vector and vector-vector supported
\item Overload Branch immediate to be predication target rs3
\end{itemize}
\end{frame}

\begin{frame}[fragile]
\frametitle{VLD/VLD.S/VLD.X (or trap, or actual hardware loop)}

\begin{semiverbatim}
if (unit-strided) stride = elsize;
else stride = areg[as2]; // constant-strided
for (int i = 0; i < VL; ++i)
  if ([!]preg[rd] & 1<<i)
    for (int j = 0; j < seglen+1; j++)
      if (reg\_is\_vectorised[rs2]) offs = vreg[rs2+i]
      else offs = i*(seglen+1)*stride;
      vreg[rd+j][i] = mem[sreg[base] + offs + j*stride]
\end{semiverbatim}

\begin{itemize}
\item Again: elwidth != default slightly more complex
\item rs2 vectorised taken to implicitly indicate VLD.X
\end{itemize}
\end{frame}


\frame{\frametitle{Register key-value CSR store (lookup table / CAM)}

\begin{itemize}
\item key is int regfile number or FP regfile number (1 bit)
\item treated as vector if referred to in op (5 bits, key)
\item starting register to actually be used (5 bits, value)
\item element bitwidth: default, dflt/2, 8, 16 (2 bits)
\item is vector: Y/N (1 bit)
\item is packed SIMD: Y/N (1 bit)
\item register bank: 0/reserved for future ext. (1 bit)
\end{itemize}
Notes:
\begin{itemize}
\item References different (internal) mapping table for INT or FP
\item Level of indirection has implications for pipeline latency
\item (future) bank bit, no need to extend opcodes: set bank=1,
just use normal 5-bit regs, indirection takes care of the rest.
\end{itemize}
}
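

\begin{frame}[fragile]
\frametitle{Register CSR entry: illustrative decode sketch}

Illustrative only: the field ordering and bit positions below are
assumptions for demonstration purposes; only the field widths
(1+5+5+2+1+1+1 = 16 bits) come from the previous slide.

\begin{semiverbatim}
def unpack_reg_csr_entry(e):     # e: one 16-bit entry (assumed layout)
    regtype  = (e >> 0)  & 0x1   # 0 = INT regfile, 1 = FP regfile
    regkey   = (e >> 1)  & 0x1f  # reg as referred to in the opcode
    regidx   = (e >> 6)  & 0x1f  # actual starting register (value)
    elwidth  = (e >> 11) & 0x3   # default, dflt/2, 8, 16
    isvector = (e >> 13) & 0x1   # Y/N
    packed   = (e >> 14) & 0x1   # packed SIMD Y/N
    bank     = (e >> 15) & 0x1   # 0 (reserved for future ext.)
    return (regtype, regkey, regidx, elwidth, isvector, packed, bank)
\end{semiverbatim}

A hardware implementation would do this unpacking as simple wiring
into the lookup table / CAM.
\end{frame}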


\frame{\frametitle{Register element width and packed SIMD}

Packed SIMD = N:
\begin{itemize}
\item default: RV32/64/128 opcodes define elwidth = 32/64/128
\item default/2: RV32/64/128 opcodes, elwidth = 16/32/64 with
top half of register ignored (src), zero'd/s-ext (dest)
\item 8 or 16: elwidth = 8 (or 16), similar to default/2
\end{itemize}
Packed SIMD = Y (default is moot, packing is 1:1)
\begin{itemize}
\item default/2: 2 elements per register @ opcode-defined bitwidth
\item 8 or 16: standard 8 (or 16) packed SIMD
\end{itemize}
Notes:
\begin{itemize}
\item Different src/dest widths (and packs) PERMITTED
\item RV* already allows (and defines) how RV32 ops work in RV64\\
so just logically follow that lead/example.
\end{itemize}
}
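

\begin{frame}[fragile]
\frametitle{Element width decoding: illustrative sketch}

A minimal sketch of how the elwidth and packed fields could translate
into an effective element width and element count (the helper and the
2-bit encoding order are assumptions, for illustration only):

\begin{semiverbatim}
def decode_elwidth(opwidth, elwidth, packed):
    # opwidth: bitwidth defined by the opcode (32/64/128)
    # assumed encoding: 0=default, 1=default/2, 2=8, 3=16
    ew = [opwidth, opwidth // 2, 8, 16][elwidth]
    if packed:
        return ew, opwidth // ew  # N elements packed per register
    return ew, 1  # one element, top bits ignored (src) / zero'd (dest)
\end{semiverbatim}

e.g. an RV64 op with elwidth = 8 and packed = Y gives 8 elements of
8 bits per register; with packed = N it gives one 8-bit element.
\end{frame}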


\begin{frame}[fragile]
\frametitle{Register key-value CSR table decoding pseudocode}

\begin{semiverbatim}
struct vectorised fp\_vec[32], int\_vec[32]; // 64 in future
for (i = 0; i < 16; i++) // 16 CSRs?
  tb = int\_vec if CSRvec[i].type == 0 else fp\_vec
  idx = CSRvec[i].regkey // INT/FP src/dst reg in opcode
  tb[idx].elwidth  = CSRvec[i].elwidth
  tb[idx].regidx   = CSRvec[i].regidx // indirection
  tb[idx].isvector = CSRvec[i].isvector
  tb[idx].packed   = CSRvec[i].packed // SIMD or not
  tb[idx].bank     = CSRvec[i].bank   // 0 (1=rsvd)
  tb[idx].enabled  = true
\end{semiverbatim}

\begin{itemize}
\item All 32 int (and 32 FP) entries zero'd before setup
\item Might be a bit complex to set up in hardware (keep as CAM?)
\end{itemize}

\end{frame}


\frame{\frametitle{Predication key-value CSR store}

\begin{itemize}
\item key is int regfile number or FP regfile number (1 bit)
\item register to be predicated if referred to (5 bits, key)
\item INT reg with actual predication mask (5 bits, value)
\item predication is inverted Y/N (1 bit)
\item non-predicated elements are to be zero'd Y/N (1 bit)
\item register bank: 0/reserved for future ext. (1 bit)
\end{itemize}
Notes:\vspace{10pt}
\begin{itemize}
\item Table should be expanded out for high-speed implementations
\item Key-value overlaps permitted, but (key+type) must be unique
\item RVV rules about deleting higher-indexed CSRs followed
\end{itemize}
}


\begin{frame}[fragile]
\frametitle{Predication key-value CSR table decoding pseudocode}

\begin{semiverbatim}
struct pred fp\_pred[32], int\_pred[32]; // 64 in future
for (i = 0; i < 16; i++) // 16 CSRs?
  tb = int\_pred if CSRpred[i].type == 0 else fp\_pred
  idx = CSRpred[i].regkey
  tb[idx].zero    = CSRpred[i].zero    // zeroing
  tb[idx].inv     = CSRpred[i].inv     // inverted
  tb[idx].predidx = CSRpred[i].predidx // actual reg
  tb[idx].bank    = CSRpred[i].bank    // 0 for now
  tb[idx].enabled = true
\end{semiverbatim}

\begin{itemize}
\item All 32 int and 32 FP entries zero'd before setting\\
(predication disabled)
\item Might be a bit complex to set up in hardware (keep as CAM?)
\end{itemize}

\end{frame}


\begin{frame}[fragile]
\frametitle{Get Predication value pseudocode}

\begin{semiverbatim}
def get\_pred\_val(bool is\_fp\_op, int reg):
  tb = fp\_pred if is\_fp\_op else int\_pred
  if (!tb[reg].enabled):
    return ~0x0 // all ops enabled
  predidx = tb[reg].predidx   // redirection occurs HERE
  predicate = intreg[predidx] // actual predicate HERE
  if (tb[reg].inv):
    predicate = ~predicate    // invert ALL bits
  return predicate
\end{semiverbatim}

\begin{itemize}
\item References different (internal) mapping table for INT or FP
\item Actual predicate bitmask ALWAYS from the INT regfile
\item Hard-limit on MVL of XLEN (predication only 1 intreg)
\end{itemize}

\end{frame}


\frame{\frametitle{To Zero or not to place zeros in non-predicated elements?}

\begin{itemize}
\item Zeroing is an implementation optimisation favouring OoO
\item Without zeroing, simple implementations may skip non-predicated operations
\item With zeroing, simple implementations have to explicitly destroy data
\item Complex implementations may use reg-renames to save power\\
Zeroing on predication chains makes optimisation harder
\item Compromise: REQUIRE both (specified in predication CSRs).
\end{itemize}
Considerations:
\begin{itemize}
\item Complex not really impacted, simple impacted a LOT\\
with Zeroing... however it's useful (memzero)
\item Non-zero'd overlapping "Vectors" may issue overlapping ops\\
(2nd op's predicated elements slot in 1st's non-predicated ops)
\item Please don't use Vectors for "security" (use Sec-Ext)
\end{itemize}
}
% with overlapping "vectors" - bearing in mind that "vectors" are
% just a remap onto the standard register file, if the top bits of
% predication are zero, and there happens to be a second vector
% that uses some of the same register file that happens to be
% predicated out, the second vector op may be issued *at the same time*
% if there are available parallel ALUs to do so.
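
\begin{frame}[fragile]
\frametitle{Zeroing vs skipping: illustrative element loop}

A minimal sketch (structure is an assumption, for illustration only)
of the difference the zeroing bit in the predication CSR makes, for a
fully vectorised destination:

\begin{semiverbatim}
for (i = 0; i < VL; i++)
  if (predicate & 1<<i)
     ireg[rd+i] <= OP(ireg[rs1+i], ireg[rs2+i])
  else if (zeroing)
     ireg[rd+i] <= 0   # destination element explicitly destroyed
  # else: element skipped entirely (destination left untouched)
\end{semiverbatim}

\begin{itemize}
\item Skipping leaves the destination untouched (an optimisation:
need not be done)
\item Zeroing always writes, which simple implementations must do
explicitly
\end{itemize}
\end{frame}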


\frame{\frametitle{Implementation Options}

\begin{itemize}
\item Absolute minimum: Exceptions: if CSRs indicate "V", trap.\\
(Requires as absolute minimum that CSRs be in Hardware)
\item Hardware loop, single-instruction issue\\
(Do / Don't send through predication to ALU)
\item Hardware loop, parallel (multi-instruction) issue\\
(Do / Don't send through predication to ALU)
\item Hardware loop, full parallel ALU (not recommended)
\end{itemize}
Notes:\vspace{4pt}
\begin{itemize}
\item 4 (or more?) options above may be deployed on per-op basis
\item SIMD always sends predication bits to ALU (if requested)
\item Minimum MVL MUST be sufficient to cover regfile LD/ST
\item Instr. FIFO may repeatedly split off N scalar ops at a time
\end{itemize}
}
% Instr. FIFO may need its own slide. Basically, the vectorised op
% gets pushed into the FIFO, where it is then "processed". Processing
% removes the first set of ops from its vector numbering (taking
% predication into account) and shoves them **BACK** into the FIFO,
% while MODIFYING the remaining "vectorised" op, subtracting the now-
% scalar ops from it.
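
\begin{frame}[fragile]
\frametitle{Minimum implementation: trap if "V" (illustrative sketch)}

Illustrative sketch only of the absolute-minimum option on the
previous slide: check the CSR tables at issue time and trap, leaving
the element loop to a software trap handler (names such as issue,
execute\_scalar and the trap itself are assumptions, not part of the
proposal):

\begin{semiverbatim}
def issue(op):             # conceptual issue-stage check
    for reg in op.regs:    # rd, rs1, rs2... (INT shown; FP analogous)
        if int_vec[reg].enabled or int_pred[reg].enabled:
            raise VectorisationTrap(op)  # s/w emulates the hw loop
    execute_scalar(op)     # otherwise: plain scalar RV behaviour
\end{semiverbatim}

\begin{itemize}
\item Hardware cost: just the CSRs plus this check
\item The trap handler performs the pseudocode loops shown earlier
\end{itemize}
\end{frame}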

\frame{\frametitle{Predicated 8-parallel ADD: 1-wide ALU (no zeroing)}
\begin{center}
\includegraphics[height=2.5in]{padd9_alu1.png}\\
{\bf \red Predicated adds are shuffled down: 6 cycles in total}
\end{center}
}


\frame{\frametitle{Predicated 8-parallel ADD: 4-wide ALU (no zeroing)}
\begin{center}
\includegraphics[height=2.5in]{padd9_alu4.png}\\
{\bf \red Predicated adds are shuffled down: 4 in 1st cycle, 2 in 2nd}
\end{center}
}


\frame{\frametitle{Predicated 8-parallel ADD: 3 phase FIFO expansion}
\begin{center}
\includegraphics[height=2.5in]{padd9_fifo.png}\\
{\bf \red First cycle takes first four 1s; second takes the rest}
\end{center}
}


\begin{frame}[fragile]
\frametitle{ADD pseudocode with redirection (and proper predication)}

\begin{semiverbatim}
function op\_add(rd, rs1, rs2) # add not VADD!
  int i, id=0, irs1=0, irs2=0;
  rd  = int\_vec[rd ].isvector ? int\_vec[rd ].regidx : rd;
  rs1 = int\_vec[rs1].isvector ? int\_vec[rs1].regidx : rs1;
  rs2 = int\_vec[rs2].isvector ? int\_vec[rs2].regidx : rs2;
  predval = get\_pred\_val(FALSE, rd);
  for (i = 0; i < VL; i++)
    if (predval \& 1<<i) # predication uses intregs
       ireg[rd+id] <= ireg[rs1+irs1] + ireg[rs2+irs2];
    if (int\_vec[rd ].isvector)  \{ id += 1; \}
    if (int\_vec[rs1].isvector)  \{ irs1 += 1; \}
    if (int\_vec[rs2].isvector)  \{ irs2 += 1; \}
\end{semiverbatim}

\begin{itemize}
\item SIMD (elwidth != default) not covered above
\end{itemize}
\end{frame}


\frame{\frametitle{How are SIMD Instructions Vectorised?}

\begin{itemize}
\item SIMD ALU(s) primarily unchanged
\item Predication added down to each SIMD element (if requested,
otherwise entire block will be predicated as a whole)
\item Predication bits sent in groups to the ALU (if requested,
otherwise just one bit for the entire packed block)
\item End of Vector enables (additional) predication:
completely nullifies end-case code (ONLY in multi-bit
predication mode)
\end{itemize}
Considerations:
\begin{itemize}
\item Many SIMD ALUs possible (parallel execution)
\item Implementor free to choose (API remains the same)
\item Unused ALU units wasted, but s/w DRASTICALLY simpler
\item Very long SIMD ALUs could waste significant die area
\end{itemize}
}
% With multiple SIMD ALUs at for example 32-bit wide they can be used
% to either issue 64-bit or 128-bit or 256-bit wide SIMD operations
% or they can be used to cover several operations on totally different
% vectors / registers.
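
\begin{frame}[fragile]
\frametitle{SIMD predication grouping: illustrative sketch}

A minimal sketch (the helper and its parameters are assumptions, for
illustration only) of slicing the predicate bitmask into per-block
groups for a packed SIMD ALU, with end-of-vector folded into the mask:

\begin{semiverbatim}
def simd_pred_bits(predicate, VL, elems_per_block, blocknum):
    lo = blocknum * elems_per_block    # first element of this block
    bits = 0
    for e in range(elems_per_block):
        i = lo + e
        if i < VL and (predicate & 1<<i):  # i < VL masks the end-case
            bits |= 1<<e
    return bits   # sent alongside the packed op to the SIMD ALU
\end{semiverbatim}

\begin{itemize}
\item The test against VL is what nullifies SIMD end-case code
\item If per-element predication is not requested, just one bit is
sent for the entire packed block (as on the previous slide)
\end{itemize}
\end{frame}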

\frame{\frametitle{Predicated 9-parallel SIMD ADD (Packed=Y)}
\begin{center}
\includegraphics[height=2.5in]{padd9_simd.png}\\
{\bf \red 4-wide 8-bit SIMD, 4 bits of predicate passed to ALU}
\end{center}
}


\frame{\frametitle{Why are overlaps allowed in Regfiles?}

\begin{itemize}
\item Same target register(s) can have multiple "interpretations"
\item CSRs are costly to write to (do it once)
\item Set "real" register (scalar) without needing to set/unset CSRs.
\item xBitManip plus SIMD plus xBitManip = Hi/Lo bitops
\item (32-bit GREV plus 4x8-bit SIMD plus 32-bit GREV:\\
GREV @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
\item RGB 565 (video): BEXTW plus 4x8-bit SIMD plus BDEPW\\
(BEXT/BDEP @ VL=N,wid=32; SIMD @ VL=Nx4,wid=8)
\item Same register(s) can be offset (no need for VSLIDE)\vspace{6pt}
\end{itemize}
Note:
\begin{itemize}
\item xBitManip reduces O($N^{6}$) SIMD down to O($N^{3}$) on its own.
\item Hi-Performance: Macro-op fusion (more pipeline stages?)
\end{itemize}
}
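

\begin{frame}[fragile]
\frametitle{Overlapping interpretations: illustrative CSR setup}

Illustrative sketch (assuming RV32 for concreteness; register and type
choices are assumptions) of the 32-bit GREV plus 4x8-bit SIMD example:
two CSR entries give the SAME underlying registers two different
interpretations, set up once:

\begin{semiverbatim}
CSRvect1 = \{type: I, key: s2, val: s2, elwidth: dflt\}      # 32-bit view
CSRvect2 = \{type: I, key: s3, val: s2, elwidth: 8, packed: Y\} # 8-bit view
# GREV @ VL=N,   wid=32: refer to the block of regs via s2
# SIMD @ VL=Nx4, wid=8:  refer to the SAME block via s3
\end{semiverbatim}

\begin{itemize}
\item No data movement between the two "views" (no VSLIDE needed)
\item CSRs written once; both interpretations then coexist
\end{itemize}
\end{frame}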


\frame{\frametitle{C.MV extremely flexible!}

\begin{itemize}
\item scalar-to-vector (w/ no pred): VSPLAT
\item scalar-to-vector (w/ dest-pred): Sparse VSPLAT
\item scalar-to-vector (w/ 1-bit dest-pred): VINSERT
\item vector-to-scalar (w/ [1-bit?] src-pred): VEXTRACT
\item vector-to-vector (w/ no pred): Vector Copy
\item vector-to-vector (w/ src pred): Vector Gather (inc VSLIDE)
\item vector-to-vector (w/ dest pred): Vector Scatter (inc. VSLIDE)
\item vector-to-vector (w/ src \& dest pred): Vector Gather/Scatter
\end{itemize}
\vspace{4pt}
Notes:
\begin{itemize}
\item Surprisingly powerful! Zero-predication even more so
\item Same arrangement for FCVT, FMV, FSGNJ etc.
\end{itemize}
}


\begin{frame}[fragile]
\frametitle{MV pseudocode with predication}

\begin{semiverbatim}
function op\_mv(rd, rs) # MV not VMV!
  rd = int\_vec[rd].isvector ? int\_vec[rd].regidx : rd;
  rs = int\_vec[rs].isvector ? int\_vec[rs].regidx : rs;
  ps = get\_pred\_val(FALSE, rs); # predication on src
  pd = get\_pred\_val(FALSE, rd); # ... AND on dest
  for (int i = 0, int j = 0; i < VL && j < VL;):
    if (int\_vec[rs].isvec) while (!(ps \& 1<<i)) i++;
    if (int\_vec[rd].isvec) while (!(pd \& 1<<j)) j++;
    ireg[rd+j] <= ireg[rs+i];
    if (int\_vec[rs].isvec) i++;
    if (int\_vec[rd].isvec) j++;
\end{semiverbatim}

\begin{itemize}
\item elwidth != default not covered above (might be a bit hairy)
\item Ending early with 1-bit predication not included (VINSERT)
\end{itemize}
\end{frame}


\begin{frame}[fragile]
\frametitle{VSELECT: stays or goes? Stays if MV.X exists...}

\begin{semiverbatim}
def op_mv_x(rd, rs):           # (hypothetical) RV MV.X
    rs = regfile[rs]           # level of indirection (MV.X)
    regfile[rd] = regfile[rs]  # straight regcopy
\end{semiverbatim}

Vectorised version aka "VSELECT":

\begin{semiverbatim}
def op_mv_x(rd, rs):                  # SV version of MV.X
    for i in range(VL):
        rs1 = regfile[rs+i]           # indirection
        regfile[rd+i] = regfile[rs1]  # straight regcopy
\end{semiverbatim}

\begin{itemize}
\item However MV.X does not exist in RV, so neither can VSELECT
\item \red SV is not about adding new functionality, only parallelism
\end{itemize}


\end{frame}


\frame{\frametitle{Opcodes, compared to RVV}

\begin{itemize}
\item All integer and FP opcodes removed (no CLIP, FNE)
\item VMPOP, VFIRST etc. all removed (use xBitManip)
\item VSLIDE removed (use regfile overlaps)
\item C.MV covers VEXTRACT, VINSERT and VSPLAT (and more)
\item Vector (or scalar-vector) copy: use C.MV (MV is a pseudo-op)
\item VMERGE: twin predicated C.MVs (one inverted; macro-op'd)
\item VSETVL, VGETVL stay (the only ops that do!)
\end{itemize}
Issues:
\begin{itemize}
\item VSELECT stays? no MV.X, so no (add with custom ext?)
\item VSNE exists, but no FNE (use predication inversion?)
\item VCLIP is not in RV* (add with custom ext? or CSR?)
\end{itemize}
}
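

\begin{frame}[fragile]
\frametitle{VSPLAT via C.MV: illustrative sketch}

Illustrative sketch only (register and type choices are assumptions):
with the destination marked as a vector and the source left scalar, a
plain MV behaves as VSPLAT, exactly as in the earlier MV pseudocode:

\begin{semiverbatim}
CSRvect1 = \{type: I, key: a8, val: a8, elwidth: dflt\} # a8 = vector
# a3 has no CSR entry, so it stays scalar
# (VL assumed to have been set already, e.g. to 8)
MV a8, a3   # a8..a8+VL-1 all receive the value of a3: VSPLAT
\end{semiverbatim}

\begin{itemize}
\item Adding a destination predicate gives Sparse VSPLAT
\item A vectorised source, scalar destination and 1-bit source
predicate gives VEXTRACT
\end{itemize}
\end{frame}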


\begin{frame}[fragile]
\frametitle{Example C code: DAXPY}

\begin{semiverbatim}
void daxpy(size_t n, double a,
           const double x[], double y[])
\{
    for (size_t i = 0; i < n; i++) \{
        y[i] = a*x[i] + y[i];
    \}
\}
\end{semiverbatim}

\begin{itemize}
\item See "SIMD Considered Harmful" for SIMD/RVV analysis\\
https://sigarch.org/simd-instructions-considered-harmful/
\end{itemize}


\end{frame}


\begin{frame}[fragile]
\frametitle{RVV DAXPY assembly (RV32V)}

\begin{semiverbatim}
# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
  li t0, 2<<25
  vsetdcfg t0             # enable 2 64b Fl.Pt. registers
loop:
  setvl t0, a0            # vl = t0 = min(mvl, n)
  vld v0, a1              # load vector x
  slli t1, t0, 3          # t1 = vl * 8 (in bytes)
  vld v1, a2              # load vector y
  add a1, a1, t1          # increment pointer to x by vl*8
  vfmadd v1, v0, fa0, v1  # v1 += v0 * fa0 (y = a * x + y)
  sub a0, a0, t0          # n -= vl (t0)
  vst v1, a2              # store Y
  add a2, a2, t1          # increment pointer to y by vl*8
  bnez a0, loop           # repeat if n != 0
\end{semiverbatim}
\end{frame}


\begin{frame}[fragile]
\frametitle{SV DAXPY assembly (RV64D)}

\begin{semiverbatim}
# a0 is n, a1 is ptr to x[0], a2 is ptr to y[0], fa0 is a
  CSRvect1 = \{type: F, key: a3, val: a3, elwidth: dflt\}
  CSRvect2 = \{type: F, key: a7, val: a7, elwidth: dflt\}
loop:
  setvl t0, a0, 4         # vl = t0 = min(min(mvl, 4), n)
  ld a3, a1               # load 4 registers a3-6 from x
  slli t1, t0, 3          # t1 = vl * 8 (in bytes)
  ld a7, a2               # load 4 registers a7-10 from y
  add a1, a1, t1          # increment pointer to x by vl*8
  fmadd a7, a3, fa0, a7   # a7 += a3 * fa0 (y = a * x + y)
  sub a0, a0, t0          # n -= vl (t0)
  st a7, a2               # store 4 registers a7-10 to y
  add a2, a2, t1          # increment pointer to y by vl*8
  bnez a0, loop           # repeat if n != 0
\end{semiverbatim}
\end{frame}


\frame{\frametitle{Under consideration (some answers documented)}

\begin{itemize}
\item Should future extra bank be included now?
\item How many Register and Predication CSRs should there be?\\
(and how many in RV32E)
\item How many in M-Mode (for doing context-switch)?
\item Should use of registers be allowed to "wrap" (x30 x31 x1 x2)?
\item Can CLIP be done as a CSR (mode, like elwidth)?
\item SIMD saturation (etc.) also set as a mode?
\item Include src1/src2 predication on Comparison Ops?\\
(same arrangement as C.MV, with same flexibility/power)
\item 8/16-bit ops: is it worthwhile adding a "start offset"?\\
(a bit like misaligned addressing... for registers)\\
or just use predication to skip start?
\end{itemize}
}


\frame{\frametitle{What's the downside(s) of SV?}
\begin{itemize}
\item EVERY register operation is inherently parallelised\\
(scalar ops are just vectors of length 1)\vspace{4pt}
\item Tightly coupled with the core (instruction issue)\\
could be disabled through MISA switch\vspace{4pt}
\item An extra pipeline phase almost certainly essential\\
for fast low-latency implementations\vspace{4pt}
\item With zeroing off, skipping non-predicated elements is hard:\\
it is however an optimisation (and need not be done).\vspace{4pt}
\item Setting up the Register/Predication tables (interpreting the\\
CSR key-value stores) might be a bit complex to optimise
(any change to a CSR key-value entry needs to redo the table)
\end{itemize}
}


\frame{\frametitle{Summary}

\begin{itemize}
\item Actually about parallelism, not Vectors (or SIMD) per se\\
and NOT about adding new ALU/logic/functionality.
\item Only needs 2 actual instructions (plus the CSRs).\\
RVV - and "standard" SIMD - require ISA duplication
\item Designed for flexibility (graded levels of complexity)
\item Huge range of implementor freedom
\item Fits RISC-V ethos: achieve more with less
\item Reduces SIMD ISA proliferation by 3-4 orders of magnitude\\
(without SIMD downsides or sacrificing speed trade-off)
\item Covers 98\% of RVV, allows RVV to fit "on top"
\item Byproduct of SV is a reduction in code size, power usage
etc. (increased efficiency, just like Compressed)
\end{itemize}
}


\frame{
\begin{center}
{\Huge The end\vspace{20pt}\\
Thank you\vspace{20pt}\\
Questions?\vspace{20pt}
}
\end{center}

\begin{itemize}
\item Discussion: ISA-DEV mailing list
\item http://libre-riscv.org/simple\_v\_extension/
\end{itemize}
}


\end{document}