f44ae726aedd99e004a27e2b22eb63607c441656
[libreriscv.git] / openpower / sv / overview.mdwn
1 # SV Overview
2
3 **SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.
4
5 This document provides an overview and introduction as to why SV (a
6 [[!wikipedia Cray]]-style Vector augmentation to [[!wikipedia OpenPOWER]]) exists, and how it works.
7
8 **Sponsored by NLnet under the Privacy and Enhanced Trust Programme**
9
10 Links:
11
12 * This page: [http://libre-soc.org/openpower/sv/overview](http://libre-soc.org/openpower/sv/overview)
13 * [FOSDEM2021 SimpleV for OpenPOWER](https://fosdem.org/2021/schedule/event/the_libresoc_project_simple_v_vectorisation/)
14 * FOSDEM2021 presentation <https://www.youtube.com/watch?v=FS6tbfyb2VA>
15 * [[discussion]] and
16 [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556):
17 feel free to add comments and questions.
18 * [[SV|sv]]
19 * [[sv/svp64]]
20 * [x86 REP instruction](https://c9x.me/x86/html/file_module_x86_id_279.html):
21 a useful way to quickly understand that the core of the SV concept
22 is not new.
23 * [Article about register tagging](http://science.lpnu.ua/sites/default/files/journal-paper/2019/jul/17084/volum3number1text-9-16_1.pdf) showing
24 that tagging is not a new idea either. Register tags
25 are also used in the Mill Architecture.
26
27 Table of contents:
28
29 [[!toc]]
30
31 # Introduction: SIMD and Cray Vectors
32
33 SIMD, the primary method for easy parallelism of the
34 past 30 years in Computer Architectures, is
35 [known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
36 SIMD provides a seductive simplicity that is easy to implement in
37 hardware. With each doubling in width it promises increases in raw
38 performance without the complexity of either multi-issue or out-of-order
39 execution.
40
41 Unfortunately, even with predication added, SIMD becomes more and
42 more problematic with each power-of-two width increase introduced
43 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
44 spirals out of control in the ISA, detrimentally impacting the hardware,
45 the software, the compilers, and testing and compliance. These are
46 the typical dimensions that combine to produce such massive proliferation:
47
48 * Operation (add, mul)
49 * bitwidth (8, 16, 32, 64, 128)
50 * Conversion between bitwidths (FP16-FP32-FP64)
51 * Signed/unsigned
52 * HI/LO swizzle (Audio L/R channels)
53 - HI/LO selection on src 1
54 - selection on src 2
55 - selection on dest
56 - Example: AndesSTAR Audio DSP
57 * Saturation (Clamping at max range)
58
59 These dimensions multiply up to produce explicit opcodes numbering
60 in the thousands in, for example, the ARC Video/DSP cores.
61
62 Cray-style variable-length Vectors, on the other hand, result in
63 stunningly elegant and small loops and exceptionally high data throughput
64 per instruction (one *or more* orders of magnitude greater than SIMD), with
65 none of the alarmingly long setup and cleanup code. At the hardware level
66 the microarchitecture may execute from one element right the way through
67 to tens of thousands at a time, yet the executable remains exactly the
68 same and the ISA remains clear, true to the RISC paradigm, and clean.
69 Unlike in SIMD, powers of two limitations are not involved in the ISA
70 or in the assembly code.
71
72 SimpleV takes the Cray-style Vector principle and applies it in the
73 abstract to a Scalar ISA, in the same way that the x86 "REP" instruction does. In the process, "context" is applied, allowing amongst other things
74 an increase in register file size
75 through "tagging" (similar to how x86 originally extended
76 registers from 32 to 64 bit).
77
78 ## SV
79
80 The fundamentals are (just like x86 "REP"):
81
82 * The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
83 * Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
84 (where VL is Vector Length)
85 * The [[Program Order]] of "Sub-PC" instructions must be preserved,
86 just as is expected of instructions ordered by the PC.
87 * Some registers may be "tagged" as Vectors
88 * During the loop, "Vector"-tagged registers are incremented by
89 one with each iteration, executing the *same instruction*
90 but with *different registers*
91 * Once the loop is completed *only then* is the Program Counter
92 allowed to move to the next instruction.
93
94 ![image](/svp64-primer/img/power_pipelines.svg)
95
96 Hardware (and simulator) implementors are free and clear to implement this
97 as literally a for-loop, sitting in between instruction decode and issue.
98 Higher performance systems may deploy SIMD backends, multi-issue and
99 out-of-order execution, although it is strongly recommended to add
100 predication capability directly into SIMD backend units.
101
102 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
103 source and destination have been "tagged" as Vectors, is simply:
104
105 for i = 0 to VL-1:
106 GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
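
In software this loop can be modelled directly. Below is a minimal illustrative sketch (not the authoritative definition): `GPR` is just a Python list standing in for a 128-entry integer register file, and `sv_add` plays the role of the Sub-PC loop.

```python
# Minimal software model of the SV vectorised add, assuming both
# sources and the destination have been tagged as Vectors.
# GPR is modelled as a plain list of 128 64-bit values
# (names here are illustrative only, not the Power ISA definition).
GPR = [0] * 128

def sv_add(RT, RA, RB, VL):
    # the Sub-PC loop: the *same instruction*, *different registers*
    for i in range(VL):
        GPR[RT + i] = (GPR[RA + i] + GPR[RB + i]) & 0xFFFFFFFFFFFFFFFF

# set up two 4-element source Vectors at r8 and r16
for i in range(4):
    GPR[8 + i] = i + 1           # 1, 2, 3, 4
    GPR[16 + i] = 10 * (i + 1)   # 10, 20, 30, 40

sv_add(4, 8, 16, VL=4)           # destination Vector at r4..r7
```

Running this fills r4..r7 with the element-wise sums while the rest of the register file is untouched.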
107
108 At its heart, SimpleV really is this simple. On top of this fundamental
109 basis further refinements can be added which build up towards an extremely
110 powerful Vector augmentation system, with very little in the way of
111 additional opcodes required: simply external "context".
112
113 x86 originally had only 80 instructions: by the time AVX512 arrived, over 1,300
114 additional instructions had been added, almost all of them SIMD.
115
116 RISC-V RVV as of version 0.9 comprises over 188 instructions (more than the
117 rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that
118 functionality is added to OpenPOWER v3.0B by SimpleV augmentation,
119 with around 5 to 8 instructions.
120
121 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
122 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
123 based on SIMD design principles, adds somewhere in the region of 600 more.
124 SimpleV again provides over 95% of VSX functionality, simply by augmenting
125 the *Scalar* OpenPOWER ISA, and in the process providing features such
126 as predication, which VSX is entirely missing.
127
128 AVX512, SVE2, VSX, RVV, all of these systems have to provide different
129 types of register files: Scalar and Vector is the minimum. AVX512
130 even provides a mini mask regfile, followed by explicit instructions
131 that handle operations on each of them *and map between all of them*.
132 SV not only uses the existing scalar regfiles (including CRs),
133 but because operations already exist within OpenPOWER to cover
134 interactions between the scalar regfiles (`mfcr`, `fcvt`), there
135 is very little that needs to be added.
136
137 In fairness to both VSX and RVV, there are things that are not provided
138 by SimpleV:
139
140 * 128 bit or above arithmetic and other operations
141 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
142 * register files above 128 entries
143 * Vector lengths over 64
144 * Unit-strided LD/ST and other comprehensive memory operations
145 (struct-based LD/ST from RVV for example)
146 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
147
148 These limitations, which stem inherently from the adaptation process of
149 starting from a Scalar ISA, are not insurmountable. Over time, they may
150 well be addressed in future revisions of SV.
151
152 The rest of this document builds on the above simple loop to add:
153
154 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
155 (of all register files: Integer, FP *and CRs*)
156 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
157 * Predication masks (essential for parallel if/else constructs)
158 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
159 * Compacted operations into registers (normally only provided by SIMD)
160 * Fail-on-first (introduced in ARM SVE2)
161 * A new concept: Data-dependent fail-first
162 * Condition-Register based *post-result* predication (also new)
163 * A completely new concept: "Twin Predication"
164 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
165
166 All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
167 "wrapping context", similar to how v3.1B 64-bit prefixes work.
168
169 # Adding Scalar / Vector
170
171 The first augmentation to the simple loop is to add the option for all
172 sources and destinations to be, independently, either scalar or vector. As an FSM
173 this is where our "simple" loop gets its first complexity.
174
175 function op_add(RT, RA, RB) # add not VADD!
176 int id=0, irs1=0, irs2=0;
177 for i = 0 to VL-1:
178 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
179 if (!RT.isvec) break;
180 if (RT.isvec) { id += 1; }
181 if (RA.isvec) { irs1 += 1; }
182 if (RB.isvec) { irs2 += 1; }
183
184 This could have been written out as eight separate cases: one each for
185 when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases,
186 when optimally combined, result in the pseudocode above.
187
188 With some walkthroughs it is clear that the loop exits immediately
189 after the first scalar destination result is written, and that when the
190 destination is a Vector the loop proceeds to fill up the register file,
191 sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
192 registers will, independently, either remain pointing at `RB` or `RA`
193 respectively, or, if marked as Vectors, will march incrementally in
194 lockstep, producing element results along the way, as the destination
195 also progresses through elements.
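
The loop above can be modelled in software. A hedged sketch follows, with registers represented as illustrative `(number, isvec)` tuples rather than the real encoding:

```python
# Software sketch of the scalar/vector loop above.  Registers are
# modelled as (number, isvec) tuples; all names are illustrative.
GPR = [0] * 128

def op_add(RT, RA, RB, VL):
    id = irs1 = irs2 = 0
    for i in range(VL):
        GPR[RT[0] + id] = GPR[RA[0] + irs1] + GPR[RB[0] + irs2]
        if not RT[1]:       # scalar destination: stop after one result
            break
        id += 1             # RT is known to be a vector at this point
        if RA[1]: irs1 += 1
        if RB[1]: irs2 += 1

GPR[8:12] = [1, 2, 3, 4]    # vector source at r8
GPR[16] = 100               # scalar source at r16
# vector dest, vector RA, scalar RB: r4..r7 = (r8..r11) + r16
op_add((4, True), (8, True), (16, False), VL=4)
```

Here the scalar `RB` stays pinned at r16 while `RA` and `RT` march in lockstep, one of the eight permutations described above.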
196
197 In this way all the eight permutations of Scalar and Vector behaviour
198 are covered, although without predication the scalar-destination ones are
199 reduced in usefulness. It does however clearly illustrate the principle.
200
201 Note in particular: there is no separate Scalar add instruction, no
202 separate Vector instruction and no separate Scalar-Vector instruction, *and
203 there is no separate Vector register file*: it is all the same instruction,
204 on the standard register file, just with a loop. Scalar operation simply
205 happens to set that loop size to one.
206
207 The important insight from the above is that, strictly speaking, Simple-V
208 is not really a Vectorisation scheme at all: it is more of a hardware
209 ISA "Compression scheme", allowing as it does for what would normally
210 require multiple sequential instructions to be replaced with just one.
211 This is where the rule that Program Order must be preserved in Sub-PC
212 execution derives from. However, in other ways which will emerge below,
213 the "tagging" concept presents an opportunity to include features
214 that are definitely not common outside of Vector ISAs, and in that regard SV
215 is certainly a class of Vectorisation.
216
217 ## Register "tagging"
218
219 As an aside: in [[sv/svp64]] the encoding which allows SV both to extend
220 the register range beyond r0-r31 and to determine whether a register is scalar
221 or vector is encoded in two to three bits, depending on the instruction.
222
223 The reason for using so few bits is because there are up to *four*
224 registers to mark in this way (`fma`, `isel`) which starts to be of
225 concern when there are only 24 available bits to specify the entire SV
226 Vectorisation Context. In fact, for a small subset of instructions it
227 is just not possible to tag every single register. Under these rare
228 circumstances a tag has to be shared between two registers.
229
230 Below is the pseudocode that expresses the relationship usually
231 applied to *every* register:
232
233 if extra3_mode:
234 spec = EXTRA3 # bit 2 s/v, 0-1 extends range
235 else:
236 spec = EXTRA2 << 1 # same as EXTRA3, shifted
237 if spec[2]: # vector
238 RA.isvec = True
239 return (RA << 2) | spec[0:1]
240 else: # scalar
241 RA.isvec = False
242 return (spec[0:1] << 5) | RA
243
244 Here we can see that the scalar registers are extended in the top bits,
245 whilst vectors are shifted up by 2 bits, and then extended in the LSBs.
246 Condition Registers have a slightly different scheme, along the same
247 principle, which takes into account the fact that each CR may be bit-level
248 addressed by Condition Register operations.
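
A runnable sketch of one plausible reading of the pseudocode above follows (the authoritative encoding is in [[sv/svp64]]; the bit positions here are illustrative assumptions, with `spec` held as a 3-bit integer whose bit 2 is the scalar/vector flag and bits 0-1 the range extension):

```python
# Illustrative decode of the EXTRA2/EXTRA3 register-tagging scheme.
# spec: 3-bit value, bit 2 = scalar/vector, bits 0-1 = range extension.
# This is a sketch of one reading of the pseudocode, not the normative
# svp64 definition.
def decode_reg(RA, extra, extra3_mode):
    spec = extra if extra3_mode else (extra << 1)  # EXTRA2 is shifted
    if spec & 0b100:                               # vector
        # vectors: register number shifted up, extended in the LSBs
        return ((RA << 2) | (spec & 0b11), True)
    # scalars: extended in the top bits
    return (((spec & 0b11) << 5) | RA, False)

# scalar r5 with no extension decodes to r5 itself
num, isvec = decode_reg(5, 0b000, extra3_mode=True)
```

Note how a scalar tag with nonzero extension bits reaches registers above r31, while a vector tag multiplies the register number by four.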
249
250 Readers familiar with OpenPOWER will know of Rc=1 operations that create
251 an associated post-result "test", placing this test into an implicit
252 Condition Register. The original researchers who created the POWER ISA
253 chose CR0 for Integer, and CR1 for Floating Point. These *also become
254 Vectorised* - implicitly - if the associated destination register is
255 also Vectorised. This allows for some very interesting savings on
256 instruction count due to the very same CR Vectors being predication masks.
257
258 # Adding single predication
259
260 The next step is to add a single predicate mask. This is where it gets
261 interesting. A predicate mask is a bitvector, each bit specifying, in
262 order, whether the corresponding element operation is to be skipped
263 ("masked out") or allowed. If no predicate is supplied, the mask is
264 implicitly all 1s: every element operation is allowed.
265
266 function op_add(RT, RA, RB) # add not VADD!
267 int id=0, irs1=0, irs2=0;
268 predval = get_pred_val(FALSE, rd);
269 for i = 0 to VL-1:
270 if (predval & 1<<i) # predication bit test
271 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
272 if (!RT.isvec) break;
273 if (RT.isvec) { id += 1; }
274 if (RA.isvec) { irs1 += 1; }
275 if (RB.isvec) { irs2 += 1; }
276
277 The key modification is to skip the creation and storage of the result
278 if the relevant predicate mask bit is clear, but *not the progression
279 through the registers*.
280
281 A particularly interesting case is if the destination is scalar, and the
282 first few bits of the predicate are zero. The loop proceeds to increment
283 the Vector *source* registers until the first nonzero predicate bit is
284 found, whereupon a single *Scalar* result is computed, and *then* the loop
285 exits.
286 This in effect uses the predicate to perform *Vector source indexing*.
287 This case was not possible without the predicate mask. Also, interestingly,
288 the predicate mode `1<<r3` is specifically provided as a way to select
289 one single entry from a Vector.
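
The Vector source indexing behaviour described above can be demonstrated with a small software model (illustrative only; register tags are again modelled as `(number, isvec)` tuples):

```python
# Sketch of the single-predicated loop: a scalar destination plus
# leading predicate zeros performs Vector *source indexing*.
# Illustrative model only, not the real implementation.
GPR = [0] * 128

def op_add_pred(RT, RA, RB, VL, predval):
    id = irs1 = irs2 = 0
    for i in range(VL):
        if predval & (1 << i):           # predication bit test
            GPR[RT[0] + id] = GPR[RA[0] + irs1] + GPR[RB[0] + irs2]
            if not RT[1]:
                break                    # scalar destination: first hit wins
        if RT[1]: id += 1                # registers progress regardless
        if RA[1]: irs1 += 1
        if RB[1]: irs2 += 1

GPR[8:12] = [1, 2, 3, 4]                 # vector source at r8
GPR[16:20] = [10, 20, 30, 40]            # vector source at r16
# predicate 0b0100: element 2 is the first active element, so the
# scalar destination r4 receives element 2 of each source
op_add_pred((4, False), (8, True), (16, True), VL=4, predval=0b0100)
```

The two leading predicate zeros advance the source indices without writing anything, so the single scalar result is `3 + 30`.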
290
291 If all three registers are marked as Vector then the "traditional"
292 predicated Vector behaviour is provided. Yet, just as before, all other
293 options are still provided, right the way back to the pure-scalar case,
294 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
295
296 Single Predication therefore provides several modes traditionally seen
297 in Vector ISAs:
298
299 * VINSERT: the predicate may be set as a single bit, the sources are
300 scalar and the destination a vector.
301 * VSPLAT (result broadcasting) is provided by making the sources scalar
302 and the destination a vector, and having no predicate set or having
303 multiple bits set.
304 * VSELECT is provided by setting up (at least one of) the sources as a
305 vector, using a single bit in the predicate, and the destination as
306 a scalar.
307
308 All of this capability and coverage without even adding one single actual
309 Vector opcode, let alone 180, 600 or 1,300!
310
311 # Predicate "zeroing" mode
312
313 Sometimes with predication it is ok to leave the masked-out elements
314 alone (not modify the result); however, sometimes it is better to zero the
315 masked-out elements. Zeroing can be combined with bit-wise ORing to build
316 up vectors from multiple predicate patterns: achieving the same combination
317 with non-zeroing requires more mv operations and predicate mask operations.
318 Our pseudocode therefore ends up as follows, to take the enhancement
319 into account:
320
321 function op_add(RT, RA, RB) # add not VADD!
322 int id=0, irs1=0, irs2=0;
323 predval = get_pred_val(FALSE, rd);
324 for i = 0 to VL-1:
325 if (predval & 1<<i) # predication bit test
326 ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
327 if (!RT.isvec) break;
328 else if zeroing: # predicate failed
329 ireg[RT+id] = 0 # set element to zero
330 if (RT.isvec) { id += 1; }
331 if (RA.isvec) { irs1 += 1; }
332 if (RB.isvec) { irs2 += 1; }
333
334 Many Vector systems have either zeroing or non-zeroing: they
335 do not have both. This is because they usually have separate Vector
336 register files. However, SV sits on top of standard register files and
337 consequently there are advantages to both, so both are provided.
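
Zeroing versus non-zeroing can be demonstrated with a small model (illustrative; all three registers are assumed to be Vectors for brevity):

```python
# Sketch of zeroing vs non-zeroing predication: masked-out elements
# are either zeroed or left untouched.  Illustrative model only.
GPR = [0] * 128

def op_add_vec(RT, RA, RB, VL, predval, zeroing):
    for i in range(VL):
        if predval & (1 << i):
            GPR[RT + i] = GPR[RA + i] + GPR[RB + i]
        elif zeroing:
            GPR[RT + i] = 0   # predicate failed: zero the element
        # non-zeroing: the element is simply left unmodified

GPR[4:8] = [99, 99, 99, 99]   # pre-existing destination contents
GPR[8:12] = [1, 2, 3, 4]
GPR[16:20] = [10, 20, 30, 40]
op_add_vec(4, 8, 16, VL=4, predval=0b0101, zeroing=True)
```

With `zeroing=False` the masked elements would retain their original value of 99 instead of being cleared.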
338
339 # Element Width overrides <a name="elwidths"></a>
340
341 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
342 bit integer operations, and IEEE754 FP32 and FP64. Often also included
343 are FP16 and, more recently, BF16. The *really* good Vector ISAs have
344 variable-width vectors right down to bit level, and as high as 1024 bit
345 arithmetic per element, as well as IEEE754 FP128.
346
347 SV has an "override" system that *changes* the bitwidth of operations
348 that were intended by the original scalar ISA designers to have (for
349 example) 64 bit operations (only). The override widths are 8, 16 and
350 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
351 the future).
352
353 This presents a particularly intriguing conundrum, given that the OpenPOWER
354 Scalar ISA was never designed with, for example, 8 bit operations in mind,
355 let alone Vectors of 8 bit elements.
356
357 The solution comes from rethinking the definition of a Register
358 File. The typical regfile may be considered to be a multi-ported SRAM
359 block, 64 bits wide and usually 32 entries deep, giving 32 64-bit
360 registers. In C this would be:
361
362 typedef uint64_t reg_t;
363 reg_t int_regfile[32]; // standard scalar 32x 64bit
364
365 Conceptually, to get our variable element width vectors,
366 we may think of the regfile as instead being the following C-based data
367 structure, where all types uint16_t etc. are in little-endian order:
368
369 #pragma pack(1)
370 typedef union {
371 uint8_t actual_bytes[8];
372 uint8_t b[0]; // array of type uint8_t
373 uint16_t s[0]; // array of LE ordered uint16_t
374 uint32_t i[0];
375 uint64_t l[0]; // default OpenPOWER ISA uses this
376 } reg_t;
377
378 reg_t int_regfile[128]; // SV extends to 128 regs
379
380 This means that Vector elements start at the location specified by a 64 bit
381 "register" number, but that from that location onwards the elements *overlap
382 subsequent registers*.
383
384 ![image](/svp64-primer/img/svp64_regs.svg){ width=40% }
385
386 Here is another way to view the same concept, bearing in mind that
387 a LE memory order is assumed:
388
389 uint8_t reg_sram[8*128];
390 uint8_t *actual_bytes = &reg_sram[RA*8];
391 if elwidth == 8:
392 uint8_t *b = (uint8_t*)actual_bytes;
393 b[idx] = result;
394 if elwidth == 16:
395 uint16_t *s = (uint16_t*)actual_bytes;
396 s[idx] = result;
397 if elwidth == 32:
398 uint32_t *i = (uint32_t*)actual_bytes;
399 i[idx] = result;
400 if elwidth == default:
401 uint64_t *l = (uint64_t*)actual_bytes;
402 l[idx] = result;
403
404 Starting with all zeros, setting `actual_bytes[2]` in any given `reg_t`
405 to 0x01 would mean that:
406
407 * b[0..1] = 0x00, b[2] = 0x01, and b[3..7] = 0x00
408 * s[0] = 0x0000 and s[1] = 0x0001
409 * i[0] = 0x00010000
410 * l[0] = 0x0000000000010000
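
These overlap rules can be checked with Python's `struct` module, which provides explicit little-endian views of the same 8 bytes (an illustrative model of one `reg_t`, setting byte 2 to 0x01):

```python
# Model one 64-bit register as 8 raw bytes and take LE views of it,
# mirroring the b/s/i/l members of the reg_t union above.
import struct

actual_bytes = bytearray(8)
actual_bytes[2] = 0x01                   # set byte 2 of the register

b = list(actual_bytes)                   # uint8_t view
s = struct.unpack("<4H", actual_bytes)   # LE uint16_t view
i = struct.unpack("<2I", actual_bytes)   # LE uint32_t view
l = struct.unpack("<Q", actual_bytes)    # LE uint64_t view
```

Each wider view sees the same single set byte in the position the union overlay dictates.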
411
412 In tabular form, starting an elwidth=8 loop from r0 and extending for
413 16 elements would begin at r0 and extend over the entirety of r1:
414
415 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
416 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
417 r0 | b[0] | b[1] | b[2] | b[3] | b[4] | b[5] | b[6] | b[7] |
418 r1 | b[8] | b[9] | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |
419
420 Starting an elwidth=16 loop from r0 and extending for
421 7 elements would begin at r0 and extend partly over r1. Note that
422 b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and
423 b1 represents the top byte:
424
425 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
426 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
427 r0 | s[0].b0 b1 | s[1].b0 b1 | s[2].b0 b1 | s[3].b0 b1 |
428 r1 | s[4].b0 b1 | s[5].b0 b1 | s[6].b0 b1 | unmodified |
429
430 Likewise for elwidth=32, and a loop extending for 3 elements. b0 through
431 b3 represent the bytes (numbered lowest for LSB and highest for MSB) within
432 each element word:
433
434 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
435 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
436 r0 | w[0].b0 b1 b2 b3 | w[1].b0 b1 b2 b3 |
437 r1 | w[2].b0 b1 b2 b3 | unmodified unmodified |
438
439 64-bit (default) elements access the full registers. In each case the
440 register number (`RT`, `RA`) indicates the *starting* point for the storage
441 and retrieval of the elements.
442
443 Our simple loop, instead of accessing the array of regfile entries
444 with a computed index `iregs[RT+i]`, would access the appropriate element
445 of the appropriate width, such as `iregs[RT].s[i]` in order to access
446 16 bit elements starting from RT. Thus we have a series of overlapping
447 conceptual arrays that each start at what is traditionally thought of as
448 "a register". It then helps if we have a couple of routines:
449
450 get_polymorphed_reg(reg, bitwidth, offset):
451     reg_t res = 0;
452     if (!reg.isvec): # scalar
453         offset = 0
454     if bitwidth == 8:
455         res = int_regfile[reg].b[offset]
456     elif bitwidth == 16:
457         res = int_regfile[reg].s[offset]
458     elif bitwidth == 32:
459         res = int_regfile[reg].i[offset]
460     elif bitwidth == default: # 64
461         res = int_regfile[reg].l[offset]
462     return res
463
464 set_polymorphed_reg(reg, bitwidth, offset, val):
465 if (!reg.isvec): # scalar
466 offset = 0
467 if bitwidth == 8:
468 int_regfile[reg].b[offset] = val
469 elif bitwidth == 16:
470 int_regfile[reg].s[offset] = val
471 elif bitwidth == 32:
472 int_regfile[reg].i[offset] = val
473 elif bitwidth == default: # 64
474 int_regfile[reg].l[offset] = val
475
476 These basically provide a convenient parameterised way to access the
477 register file, at an arbitrary vector element offset and an arbitrary
478 element width. Our first simple loop thus becomes:
479
480 for i = 0 to VL-1:
481 src1 = get_polymorphed_reg(RA, srcwid, i)
482 src2 = get_polymorphed_reg(RB, srcwid, i)
483 result = src1 + src2 # actual add here
484 set_polymorphed_reg(RT, destwid, i, result)
485
486 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
487 register will contain three 16 bit addition results, and the upper 16
488 bits will be *unaltered*.
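
A software model of the two routines above, using a raw byte array as the register file, demonstrates the "unaltered upper bits" behaviour (names and layout are illustrative, not normative):

```python
# Sketch of get/set_polymorphed_reg over a byte-level register file.
# An elwidth=16 VL=3 add leaves the top 16 bits of the 64-bit
# destination register unaltered.  Illustrative model only.
import struct

regfile = bytearray(8 * 128)          # 128 x 64-bit registers, raw bytes
fmt = {8: "<B", 16: "<H", 32: "<I", 64: "<Q"}

def set_reg(reg, width, offset, val):
    nbytes = width // 8
    addr = reg * 8 + offset * nbytes  # elements overlap later registers
    regfile[addr:addr + nbytes] = struct.pack(fmt[width], val)

def get_reg(reg, width, offset):
    nbytes = width // 8
    addr = reg * 8 + offset * nbytes
    return struct.unpack(fmt[width], regfile[addr:addr + nbytes])[0]

set_reg(4, 64, 0, 0xFFFF_FFFF_FFFF_FFFF)  # pre-fill destination r4
for i in range(3):                        # VL=3, elwidth=16 add
    set_reg(8, 16, i, i + 1)
    set_reg(16, 16, i, 10 * (i + 1))
    set_reg(4, 16, i, get_reg(8, 16, i) + get_reg(16, 16, i))
```

The fourth 16-bit slot of r4 still holds the pre-existing 0xFFFF, exactly the byte-level-store behaviour described above.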
489
490 Note that things such as zero/sign-extension (and predication) have
491 been left out, to illustrate the elwidth concept. Also note that it turns out to be important to perform the operation internally at an effectively *infinite* bitwidth, such that any truncation, rounding errors or
492 other artefacts may all be ironed out. This turns out to be important
493 when applying Saturation for Audio DSP workloads, particularly for multiply and IEEE754 FP rounding. "Infinite" is conceptual only: in reality,
494 the application of the different truncations and width-extensions sets a fixed, deterministic, practical limit on the internal precision needed, on a per-operation basis.
495
496 Other than that, element width overrides, which can be applied to *either*
497 source or destination or both, are pretty straightforward, conceptually.
498 The details, for hardware engineers, involve byte-level write-enable
499 lines, which is exactly what is used on SRAMs anyway. Compiler writers
500 have to alter Register Allocation Tables to byte-level granularity.
501
502 One critical thing to note: upper parts of the underlying 64 bit
503 register are *not zero'd out* by a write involving a non-aligned Vector
504 Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
505 of the destination. The only situation where a full overwrite occurs
506 is with "default" (64-bit) behaviour. It is therefore extremely important to
507 consider the register file as a byte-level store, not a 64-bit-level store.
508
509 ## Why a LE regfile?
510
511 The concept of a regfile where the byte ordering of the underlying
512 SRAM matters seems utter nonsense. Surely a hardware implementation gets to
513 choose the order, right? It's only memory where LE/BE matters, right? The
514 bytes come in, all registers are 64 bit and it's just wiring, right?
515
516 Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray
517 style Vector one. The assumption in that last question was, however, "all
518 registers are 64 bit". SV allows SIMD-style packing of vectors into the
519 64 bit registers, where one instruction and the next may interpret that
520 very same register as containing elements of completely different widths.
521
522 Consequently it becomes critically important to decide on a byte-order.
523 That decision was - arbitrarily - LE mode. Actually, it wasn't arbitrary
524 at all: it was such hell to implement BE-supported interpretations of CRs
525 and LD/ST in LibreSOC, based on a terse spec that provides insufficient
526 clarity and assumes significant working knowledge of OpenPOWER, with
527 arbitrary insertions of 7-index here and 3-bitindex there, that the decision
528 to pick LE was extremely easy.
529
530 Without such a decision, if two words are packed as elements into a 64
531 bit register, what does this mean? Should they be inverted so that the
532 lower indexed element goes into the HI or the LO word? Should the 8
533 bytes of each register be inverted? Should the bytes in each element
534 be inverted? Should the element indexing loop order be broken into
535 discontiguous chunks such as 32107654 rather than 01234567, and if so
536 at what granularity of discontinuity? These are all equally valid and
537 legitimate interpretations of what constitutes "BE", and they all cause
538 merry mayhem.
539
540 The decision was therefore made: the C typedef union is the canonical
541 definition, and its members are defined as being in LE order. From there,
542 implementations may choose whatever internal HDL wire order they like,
543 as long as the results produced conform to the elwidth pseudocode.
544
545 *Note: it turns out that both x86 SIMD and NEON SIMD follow this convention: both are implicitly LE, even though their ISA Manuals may not explicitly spell this out.*
546
547 * <https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Application-Level-Memory-Model/Endian-support/Endianness-in-Advanced-SIMD?lang=en>
548 * <https://stackoverflow.com/questions/24045102/how-does-endianness-work-with-simd-registers>
549 * <https://llvm.org/docs/BigEndianNEON.html>
550
551
552 ## Source and Destination overrides
553
554 A minor fly in the ointment: what happens if the source and destination
555 are over-ridden to different widths? For example, if FP16 arithmetic were
556 performed at only FP16 precision, rounding errors would be introduced into
557 the up-converted FP32 output. The rule is therefore set:
558
559 The operation MUST take place effectively at infinite precision:
560 actual precision determined by the operation and the operand widths
561
562 In pseudocode this is:
563
564 for i = 0 to VL-1:
565 src1 = get_polymorphed_reg(RA, srcwid, i)
566 src2 = get_polymorphed_reg(RB, srcwid, i)
567 opwidth = max(srcwid, destwid) # usually
568 result = op_add(src1, src2, opwidth) # at max width
569 set_polymorphed_reg(RT, destwid, i, result)
570
571 In reality the source and destination widths determine the actual required
572 precision in a given ALU. The reason for specifying "effectively" infinite precision
573 is illustrated by, for example, Saturated-multiply: if the internal precision were insufficient, it would not be possible to determine correctly whether the maximum clip range had been exceeded.
574
575 Thus it turns out that under some conditions the combination of the
576 extension of the source registers followed by truncation of the result
577 gets rid of bits that never mattered, and the operation might as well
578 have taken place at the narrower width, saving resources that way.
579 An example is Logical OR, where the source extension places
580 zeros in the upper bits; the truncation of the result simply throws
581 those zeros away.
582
583 Counterexamples include the previously mentioned FP16 arithmetic,
584 where for operations such as division of large numbers by very small
585 ones it should be clear that internal accuracy will play a major role
586 in influencing the result. Hence the rule that the calculation takes
587 place at the maximum bitwidth, and truncation follows afterwards.
588
589 ## Signed arithmetic
590
591 What happens when the operation involves signed arithmetic? Here the
592 implementor has to use common sense, and make sure behaviour is accurately
593 documented. If the result of the unmodified operation is sign-extended
594 because one of the inputs is signed, then the input source operands must
595 be first read at their overridden bitwidth and *then* sign-extended:
596
597 for i = 0 to VL-1:
598 src1 = get_polymorphed_reg(RA, srcwid, i)
599 src2 = get_polymorphed_reg(RB, srcwid, i)
600 opwidth = max(srcwid, destwid)
601 # sources known to be narrower than the result width
602 src1 = sign_extend(src1, srcwid, opwidth)
603 src2 = sign_extend(src2, srcwid, opwidth)
604 result = op_signed(src1, src2, opwidth) # at max width
605 set_polymorphed_reg(RT, destwid, i, result)
606
607 The key here is that the cues are taken from the underlying operation.
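
The sign-extension rule can be sketched as follows (illustrative only; `opwidth` is fixed at 64 here for simplicity):

```python
# Sketch of elwidth sign-extension: an 8-bit source holding 0xFE (-2)
# is read, sign-extended to the 64-bit operation width, added, and the
# result truncated back to the 8-bit destination width.  Illustrative.
def sign_extend(val, srcwid, opwidth):
    sign = 1 << (srcwid - 1)
    if val & sign:
        # fill the bits between srcwid and opwidth with ones
        val |= ((1 << opwidth) - 1) ^ ((1 << srcwid) - 1)
    return val

src1 = sign_extend(0xFE, 8, 64)   # -2 as an 8-bit value
src2 = sign_extend(0x05, 8, 64)   # +5
result = (src1 + src2) & 0xFF     # truncate to the 8-bit destination
```

Without the sign-extension step the add would have produced 0xFE + 0x05 = 0x103, a completely different (wrong) answer for signed inputs.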
608
609 ## Saturation
610
611 Audio DSPs need to be able to clip sound when the "volume" is adjusted,
612 because if the level is too high and the signal wraps, distortion occurs. The
613 solution is to clip (saturate) the audio and allow this to be detected.
614 In practical terms this is a post-result analysis; however, it needs to
615 take place at the largest bitwidth, i.e. before the result is truncated
616 to the element width. Only then can the arithmetic saturation condition be detected:
617
618 for i = 0 to VL-1:
619 src1 = get_polymorphed_reg(RA, srcwid, i)
620 src2 = get_polymorphed_reg(RB, srcwid, i)
621 opwidth = max(srcwid, destwid)
622 # unsigned add
623 result = op_add(src1, src2, opwidth) # at max width
624 # now saturate (unsigned)
625 sat = min(result, (1<<destwid)-1)
626 set_polymorphed_reg(RT, destwid, i, sat)
627 # set sat overflow
628 if Rc=1:
629 CR[i].ov = (sat != result)
630
631 So the actual computation took place at the larger width, but was
632 post-analysed as an unsigned operation. If however "signed" saturation
633 is requested then the actual arithmetic operation has to be carefully
634 analysed to see what that actually means.
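
The unsigned saturation sequence above can be modelled as follows (an illustrative sketch, not the normative definition):

```python
# Sketch of unsigned saturation at the wider width before truncation:
# the add happens at the (wider) operation width, then the result is
# clamped to the destination range and the overflow condition detected.
def sat_add_unsigned(src1, src2, destwid):
    result = src1 + src2                   # at the max (wider) width
    sat = min(result, (1 << destwid) - 1)  # clamp to destination range
    return sat, sat != result              # value, saturation flag

sat, ov = sat_add_unsigned(200, 100, 8)    # 300 clips to 255
```

Because the add was performed at the wider width, the clip (and the overflow flag) is detected correctly; at 8 bits the sum would already have wrapped to 44 and the condition would be lost.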
635
636 In terms of FP arithmetic, which by definition has a sign bit (and so
637 always takes place as a signed operation anyway), the request to saturate
638 to signed min/max is pretty clear. However for integer operations such
639 as shift (plain shift, not arithmetic shift), or logical operations
640 such as XOR, which were never designed with the assumption that their
641 inputs be considered as signed numbers, common sense has to kick in,
642 and follow what CR0 does.
643
644 CR0 for Logical operations still applies: the test is still applied to
645 produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may
646 do the same thing: although the inputs to an OR or XOR can
647 in no way be thought of as "signed", we may at least consider the result
648 to be signed, and thus apply min/max range detection of -128 to +127 when
649 truncating down to 8 bit, for example.
650
651 for i = 0 to VL-1:
652 src1 = get_polymorphed_reg(RA, srcwid, i)
653 src2 = get_polymorphed_reg(RB, srcwid, i)
654 opwidth = max(srcwid, destwid)
655 # logical op, signed has no meaning
656 result = op_xor(src1, src2, opwidth)
657 # now saturate (signed)
658 sat = min(result, (1<<(destwid-1))-1)
659 sat = max(sat, -(1<<(destwid-1)))
660 set_polymorphed_reg(RT, destwid, i, sat)
661
Overall here the rule is: apply common sense, then document the behaviour
really clearly, for each and every operation.
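
The signed clamp step can be sketched in Python (a minimal model, assuming
`destwid` is the destination element width in bits; `saturate_signed` is an
illustrative helper, not part of SV):

```python
def saturate_signed(result, destwid):
    # clamp a full-width result into the signed range of a
    # destwid-bit element: [-(2^(destwid-1)), 2^(destwid-1) - 1]
    lo = -(1 << (destwid - 1))
    hi = (1 << (destwid - 1)) - 1
    return max(lo, min(hi, result))

# a full-width result, truncated down to 8 bits with saturation
assert saturate_signed(499, 8) == 127    # clamps to +127
assert saturate_signed(-300, 8) == -128  # clamps to -128
assert saturate_signed(5, 8) == 5        # in range: unchanged
```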

# Quick recap so far

The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which constitutes approximately 20% of all
GPU workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tiz when adding "Multimedia"
acceleration aka "SIMD" instructions.

Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as are Vector "reduce" (mapreduce) capability
and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication; yet to be covered are fail-on-first, CR-based result
predication, Subvectors and Swizzle.

## SUBVL <a name="subvl"></a>

Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Predication is still taken from the VL index, however it
is applied to the whole subvector:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

The primary reason for this is that Shader Compilers treat vec2/3/4 as
"single units". Recognising this in hardware is just sensible.
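
A simplified Python model of the inner subvector loop (all-vector operands
only, with one predicate bit covering an entire vec2/3/4; the names here are
illustrative, not from the spec):

```python
def subvl_add(ireg, RT, RA, RB, VL, SUBVL, predval):
    # outer loop over VL, inner loop over the subvector;
    # a single predicate bit covers the whole subvector
    for i in range(VL):
        if predval & (1 << i):
            for s in range(SUBVL):
                idx = i * SUBVL + s
                ireg[RT + idx] = ireg[RA + idx] + ireg[RB + idx]

regs = list(range(32))
subvl_add(regs, 0, 8, 16, VL=2, SUBVL=2, predval=0b11)
# element 0: r0,r1 = r8+r16, r9+r17; element 1: r2,r3 = r10+r18, r11+r19
assert regs[0:4] == [24, 26, 28, 30]
```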

# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring time-consuming scalar mv instructions
(scalar due to the convoluted offsets).

Swizzling does not just do permutations: it allows arbitrary selection
and multiple copying of vec2/3/4 elements. Specifying XXXZ as the source
operand, for example, takes three copies of the vec4 first element
(vec4[0]), placing them at positions vec4[0], vec4[1] and vec4[2], whilst
the "Z" element (vec4[2]) is copied into vec4[3].

With somewhere between 10% and 30% of operations in 3D Shaders involving
swizzle, this is a huge saving, and it reduces pressure on register files
by removing the significant numbers of mv operations otherwise needed
to get vector elements to "line up".

In SV, given the percentage of operations that also involve initialisation
of subvector elements to 0.0 or 1.0, the decision was made to include
those as well:

    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap == 0b000: continue # skip
        if remap == 0b001: break # end marker
        if remap == 0b010: ireg[rd+s] <= 0.0 # constant 0
        elif remap == 0b011: ireg[rd+s] <= 1.0 # constant 1
        else: # XYZW
            sm = id*SUBVL + (remap-4)
            ireg[rd+s] <= ireg[RA+sm]

Note that a value of 0b000 will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built-in,
in immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needed to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These
are sufficiently unusual, and the immediate opcode space required so
large (12 bits per vec4 source), that the tradeoff balance in SV came
down on the side of adding only mv.swizzle.
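
A Python sketch of the selector encoding described above (the 3-bit values
are taken from the pseudocode; the function name is illustrative):

```python
def swizzle_vec4(src, selectors):
    # one 3-bit selector per destination lane:
    # 0b000 skip, 0b001 end, 0b010 const 0.0, 0b011 const 1.0,
    # 0b100..0b111 select source X/Y/Z/W
    dst = [None] * len(selectors)
    for s, remap in enumerate(selectors):
        if remap == 0b000:
            continue            # leave lane untouched
        if remap == 0b001:
            break               # end marker
        if remap == 0b010:
            dst[s] = 0.0
        elif remap == 0b011:
            dst[s] = 1.0
        else:
            dst[s] = src[remap - 4]
    return dst

xyzw = [1.5, 2.5, 3.5, 4.5]
# "XXXZ": three copies of X, then Z, as in the example above
assert swizzle_vec4(xyzw, [0b100, 0b100, 0b100, 0b110]) == [1.5, 1.5, 1.5, 3.5]
```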

# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break

Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally all that VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, to LD/ST operations, and even to `rlwinm` and other operations
taking a single source and immediate(s), such as `addi`. All of these
are termed single-source, single-destination.

LDST Address-generation, or AGEN, is a special case of single source,
because elwidth overriding does not make sense to apply to the computation
of the 64 bit address itself, but it *does* make sense to apply elwidth
overrides to the data being accessed *at* that memory address.

It also turns out that by using a single bit set in the source or
destination, *all* the sequential ordered standard patterns of Vector
ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

The only one missing from the list here, because it is non-sequential,
is VGATHER (and VSCATTER): moving registers by specifying a vector of
register indices (`regs[rd] = regs[regs[rs]]` in a loop). This one is
tricky because it typically does not exist in standard scalar ISAs.
If it did it would be called [[sv/mv.x]]. Once Vectorised, it's a
VGATHER/VSCATTER.
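
The compress/expand behaviour can be modelled in Python (a sketch only;
bounds checks are added here which the pseudocode above elides):

```python
def twin_pred_mv(reg, RS, RT, VL, ps, pd):
    # skip to the next source predicate bit (compress) and the next
    # destination predicate bit (expand), then copy one element
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps >> i) & 1:
            i += 1
        while j < VL and not (pd >> j) & 1:
            j += 1
        if i == VL or j == VL:
            break
        reg[RT + j] = reg[RS + i]
        i += 1
        j += 1

r = [10, 11, 12, 13, 0, 0, 0, 0]
# source mask 0b1010 selects elements 1 and 3; an all-ones dest mask
# packs them into dest slots 0 and 1: a VCOMPRESS
twin_pred_mv(r, 0, 4, 4, ps=0b1010, pd=0b1111)
assert r[4:6] == [11, 13]
```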

# CR predicate result analysis

OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, to test it for being greater than, less than, or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if RC1 or Rc=1: crregs[offs+i] = CRnew
        if RC1: continue # RC1 mode skips result store
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result

Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile (when RC1 mode is not set). In RC1 mode the CR is always
stored, but the result never is. This effectively turns every arithmetic
operation into a type of `cmp` instruction.
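
A Python sketch of this "every op is also a cmp" behaviour (illustrative
names; the BO-style test is reduced here to a simple predicate function on
the new CR):

```python
def predicate_result(a, b, op, test, dest):
    # every element's CR analysis is always produced; the result is
    # stored only for elements whose CR passes the test
    crs = []
    for i in range(len(a)):
        result = op(a[i], b[i])
        cr = {"eq": result == 0, "lt": result < 0, "gt": result > 0}
        crs.append(cr)
        if test(cr):
            dest[i] = result
    return crs

dest = [None] * 4
crs = predicate_result([5, 2, 7, 1], [3, 6, 1, 1],
                       lambda x, y: x - y,
                       lambda cr: cr["gt"],   # store only positive results
                       dest)
assert dest == [2, None, 6, None]
assert [cr["gt"] for cr in crs] == [True, False, True, False]
```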

Here, for example, if FP overflow occurred and the CR testing was carried
out for that condition, all valid results would be stored but invalid
ones would not; in addition, the Vector of CRs would contain the
indicators of which elements failed. With the invalid results simply
not being written, this saves resources (register file writes).

Savings are also expected in some types of operations, because the
predicate mask is effectively ANDed with the post-result analysis as a
secondary type of predication. Without post-result analysis included in
SV, such operations would need a second predicate calculation followed
by a predicate-mask AND operation.

Note, hilariously, that Vectorised Condition Register Operations (crand,
cror) may also have post-result analysis applied to them. With Vectors
of CRs being utilised *for* predication, possibilities for compact and
elegant code begin to emerge from this innocuous-looking addition to SV.

# Exception-based Fail-on-first

One of the major issues with Vectorised LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up by in-flight data, a large Vector LD being cancelled
or unable to roll back is either a detriment to performance or can cause
data corruption.

What if, then, rather than cancel an entire Vector LD because the last
operation would cause a page fault, the Vector were instead truncated
to the last successful element?

This is called "fail-on-first". Here is strncpy, illustrated in RVV:

    strncpy:
        c.mv a3, a0           # Copy dst
    loop:
        setvli x0, a2, vint8  # Vectors of bytes.
        vlbff.v v1, (a1)      # Get src bytes
        vseq.vi v0, v1, 0     # Flag zero bytes
        vmfirst a4, v0        # Zero found?
        vmsif.v v0, v0        # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t  # Write out bytes
        c.bgez a4, exit       # Done
        csrr t1, vl           # Get number of bytes fetched
        c.add a1, a1, t1      # Bump src pointer
        c.sub a2, a2, t1      # Decrement count.
        c.add a3, a3, t1      # Bump dst pointer
        c.bnez a2, loop       # Anymore?
    exit:
        c.ret

Vector Length VL is truncated inherently at the first page-faulting
byte-level LD. Otherwise, with more powerful hardware the number of
elements LOADed from memory could be dozens to hundreds or greater
(memory bandwidth permitting).

With VL truncated, the analysis looking for the zero byte and the
subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the
knowledge that every byte loaded in the Vector is valid. Implementors are
even permitted to "adapt" VL, truncating it early so that, for example,
subsequent iterations of loops will have LD/STs on aligned boundaries.
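
The VL-truncation behaviour can be modelled in Python (a sketch with a
hypothetical flat memory, where a single "page limit" address stands in
for the fault condition):

```python
def ld_ffirst(mem, addr, VL, page_limit):
    # load elements until the first would-be fault; VL is truncated
    # to the number of elements successfully loaded
    vals = []
    for i in range(VL):
        if addr + i >= page_limit:
            break                    # first fault: stop, do not trap
        vals.append(mem[addr + i])
    return vals, len(vals)           # data, truncated VL

mem = list(range(100))
# request 8 bytes starting at 60, but the "page" ends at 64
vals, vl = ld_ffirst(mem, 60, VL=8, page_limit=64)
assert vl == 4
assert vals == [60, 61, 62, 63]
```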

SIMD strncpy hand-written assembly routines are, to be blunt about it,
a total nightmare. 240 instructions is not uncommon, and the worst
thing about them is that they are unable to cope with detection of a
page fault condition.

Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>

# Data-dependent fail-first

This is a minor variant on the CR-based predicate-result mode. Where
pred-result continues with independent element testing (any of which may
be parallelised), data-dependent fail-first *stops* at the first failure:

    if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew

This is particularly useful, again, for FP operations that might overflow,
where it is desirable to end the loop early, but also desirable to
complete at least those operations that were okay (passed the test),
without having to slow down execution by adding extra instructions
that test for the possibility of failure in advance of doing the
actual calculation.

The only minor downside here is the change to VL, which in some
implementations may cause pipeline stalls. This was one of the reasons
why CR-based pred-result analysis was added: that mode at least is
entirely parallelisable.
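
Data-dependent fail-first can be put in Python form (a sketch; the CR test
is reduced here to a simple predicate on each result):

```python
def ddffirst(srcs, op, test, VL):
    # stop at the first element whose result fails the test,
    # truncating VL so that only successes are committed
    results = []
    for i in range(VL):
        r = op(srcs[i])
        if not test(r):
            VL = i          # truncate VL at the first failure
            break
        results.append(r)   # test passed: commit result
    return results, VL

# halve each element, failing at the first non-integral result
res, vl = ddffirst([4, 8, 5, 6], lambda x: x / 2,
                   lambda r: r == int(r), 4)
assert (res, vl) == ([2.0, 4.0], 2)
```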

# Vertical-First Mode

This is a relatively new addition to SVP64, under development as of
July 2021. Where Horizontal-First is the standard Cray-style for-loop,
Vertical-First typically executes just the **one** scalar element
in each Vectorised operation. That element is selected by srcstep
and dststep, *neither of which are changed as a side-effect of execution*.
This is illustrated below with a branch/loop.
To create loops, a new instruction `svstep` must be called,
explicitly, with Rc=1:

```
loop:
    sv.addi r0.v, r8.v, 5 # GPR(0+dststep) = GPR(8+srcstep) + 5
    sv.addi r0.v, r8, 5   # GPR(0+dststep) = GPR(8        ) + 5
    sv.addi r0, r8.v, 5   # GPR(0        ) = GPR(8+srcstep) + 5
    svstep.               # srcstep++, dststep++, CR0.eq = srcstep==VL
    beq loop
```

Three examples are illustrated of different types of Scalar-Vector
operations. Note that in its simplest form **only one** element is
executed per instruction, **not** multiple elements per instruction.
(The more advanced version of Vertical-First mode may execute multiple
elements per instruction, however the number executed **must** remain
a fixed quantity.)

Now that such explicit loops can increment inexorably towards VL,
we need a way to test whether srcstep or dststep have reached
VL. This is achieved in one of two ways: [[sv/svstep]] has an Rc=1 mode
where CR0 will be updated if VL is reached, and a standard v3.0B Branch
Conditional may rely on that. Alternatively, the number of elements
may be transferred into CTR, as is standard practice in Power ISA:
SVP64 [[sv/branches]] have a mode which allows CTR to be decremented
by the number of vertical elements executed.
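
Vertical-First stepping can be modelled in Python (a sketch under the
assumption that `svstep.` signals loop exit when srcstep reaches VL; the
class and names are illustrative):

```python
class VFState:
    # models srcstep/dststep, which advance only via svstep
    def __init__(self, VL):
        self.VL, self.srcstep, self.dststep = VL, 0, 0
    def svstep(self):
        self.srcstep += 1
        self.dststep += 1
        return self.srcstep == self.VL   # models the CR0.eq test

regs = list(range(32))
st = VFState(VL=4)
while True:
    # sv.addi r0.v, r8.v, 5 -- ONE element per pass through the loop
    regs[0 + st.dststep] = regs[8 + st.srcstep] + 5
    if st.svstep():   # exit once srcstep reaches VL
        break
assert regs[0:4] == [13, 14, 15, 16]
```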

# Instruction format

Whilst this overview shows the internals, it does not go into detail
on the actual instruction format itself. There are a couple of reasons
for this: firstly, it's under development, and secondly, it needs to be
proposed to the OpenPOWER Foundation ISA WG for consideration and review.

That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
The `setvl` instruction is pretty much as would be expected from a
Cray-style VL instruction: the only differences being that, firstly,
the MAXVL (Maximum Vector Length) has to be specified, because that
determines - precisely - how many of the *scalar* registers are to be
used for a given Vector. Secondly: within the limit of MAXVL, VL is
required to be set to the requested value. By contrast, RVV systems
permit the hardware to set arbitrary values of VL.

The other key question is of course: what's the actual instruction format,
and what's in it? Bearing in mind that this requires OPF review, the
current draft is at the [[sv/svp64]] page, and includes space for all the
different modes, the predicates, element width overrides, SUBVL and the
register extensions, in 24 bits. This just about fits into an OpenPOWER
v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space.
The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction -
aligns perfectly with SV.

Further reading is at the main [[SV|sv]] page.

# Conclusion

Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
par with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectoriseable scalar instructions
([[sv/mv.x]] amongst them).

What is particularly cool about the SV concept is that custom extensions
and research need not be concerned about inventing new Vector instructions
and how to get them to interact with the Scalar ISA: they are effectively
one and the same. Any new instruction added at the Scalar level is
inherently and automatically Vectorised, following some simple rules.