1 # SV Overview
2
3 **SV is in DRAFT STATUS**. SV has not yet been submitted to the OpenPOWER Foundation ISA WG for review.
4
5 This document provides an overview and introduction as to why SV (a
6 [[!wikipedia Cray]]-style Vector augmentation to [[!wikipedia OpenPOWER]]) exists, and how it works.
7
8 **Sponsored by NLnet under the Privacy and Enhanced Trust Programme**
9
10 Links:
11
12 * This page: [http://libre-soc.org/openpower/sv/overview](http://libre-soc.org/openpower/sv/overview)
13 * [FOSDEM2021 SimpleV for OpenPOWER](https://fosdem.org/2021/schedule/event/the_libresoc_project_simple_v_vectorisation/)
14 * FOSDEM2021 presentation <https://www.youtube.com/watch?v=FS6tbfyb2VA>
15 * [[discussion]] and
16 [bugreport](https://bugs.libre-soc.org/show_bug.cgi?id=556)
17 feel free to add comments, questions.
18 * [[SV|sv]]
19 * [[sv/svp64]]
20 * [x86 REP instruction](https://c9x.me/x86/html/file_module_x86_id_279.html):
21 a useful way to quickly understand that the core of the SV concept
22 is not new.
23 * [Article about register tagging](http://science.lpnu.ua/sites/default/files/journal-paper/2019/jul/17084/volum3number1text-9-16_1.pdf) showing
24 that tagging is not a new idea either. Register tags
25 are also used in the Mill Architecture.
26
27
28 [[!toc]]
29
30 # Introduction: SIMD and Cray Vectors
31
32 SIMD, the primary method for easy parallelism of the
33 past 30 years in Computer Architectures, is
34 [known to be harmful](https://www.sigarch.org/simd-instructions-considered-harmful/).
35 SIMD provides a seductive simplicity that is easy to implement in
36 hardware. With each doubling in width it promises increases in raw
37 performance without the complexity of either multi-issue or out-of-order
38 execution.
39
40 Unfortunately, even with predication added, SIMD only becomes more and
41 more problematic with each power of two SIMD width increase introduced
42 through an ISA revision. The opcode proliferation, at O(N^6), inexorably
43 spirals out of control in the ISA, detrimentally impacting the hardware,
44 the software, the compilers and the testing and compliance. Here are
45 the typical dimensions that result in such massive proliferation:
46
47 * Operation (add, mul)
48 * bitwidth (8, 16, 32, 64, 128)
49 * Conversion between bitwidths (FP16-FP32-64)
50 * Signed/unsigned
51 * HI/LO swizzle (Audio L/R channels)
52 - HI/LO selection on src 1
53 - selection on src 2
54 - selection on dest
55 - Example: AndesSTAR Audio DSP
56 * Saturation (Clamping at max range)
57
These dimensions typically multiply up to produce explicit opcodes numbering
in the thousands on, for example, the ARC Video/DSP cores.
60
Cray-style variable-length Vectors, on the other hand, result in
stunningly elegant and small loops, with exceptionally high data
throughput per instruction (one *or more* orders of magnitude greater
than SIMD), and none of the alarmingly large setup and cleanup code.
At the hardware level the microarchitecture may execute anywhere from
one element right the way through to tens of thousands at a time, yet
the executable remains exactly the same and the ISA remains clear, true
to the RISC paradigm, and clean. Unlike in SIMD, powers-of-two
limitations are not involved in the ISA or in the assembly code.
70
SimpleV takes the Cray-style Vector principle and applies it in the
abstract to a Scalar ISA, in the same way that the x86 "REP" instruction
does. In the process, "context" is applied, allowing amongst other
things a register file size increase using "tagging" (similar to how
x86 originally extended registers from 32 to 64 bit).
76
77 ## SV
78
79 The fundamentals are (just like x86 "REP"):
80
81 * The Program Counter (PC) gains a "Sub Counter" context (Sub-PC)
82 * Vectorisation pauses the PC and runs a Sub-PC loop from 0 to VL-1
83 (where VL is Vector Length)
84 * The [[Program Order]] of "Sub-PC" instructions must be preserved,
85 just as is expected of instructions ordered by the PC.
86 * Some registers may be "tagged" as Vectors
87 * During the loop, "Vector"-tagged register are incremented by
88 one with each iteration, executing the *same instruction*
89 but with *different registers*
90 * Once the loop is completed *only then* is the Program Counter
91 allowed to move to the next instruction.
92
93 [[!img "svp64-primer/img/power_pipelines.svg" ]]
94
95 Hardware (and simulator) implementors are free and clear to implement this
96 as literally a for-loop, sitting in between instruction decode and issue.
97 Higher performance systems may deploy SIMD backends, multi-issue and
98 out-of-order execution, although it is strongly recommended to add
99 predication capability directly into SIMD backend units.
100
101 In OpenPOWER ISA v3.0B pseudo-code form, an ADD operation, assuming both
102 source and destination have been "tagged" as Vectors, is simply:
103
    for i = 0 to VL-1:
        GPR(RT+i) = GPR(RA+i) + GPR(RB+i)
106
107 At its heart, SimpleV really is this simple. On top of this fundamental
108 basis further refinements can be added which build up towards an extremely
109 powerful Vector augmentation system, with very little in the way of
110 additional opcodes required: simply external "context".
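The fundamental loop above can be written out as a short executable
sketch (Python standing in for the pseudocode; the register numbers and
VL chosen here are purely illustrative, not part of any real encoding):

```python
# Minimal executable sketch of the fundamental SV loop: the same ADD,
# repeated VL times with incrementing register numbers.
def sv_add(gpr, RT, RA, RB, VL):
    for i in range(VL):
        gpr[RT + i] = gpr[RA + i] + gpr[RB + i]

gpr = [0] * 32
gpr[8:11] = [1, 2, 3]        # three source elements starting at r8 (RA)
gpr[16:19] = [10, 20, 30]    # three source elements starting at r16 (RB)
sv_add(gpr, RT=24, RA=8, RB=16, VL=3)
# gpr[24:27] now holds [11, 22, 33]
```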
111
x86 was originally only 80 instructions: prior to AVX512, over 1,300
additional instructions had been added, almost all of them SIMD.
114
RISC-V RVV as of version 0.9 is over 188 instructions (more than the
rest of RV64G combined: 80 for RV64G and 27 for C). Over 95% of that
functionality is added to OpenPOWER v3.0B, by SimpleV augmentation,
with around 5 to 8 instructions.
119
120 Even in OpenPOWER v3.0B, the Scalar Integer ISA is around 150
121 instructions, with IEEE754 FP adding approximately 80 more. VSX, being
122 based on SIMD design principles, adds somewhere in the region of 600 more.
123 SimpleV again provides over 95% of VSX functionality, simply by augmenting
124 the *Scalar* OpenPOWER ISA, and in the process providing features such
125 as predication, which VSX is entirely missing.
126
127 AVX512, SVE2, VSX, RVV, all of these systems have to provide different
128 types of register files: Scalar and Vector is the minimum. AVX512
129 even provides a mini mask regfile, followed by explicit instructions
130 that handle operations on each of them *and map between all of them*.
SV not only uses the existing scalar regfiles (including CRs), but,
because operations already exist within OpenPOWER to cover interactions
between the scalar regfiles (`mfcr`, `fcvt`), there is very little that
needs to be added.
135
136 In fairness to both VSX and RVV, there are things that are not provided
137 by SimpleV:
138
139 * 128 bit or above arithmetic and other operations
140 (VSX Rijndael and SHA primitives; VSX shuffle and bitpermute operations)
141 * register files above 128 entries
142 * Vector lengths over 64
143 * Unit-strided LD/ST and other comprehensive memory operations
144 (struct-based LD/ST from RVV for example)
145 * 32-bit instruction lengths. [[svp64]] had to be added as 64 bit.
146
147 These limitations, which stem inherently from the adaptation process of
148 starting from a Scalar ISA, are not insurmountable. Over time, they may
149 well be addressed in future revisions of SV.
150
151 The rest of this document builds on the above simple loop to add:
152
153 * Vector-Scalar, Scalar-Vector and Scalar-Scalar operation
154 (of all register files: Integer, FP *and CRs*)
155 * Traditional Vector operations (VSPLAT, VINSERT, VCOMPRESS etc)
156 * Predication masks (essential for parallel if/else constructs)
157 * 8, 16 and 32 bit integer operations, and both FP16 and BF16.
158 * Compacted operations into registers (normally only provided by SIMD)
159 * Fail-on-first (introduced in ARM SVE2)
160 * A new concept: Data-dependent fail-first
161 * Condition-Register based *post-result* predication (also new)
162 * A completely new concept: "Twin Predication"
163 * vec2/3/4 "Subvectors" and Swizzling (standard fare for 3D)
164
165 All of this is *without modifying the OpenPOWER v3.0B ISA*, except to add
166 "wrapping context", similar to how v3.1B 64 Prefixes work.
167
168 # Adding Scalar / Vector
169
170 The first augmentation to the simple loop is to add the option for all
171 source and destinations to all be either scalar or vector. As a FSM
172 this is where our "simple" loop gets its first complexity.
173
    function op_add(RT, RA, RB)  # add not VADD!
        int id=0, irs1=0, irs2=0;
        for i = 0 to VL-1:
            ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
            if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
182
183 This could have been written out as eight separate cases: one each for
184 when each of `RA`, `RB` or `RT` is scalar or vector. Those eight cases,
185 when optimally combined, result in the pseudocode above.
186
187 With some walkthroughs it is clear that the loop exits immediately
188 after the first scalar destination result is written, and that when the
189 destination is a Vector the loop proceeds to fill up the register file,
190 sequentially, starting at `RT` and ending at `RT+VL-1`. The two source
191 registers will, independently, either remain pointing at `RB` or `RA`
192 respectively, or, if marked as Vectors, will march incrementally in
193 lockstep, producing element results along the way, as the destination
194 also progresses through elements.
195
196 In this way all the eight permutations of Scalar and Vector behaviour
197 are covered, although without predication the scalar-destination ones are
198 reduced in usefulness. It does however clearly illustrate the principle.
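The walkthrough above can be checked directly with an executable sketch
of the same loop (Python as pseudocode; an operand is modelled here as
an illustrative `(regnum, isvec)` pair, not a real encoding):

```python
# Executable sketch of the scalar/vector permutation loop: each operand
# is a (register number, isvec) pair; vector operands march forward,
# scalar operands stay put, and a scalar destination exits after one write.
def op_add(ireg, RT, RA, RB, VL):
    id = irs1 = irs2 = 0
    for i in range(VL):
        ireg[RT[0] + id] = ireg[RA[0] + irs1] + ireg[RB[0] + irs2]
        if not RT[1]:
            break                 # scalar destination: one result only
        if RT[1]: id += 1
        if RA[1]: irs1 += 1
        if RB[1]: irs2 += 1

ireg = list(range(32))            # regfile pre-filled with 0..31
# vector destination, scalar RA, vector RB: RT+i = RA + (RB+i)
op_add(ireg, RT=(16, True), RA=(4, False), RB=(8, True), VL=3)
# ireg[16:19] == [4+8, 4+9, 4+10] == [12, 13, 14]
```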
199
200 Note in particular: there is no separate Scalar add instruction and
201 separate Vector instruction and separate Scalar-Vector instruction, *and
202 there is no separate Vector register file*: it's all the same instruction,
203 on the standard register file, just with a loop. Scalar happens to set
204 that loop size to one.
205
206 The important insight from the above is that, strictly speaking, Simple-V
207 is not really a Vectorisation scheme at all: it is more of a hardware
208 ISA "Compression scheme", allowing as it does for what would normally
209 require multiple sequential instructions to be replaced with just one.
210 This is where the rule that Program Order must be preserved in Sub-PC
211 execution derives from. However in other ways, which will emerge below,
212 the "tagging" concept presents an opportunity to include features
213 definitely not common outside of Vector ISAs, and in that regard it's
214 definitely a class of Vectorisation.
215
216 ## Register "tagging"
217
218 As an aside: in [[sv/svp64]] the encoding which allows SV to both extend
219 the range beyond r0-r31 and to determine whether it is a scalar or vector
220 is encoded in two to three bits, depending on the instruction.
221
222 The reason for using so few bits is because there are up to *four*
223 registers to mark in this way (`fma`, `isel`) which starts to be of
224 concern when there are only 24 available bits to specify the entire SV
225 Vectorisation Context. In fact, for a small subset of instructions it
226 is just not possible to tag every single register. Under these rare
227 circumstances a tag has to be shared between two registers.
228
229 Below is the pseudocode which expresses the relationship which is usually
230 applied to *every* register:
231
    if extra3_mode:
        spec = EXTRA3        # bit 2 s/v, 0-1 extends range
    else:
        spec = EXTRA2 << 1   # same as EXTRA3, shifted
    if spec[2]:  # vector
        RA.isvec = True
        return (RA << 2) | spec[0:1]
    else:  # scalar
        RA.isvec = False
        return (spec[0:1] << 5) | RA
242
243 Here we can see that the scalar registers are extended in the top bits,
244 whilst vectors are shifted up by 2 bits, and then extended in the LSBs.
245 Condition Registers have a slightly different scheme, along the same
246 principle, which takes into account the fact that each CR may be bit-level
247 addressed by Condition Register operations.
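The tagging pseudocode above can be exercised as a small Python sketch
(bit positions follow the pseudocode: bit 2 of `spec` is the
scalar/vector flag, bits 0-1 extend the register range; the function
name is illustrative):

```python
# Sketch of the EXTRA register-tagging decode: returns (isvec, regnum).
def decode_reg(RA, spec):
    if spec & 0b100:   # vector: register shifted up, extended in the LSBs
        return True, (RA << 2) | (spec & 0b11)
    else:              # scalar: register extended in the top bits
        return False, ((spec & 0b11) << 5) | RA

# decode_reg(1, 0b000) -> (False, 1): plain scalar r1, range unchanged
# decode_reg(1, 0b100) -> (True, 4):  vector starting at r4
```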
248
249 Readers familiar with OpenPOWER will know of Rc=1 operations that create
250 an associated post-result "test", placing this test into an implicit
251 Condition Register. The original researchers who created the POWER ISA
252 chose CR0 for Integer, and CR1 for Floating Point. These *also become
253 Vectorised* - implicitly - if the associated destination register is
254 also Vectorised. This allows for some very interesting savings on
255 instruction count due to the very same CR Vectors being predication masks.
256
257 # Adding single predication
258
The next step is to add a single predicate mask. This is where it gets
interesting. A predicate mask is a bitvector, each bit specifying, in
order, whether the element operation is to be skipped ("masked out")
or allowed. If there is no predicate, the mask defaults to all 1s:
every element operation is allowed.
264
    function op_add(RT, RA, RB)  # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i)  # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
275
276 The key modification is to skip the creation and storage of the result
277 if the relevant predicate mask bit is clear, but *not the progression
278 through the registers*.
279
280 A particularly interesting case is if the destination is scalar, and the
281 first few bits of the predicate are zero. The loop proceeds to increment
282 the Scalar *source* registers until the first nonzero predicate bit is
283 found, whereupon a single result is computed, and *then* the loop exits.
284 This therefore uses the predicate to perform Vector source indexing.
285 This case was not possible without the predicate mask.
286
287 If all three registers are marked as Vector then the "traditional"
288 predicated Vector behaviour is provided. Yet, just as before, all other
289 options are still provided, right the way back to the pure-scalar case,
290 as if this were a straight OpenPOWER v3.0B non-augmented instruction.
291
292 Single Predication therefore provides several modes traditionally seen
293 in Vector ISAs:
294
295 * VINSERT: the predicate may be set as a single bit, the sources are
296 scalar and the destination a vector.
297 * VSPLAT (result broadcasting) is provided by making the sources scalar
298 and the destination a vector, and having no predicate set or having
299 multiple bits set.
300 * VSELECT is provided by setting up (at least one of) the sources as a
301 vector, using a single bit in the predicate, and the destination as
302 a scalar.
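Two of these modes can be reproduced with an executable sketch of the
predicated loop (Python as pseudocode; a simple copy operation is used
so the effect of the mask is easy to see, and operands are illustrative
`(regnum, isvec)` pairs):

```python
# Predicated element copy: masked-out elements are skipped, but register
# progression continues; a scalar destination exits after its one write.
def sv_copy(ireg, RT, RA, VL, pred):
    id = irs = 0
    for i in range(VL):
        if pred & (1 << i):
            ireg[RT[0] + id] = ireg[RA[0] + irs]
            if not RT[1]:
                break
        if RT[1]: id += 1
        if RA[1]: irs += 1

# VSPLAT: scalar source, vector destination, all mask bits set
ireg = list(range(32))
sv_copy(ireg, RT=(16, True), RA=(4, False), VL=4, pred=0b1111)
# ireg[16:20] == [4, 4, 4, 4]

# VSELECT: vector source, scalar destination, single mask bit
ireg = list(range(32))
sv_copy(ireg, RT=(16, False), RA=(8, True), VL=4, pred=0b0100)
# ireg[16] == ireg[8 + 2] == 10
```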
303
304 All of this capability and coverage without even adding one single actual
305 Vector opcode, let alone 180, 600 or 1,300!
306
307 # Predicate "zeroing" mode
308
Sometimes with predication it is ok to leave the masked-out elements
alone (not modify the result); however sometimes it is better to zero
the masked-out elements. Zeroing can be combined with bit-wise ORing
to build up vectors from multiple predicate patterns: achieving the
same with nonzeroing involves additional mv operations and predicate
mask operations. Our pseudocode therefore ends up as follows, to take
the enhancement into account:
316
    function op_add(RT, RA, RB)  # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i)  # predication bit test
                ireg[RT+id] <= ireg[RA+irs1] + ireg[RB+irs2];
                if (!RT.isvec) break;
            else if zeroing:  # predicate failed
                ireg[RT+id] = 0  # set element to zero
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }
329
Many Vector systems have either zeroing or nonzeroing: they do not have
both. This is because they usually have separate Vector register files.
However SV sits on top of standard register files and consequently
there are advantages to both, so both are provided.
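The difference between the two modes is easy to demonstrate with an
executable sketch (Python as pseudocode; vector-only case for brevity,
register numbers illustrative):

```python
# Predicated add with optional zeroing: masked-out destination elements
# are either left untouched (non-zeroing) or written with zero (zeroing).
def sv_add_pred(ireg, RT, RA, RB, VL, pred, zeroing):
    for i in range(VL):
        if pred & (1 << i):
            ireg[RT + i] = ireg[RA + i] + ireg[RB + i]
        elif zeroing:
            ireg[RT + i] = 0   # masked-out element zeroed
        # non-zeroing: masked-out element left alone

# destination r0..r3 pre-filled with 99, sources at r8 and r12, mask 0b0101
ireg = [99] * 8 + [1, 2, 3, 4] + [10, 20, 30, 40] + [0] * 16
sv_add_pred(ireg, 0, 8, 12, 4, 0b0101, zeroing=False)
# ireg[0:4] == [11, 99, 33, 99]   (holes untouched)
sv_add_pred(ireg, 0, 8, 12, 4, 0b0101, zeroing=True)
# ireg[0:4] == [11, 0, 33, 0]     (holes zeroed)
```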
334
335 # Element Width overrides <a name="elwidths"></a>
336
337 All good Vector ISAs have the usual bitwidths for operations: 8/16/32/64
338 bit integer operations, and IEEE754 FP32 and 64. Often also included
339 is FP16 and more recently BF16. The *really* good Vector ISAs have
340 variable-width vectors right down to bitlevel, and as high as 1024 bit
341 arithmetic per element, as well as IEEE754 FP128.
342
343 SV has an "override" system that *changes* the bitwidth of operations
344 that were intended by the original scalar ISA designers to have (for
345 example) 64 bit operations (only). The override widths are 8, 16 and
346 32 for integer, and FP16 and FP32 for IEEE754 (with BF16 to be added in
347 the future).
348
349 This presents a particularly intriguing conundrum given that the OpenPOWER
350 Scalar ISA was never designed with for example 8 bit operations in mind,
351 let alone Vectors of 8 bit.
352
The solution comes in terms of rethinking the definition of a Register
File. The typical regfile may be considered to be a multi-ported SRAM
block, 64 bits wide and usually 32 entries deep, to give 32 64-bit
registers. In C this would be:

    typedef uint64_t reg_t;
    reg_t int_regfile[32]; // standard scalar 32x 64bit
360
Conceptually, to get our variable element width vectors,
we may think of the regfile as instead being the following C-based data
structure, where all types uint16_t etc. are in little-endian order:

    #pragma pack(1)
    typedef union {
        uint8_t actual_bytes[8];
        uint8_t b[0];  // array of type uint8_t
        uint16_t s[0]; // array of LE ordered uint16_t
        uint32_t i[0];
        uint64_t l[0]; // default OpenPOWER ISA uses this
    } reg_t;

    reg_t int_regfile[128]; // SV extends to 128 regs
375
This means that Vector elements start from a location specified by a 64
bit "register" number, but that from that location onwards the elements
*overlap subsequent registers*.

Here is another way to view the same concept, bearing in mind that a LE
byte order is assumed:
382
    uint8_t reg_sram[8*128];
    uint8_t *actual_bytes = &reg_sram[RA*8];
    if elwidth == 8:
        uint8_t *b = (uint8_t*)actual_bytes;
        b[idx] = result;
    if elwidth == 16:
        uint16_t *s = (uint16_t*)actual_bytes;
        s[idx] = result;
    if elwidth == 32:
        uint32_t *i = (uint32_t*)actual_bytes;
        i[idx] = result;
    if elwidth == default:
        uint64_t *l = (uint64_t*)actual_bytes;
        l[idx] = result;
397
Starting with all zeros, setting `actual_bytes[2]` in any given `reg_t`
to 0x01 would mean that:

* b[0..1] = 0x00, b[2] = 0x01, and b[3..7] = 0x00
* s[0] = 0x0000 and s[1] = 0x0001
* i[0] = 0x00010000
* l[0] = 0x0000000000010000
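A quick check of the LE union semantics with Python's `struct` module
(a sketch standing in for the C union: one 8-byte register with byte 2,
i.e. bit 16, set to 0x01):

```python
# Verify the overlapping LE views of a single 8-byte register.
import struct

reg = bytearray(8)
reg[2] = 0x01                   # actual_bytes[2] = 0x01
b = list(reg)                   # uint8_t view
s = struct.unpack("<4H", reg)   # uint16_t LE view
i = struct.unpack("<2I", reg)   # uint32_t LE view
l = struct.unpack("<Q", reg)    # uint64_t LE view
# b[2] == 0x01, s[1] == 0x0001, i[0] == 0x00010000, l[0] == 0x10000
```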
405
406 In tabular form, starting an elwidth=8 loop from r0 and extending for
407 16 elements would begin at r0 and extend over the entirety of r1:
408
409 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
410 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
411 r0 | b[0] | b[1] | b[2] | b[3] | b[4] | b[5] | b[6] | b[7] |
412 r1 | b[8] | b[9] | b[10] | b[11] | b[12] | b[13] | b[14] | b[15] |
413
414 Starting an elwidth=16 loop from r0 and extending for
415 7 elements would begin at r0 and extend partly over r1. Note that
416 b0 indicates the low byte (lowest 8 bits) of each 16-bit word, and
417 b1 represents the top byte:
418
419 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
420 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
421 r0 | s[0].b0 b1 | s[1].b0 b1 | s[2].b0 b1 | s[3].b0 b1 |
422 r1 | s[4].b0 b1 | s[5].b0 b1 | s[6].b0 b1 | unmodified |
423
424 Likewise for elwidth=32, and a loop extending for 3 elements. b0 through
425 b3 represent the bytes (numbered lowest for LSB and highest for MSB) within
426 each element word:
427
428 | byte0 | byte1 | byte2 | byte3 | byte4 | byte5 | byte6 | byte7 |
429 | ----- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
430 r0 | w[0].b0 b1 b2 b3 | w[1].b0 b1 b2 b3 |
431 r1 | w[2].b0 b1 b2 b3 | unmodified unmodified |
432
433 64-bit (default) elements access the full registers. In each case the
434 register number (`RT`, `RA`) indicates the *starting* point for the storage
435 and retrieval of the elements.
436
437 Our simple loop, instead of accessing the array of regfile entries
438 with a computed index `iregs[RT+i]`, would access the appropriate element
439 of the appropriate width, such as `iregs[RT].s[i]` in order to access
440 16 bit elements starting from RT. Thus we have a series of overlapping
441 conceptual arrays that each start at what is traditionally thought of as
442 "a register". It then helps if we have a couple of routines:
443
    get_polymorphed_reg(reg, bitwidth, offset):
        reg_t res = 0;
        if (!reg.isvec):  # scalar
            offset = 0
        if bitwidth == 8:
            res = int_regfile[reg].b[offset]
        elif bitwidth == 16:
            res = int_regfile[reg].s[offset]
        elif bitwidth == 32:
            res = int_regfile[reg].i[offset]
        elif bitwidth == default:  # 64
            res = int_regfile[reg].l[offset]
        return res

    set_polymorphed_reg(reg, bitwidth, offset, val):
        if (!reg.isvec):  # scalar
            offset = 0
        if bitwidth == 8:
            int_regfile[reg].b[offset] = val
        elif bitwidth == 16:
            int_regfile[reg].s[offset] = val
        elif bitwidth == 32:
            int_regfile[reg].i[offset] = val
        elif bitwidth == default:  # 64
            int_regfile[reg].l[offset] = val
469
470 These basically provide a convenient parameterised way to access the
471 register file, at an arbitrary vector element offset and an arbitrary
472 element width. Our first simple loop thus becomes:
473
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        result = src1 + src2  # actual add here
        set_polymorphed_reg(RT, destwid, i, result)
479
480 With this loop, if elwidth=16 and VL=3 the first 48 bits of the target
481 register will contain three 16 bit addition results, and the upper 16
482 bits will be *unaltered*.
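That claim can be checked with an executable sketch of the two helpers
over a byte-addressed regfile (Python's `struct` providing the LE views;
8 bytes per register, 32 registers, vector case only, with scalar-offset
handling and predication omitted for brevity — names are illustrative):

```python
# get/set_polymorphed_reg over a flat byte-level register file.
import struct

FMT = {8: "B", 16: "<H", 32: "<I", 64: "<Q"}
regs = bytearray(32 * 8)   # 32 registers, 8 bytes each

def set_poly(reg, width, offset, val):
    nbytes = width // 8
    addr = reg * 8 + offset * nbytes
    regs[addr:addr + nbytes] = struct.pack(FMT[width], val)

def get_poly(reg, width, offset):
    nbytes = width // 8
    addr = reg * 8 + offset * nbytes
    return struct.unpack(FMT[width], regs[addr:addr + nbytes])[0]

set_poly(1, 64, 0, 0xFFFF_FFFF_FFFF_FFFF)  # pre-fill destination r1
for i in range(3):                          # three 16-bit source elements
    set_poly(2, 16, i, 0x0101 * (i + 1))    # r2: 0x0101, 0x0202, 0x0303
    set_poly(3, 16, i, 1)                   # r3: 1, 1, 1
for i in range(3):                          # elwidth=16 add, VL=3
    set_poly(1, 16, i, (get_poly(2, 16, i) + get_poly(3, 16, i)) & 0xFFFF)
# get_poly(1, 16, 3) is still 0xFFFF: the fourth halfword is unaltered
```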
483
Note that things such as zero/sign-extension (and predication) have
been left out to illustrate the elwidth concept. Also note that the
operation must be performed internally at an effectively *infinite*
bitwidth, such that any truncation, rounding errors or other artefacts
may all be ironed out. This matters when applying Saturation for Audio
DSP workloads, and particularly for multiply and IEEE754 FP rounding.
"Infinite" is conceptual only: in reality, the application of the
different truncations and width-extensions sets a fixed, deterministic,
practical limit on the internal precision needed, on a per-operation
basis.
489
490 Other than that, element width overrides, which can be applied to *either*
491 source or destination or both, are pretty straightforward, conceptually.
492 The details, for hardware engineers, involve byte-level write-enable
493 lines, which is exactly what is used on SRAMs anyway. Compiler writers
494 have to alter Register Allocation Tables to byte-level granularity.
495
One critical thing to note: upper parts of the underlying 64 bit
register are *not zero'd out* by a write involving a non-aligned Vector
Length. An 8 bit operation with VL=7 will *not* overwrite the 8th byte
of the destination. The only situation in which a full overwrite occurs
is with "default" (64 bit) behaviour. It is therefore extremely
important to consider the register file as a byte-level store, not a
64-bit-level store.
502
503 ## Why a LE regfile?
504
The concept of a regfile in which the byte ordering of the underlying
SRAM matters seems utter nonsense. Surely, a hardware implementation
gets to choose the order, right? It's only memory where LE/BE matters,
right? The bytes come in, all registers are 64 bit and it's just
wiring, right?
509
510 Ordinarily this would be 100% correct, in both a scalar ISA and in a Cray
511 style Vector one. The assumption in that last question was, however, "all
512 registers are 64 bit". SV allows SIMD-style packing of vectors into the
513 64 bit registers, where one instruction and the next may interpret that
514 very same register as containing elements of completely different widths.
515
Consequently it becomes critically important to decide on a byte order.
That decision was - arbitrarily - LE mode. Actually it wasn't arbitrary
at all: implementing BE-supported interpretations of CRs and LD/ST in
LibreSOC, based on a terse spec that provides insufficient clarity and
assumes significant working knowledge of OpenPOWER, with arbitrary
insertions of 7-index here and 3-bitindex there, was such hell that the
decision to pick LE was extremely easy.
523
Without such a decision, if two words are packed as elements into a 64
bit register, what does this mean? Should they be inverted so that the
lower-indexed element goes into the HI or the LO word? Should the 8
bytes of each register be inverted? Should the bytes in each element
be inverted? Should the element indexing loop order be broken into
discontiguous chunks such as 32107654 rather than 01234567, and if so
at what granularity of discontinuity? These are all equally valid and
legitimate interpretations of what constitutes "BE", and they all cause
merry mayhem.
533
534 The decision was therefore made: the c typedef union is the canonical
535 definition, and its members are defined as being in LE order. From there,
536 implementations may choose whatever internal HDL wire order they like
537 as long as the results produced conform to the elwidth pseudocode.
538
539 *Note: it turns out that both x86 SIMD and NEON SIMD follow this convention, namely that both are implicitly LE, even though their ISA Manuals may not explicitly spell this out*
540
541 * <https://developer.arm.com/documentation/ddi0406/c/Application-Level-Architecture/Application-Level-Memory-Model/Endian-support/Endianness-in-Advanced-SIMD?lang=en>
542 * <https://stackoverflow.com/questions/24045102/how-does-endianness-work-with-simd-registers>
543 * <https://llvm.org/docs/BigEndianNEON.html>
544
545
546 ## Source and Destination overrides
547
548 A minor fly in the ointment: what happens if the source and destination
549 are over-ridden to different widths? For example, FP16 arithmetic is
550 not accurate enough and may introduce rounding errors when up-converted
551 to FP32 output. The rule is therefore set:
552
553 The operation MUST take place effectively at infinite precision:
554 actual precision determined by the operation and the operand widths
555
556 In pseudocode this is:
557
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)  # usually
        result = op_add(src1, src2, opwidth)  # at max width
        set_polymorphed_reg(RT, destwid, i, result)
564
In reality the source and destination widths determine the actual
required precision in a given ALU. The reason for specifying
"effectively" infinite precision is illustrated by Saturated-multiply:
if the internal precision were insufficient, it would not be possible
to correctly determine whether the maximum clip range had been exceeded.
568
Thus it will turn out that, under some conditions, the combination of
extending the source registers followed by truncating the result gets
rid of bits that didn't matter, and the operation might as well have
taken place at the narrower width, saving resources. Examples include
Logical OR, where the source extension places zeros in the upper bits
and the truncation of the result simply throws those zeros away.
576
577 Counterexamples include the previously mentioned FP16 arithmetic,
578 where for operations such as division of large numbers by very small
579 ones it should be clear that internal accuracy will play a major role
580 in influencing the result. Hence the rule that the calculation takes
581 place at the maximum bitwidth, and truncation follows afterwards.
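This effect is easy to demonstrate. The following Python sketch uses
`struct`'s `"e"` and `"f"` formats to stand in for FP16/FP32 rounding
(purely illustrative, not real ALU behaviour): FP16 sources divided
with the intermediate truncated to FP16 give a visibly less accurate
FP32 result than performing the division at the wider precision:

```python
# Why the max-width rule matters: truncating the intermediate to FP16
# before widening to the FP32 destination loses accuracy.
import struct

def round_fp16(x): return struct.unpack("<e", struct.pack("<e", x))[0]
def round_fp32(x): return struct.unpack("<f", struct.pack("<f", x))[0]

a, b = round_fp16(1.0), round_fp16(3.0)   # FP16 source elements
narrow = round_fp32(round_fp16(a / b))    # op truncated to FP16 first
wide = round_fp32(a / b)                  # op at wider precision, FP32 dest
# narrow != wide: the intermediate FP16 rounding is visible in the result
```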
582
583 ## Signed arithmetic
584
585 What happens when the operation involves signed arithmetic? Here the
586 implementor has to use common sense, and make sure behaviour is accurately
587 documented. If the result of the unmodified operation is sign-extended
588 because one of the inputs is signed, then the input source operands must
589 be first read at their overridden bitwidth and *then* sign-extended:
590
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # sources known to be less than result width
        src1 = sign_extend(src1, srcwid, opwidth)
        src2 = sign_extend(src2, srcwid, opwidth)
        result = op_signed(src1, src2, opwidth)  # at max width
        set_polymorphed_reg(RT, destwid, i, result)
600
601 The key here is that the cues are taken from the underlying operation.
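A sketch of such a `sign_extend` helper (Python; width arguments in
bits, the function name taken from the pseudocode above, operating on
unsigned bit-patterns):

```python
# Sign-extend a srcwid-bit value to opwidth bits (two's complement).
def sign_extend(val, srcwid, opwidth):
    sign = 1 << (srcwid - 1)
    if val & sign:  # negative at the narrow width
        val |= ((1 << opwidth) - 1) & ~((1 << srcwid) - 1)
    return val

# 8-bit 0xFE is -2; extended to 16 bits it becomes 0xFFFE.
# sign_extend(0xFE, 8, 16) == 0xFFFE
# sign_extend(0x7E, 8, 16) == 0x007E  (positive values unchanged)
```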
602
603 ## Saturation
604
Audio DSPs need to be able to clip sound when the "volume" is adjusted,
but if it is too loud and the signal wraps, distortion occurs. The
solution is to clip (saturate) the audio and allow this to be detected.
In practical terms this is a post-result analysis; however it needs to
take place at the largest bitwidth, i.e. before the result is truncated
down to the element width. Only then can the arithmetic saturation
condition be detected:
611
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # unsigned add
        result = op_add(src1, src2, opwidth)  # at max width
        # now saturate (unsigned)
        sat = min(result, (1<<destwid)-1)
        set_polymorphed_reg(RT, destwid, i, sat)
        # set sat overflow
        if Rc=1:
            CR[i].ov = (sat != result)
624
625 So the actual computation took place at the larger width, but was
626 post-analysed as an unsigned operation. If however "signed" saturation
627 is requested then the actual arithmetic operation has to be carefully
628 analysed to see what that actually means.
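The unsigned-saturation sequence above can be condensed into a tiny
executable sketch (Python; the "infinite" precision of the wide add is
provided by Python's arbitrary-precision integers, and the helper name
is illustrative):

```python
# Unsigned saturating add: compute at full precision, clamp to the
# destination element width, and report whether clipping occurred.
def add_sat_unsigned(src1, src2, destwid):
    result = src1 + src2                   # full-precision add
    sat = min(result, (1 << destwid) - 1)  # clamp to unsigned max
    return sat, sat != result              # (value, overflow flag)

# add_sat_unsigned(200, 100, 8) == (255, True)   300 clips to 0xFF
# add_sat_unsigned(100, 100, 8) == (200, False)  no clipping
```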
629
In terms of FP arithmetic, which by definition has a sign bit (and so
always takes place as a signed operation anyway), the request to
saturate to signed min/max is pretty clear. However for integer
arithmetic such as shift (plain shift, not arithmetic shift), or
logical operations such as XOR, which were never designed with the
assumption that their inputs be considered as signed numbers, common
sense has to kick in, and follow what CR0 does.
637
CR0 for Logical operations still applies: the test is still applied to
produce CR.eq, CR.lt and CR.gt analysis. Following this lead we may
do the same thing: although the input operands of an OR or XOR can in
no way be thought of as "signed", we may at least consider the result
to be signed, and thus apply min/max range detection of -128 to +127
when truncating down to 8 bit, for example.
644
    for i = 0 to VL-1:
        src1 = get_polymorphed_reg(RA, srcwid, i)
        src2 = get_polymorphed_reg(RB, srcwid, i)
        opwidth = max(srcwid, destwid)
        # logical op, signed has no meaning
        result = op_xor(src1, src2, opwidth)
        # now saturate (signed)
        sat = min(result, (1<<(destwid-1))-1)
        sat = max(sat, -(1<<(destwid-1)))
        set_polymorphed_reg(RT, destwid, i, sat)
655
Overall here the rule is: apply common sense, then document the behaviour
really clearly, for each and every operation.

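The two saturation rules above can be sketched as a pair of small Python
helpers (a sketch only: the function names are invented, and two's-complement
`destwid`-bit elements are assumed):

```python
def saturate_unsigned(result, destwid):
    # clamp to the maximum value representable in destwid bits
    return min(result, (1 << destwid) - 1)

def saturate_signed(result, destwid):
    # clamp to the signed two's-complement range of a destwid-bit integer
    sat = min(result, (1 << (destwid - 1)) - 1)
    return max(sat, -(1 << (destwid - 1)))

print(saturate_unsigned(300, 8))   # 255
print(saturate_signed(-300, 8))    # -128
```

As in the pseudocode, overflow detection then reduces to comparing the
saturated value against the full-width result.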
# Quick recap so far

The above functionality pretty much covers around 85% of Vector ISA needs.
Predication is provided so that parallel if/then/else constructs can
be performed: critical given that sequential if/then statements and
branches simply do not translate successfully to Vector workloads.
VSPLAT capability is provided, which constitutes approximately 20% of all
GPU workload operations. Also covered, with elwidth overriding, are the
smaller arithmetic operations that caused ISAs developed from the
late 80s onwards to get themselves into a tiz when adding "Multimedia"
acceleration aka "SIMD" instructions.

Experienced Vector ISA readers will however have noted that VCOMPRESS
and VEXPAND are missing, as are Vector "reduce" (mapreduce) capability
and VGATHER and VSCATTER. Compress and Expand are covered by Twin
Predication; still to be covered are fail-on-first, CR-based result
predication, and Subvectors and Swizzle.

## SUBVL <a name="subvl"></a>

Adding in support for SUBVL is a matter of adding in an extra inner
for-loop, where register src and dest are still incremented inside the
inner part. Predication is still taken from the VL index, however it
is applied to the whole subvector:

    function op_add(RT, RA, RB) # add not VADD!
        int id=0, irs1=0, irs2=0;
        predval = get_pred_val(FALSE, RT);
        for i = 0 to VL-1:
            if (predval & 1<<i) # predication uses intregs
                for (s = 0; s < SUBVL; s++)
                    sd = id*SUBVL + s
                    srs1 = irs1*SUBVL + s
                    srs2 = irs2*SUBVL + s
                    ireg[RT+sd] <= ireg[RA+srs1] + ireg[RB+srs2];
                if (!RT.isvec) break;
            if (RT.isvec) { id += 1; }
            if (RA.isvec) { irs1 += 1; }
            if (RB.isvec) { irs2 += 1; }

The primary reason for this is that Shader Compilers treat vec2/3/4 as
"single units". Recognising this in hardware is just sensible.

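The inner-loop structure can be modelled in executable Python (a sketch:
the register file is a plain list, predication is omitted, all three
operands are assumed Vectorised, and `subvl_add` is an invented name):

```python
def subvl_add(ireg, RT, RA, RB, VL, SUBVL):
    # each of the VL elements is a subvector of SUBVL scalars,
    # laid out contiguously in the (scalar) register file
    for i in range(VL):
        for s in range(SUBVL):
            ireg[RT + i*SUBVL + s] = (ireg[RA + i*SUBVL + s] +
                                      ireg[RB + i*SUBVL + s])

# two vec3s starting at reg 8, two more at reg 16, result at reg 0
regs = [0] * 32
regs[8:14] = [1, 2, 3, 4, 5, 6]
regs[16:22] = [10, 20, 30, 40, 50, 60]
subvl_add(regs, 0, 8, 16, VL=2, SUBVL=3)
print(regs[0:6])  # [11, 22, 33, 44, 55, 66]
```

Note how VL counts whole subvectors: VL=2 with SUBVL=3 touches six
scalar registers per operand.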
# Swizzle <a name="swizzle"></a>

Swizzle is particularly important for 3D work. It allows in-place
reordering of XYZW, ARGB etc. and access of sub-portions of the same in
arbitrary order *without* requiring timeconsuming scalar mv instructions
(scalar due to the convoluted offsets).

Swizzling does not just do permutations: it allows arbitrary selection
and multiple copying of vec2/3/4 elements. Take XXXZ as the source
operand: this places three copies of the vec4 first element (vec4[0])
at positions vec4[0], vec4[1] and vec4[2] of the destination, whilst the
"Z" element (vec4[2]) is copied into vec4[3].

With somewhere between 10% and 30% of operations in 3D Shaders involving
swizzle, this is a huge saving and reduces pressure on register files,
which would otherwise need significant numbers of mv operations to get
vector elements to "line up".

In SV, given the percentage of operations that also involve
initialisation of subvector elements to 0.0 or 1.0, the decision was
made to include those:

    swizzle = get_swizzle_immed() # 12 bits
    for (s = 0; s < SUBVL; s++)
        remap = (swizzle >> 3*s) & 0b111
        if remap < 4:
            sm = id*SUBVL + remap
            ireg[rd+s] <= ireg[RA+sm]
        elif remap == 4:
            ireg[rd+s] <= 0.0
        elif remap == 5:
            ireg[rd+s] <= 1.0

Note that a value of 6 (and 7) will leave the target subvector element
untouched. This is equivalent to a predicate mask which is built-in,
in immediate form, into the [[sv/mv.swizzle]] operation. mv.swizzle is
rare in that it is one of the few instructions needed to be added that
are never going to be part of a Scalar ISA. Even in High Performance
Compute workloads it is unusual: it is only because SV is targeted at
3D and Video that it is being considered.

Some 3D GPU ISAs also allow for two-operand subvector swizzles. These are
sufficiently unusual, and the immediate opcode space required so large
(12 bits per vec4 source),
that the tradeoff balance was decided in SV to only add mv.swizzle.

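The swizzle remapping above can be sketched in Python (illustrative only:
the 12-bit immediate packs one 3-bit selector per subvector element, and
`swizzle_vec` is an invented name):

```python
def swizzle_vec(src, swizzle, subvl):
    # src: list of subvl elements; returns the swizzled copy
    dst = list(src)                # selectors 6 and 7 leave elements untouched
    for s in range(subvl):
        remap = (swizzle >> (3 * s)) & 0b111
        if remap < 4:              # select a source subvector element
            dst[s] = src[remap]
        elif remap == 4:           # constant 0.0
            dst[s] = 0.0
        elif remap == 5:           # constant 1.0
            dst[s] = 1.0
    return dst

# "XXXZ": selectors 0,0,0,2 packed 3 bits per element
xxxz = 0 | (0 << 3) | (0 << 6) | (2 << 9)
print(swizzle_vec([1.5, 2.5, 3.5, 4.5], xxxz, 4))  # [1.5, 1.5, 1.5, 3.5]
```

The XXXZ example from the prose above falls straight out of this
remapping loop.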
# Twin Predication

Twin Predication is cool. Essentially it is a back-to-back
VCOMPRESS-VEXPAND (a multiple sequentially ordered VINSERT). The compress
part is covered by the source predicate and the expand part by the
destination predicate. Of course, if either of those is all 1s then
the operation degenerates *to* VCOMPRESS or VEXPAND, respectively.

    function op(RT, RS):
        ps = get_pred_val(FALSE, RS); # predication on src
        pd = get_pred_val(FALSE, RT); # ... AND on dest
        for (int i = 0, int j = 0; i < VL && j < VL;):
            if (RS.isvec) while (!(ps & 1<<i)) i++;
            if (RT.isvec) while (!(pd & 1<<j)) j++;
            reg[RT+j] = SCALAR_OPERATION_ON(reg[RS+i])
            if (RS.isvec) i++;
            if (RT.isvec) j++; else break

Here's the interesting part: given the fact that SV is a "context"
extension, the above pattern can be applied to a lot more than just MV,
which is normally all that VCOMPRESS and VEXPAND do in traditional
Vector ISAs: move registers. Twin Predication can be applied to `extsw`
or `fcvt`, LD/ST operations and even `rlwinm` and other operations
taking a single source and immediate(s), such as `addi`. All of these
are termed single-source, single-destination.

LDST Address-generation, or AGEN, is a special case of single source,
because elwidth overriding does not make sense to apply to the computation
of the 64 bit address itself, but it *does* make sense to apply elwidth
overrides to the data being accessed *at* that memory address.

It also turns out that by using a single bit set in the source or
destination predicate, *all* the sequential ordered standard patterns of
Vector ISAs are provided: VSPLAT, VSELECT, VINSERT, VCOMPRESS, VEXPAND.

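A minimal executable model of the twin-predication loop, using a plain
Python list for the register file (a sketch: all names are invented, both
operands are assumed Vectorised, and only MV is modelled):

```python
def twin_pred_mv(reg, RT, RS, VL, ps, pd):
    # back-to-back VCOMPRESS (source predicate ps) and
    # VEXPAND (destination predicate pd) of a simple move
    i = j = 0
    while i < VL and j < VL:
        while i < VL and not (ps & (1 << i)):  # skip masked-out sources
            i += 1
        while j < VL and not (pd & (1 << j)):  # skip masked-out dests
            j += 1
        if i < VL and j < VL:
            reg[RT + j] = reg[RS + i]
            i += 1
            j += 1

# ps=0b0101 selects source elements 0 and 2; pd=0b1111 (all 1s)
# degenerates the operation to a classic VCOMPRESS
regs = [10, 20, 30, 40] + [0] * 4
twin_pred_mv(regs, 4, 0, 4, ps=0b0101, pd=0b1111)
print(regs[4:8])  # [10, 30, 0, 0]
```

Setting ps to all 1s instead would degenerate the same loop to VEXPAND,
exactly as described above.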
The only one missing from the list here, because it is non-sequential,
is VGATHER (and VSCATTER): moving registers by specifying a vector of
register indices (`regs[rd] = regs[regs[rs]]` in a loop). This one is
tricky because it typically does not exist in standard scalar ISAs.
If it did it would be called [[sv/mv.x]]. Once Vectorised, it's a
VGATHER/VSCATTER.

# CR predicate result analysis

OpenPOWER has Condition Registers. These store an analysis of the result
of an operation, testing it for being greater than, less than, or equal
to zero. What if a test could be done, similar to branch BO testing,
which hooked into the predication system?

    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # Rc=1 always stores the CR
        if RC1 or Rc=1: crregs[offs+i] = CRnew
        if RC1: continue # RC1 mode skips result store
        # now test CR, similar to branch
        if CRnew[BO[0:1]] == BO[2]:
            # result optionally stored but CR always is
            iregs[RT+i] = result

Note that whilst the Vector of CRs is always written to the CR regfile,
only those result elements that pass the BO test get written to the
integer regfile (when RC1 mode is not set). In RC1 mode the CR is always
stored, but the result never is. This effectively turns every arithmetic
operation into a type of `cmp` instruction.

Here, for example, if FP overflow occurred and the CR testing was carried
out for that, all valid results would be stored but invalid ones would
not; in addition, the Vector of CRs would contain the indicators of
which ones failed. With the invalid results simply not being written,
this could save resources (register file writes).

Savings are also expected in some types of operations, because the
predicate mask is effectively ANDed with the post-result analysis as a
secondary type of predication. Without this capability in SV, such
operations would need a second predicate calculation followed by a
predicate mask AND operation.

Note, hilariously, that Vectorised Condition Register Operations (crand,
cror) may also have post-result analysis applied to them. With Vectors
of CRs being utilised *for* predication, possibilities for compact and
elegant code begin to emerge from this innocuous-looking addition to SV.

# Exception-based Fail-on-first

One of the major issues with Vectorised LD/ST operations is when a
batch of LDs crosses a page-fault boundary. With considerable resources
being taken up with in-flight data, a large Vector LD being cancelled
or unable to roll back is either a detriment to performance or can cause
data corruption.

What if, then, rather than cancel an entire Vector LD because the last
operation would cause a page fault, the Vector is instead truncated to
the last successful element?

This is called "fail-on-first". Here is strncpy, illustrated from RVV:

    strncpy:
        c.mv a3, a0               # Copy dst
    loop:
        setvli x0, a2, vint8      # Vectors of bytes.
        vlbff.v v1, (a1)          # Get src bytes
        vseq.vi v0, v1, 0         # Flag zero bytes
        vmfirst a4, v0            # Zero found?
        vmsif.v v0, v0            # Set mask up to and including zero byte.
        vsb.v v1, (a3), v0.t      # Write out bytes
        c.bgez a4, exit           # Done
        csrr t1, vl               # Get number of bytes fetched
        c.add a1, a1, t1          # Bump src pointer
        c.sub a2, a2, t1          # Decrement count.
        c.add a3, a3, t1          # Bump dst pointer
        c.bnez a2, loop           # Anymore?
    exit:
        c.ret

Vector Length VL is inherently truncated at the first byte-level LD
that would page-fault. Otherwise, with more powerful hardware the number
of elements LOADed from memory could be dozens to hundreds or greater
(memory bandwidth permitting).

With VL truncated, the analysis looking for the zero byte and the
subsequent STORE (a straight ST, not a ffirst ST) can proceed, safe in the
knowledge that every byte loaded in the Vector is valid. Implementors are
even permitted to "adapt" VL, truncating it early so that, for example,
subsequent iterations of loops will have LD/STs on aligned boundaries.

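The VL-truncation behaviour can be modelled in a few lines of Python
(a sketch: memory is a dict, an unmapped address stands in for a page
fault, and `vector_ld_ffirst` is an invented name; note that a fault on
the *first* element must still raise a real exception):

```python
def vector_ld_ffirst(mem, addr, VL):
    # load up to VL bytes; truncate VL at the first faulting address
    # instead of raising a page-fault exception mid-vector
    loaded = []
    for i in range(VL):
        if addr + i not in mem:      # stand-in for a page fault
            if i == 0:
                raise MemoryError("fault on first element: real exception")
            return loaded, i         # truncated VL
        loaded.append(mem[addr + i])
    return loaded, VL

mem = {100: 72, 101: 105, 102: 33}   # only 3 bytes mapped
data, vl = vector_ld_ffirst(mem, 100, 8)
print(data, vl)  # [72, 105, 33] 3
```

The caller then uses the truncated VL, exactly as the strncpy loop above
re-reads `vl` after each `vlbff.v`.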
SIMD strncpy hand-written assembly routines are, to be blunt about it,
a total nightmare. 240 instructions is not uncommon, and the worst
thing about them is that they are unable to cope with detection of a
page fault condition.

Note: see <https://bugs.libre-soc.org/show_bug.cgi?id=561>

# Data-dependent fail-first

This is a minor variant on the CR-based predicate-result mode. Where
pred-result continues with independent element testing (any of which may
be parallelised), data-dependent fail-first *stops* at the first failure:

    if Rc=0: BO = inv<<2 | 0b00 # test CR.eq bit z/nz
    for i in range(VL):
        # predication test, skip all masked out elements.
        if predicate_masked_out(i): continue # skip
        result = op(iregs[RA+i], iregs[RB+i])
        CRnew = analyse(result) # calculates eq/lt/gt
        # now test CR, similar to branch
        if CRnew[BO[0:1]] != BO[2]:
            VL = i # truncate: only successes allowed
            break
        # test passed: store result (and CR?)
        if not RC1: iregs[RT+i] = result
        if RC1 or Rc=1: crregs[offs+i] = CRnew

This is particularly useful, again, for FP operations that might
overflow: it is desirable to end the loop early, yet also to complete at
least those operations that were okay (passed the test), without slowing
down execution with extra instructions that test in advance for the
possibility of failure.

The only minor downside here is the change to VL, which in some
implementations may cause pipeline stalls. This was one of the reasons
why CR-based pred-result analysis was added: that, at least, is
entirely paralleliseable.

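Data-dependent fail-first reduces to a simple truncating loop; a Python
sketch (the `ok` predicate stands in for the CR BO test, and all names
are invented for illustration):

```python
def ffirst_op(ra, rb, op, ok, VL):
    # apply op element-wise, truncating VL at the first result
    # that fails the data-dependent test 'ok'
    results = []
    for i in range(VL):
        r = op(ra[i], rb[i])
        if not ok(r):
            return results, i        # VL truncated to i
        results.append(r)
    return results, VL

# stop at the first sum that overflows an unsigned byte
res, vl = ffirst_op([200, 100, 250], [50, 50, 50],
                    lambda a, b: a + b, lambda r: r < 256, 3)
print(res, vl)  # [250, 150] 2
```

Unlike pred-result mode, the elements here cannot all be tested in
parallel, which is exactly the pipeline-stall tradeoff noted above.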
# Vertical-First Mode

This is a relatively new addition to SVP64, under development as of
July 2021. Where Horizontal-First is the standard Cray-style for-loop,
Vertical-First typically executes just **one** scalar element
in each Vectorised operation. That element is selected by srcstep
and dststep, *neither of which is changed as a side-effect of execution*.
This is illustrated in pseudocode below, with a branch/loop.
To create loops, a new instruction `svstep` must be called,
explicitly, with Rc=1:

```
loop:
    sv.addi r0.v, r8.v, 5 # GPR(0+dststep) = GPR(8+srcstep) + 5
    sv.addi r0.v, r8, 5   # GPR(0+dststep) = GPR(8        ) + 5
    sv.addi r0, r8.v, 5   # GPR(0        ) = GPR(8+srcstep) + 5
    svstep.               # srcstep++, dststep++, CR0.eq = srcstep==VL
    beq loop
```

Three examples are illustrated of different types of Scalar-Vector
operations. Note that in its simplest form **only one** element is
executed per instruction, **not** multiple elements per instruction.
(The more advanced version of Vertical-First mode may execute multiple
elements per instruction, however the number executed **must** remain
a fixed quantity.)

Now that such explicit loops can increment inexorably towards VL,
of course we now need a way to test if srcstep or dststep have reached
VL. This is achieved in one of two ways: [[sv/svstep]] has an Rc=1 mode
where CR0 will be updated if VL is reached. A standard v3.0B Branch
Conditional may rely on that. Alternatively, the number of elements
may be transferred into CTR, as is standard practice in Power ISA.
Here, SVP64 [[sv/branches]] have a mode which allows CTR to be decremented
by the number of vertical elements executed.

# Instruction format

Whilst this overview shows the internals, it does not go into detail
on the actual instruction format itself. There are a couple of reasons
for this: firstly, it's under development, and secondly, it needs to be
proposed to the OpenPOWER Foundation ISA WG for consideration and review.

That said: draft pages for [[sv/setvl]] and [[sv/svp64]] are written up.
The `setvl` instruction is pretty much as would be expected from a
Cray style VL instruction: the only differences being that, firstly,
the MAXVL (Maximum Vector Length) has to be specified, because that
determines - precisely - how many of the *scalar* registers are to be
used for a given Vector. Secondly: within the limit of MAXVL, VL is
required to be set to the requested value. By contrast, RVV systems
permit the hardware to set arbitrary values of VL.

The other key question is of course: what's the actual instruction format,
and what's in it? Bearing in mind that this requires OPF review, the
current draft is at the [[sv/svp64]] page, and includes space for all the
different modes, the predicates, element width overrides, SUBVL and the
register extensions, in 24 bits. This just about fits into an OpenPOWER
v3.1B 64 bit Prefix by borrowing some of the Reserved Encoding space.
The v3.1B suffix - containing as it does a 32 bit OpenPOWER instruction -
aligns perfectly with SV.

Further reading is at the main [[SV|sv]] page.

# Conclusion

Starting from a scalar ISA - OpenPOWER v3.0B - it was shown above that,
with conceptual sub-loops, a Scalar ISA can be turned into a Vector one,
by embedding Scalar instructions - unmodified - into a Vector "context"
using "Prefixing". With careful thought, this technique reaches 90%
par with good Vector ISAs, increasing to 95% with the addition of a
mere handful of additional context-vectoriseable scalar instructions
([[sv/mv.x]] amongst them).

What is particularly cool about the SV concept is that custom extensions
and research need not be concerned about inventing new Vector instructions
and how to get them to interact with the Scalar ISA: they are effectively
one and the same. Any new instruction added at the Scalar level is
inherently and automatically Vectorised, following some simple rules.