1 [[!tag standards]]
2
3 [[!toc levels=1]]
4
5 # Implementation Log
6
7 * ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
8 * grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
9 * GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
10
11
12 # bitmanipulation
13
14 **DRAFT STATUS**
15
16 pseudocode: [[openpower/isa/bitmanip]]
17
18 this extension amalgamates bitmanipulation primitives from many sources,
19 including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
20 Also included are DSP/Multimedia operations suitable for Audio/Video.
21 Vectorisation and SIMD are removed: these are straight scalar (element)
22 operations making them suitable for embedded applications. Vectorisation
23 Context is provided by [[openpower/sv]].
24
25 When combined with SV, scalar variants of bitmanip operations found in
26 VSX are added so that the Packed SIMD aspects of VSX may be retired as
27 "legacy" in the far future (10 to 20 years). Also, VSX is hundreds of
28 opcodes, requires 128 bit pathways, and is wholly unsuited to low power
29 or embedded scenarios.
30
31 ternlogv is experimental and is the only operation that may be considered
32 a "Packed SIMD". It is added as a variant of the already well-justified
33 ternlog operation (done in AVX512 as an immediate only) "because it
34 looks fun". As it is based on the LUT4 concept it will allow accelerated
35 emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to
36 achieve similar objectives.
37
38 general-purpose Galois Field 2^M operations are added so as to avoid
39 huge custom opcode proliferation across many areas of Computer Science.
40 however for convenience and also to avoid setup costs, some of the more
41 common operations (clmul, crc32) are also added. The expectation is
42 that these operations would all be covered by the same pipeline.
43
44 note that there are brownfield spaces below that could incorporate
45 some of the set-before-first and other scalar operations listed in
46 [[sv/mv.swizzle]],
47 [[sv/vector_ops]], [[sv/int_fp_mv]] and the [[sv/av_opcodes]] as well as
48 [[sv/setvl]], [[sv/svstep]], [[sv/remap]]
49
50 Useful resource:
51
52 * <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
53 * <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
54
55 [[!inline quick="yes" raw="yes" pages="openpower/sv/draft_opcode_tables"]]
56
57 # binary and ternary bitops
58
Similar to FPGA LUTs: for two (binary) or three (ternary) inputs take
bits from each input, concatenate them and perform a lookup into a
table using an 8-bit immediate (for the ternary instructions), or in
another register (4-bit for the binary instructions). The binary lookup
instructions have CR Field lookup variants due to CR Fields being 4 bit.
64
65 Like the x86 AVX512F
66 [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)
67 instructions.
68
69 ## ternlogi
70
71 | 0.5|6.10|11.15|16.20| 21..28|29.30|31|
72 | -- | -- | --- | --- | ----- | --- |--|
73 | NN | RT | RA | RB | im0-7 | 00 |Rc|
74
75 lut3(imm, a, b, c):
76 idx = c << 2 | b << 1 | a
77 return imm[idx] # idx by LSB0 order
78
79 for i in range(64):
80 RT[i] = lut3(imm, RB[i], RA[i], RT[i])
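
As a quick illustration (a plain Python model of the pseudocode above, not
the reference code, using arbitrary example values), the 8-bit immediate acts
as a truth table: `0x96` yields a 3-input XOR and `0xE8` a majority (carry)
function.

```python
def lut3(imm8, a, b, c):
    idx = (c << 2) | (b << 1) | a        # idx by LSB0 order
    return (imm8 >> idx) & 1

def ternlogi(imm8, rt, ra, rb, width=64):
    res = 0
    for i in range(width):
        res |= lut3(imm8, (rb >> i) & 1, (ra >> i) & 1, (rt >> i) & 1) << i
    return res

x, y, z = 0x0123456789ABCDEF, 0xFEDCBA9876543210, 0x5555AAAA5555AAAA
assert ternlogi(0x96, x, y, z) == x ^ y ^ z                      # 3-input XOR
assert ternlogi(0xE8, x, y, z) == (x & y) | (y & z) | (x & z)    # majority (carry)
```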
81
82 ## binlut
83
84 Binary lookup is a dynamic LUT2 version of ternlogi. Firstly, the
85 lookup table is 4 bits wide not 8 bits, and secondly the lookup
86 table comes from a register not an immediate.
87
88 | 0.5|6.10|11.15|16.20| 21..25|26..31 | Form |
89 | -- | -- | --- | --- | ----- |--------|---------|
90 | NN | RT | RA | RB | RC |nh 00001| VA-Form |
91 | NN | RT | RA | RB | /BFA/ |0 01001| VA-Form |
92
93 For binlut, the 4-bit LUT may be selected from either the high nibble
94 or the low nibble of the first byte of RC:
95
96 lut2(imm, a, b):
97 idx = b << 1 | a
98 return imm[idx] # idx by LSB0 order
99
100 imm = (RC>>(nh*4))&0b1111
101 for i in range(64):
102 RT[i] = lut2(imm, RB[i], RA[i])
103
104 For bincrlut, `BFA` selects the 4-bit CR Field as the LUT2:
105
106 for i in range(64):
107 RT[i] = lut2(CRs{BFA}, RB[i], RA[i])
108
109 When Vectorised with SVP64, as usual both source and destination may be
110 Vector or Scalar.
111
112 *Programmer's note: a dynamic ternary lookup may be synthesised from
113 a pair of `binlut` instructions followed by a `ternlogi` to select which
114 to merge. Use `nh` to select which nibble to use as the lookup table
115 from the RC source register (`nh=1` nibble high), i.e. keeping
116 an 8-bit LUT3 in RC, the first `binlut` instruction may set nh=0 and
117 the second nh=1.*
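
A minimal Python sketch of that idiom (an illustrative model, not a
specification: the helper names and example values are assumptions). The low
nibble of the 8-bit LUT3 covers the case where the third input is 0, the high
nibble covers 1, and the final merge is the per-bit mux that `ternlogi` can
perform.

```python
def lut2(imm4, a, b):
    return (imm4 >> ((b << 1) | a)) & 1       # idx by LSB0 order

def binlut(lut4, ra, rb, width=64):
    # models binlut: per-bit LUT2 of (RB bit, RA bit), as in the pseudocode above
    r = 0
    for i in range(width):
        r |= lut2(lut4, (rb >> i) & 1, (ra >> i) & 1) << i
    return r

def dynamic_lut3(lut8, ra, rb, rc, width=64):
    lo = binlut(lut8 & 0xF, ra, rb, width)          # first binlut (nh=0): entries with c=0
    hi = binlut((lut8 >> 4) & 0xF, ra, rb, width)   # second binlut (nh=1): entries with c=1
    mask = (1 << width) - 1
    # per-bit mux on the third input: this merge is itself a single ternlogi
    return ((hi & rc) | (lo & ~rc)) & mask

x, y, z = 0x0123456789ABCDEF, 0xFEDCBA9876543210, 0x5555AAAA5555AAAA
assert dynamic_lut3(0x96, x, y, z) == x ^ y ^ z    # dynamic 3-input XOR
```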
118
119 ## crternlogi
120
A mode selection that operates on CR Fields rather than Integer registers.
122
123 | 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|
124 | -- | -- | --- | --- | --- |-----|----- | -----|--|
125 | NN | BT | BA | BB | BC |m0-2 | imm | 01 |m3|
126
    mask = m0..m3
    for i in range(4):
        a, b, c = CRs[BA][i], CRs[BB][i], CRs[BC][i]
        if mask[i]: CRs[BT][i] = lut3(imm, a, b, c)
131
This instruction is remarkably similar to the existing crops (`crand` etc.),
which are effectively 4-bit (binary) LUTs. In effect `crternlogi`
is the ternary LUT version of the crops, having an 8-bit LUT.
135
136 ## crbinlog
137
With ternary (LUT3) dynamic instructions being very costly,
and CR Fields being only 4 bits, a binary (LUT2) variant is the better choice.
140
141 | 0.5|6.8 | 9.11|12.14|15.17|18.21|22...30 |31|
142 | -- | -- | --- | --- | --- |-----| -------- |--|
143 | NN | BT | BA | BB | BC |m0-m3|000101110 |0 |
144
    mask = m0..m3
    for i in range(4):
        a, b = CRs[BA][i], CRs[BB][i]
        if mask[i]: CRs[BT][i] = lut2(CRs[BC], a, b)
149
150 When SVP64 Vectorised any of the 4 operands may be Scalar or
151 Vector, including `BC` meaning that multiple different dynamic
152 lookups may be performed with a single instruction.
153
*Programmer's note: just as with binlut and ternlogi, a pair
of crbinlog instructions followed by a merging crternlogi may
be deployed to synthesise dynamic ternary (LUT3) CR Field
manipulation.*
158
159 # int ops
160
## min/max
162
163 required for the [[sv/av_opcodes]]
164
signed and unsigned min/max for integers. this is sort-of partly
synthesiseable in [[sv/svp64]] with pred-result as long as the dest reg
is one of the sources, but not for both signed and unsigned. when the dest
is also one of the sources and the mv fails due to the CR bit-test failing,
this will only overwrite the dest where the src is greater (or less).
170
171 signed/unsigned min/max gives more flexibility.
172
173 X-Form
174
175 * XO=0001001110, itype=0b00 min, unsigned
176 * XO=0101001110, itype=0b01 min, signed
177 * XO=0011001110, itype=0b10 max, unsigned
178 * XO=0111001110, itype=0b11 max, signed
179
180
181 ```
182 uint_xlen_t mins(uint_xlen_t rs1, uint_xlen_t rs2)
183 { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
184 }
185 uint_xlen_t maxs(uint_xlen_t rs1, uint_xlen_t rs2)
186 { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
187 }
188 uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
189 { return rs1 < rs2 ? rs1 : rs2;
190 }
191 uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
192 { return rs1 > rs2 ? rs1 : rs2;
193 }
194 ```
195
196 ## average
197
198 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
199 but not scalar
200
201 ```
uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 + rs2 + 1) >> 1;  // rounds to nearest, ties upward
}
205 ```
206
207 ## absdu
208
209 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
210 but not scalar
211
212 ```
uint_xlen_t absdu(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 > rs2) ? (rs1 - rs2) : (rs2 - rs1);
}
216 ```
217
218 ## abs-accumulate
219
220 required for the [[sv/av_opcodes]], these are needed for motion estimation.
221 both are overwrite on RS.
222
223 ```
uint_xlen_t uintabsacc(uint_xlen_t rs, uint_xlen_t ra, uint_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
uint_xlen_t intabsacc(uint_xlen_t rs, int_xlen_t ra, int_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
230 ```
231
For SVP64, the twin Elwidths allow e.g. a 16 bit accumulator for 8 bit
differences. Form is `RM-1P-3S1D` where RS-as-source has a separate
SVP64 designation from RS-as-dest. This gives a limited range of
non-overwrite capability.
236
237 # shift-and-add
238
Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
Adding more LD/ST instructions is too complex, so as a compromise
shift-and-add is added instead, replacing a pair of explicit instructions
in hot loops.
242
243 ```
244 uint_xlen_t shadd(uint_xlen_t rs1, uint_xlen_t rs2, uint8_t sh) {
245 return (rs1 << (sh+1)) + rs2;
246 }
247
248 uint_xlen_t shadduw(uint_xlen_t rs1, uint_xlen_t rs2, uint8_t sh) {
249 uint_xlen_t rs1z = rs1 & 0xFFFFFFFF;
250 return (rs1z << (sh+1)) + rs2;
251 }
252 ```
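
As a usage illustration (a Python model of the pseudocode above, with
hypothetical example values): indexing into an array of 8-byte elements
normally needs a shift followed by an add; `shadd` with `sh=2` (effective
shift amount `sh+1 = 3`) folds both into one operation.

```python
def shadd(rs1, rs2, sh):
    return ((rs1 << (sh + 1)) + rs2) & ((1 << 64) - 1)

base, index = 0x10000000, 42
assert shadd(index, base, sh=2) == base + index * 8   # element address in a uint64_t array
```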
253
254 # bitmask set
255
256 based on RV bitmanip singlebit set, instruction format similar to shift
257 [[isa/fixedshift]]. bmext is actually covered already (shift-with-mask
258 rldicl but only immediate version). however bitmask-invert is not,
259 and set/clr are not covered, although they can use the same Shift ALU.
260
261 bmext (RB) version is not the same as rldicl because bmext is a right
262 shift by RC, where rldicl is a left rotate. for the immediate version
this does not matter, so a bmexti is not required. for bmrev however there
is no direct equivalent, and consequently a bmrevi is required.
265
266 bmset (register for mask amount) is particularly useful for creating
267 predicate masks where the length is a dynamic runtime quantity.
268 bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask"
269 in a single instruction without needing to initialise or depend on any
270 other registers.
271
272 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
273 | -- | -- | --- | --- | --- | ------- |--| ----- |
274 | NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |
275
276 Immediate-variant is an overwrite form:
277
278 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
279 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
280 | NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |
281
282 ```
283 def MASK(x, y):
284 if x < y:
285 x = x+1
286 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
287 mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
288 elif x == y:
289 return 1 << x
290 else:
291 x = x+1
292 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
293 mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
294 return mask_a ^ mask_b
295
296
297 uint_xlen_t bmset(RS, RB, sh)
298 {
299 int shamt = RB & (XLEN - 1);
300 mask = (2<<sh)-1;
301 return RS | (mask << shamt);
302 }
303
304 uint_xlen_t bmclr(RS, RB, sh)
305 {
306 int shamt = RB & (XLEN - 1);
307 mask = (2<<sh)-1;
308 return RS & ~(mask << shamt);
309 }
310
311 uint_xlen_t bminv(RS, RB, sh)
312 {
313 int shamt = RB & (XLEN - 1);
314 mask = (2<<sh)-1;
315 return RS ^ (mask << shamt);
316 }
317
318 uint_xlen_t bmext(RS, RB, sh)
319 {
320 int shamt = RB & (XLEN - 1);
321 mask = (2<<sh)-1;
322 return mask & (RS >> shamt);
323 }
324 ```
325
326 bitmask extract with reverse. can be done by bit-order-inverting all
327 of RB and getting bits of RB from the opposite end.
328
329 when RA is zero, no shift occurs. this makes bmextrev useful for
330 simply reversing all bits of a register.
331
332 ```
333 msb = ra[5:0];
334 rev[0:msb] = rb[msb:0];
335 rt = ZE(rev[msb:0]);
336
337 uint_xlen_t bmrevi(RA, RB, sh)
338 {
339 int shamt = XLEN-1;
340 if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
341 shamt = (XLEN-1)-shamt; # shift other end
342 brb = bitreverse(GPR(RB)) # swap LSB-MSB
343 mask = (2<<sh)-1;
344 return mask & (brb >> shamt);
345 }
346
347 uint_xlen_t bmrev(RA, RB, RC) {
348 return bmrevi(RA, RB, GPR(RC) & 0b111111);
349 }
350 ```
351
352 | 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name | Form |
353 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
354 | NN | RT | RA | RB | sh | 1111 |Rc| bmrevi | MDS-Form |
355
356 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name | Form |
357 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
358 | NN | RT | RA | RB | RC | 11110 |Rc| bmrev | VA2-Form |
359
360 # grevlut <a name="grevlut"> </a>
361
362 ([3x lower latency alternative](grev_gorc_design/) which is
363 not equivalent and has limited constant-generation capability)
364
generalised reverse combined with a pair of LUT2s, allowing
a constant `0b0101...0101` when RA=0, plus an option to invert
(including when RA=0, giving a constant 0b1010...1010 as the
initial value), provides a wide range of instructions
and a means to set hundreds of regular 64 bit patterns with a
single 32 bit instruction.
371
372 the two LUT2s are applied left-half (when not swapping)
373 and right-half (when swapping) so as to allow a wider
374 range of options.
375
376 <img src="/openpower/sv/grevlut2x2.jpg" width=700 />
377
378 * A value of `0b11001010` for the immediate provides
379 the functionality of a standard "grev".
380 * `0b11101110` provides gorc
381
grevlut should be arranged so as to produce the constants
needed for bext (bitextract), which in turn allows
emulation of the x86 pmovmskb instructions
<https://www.felixcloutier.com/x86/pmovmskb>.
This only requires 2 instructions (grevlut, bext).
387
388 Note that if the mask is required to be placed
389 directly into CR Fields (for use as CR Predicate
masks rather than an integer mask) then sv.cmpi or sv.ori
391 may be used instead, bearing in mind that sv.ori
392 is a 64-bit instruction, and `VL` must have been
393 set to the required length:
394
395 sv.ori./elwid=8 r10.v, r10.v, 0
396
397 The following settings provide the required mask constants:
398
399 | RA=0 | RB | imm | iv | result |
400 | ------- | ------- | ---------- | -- | ---------- |
401 | 0x555.. | 0b10 | 0b01101100 | 0 | 0x111111... |
402 | 0x555.. | 0b110 | 0b01101100 | 0 | 0x010101... |
403 | 0x555.. | 0b1110 | 0b01101100 | 0 | 0x00010001... |
404 | 0x555.. | 0b10 | 0b11000110 | 1 | 0x88888... |
405 | 0x555.. | 0b110 | 0b11000110 | 1 | 0x808080... |
406 | 0x555.. | 0b1110 | 0b11000110 | 1 | 0x80008000... |
407
408 Better diagram showing the correct ordering of shamt (RB). A LUT2
409 is applied to all locations marked in red using the first 4
410 bits of the immediate, and a separate LUT2 applied to all
411 locations in green using the upper 4 bits of the immediate.
412
413 <img src="/openpower/sv/grevlut.png" width=700 />
414
415 demo code [[openpower/sv/grevlut.py]]
416
417 ```
418 lut2(imm, a, b):
419 idx = b << 1 | a
420 return imm[idx] # idx by LSB0 order
421
dorow(imm8, step_i, chunk_size, is32b):
    for j in 0 to 31 if is32b else 63:
        if (j & chunk_size) == 0
            imm = imm8[0..3]
        else
            imm = imm8[4..7]
        step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])
    return step_o

uint64_t grevlut(uint64_t RA, uint64_t RB, uint8_t imm, bool iv, bool is32b)
432 {
433 uint64_t x = 0x5555_5555_5555_5555;
434 if (RA != 0) x = GPR(RA);
435 if (iv) x = ~x;
436 int shamt = RB & 31 if is32b else 63
437 for i in 0 to (6-is32b)
438 step = 1<<i
439 if (shamt & step) x = dorow(imm, x, step, is32b)
440 return x;
441 }
442 ```
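
For experimentation, here is a direct (unofficial) Python transcription of the
pseudocode above; it only restates that pseudocode and makes no additional
claims about the behaviour of particular immediates.

```python
def lut2(imm, a, b):
    return (imm >> ((b << 1) | a)) & 1          # idx by LSB0 order

def dorow(imm8, step_i, chunk_size, is32b):
    width = 32 if is32b else 64
    step_o = 0
    for j in range(width):
        imm = imm8 & 0xF if (j & chunk_size) == 0 else (imm8 >> 4) & 0xF
        bit = lut2(imm, (step_i >> j) & 1, (step_i >> (j ^ chunk_size)) & 1)
        step_o |= bit << j
    return step_o

def grevlut(RA, RB, imm8, iv=False, is32b=False):
    x = RA if RA != 0 else 0x5555555555555555
    if iv:
        x ^= 0xFFFFFFFFFFFFFFFF
    shamt = RB & (31 if is32b else 63)
    for i in range(5 if is32b else 6):
        step = 1 << i
        if shamt & step:
            x = dorow(imm8, x, step, is32b)
    return x
```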
443
444 A variant may specify different LUT-pairs per row,
445 using one byte of RB for each. If it is desired that
446 a particular row-crossover shall not be applied it is
447 a simple matter to set the appropriate LUT-pair in RB
448 to effect an identity transform for that row (`0b11001010`).
449
450 ```
451 uint64_t grevlutr(uint64_t RA, uint64_t RB, bool iv, bool is32b)
452 {
453 uint64_t x = 0x5555_5555_5555_5555;
454 if (RA != 0) x = GPR(RA);
455 if (iv) x = ~x;
456 for i in 0 to (6-is32b)
457 step = 1<<i
458 imm = (RB>>(i*8))&0xff
459 x = dorow(imm, x, step, is32b)
460 return x;
461 }
462
463 ```
464
465 | 0.5|6.10|11.15|16.20 |21..28 | 29.30|31| name | Form |
466 | -- | -- | --- | --- | ----- | -----|--| ------ | ----- |
467 | NN | RT | RA | s0-4 | im0-7 | 1 iv |s5| grevlogi | |
468 | NN | RT | RA | RB | im0-7 | 01 |0 | grevlog | |
469
470 An equivalent to `grevlogw` may be synthesised by setting the
471 appropriate bits in RB to set the top half of RT to zero.
472 Thus an explicit grevlogw instruction is not necessary.
473
474 # xperm
475
476 based on RV bitmanip.
477
RA contains a vector of indices to select parts of RB to be
copied to RT. The immediate-variant allows up to an 8 bit
pattern (repeated) to be targeted at different parts of RT.

xperm shares some similarity with one of the uses of bmator,
in that xperm indices use binary addressing whereas bmator
may be considered to use unary addressing.
485
486 ```
487 uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
488 {
489 uint_xlen_t r = 0;
490 uint_xlen_t sz = 1LL << sz_log2;
491 uint_xlen_t mask = (1LL << sz) - 1;
492 uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
493 for (int i = 0; i < XLEN; i += sz) {
494 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
495 if (pos < XLEN)
496 r |= ((RB >> pos) & mask) << i;
497 }
498 return r;
499 }
500 uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
501 {
502 uint_xlen_t r = 0;
503 uint_xlen_t sz = 1LL << sz_log2;
504 uint_xlen_t mask = (1LL << sz) - 1;
505 for (int i = 0; i < XLEN; i += sz) {
506 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
507 if (pos < XLEN)
508 r |= ((RB >> pos) & mask) << i;
509 }
510 return r;
511 }
512 uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
513 { return xperm(RA, RB, 2); }
514 uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
515 { return xperm(RA, RB, 3); }
516 uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
517 { return xperm(RA, RB, 4); }
518 uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
519 { return xperm(RA, RB, 5); }
520 ```
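
A hedged Python model of the pseudocode above, showing the indexing idiom
(example values only): with byte-sized chunks (`sz_log2 = 3`, i.e. xperm_b),
an index vector of `0x0001020304050607` in RA byte-reverses RB.

```python
XLEN = 64

def xperm(ra, rb, sz_log2):
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, XLEN, sz):
        pos = ((ra >> i) & mask) << sz_log2
        if pos < XLEN:                      # out-of-range indices select zero
            r |= ((rb >> pos) & mask) << i
    return r

assert xperm(0x0001020304050607, 0x1122334455667788, 3) == 0x8877665544332211
```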
521
522 # bitmatrix
523
bmatflip and bmatxor are found in the Cray XMT, and in x86 the
equivalent is known as GF2P8AFFINEQB. Uses:
526
527 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
528 * SM4, Reed Solomon, RAID6
529 <https://stackoverflow.com/questions/59124720/what-are-the-avx-512-galois-field-related-instructions-for>
530 * Vector bit-reverse <https://reviews.llvm.org/D91515?id=305411>
531 * Affine Inverse <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
532
533 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
534 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
535 | NN | RS | RA |im04 | im5| 1 1 | im67 00 110 |Rc| bmatxori | TODO |
536
537
538 ```
539 uint64_t bmatflip(uint64_t RA)
540 {
541 uint64_t x = RA;
542 x = shfl64(x, 31);
543 x = shfl64(x, 31);
544 x = shfl64(x, 31);
545 return x;
546 }
547
548 uint64_t bmatxori(uint64_t RS, uint64_t RA, uint8_t imm) {
549 // transpose of RA
550 uint64_t RAt = bmatflip(RA);
551 uint8_t u[8]; // rows of RS
552 uint8_t v[8]; // cols of RA
553 for (int i = 0; i < 8; i++) {
554 u[i] = RS >> (i*8);
555 v[i] = RAt >> (i*8);
556 }
557 uint64_t bit, x = 0;
558 for (int i = 0; i < 64; i++) {
559 bit = (imm >> (i%8)) & 1;
560 bit ^= pcnt(u[i / 8] & v[i % 8]) & 1;
561 x |= bit << i;
562 }
563 return x;
564 }
565
566 uint64_t bmatxor(uint64_t RA, uint64_t RB) {
    return bmatxori(RA, RB, 0xff);
568 }
569
570 uint64_t bmator(uint64_t RA, uint64_t RB) {
571 // transpose of RB
572 uint64_t RBt = bmatflip(RB);
573 uint8_t u[8]; // rows of RA
574 uint8_t v[8]; // cols of RB
575 for (int i = 0; i < 8; i++) {
576 u[i] = RA >> (i*8);
577 v[i] = RBt >> (i*8);
578 }
579 uint64_t x = 0;
580 for (int i = 0; i < 64; i++) {
581 if ((u[i / 8] & v[i % 8]) != 0)
582 x |= 1LL << i;
583 }
584 return x;
585 }
586
587 uint64_t bmatand(uint64_t RA, uint64_t RB) {
588 // transpose of RB
589 uint64_t RBt = bmatflip(RB);
590 uint8_t u[8]; // rows of RA
591 uint8_t v[8]; // cols of RB
592 for (int i = 0; i < 8; i++) {
593 u[i] = RA >> (i*8);
594 v[i] = RBt >> (i*8);
595 }
596 uint64_t x = 0;
597 for (int i = 0; i < 64; i++) {
598 if ((u[i / 8] & v[i % 8]) == 0xff)
599 x |= 1LL << i;
600 }
601 return x;
602 }
603 ```
604
605 # Introduction to Carry-less and GF arithmetic
606
607 * obligatory xkcd <https://xkcd.com/2595/>
608
609 There are three completely separate types of Galois-Field-based arithmetic
610 that we implement which are not well explained even in introductory
611 literature. A slightly oversimplified explanation is followed by more
612 accurate descriptions:
613
* `GF(2)` carry-less binary arithmetic. this is not actually a Galois Field
but its *coefficients* are; it is loosely referred to as GF(2) - see below
as to why.
616 * `GF(p)` modulo arithmetic with a Prime number, these are "proper"
617 Galois Fields
618 * `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
619 (2^N) and a second "reducing" polynomial (similar to a prime number), these
620 are said to be GF(2^N) arithmetic.
621
622 further detailed and more precise explanations are provided below
623
624 * **Polynomials with coefficients in `GF(2)`**
625 (aka. Carry-less arithmetic -- the `cl*` instructions).
626 This isn't actually a Galois Field, but its coefficients are. This is
627 basically binary integer addition, subtraction, and multiplication like
628 usual, except that carries aren't propagated at all, effectively turning
629 both addition and subtraction into the bitwise xor operation. Division and
630 remainder are defined to match how addition and multiplication works.
631 * **Galois Fields with a prime size**
632 (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
633 This is basically just the integers mod `p`.
634 * **Galois Fields with a power-of-a-prime size**
635 (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
636 integer `n > 0`).
637 We only implement these for `p == 2`, called Binary Galois Fields
638 (`GF(2^n)` -- the `gfb*` instructions).
639 For any prime `p`, `GF(p^n)` is implemented as polynomials with
640 coefficients in `GF(p)` and degree `< n`, where the polynomials are the
remainders of dividing by a specifically chosen polynomial in `GF(p)` called
the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
Polynomial must be an irreducible polynomial (like primes, but for
polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
645 and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
646 affect `GF(p^n)`'s mathematical shape, all that changes is the specific
647 polynomials used to implement `GF(p^n)`.
648
649 Many implementations and much of the literature do not make a clear
650 distinction between these three categories, which makes it confusing
651 to understand what their purpose and value is.
652
653 * carry-less multiply is extremely common and is used for the ubiquitous
654 CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
655 * GF(2^N) forms the basis of Rijndael (the current AES standard) and
656 has significant uses throughout cryptography
657 * GF(p) is the basis again of a significant quantity of algorithms
658 (TODO, list them, jacob knows what they are), even though the
659 modulo is limited to be below 64-bit (size of a scalar int)
660
661 # Instructions for Carry-less Operations
662
663 aka. Polynomials with coefficients in `GF(2)`
664
665 Carry-less addition/subtraction is simply XOR, so a `cladd`
666 instruction is not provided since the `xor[i]` instruction can be used instead.
667
668 These are operations on polynomials with coefficients in `GF(2)`, with the
669 polynomial's coefficients packed into integers with the following algorithm:
670
671 ```python
672 [[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
673 ```
674
675 ## Carry-less Multiply Instructions
676
677 based on RV bitmanip
678 see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
679 <https://www.felixcloutier.com/x86/pclmulqdq> and
680 <https://en.m.wikipedia.org/wiki/Carry-less_product>
681
682 They are worth adding as their own non-overwrite operations
683 (in the same pipeline).
684
685 ### `clmul` Carry-less Multiply
686
687 ```python
688 [[!inline pagenames="gf_reference/clmul.py" raw="yes"]]
689 ```
690
691 ### `clmulh` Carry-less Multiply High
692
693 ```python
694 [[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]
695 ```
696
697 ### `clmulr` Carry-less Multiply (Reversed)
698
699 Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
700 bit-reversed inputs.
701
702 ```python
703 [[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]
704 ```
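
As an illustration of that equivalence, here is a throwaway Python check,
taking `clmulr` to be bits `XLEN-1` and upward of the full carry-less product
(the conventional definition, assumed here rather than taken from the
reference code above); the operand values are arbitrary.

```python
W = 64
MASK = (1 << W) - 1

def clmul_wide(a, b):
    # full 2*W-bit carry-less product
    r = 0
    for i in range(W):
        if (b >> i) & 1:
            r ^= a << i
    return r

def brev(x):
    # bit-reverse a W-bit value
    return int(format(x & MASK, '064b')[::-1], 2)

a, b = 0x123456789ABCDEF0, 0x0F1E2D3C4B5A6978
clmulr_result = (clmul_wide(a, b) >> (W - 1)) & MASK
assert clmulr_result == brev(clmul_wide(brev(a), brev(b)) & MASK)
```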
705
706 ## `clmadd` Carry-less Multiply-Add
707
708 ```
709 clmadd RT, RA, RB, RC
710 ```
711
712 ```
713 (RT) = clmul((RA), (RB)) ^ (RC)
714 ```
715
716 ## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)
717
718 Used in combination with SV FFT REMAP to perform a full Discrete Fourier
719 Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
720 to avoid the need for a temp register. RS is written to as well as RT.
721
Note: Polynomials over GF(2) form a Ring rather than a Field. Because the
definition of the Inverse Discrete Fourier Transform involves calculating a
multiplicative inverse, which may not exist in every Ring, the
Inverse Discrete Fourier Transform may not exist. (AFAICT the number of inputs
to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
TODO: check with someone who knows for sure if that's correct.)
728
729 ```
730 cltmadd RT, RA, RB, RC
731 ```
732
733 TODO: add link to explanation for where `RS` comes from.
734
735 ```
736 a = (RA)
737 c = (RC)
738 # read all inputs before writing to any outputs in case
739 # an input overlaps with an output register.
740 (RT) = clmul(a, (RB)) ^ c
741 (RS) = a ^ c
742 ```
743
744 ## `cldivrem` Carry-less Division and Remainder
745
746 `cldivrem` isn't an actual instruction, but is just used in the pseudo-code
747 for other instructions.
748
749 ```python
750 [[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
751 ```
752
753 ## `cldiv` Carry-less Division
754
755 ```
756 cldiv RT, RA, RB
757 ```
758
759 ```
760 n = (RA)
761 d = (RB)
762 q, r = cldivrem(n, d, width=XLEN)
763 (RT) = q
764 ```
765
766 ## `clrem` Carry-less Remainder
767
768 ```
769 clrem RT, RA, RB
770 ```
771
772 ```
773 n = (RA)
774 d = (RB)
775 q, r = cldivrem(n, d, width=XLEN)
776 (RT) = r
777 ```
778
779 # Instructions for Binary Galois Fields `GF(2^m)`
780
781 see:
782
783 * <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
784 * <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
785 * <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>
786
787 Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
788 instruction is not provided since the `xor[i]` instruction can be used instead.
789
790 ## `GFBREDPOLY` SPR -- Reducing Polynomial
791
792 In order to save registers and to make operations orthogonal with standard
793 arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
794 This also allows hardware to pre-compute useful parameters (such as the
795 degree, or look-up tables) based on the reducing polynomial, and store them
796 alongside the SPR in hidden registers, only recomputing them whenever the SPR
797 is written to, rather than having to recompute those values for every
798 instruction.
799
800 Because Galois Fields require the reducing polynomial to be an irreducible
801 polynomial, that guarantees that any polynomial of `degree > 1` must have
802 the LSB set, since otherwise it would be divisible by the polynomial `x`,
803 making it reducible, making whatever we're working on no longer a Field.
804 Therefore, we can reuse the LSB to indicate `degree == XLEN`.
805
806 ```python
807 [[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
808 ```
809
810 ## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`
811
812 unless this is an immediate op, `mtspr` is completely sufficient.
813
814 ```python
815 [[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]
816 ```
817
818 ## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication
819
820 ```
821 gfbmul RT, RA, RB
822 ```
823
824 ```python
825 [[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
826 ```
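
The reference code above reads the reducing polynomial from `GFBREDPOLY`.
Purely as a worked illustration, here is a self-contained sketch that
hard-codes the AES polynomial `0x11B` (an assumption chosen for the example)
and checks the well-known `GF(2^8)` inverse pair `0x53 * 0xCA = 0x01`.

```python
def gfb_mul(a, b, red_poly=0x11B, degree=8):
    # carry-less multiply ...
    prod = 0
    for i in range(degree):
        if (b >> i) & 1:
            prod ^= a << i
    # ... then reduce modulo the reducing polynomial
    for i in range(2 * degree - 2, degree - 1, -1):
        if (prod >> i) & 1:
            prod ^= red_poly << (i - degree)
    return prod

assert gfb_mul(0x53, 0xCA) == 0x01   # classic AES GF(2^8) multiplicative-inverse pair
```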
827
828 ## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add
829
830 ```
831 gfbmadd RT, RA, RB, RC
832 ```
833
834 ```python
835 [[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
836 ```
837
838 ## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)
839
840 Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
841 Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
842 for a temp register. RS is written to as well as RT.
843
844 ```
845 gfbtmadd RT, RA, RB, RC
846 ```
847
848 TODO: add link to explanation for where `RS` comes from.
849
850 ```
851 a = (RA)
852 c = (RC)
853 # read all inputs before writing to any outputs in case
854 # an input overlaps with an output register.
855 (RT) = gfbmadd(a, (RB), c)
856 # use gfbmadd again since it reduces the result
857 (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
858 ```
859
860 ## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
861
862 ```
863 gfbinv RT, RA
864 ```
865
866 ```python
867 [[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]
868 ```
869
870 # Instructions for Prime Galois Fields `GF(p)`
871
872 ## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions
873
874 ## `gfpadd` Prime Galois Field `GF(p)` Addition
875
876 ```
877 gfpadd RT, RA, RB
878 ```
879
880 ```python
881 [[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]
882 ```
883
884 the addition happens on infinite-precision integers
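
that is, the sum is reduced modulo `GFPRIME` before any truncation to the
register width can occur. A quick Python illustration of why this matters
(the modulus and operands here are arbitrary example values):

```python
p  = (1 << 64) - 59                           # example modulus just below 2^64
ra = p - 1
rb = p - 2

correct = (ra + rb) % p                       # intermediate sum held at full precision
wrapped = ((ra + rb) & ((1 << 64) - 1)) % p   # what a truncating 64-bit add would give
assert correct == p - 3
assert correct != wrapped
```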
885
886 ## `gfpsub` Prime Galois Field `GF(p)` Subtraction
887
888 ```
889 gfpsub RT, RA, RB
890 ```
891
892 ```python
893 [[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]
894 ```
895
896 the subtraction happens on infinite-precision integers
897
898 ## `gfpmul` Prime Galois Field `GF(p)` Multiplication
899
900 ```
901 gfpmul RT, RA, RB
902 ```
903
904 ```python
905 [[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]
906 ```
907
908 the multiplication happens on infinite-precision integers
909
910 ## `gfpinv` Prime Galois Field `GF(p)` Invert
911
912 ```
913 gfpinv RT, RA
914 ```
915
916 Some potential hardware implementations are found in:
917 <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>
918
919 ```python
920 [[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
921 ```
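
For software cross-checking, a hedged sketch (not the reference code above):
since the modulus is prime, the inverse of a non-zero element can be computed
with Fermat's little theorem. The function name and example values are
illustrative assumptions.

```python
def gfpinv_reference(a, p):
    # a^(p-2) mod p == a^(-1) mod p, for prime p and a not a multiple of p
    assert a % p != 0
    return pow(a, p - 2, p)

p = 65537                     # example prime
x = 12345
assert (x * gfpinv_reference(x, p)) % p == 1
```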
922
923 ## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add
924
925 ```
926 gfpmadd RT, RA, RB, RC
927 ```
928
929 ```python
930 [[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]
931 ```
932
933 the multiplication and addition happens on infinite-precision integers
934
935 ## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract
936
937 ```
938 gfpmsub RT, RA, RB, RC
939 ```
940
941 ```python
942 [[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]
943 ```
944
945 the multiplication and subtraction happens on infinite-precision integers
946
947 ## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed
948
949 ```
950 gfpmsubr RT, RA, RB, RC
951 ```
952
953 ```python
954 [[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]
955 ```
956
957 the multiplication and subtraction happens on infinite-precision integers
958
959 ## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)
960
961 Used in combination with SV FFT REMAP to perform
962 a full Number-Theoretic-Transform in-place. Possible by having 3-in 2-out,
963 to avoid the need for a temp register. RS is written
964 to as well as RT.
965
966 ```
967 gfpmaddsubr RT, RA, RB, RC
968 ```
969
970 TODO: add link to explanation for where `RS` comes from.
971
972 ```
973 factor1 = (RA)
974 factor2 = (RB)
975 term = (RC)
976 # read all inputs before writing to any outputs in case
977 # an input overlaps with an output register.
978 (RT) = gfpmadd(factor1, factor2, term)
979 (RS) = gfpmsubr(factor1, factor2, term)
980 ```
981
982 # Already in POWER ISA or subsumed
983
Lists operations that are either subsumed by
other bitmanip operations, or are already in
the Power ISA.
987
988 ## cmix
989
990 based on RV bitmanip, covered by ternlog bitops
991
992 ```
993 uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
994 return (RA & RB) | (RC & ~RB);
995 }
996 ```
997
998 ## count leading/trailing zeros with mask
999
1000 in v3.1 p105
1001
1002 ```
count = 0
do i = 0 to 63
    if ((RB)i = 1) then do
        if ((RS)i = 1) then
            break
        end
    end
    count ← count + 1
RA ← EXTZ64(count)
1007 ```
1008
1009 ## bit deposit
1010
pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106
1012
1013 do while(m < 64)
1014 if VSR[VRB+32].dword[i].bit[63-m]=1 then do
1015 result = VSR[VRA+32].dword[i].bit[63-k]
1016 VSR[VRT+32].dword[i].bit[63-m] = result
1017 k = k + 1
1018 m = m + 1
1019
1020 ```
1021
1022 uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
1023 {
1024 uint_xlen_t r = 0;
1025 for (int i = 0, j = 0; i < XLEN; i++)
1026 if ((RB >> i) & 1) {
1027 if ((RA >> j) & 1)
1028 r |= uint_xlen_t(1) << i;
1029 j++;
1030 }
1031 return r;
1032 }
1033
1034 ```
1035
1036 ## bit extract
1037
1038 other way round: identical to RV bext: pextd, found in v3.1 p196
1039
1040 ```
1041 uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
1042 {
1043 uint_xlen_t r = 0;
1044 for (int i = 0, j = 0; i < XLEN; i++)
1045 if ((RB >> i) & 1) {
1046 if ((RA >> i) & 1)
1047 r |= uint_xlen_t(1) << j;
1048 j++;
1049 }
1050 return r;
1051 }
1052 ```
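
To illustrate how the two operations above relate (a hedged Python model of
the C pseudocode, not Power ISA reference code, with arbitrary example
values): depositing the extracted bits back under the same mask recovers
exactly the masked bits.

```python
def bext(ra, rb, xlen=64):
    r, j = 0, 0
    for i in range(xlen):
        if (rb >> i) & 1:
            r |= ((ra >> i) & 1) << j
            j += 1
    return r

def bdep(ra, rb, xlen=64):
    r, j = 0, 0
    for i in range(xlen):
        if (rb >> i) & 1:
            r |= ((ra >> j) & 1) << i
            j += 1
    return r

x, m = 0x123456789ABCDEF0, 0x00FF00FF00FF00FF
assert bdep(bext(x, m), m) == x & m
```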
1053
1054 ## centrifuge
1055
1056 found in v3.1 p106 so not to be added here
1057
1058 ```
ptr0 = 0
ptr1 = 0
do i = 0 to 63
    if ((RB)i = 0) then do
        result[ptr0] = (RS)i
        ptr0 = ptr0 + 1
    end
    if ((RB)63-i = 1) then do
        result[63-ptr1] = (RS)63-i
        ptr1 = ptr1 + 1
    end
RA = result
1071 ```
1072
1073 ## bit to byte permute
1074
similar to matrix permute in RV bitmanip, which has XOR and OR variants,
these perform a transpose (bmatflip).
TODO: this looks like VSX; is there a scalar variant
in v3.0/1 already?
1079
1080 do j = 0 to 7
1081 do k = 0 to 7
1082 b = VSR[VRB+32].dword[i].byte[k].bit[j]
1083 VSR[VRT+32].dword[i].byte[j].bit[k] = b
1084
1085 ## grev
1086
superseded by grevlut
1088
1089 based on RV bitmanip, this is also known as a butterfly network. however
1090 where a butterfly network allows setting of every crossbar setting in
1091 every row and every column, generalised-reverse (grev) only allows
1092 a per-row decision: every entry in the same row must either switch or
1093 not-switch.
1094
1095 <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />
1096
1097 ```
1098 uint64_t grev64(uint64_t RA, uint64_t RB)
1099 {
1100 uint64_t x = RA;
1101 int shamt = RB & 63;
1102 if (shamt & 1) x = ((x & 0x5555555555555555LL) << 1) |
1103 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1104 if (shamt & 2) x = ((x & 0x3333333333333333LL) << 2) |
1105 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1106 if (shamt & 4) x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1107 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1108 if (shamt & 8) x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
1109 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1110 if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
1111 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1112 if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
1113 ((x & 0xFFFFFFFF00000000LL) >> 32);
1114 return x;
1115 }
1116
1117 ```
1118
1119 ## gorc
1120
based on RV bitmanip, gorc is superseded by grevlut
1122
1123 ```
1124 uint32_t gorc32(uint32_t RA, uint32_t RB)
1125 {
1126 uint32_t x = RA;
1127 int shamt = RB & 31;
1128 if (shamt & 1) x |= ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
1129 if (shamt & 2) x |= ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
1130 if (shamt & 4) x |= ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
1131 if (shamt & 8) x |= ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
1132 if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
1133 return x;
1134 }
1135 uint64_t gorc64(uint64_t RA, uint64_t RB)
1136 {
1137 uint64_t x = RA;
1138 int shamt = RB & 63;
1139 if (shamt & 1) x |= ((x & 0x5555555555555555LL) << 1) |
1140 ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
1141 if (shamt & 2) x |= ((x & 0x3333333333333333LL) << 2) |
1142 ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
1143 if (shamt & 4) x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
1144 ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
1145 if (shamt & 8) x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
1146 ((x & 0xFF00FF00FF00FF00LL) >> 8);
1147 if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
1148 ((x & 0xFFFF0000FFFF0000LL) >> 16);
1149 if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
1150 ((x & 0xFFFFFFFF00000000LL) >> 32);
1151 return x;
1152 }
1153
1154 ```
1155
1156
1157 # Appendix
1158
1159 see [[bitmanip/appendix]]
1160