1 [[!tag standards]]
2
3 [[!toc levels=1]]
4
5 # Implementation Log
6
7 * ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
8 * grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
9 * GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
10 * binutils <https://bugs.libre-soc.org/show_bug.cgi?id=836>
11 * shift-and-add <https://bugs.libre-soc.org/show_bug.cgi?id=968>
12
13 # bitmanipulation
14
15 **DRAFT STATUS**
16
17 pseudocode: [[openpower/isa/bitmanip]]
18
19 this extension amalgamates bitmanipulation primitives from many sources,
20 including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
21 Also included are DSP/Multimedia operations suitable for Audio/Video.
22 Vectorisation and SIMD are removed: these are straight scalar (element)
23 operations making them suitable for embedded applications. Vectorisation
24 Context is provided by [[openpower/sv]].
25
26 When combined with SV, scalar variants of bitmanip operations found in
27 VSX are added so that the Packed SIMD aspects of VSX may be retired as
28 "legacy" in the far future (10 to 20 years). Also, VSX is hundreds of
29 opcodes, requires 128 bit pathways, and is wholly unsuited to low power
30 or embedded scenarios.
31
32 ternlogv is experimental and is the only operation that may be considered
33 a "Packed SIMD". It is added as a variant of the already well-justified
34 ternlog operation (done in AVX512 as an immediate only) "because it
35 looks fun". As it is based on the LUT4 concept it will allow accelerated
36 emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to
37 achieve similar objectives.
38
39 general-purpose Galois Field 2^M operations are added so as to avoid
40 huge custom opcode proliferation across many areas of Computer Science.
41 however for convenience and also to avoid setup costs, some of the more
42 common operations (clmul, crc32) are also added. The expectation is
43 that these operations would all be covered by the same pipeline.
44
45 note that there are brownfield spaces below that could incorporate
46 some of the set-before-first and other scalar operations listed in
47 [[sv/mv.swizzle]],
48 [[sv/vector_ops]], [[sv/int_fp_mv]] and the [[sv/av_opcodes]] as well as
49 [[sv/setvl]], [[sv/svstep]], [[sv/remap]]
50
51 Useful resource:
52
53 * <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
54 * <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
55 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
56 * <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
57
58 [[!inline pages="openpower/sv/draft_opcode_tables" quick="yes" raw="yes" ]]
59
60 # binary and ternary bitops
61
Similar to FPGA LUTs: for two (binary) or three (ternary) inputs take
bits from each input, concatenate them and perform a lookup into a
table using an 8-bit immediate (for the ternary instructions), or in
another register (4-bit for the binary instructions). The binary lookup
instructions have CR Field lookup variants due to CR Fields being 4 bits wide.
67
68 Like the x86 AVX512F
69 [vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)
70 instructions.
71
72 ## ternlogi
73
74 | 0.5|6.10|11.15|16.20| 21..28|29.30|31|
75 | -- | -- | --- | --- | ----- | --- |--|
76 | NN | RT | RA | RB | im0-7 | 00 |Rc|
77
78 lut3(imm, a, b, c):
79 idx = c << 2 | b << 1 | a
80 return imm[idx] # idx by LSB0 order
81
82 for i in range(64):
83 RT[i] = lut3(imm, RB[i], RA[i], RT[i])
84
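The pseudocode above can be exercised with a small executable model (a
sketch, assuming 64-bit registers; `imm` is the 8-bit lookup table, and
`ternlogi` is a hypothetical helper name for illustration):

```python
def lut3(imm, a, b, c):
    # idx selects one bit of the 8-bit immediate, LSB0 order
    idx = (c << 2) | (b << 1) | a
    return (imm >> idx) & 1

def ternlogi(rt, ra, rb, imm):
    # RT is both a source and the destination, as in the pseudocode
    res = 0
    for i in range(64):
        bit = lut3(imm, (rb >> i) & 1, (ra >> i) & 1, (rt >> i) & 1)
        res |= bit << i
    return res
```

for example `imm=0x80` sets only table entry 7, giving a three-way AND,
and `imm=0xFE` (all entries except 0) gives a three-way OR.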
85 ## binlut
86
87 Binary lookup is a dynamic LUT2 version of ternlogi. Firstly, the
88 lookup table is 4 bits wide not 8 bits, and secondly the lookup
89 table comes from a register not an immediate.
90
91 | 0.5|6.10|11.15|16.20| 21..25|26..31 | Form |
92 | -- | -- | --- | --- | ----- |--------|---------|
93 | NN | RT | RA | RB | RC |nh 00001| VA-Form |
94 | NN | RT | RA | RB | /BFA/ |0 01001| VA-Form |
95
96 For binlut, the 4-bit LUT may be selected from either the high nibble
97 or the low nibble of the first byte of RC:
98
99 lut2(imm, a, b):
100 idx = b << 1 | a
101 return imm[idx] # idx by LSB0 order
102
103 imm = (RC>>(nh*4))&0b1111
104 for i in range(64):
105 RT[i] = lut2(imm, RB[i], RA[i])
106
107 For bincrlut, `BFA` selects the 4-bit CR Field as the LUT2:
108
109 for i in range(64):
110 RT[i] = lut2(CRs{BFA}, RB[i], RA[i])
111
112 When Vectorised with SVP64, as usual both source and destination may be
113 Vector or Scalar.
114
115 *Programmer's note: a dynamic ternary lookup may be synthesised from
116 a pair of `binlut` instructions followed by a `ternlogi` to select which
117 to merge. Use `nh` to select which nibble to use as the lookup table
118 from the RC source register (`nh=1` nibble high), i.e. keeping
119 an 8-bit LUT3 in RC, the first `binlut` instruction may set nh=0 and
120 the second nh=1.*
121
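The synthesis described in the note above can be sketched in executable
form (helper names are hypothetical; the final per-bit selection is
itself a single `ternlogi` with the classic 2:1-mux lookup table
`0xCA`, i.e. RT=selector, RA=high result, RB=low result):

```python
def lut2(imm4, a, b):
    return (imm4 >> ((b << 1) | a)) & 1

def binlut(ra, rb, rc, nh):
    # 4-bit LUT taken from the low (nh=0) or high (nh=1) nibble of RC
    imm = (rc >> (nh * 4)) & 0b1111
    res = 0
    for i in range(64):
        res |= lut2(imm, (rb >> i) & 1, (ra >> i) & 1) << i
    return res

def dynamic_lut3(ra, rb, sel, lut8):
    # two binluts, one per nibble of the 8-bit LUT3 held in a register,
    # then per-bit selection by 'sel' (expressible as ternlogi imm 0xCA)
    lo = binlut(ra, rb, lut8, nh=0)
    hi = binlut(ra, rb, lut8, nh=1)
    mask = (1 << 64) - 1
    return ((sel & hi) | (~sel & lo)) & mask
```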
122 ## crternlogi
123
This is the CR Field variant of `ternlogi`: the operands are CR Fields
rather than Integer registers.
125
126 | 0.5|6.8 | 9.11|12.14|15.17|18.20|21.28 | 29.30|31|
127 | -- | -- | --- | --- | --- |-----|----- | -----|--|
128 | NN | BT | BA | BB | BC |m0-2 | imm | 01 |m3|
129
    mask = m0..m3
    for i in range(4):
        a, b, c = CRs[BA][i], CRs[BB][i], CRs[BC][i]
        if mask[i]: CRs[BT][i] = lut3(imm, a, b, c)
134
135 This instruction is remarkably similar to the existing crops, `crand` etc.
136 which have been noted to be a 4-bit (binary) LUT. In effect `crternlogi`
137 is the ternary LUT version of crops, having an 8-bit LUT.
138
139 ## crbinlog
140
With ternary (LUT3) dynamic instructions being very costly,
and CR Fields being only 4 bits, a binary (LUT2) variant is more practical.
143
144 | 0.5|6.8 | 9.11|12.14|15.17|18.21|22...30 |31|
145 | -- | -- | --- | --- | --- |-----| -------- |--|
146 | NN | BT | BA | BB | BC |m0-m3|000101110 |0 |
147
    mask = m0..m3
    for i in range(4):
        a, b = CRs[BA][i], CRs[BB][i]
        if mask[i]: CRs[BT][i] = lut2(CRs[BC], a, b)
152
153 When SVP64 Vectorised any of the 4 operands may be Scalar or
154 Vector, including `BC` meaning that multiple different dynamic
155 lookups may be performed with a single instruction.
156
157 *Programmer's note: just as with binlut and ternlogi, a pair
158 of crbinlog instructions followed by a merging crternlogi may
159 be deployed to synthesise dynamic ternary (LUT3) CR Field
160 manipulation*
161
162 # int ops
163
## min/max
165
166 required for the [[sv/av_opcodes]]
167
signed and unsigned min/max for integers. this is sort-of partly
synthesiseable in [[sv/svp64]] with pred-result, as long as the dest reg
is one of the sources, but not for both signed and unsigned. when the dest
is also one of the sources and the mv fails due to the CR bit-test failing,
this will only overwrite the dest where the src is greater (or less).

dedicated signed/unsigned min/max instructions give more flexibility.
175
176 X-Form
177
178 * XO=0001001110, itype=0b00 min, unsigned
179 * XO=0101001110, itype=0b01 min, signed
180 * XO=0011001110, itype=0b10 max, unsigned
181 * XO=0111001110, itype=0b11 max, signed
182
183
184 ```
185 uint_xlen_t mins(uint_xlen_t rs1, uint_xlen_t rs2)
186 { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
187 }
188 uint_xlen_t maxs(uint_xlen_t rs1, uint_xlen_t rs2)
189 { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
190 }
191 uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
192 { return rs1 < rs2 ? rs1 : rs2;
193 }
194 uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
195 { return rs1 > rs2 ? rs1 : rs2;
196 }
197 ```
198
199 ## average
200
201 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
202 but not scalar
203
```
uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 + rs2 + 1) >> 1;
}
```
209
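note that taken literally at fixed width the sum above can wrap: the
intermediate needs one extra bit in hardware. a software model can avoid
widening via the identity `(a | b) - ((a ^ b) >> 1)` for the
rounding-up average (a sketch for checking, not part of the spec):

```python
def intavg(a, b):
    # rounding-up average without needing a wider intermediate:
    # a+b = 2*(a&b) + (a^b), and (a|b) = (a&b) + (a^b)
    return (a | b) - ((a ^ b) >> 1)
```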
210 ## absdu
211
212 required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)
213 but not scalar
214
```
uint_xlen_t absdu(uint_xlen_t rs1, uint_xlen_t rs2) {
    return (rs1 > rs2) ? (rs1 - rs2) : (rs2 - rs1);
}
```
220
221 ## abs-accumulate
222
223 required for the [[sv/av_opcodes]], these are needed for motion estimation.
224 both are overwrite on RS.
225
```
uint_xlen_t uintabsacc(uint_xlen_t rs, uint_xlen_t ra, uint_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
uint_xlen_t intabsacc(uint_xlen_t rs, int_xlen_t ra, int_xlen_t rb) {
    return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
}
```
234
For SVP64, the twin Elwidths allow e.g. a 16-bit accumulator for 8-bit
differences. Form is `RM-1P-3S1D` where RS-as-source has a separate
SVP64 designation from RS-as-dest. This gives a limited range of
non-overwrite capability.
239
240 # shift-and-add <a name="shift-add"> </a>
241
Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
Adding more LD/ST modes is too complex, so a compromise is to add
shift-and-add, which replaces a pair of explicit instructions in hot loops.
245
```
# 1.6.27 Z23-FORM
|0     |6     |11    |16    |21 |23    |31 |
| PO   | RT   | RA   | RB   |sm | XO   |Rc |
```
251
Pseudo-code (shadd):

    shift <- sm + 1                     # effective shift is between 1 and 4
    sum[0:63] <- ((RB) << shift) + (RA) # shift RB, add RA
    RT <- sum                           # result stored in RT

Pseudo-code (shadduw):

    shift <- sm + 1                     # effective shift is between 1 and 4
    n <- (RB)[XLEN/2:XLEN-1]            # restrict RB to its lower 32 bits
    sum[0:63] <- (n << shift) + (RA)    # shift n, add RA
    RT <- sum                           # result stored in RT
264
265 ```
266 uint_xlen_t shadd(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
267 sm = sm & 0x3;
268 return (RB << (sm+1)) + RA;
269 }
270
271 uint_xlen_t shadduw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
272 uint_xlen_t n = RB & 0xFFFFFFFF;
273 sm = sm & 0x3;
274 return (n << (sm+1)) + RA;
275 }
276 ```
277
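A checked model of the two operations (a sketch assuming XLEN=64; `sm`
is the 2-bit immediate field, giving an effective shift of 1 to 4):

```python
XLEN_MASK = (1 << 64) - 1

def shadd(ra, rb, sm):
    # shift RB left by sm+1, add RA, truncate to 64 bits
    return ((rb << ((sm & 3) + 1)) + ra) & XLEN_MASK

def shadduw(ra, rb, sm):
    # only the lower 32 bits of RB take part
    return (((rb & 0xFFFFFFFF) << ((sm & 3) + 1)) + ra) & XLEN_MASK
```

typical use is addressing: indexing an array of 8-byte elements becomes
`shadd(base, i, 2)`, i.e. `base + i*8`, in one instruction.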
278 # bitmask set
279
280 based on RV bitmanip singlebit set, instruction format similar to shift
281 [[isa/fixedshift]]. bmext is actually covered already (shift-with-mask
282 rldicl but only immediate version). however bitmask-invert is not,
283 and set/clr are not covered, although they can use the same Shift ALU.
284
bmext (RB) version is not the same as rldicl because bmext is a right
shift by RC, where rldicl is a left rotate. for the immediate version
this does not matter, so a bmexti is not required. for bmrev, however,
there is no direct equivalent, and consequently a bmrevi is required.
289
290 bmset (register for mask amount) is particularly useful for creating
291 predicate masks where the length is a dynamic runtime quantity.
292 bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask"
293 in a single instruction without needing to initialise or depend on any
294 other registers.
295
296 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
297 | -- | -- | --- | --- | --- | ------- |--| ----- |
298 | NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |
299
300 Immediate-variant is an overwrite form:
301
302 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
303 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
304 | NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |
305
306 ```
307 def MASK(x, y):
308 if x < y:
309 x = x+1
310 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
311 mask_b = ((1 << y) - 1) & ((1 << 64) - 1)
312 elif x == y:
313 return 1 << x
314 else:
315 x = x+1
316 mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
317 mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
318 return mask_a ^ mask_b
319
320
321 uint_xlen_t bmset(RS, RB, sh)
322 {
323 int shamt = RB & (XLEN - 1);
324 mask = (2<<sh)-1;
325 return RS | (mask << shamt);
326 }
327
328 uint_xlen_t bmclr(RS, RB, sh)
329 {
330 int shamt = RB & (XLEN - 1);
331 mask = (2<<sh)-1;
332 return RS & ~(mask << shamt);
333 }
334
335 uint_xlen_t bminv(RS, RB, sh)
336 {
337 int shamt = RB & (XLEN - 1);
338 mask = (2<<sh)-1;
339 return RS ^ (mask << shamt);
340 }
341
342 uint_xlen_t bmext(RS, RB, sh)
343 {
344 int shamt = RB & (XLEN - 1);
345 mask = (2<<sh)-1;
346 return mask & (RS >> shamt);
347 }
348 ```
349
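The four operations above as an executable sketch (assuming XLEN=64;
note that `(2<<sh)-1` yields `sh+1` ones, so `sh=7` gives an 8-bit mask):

```python
XLEN = 64

def _mask_shamt(rb, sh):
    # common helper: mask of sh+1 ones, shift amount from RB
    return (2 << sh) - 1, rb & (XLEN - 1)

def bmset(rs, rb, sh):
    mask, shamt = _mask_shamt(rb, sh)
    return (rs | (mask << shamt)) & ((1 << XLEN) - 1)

def bmclr(rs, rb, sh):
    mask, shamt = _mask_shamt(rb, sh)
    return rs & ~(mask << shamt) & ((1 << XLEN) - 1)

def bminv(rs, rb, sh):
    mask, shamt = _mask_shamt(rb, sh)
    return (rs ^ (mask << shamt)) & ((1 << XLEN) - 1)

def bmext(rs, rb, sh):
    mask, shamt = _mask_shamt(rb, sh)
    return mask & (rs >> shamt)
```

the predicate-mask idiom from the text is then `bmset(0, 0, sh)`: a run
of `sh+1` ones with no dependency on any other register.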
350 bitmask extract with reverse. can be done by bit-order-inverting all
351 of RB and getting bits of RB from the opposite end.
352
353 when RA is zero, no shift occurs. this makes bmextrev useful for
354 simply reversing all bits of a register.
355
```
msb = ra[5:0];
rev[0:msb] = rb[msb:0];
rt = ZE(rev[msb:0]);

uint_xlen_t bmrevi(RA, RB, sh)
{
    int shamt = XLEN-1;
    if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
    shamt = (XLEN-1)-shamt;       // shift from the other end
    brb = bitreverse(GPR(RB));    // swap LSB with MSB
    mask = (2<<sh)-1;
    return mask & (brb >> shamt);
}

uint_xlen_t bmrev(RA, RB, RC) {
    return bmrevi(RA, RB, GPR(RC) & 0b111111);
}
```
375
376 | 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name | Form |
377 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
378 | NN | RT | RA | RB | sh | 1111 |Rc| bmrevi | MDS-Form |
379
380 | 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name | Form |
381 | -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
382 | NN | RT | RA | RB | RC | 11110 |Rc| bmrev | VA2-Form |
383
384 # grevlut <a name="grevlut"> </a>
385
generalised reverse combined with a pair of LUT2s and allowing
a constant `0b0101...0101` when RA=0, and an option to invert
(including when RA=0, giving a constant `0b1010...1010` as the
initial value) provides a wide range of instructions
and a means to set hundreds of regular 64-bit patterns with a
single 32-bit instruction.
392
393 the two LUT2s are applied left-half (when not swapping)
394 and right-half (when swapping) so as to allow a wider
395 range of options.
396
397 <img src="/openpower/sv/grevlut2x2.jpg" width=700 />
398
399 * A value of `0b11001010` for the immediate provides
400 the functionality of a standard "grev".
401 * `0b11101110` provides gorc
402
grevlut should be arranged so as to produce the constants
needed to put into bext (bitextract) so as in turn to
be able to emulate the x86 pmovmskb instruction
<https://www.felixcloutier.com/x86/pmovmskb>.
This only requires 2 instructions (grevlut, bext).
408
Note that if the mask is required to be placed
directly into CR Fields (for use as CR Predicate
masks rather than an integer mask) then sv.cmpi or sv.ori
may be used instead, bearing in mind that sv.ori
is a 64-bit instruction, and `VL` must have been
set to the required length:
415
416 sv.ori./elwid=8 r10.v, r10.v, 0
417
418 The following settings provide the required mask constants:
419
420 | RA=0 | RB | imm | iv | result |
421 | ------- | ------- | ---------- | -- | ---------- |
422 | 0x555.. | 0b10 | 0b01101100 | 0 | 0x111111... |
423 | 0x555.. | 0b110 | 0b01101100 | 0 | 0x010101... |
424 | 0x555.. | 0b1110 | 0b01101100 | 0 | 0x00010001... |
425 | 0x555.. | 0b10 | 0b11000110 | 1 | 0x88888... |
426 | 0x555.. | 0b110 | 0b11000110 | 1 | 0x808080... |
427 | 0x555.. | 0b1110 | 0b11000110 | 1 | 0x80008000... |
428
429 Better diagram showing the correct ordering of shamt (RB). A LUT2
430 is applied to all locations marked in red using the first 4
431 bits of the immediate, and a separate LUT2 applied to all
432 locations in green using the upper 4 bits of the immediate.
433
434 <img src="/openpower/sv/grevlut.png" width=700 />
435
436 demo code [[openpower/sv/grevlut.py]]
437
```
lut2(imm, a, b):
    idx = b << 1 | a
    return imm[idx] # idx by LSB0 order

dorow(imm8, step_i, chunk_size, is32b):
    for j in 0 to 31 if is32b else 63:
        if (j & chunk_size) == 0
            imm = imm8[0..3]
        else
            imm = imm8[4..7]
        step_o[j] = lut2(imm, step_i[j], step_i[j ^ chunk_size])
    return step_o

uint64_t grevlut(uint64_t RA, uint64_t RB, uint8_t imm, bool iv, bool is32b)
{
    uint64_t x = 0x5555_5555_5555_5555;
    if (RA != 0) x = GPR(RA);
    if (iv) x = ~x;
    int shamt = RB & (is32b ? 31 : 63);
    for i in 0 to (6-is32b)
        step = 1<<i
        if (shamt & step) x = dorow(imm, x, step, is32b)
    return x;
}
```
464
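An executable 64-bit sketch of the pseudocode above, which reproduces
rows of the constants table earlier in this section (function names are
for illustration only):

```python
def _lut2(imm4, a, b):
    return (imm4 >> ((b << 1) | a)) & 1

def dorow(imm8, step_i, chunk_size):
    step_o = 0
    for j in range(64):
        # low nibble on one side of each chunk pairing, high on the other
        imm = imm8 & 0xF if (j & chunk_size) == 0 else (imm8 >> 4) & 0xF
        a = (step_i >> j) & 1
        b = (step_i >> (j ^ chunk_size)) & 1
        step_o |= _lut2(imm, a, b) << j
    return step_o

def grevlut(ra, rb, imm8, iv=False):
    # RA=0 gives the 0b0101... constant; iv inverts the starting value
    x = 0x5555555555555555 if ra == 0 else ra
    if iv:
        x = ~x & ((1 << 64) - 1)
    shamt = rb & 63
    for i in range(6):
        step = 1 << i
        if shamt & step:
            x = dorow(imm8, x, step)
    return x
```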
465 A variant may specify different LUT-pairs per row,
466 using one byte of RB for each. If it is desired that
467 a particular row-crossover shall not be applied it is
468 a simple matter to set the appropriate LUT-pair in RB
469 to effect an identity transform for that row (`0b11001010`).
470
471 ```
472 uint64_t grevlutr(uint64_t RA, uint64_t RB, bool iv, bool is32b)
473 {
474 uint64_t x = 0x5555_5555_5555_5555;
475 if (RA != 0) x = GPR(RA);
476 if (iv) x = ~x;
477 for i in 0 to (6-is32b)
478 step = 1<<i
479 imm = (RB>>(i*8))&0xff
480 x = dorow(imm, x, step, is32b)
481 return x;
482 }
483
484 ```
485
486 | 0.5|6.10|11.15|16.20 |21..28 | 29.30|31| name | Form |
487 | -- | -- | --- | --- | ----- | -----|--| ------ | ----- |
488 | NN | RT | RA | s0-4 | im0-7 | 1 iv |s5| grevlogi | |
489 | NN | RT | RA | RB | im0-7 | 01 |0 | grevlog | |
490
491 An equivalent to `grevlogw` may be synthesised by setting the
492 appropriate bits in RB to set the top half of RT to zero.
493 Thus an explicit grevlogw instruction is not necessary.
494
495 # xperm
496
497 based on RV bitmanip.
498
RA contains a vector of indices to select parts of RB to be
copied to RT. The immediate-variant allows up to an 8-bit
pattern (repeated) to be targeted at different parts of RT.
502
xperm shares some similarity with one of the uses of bmator,
in that xperm indices use binary addressing where bmator
may be considered to use unary addressing.
506
507 ```
508 uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
509 {
510 uint_xlen_t r = 0;
511 uint_xlen_t sz = 1LL << sz_log2;
512 uint_xlen_t mask = (1LL << sz) - 1;
513 uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
514 for (int i = 0; i < XLEN; i += sz) {
515 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
516 if (pos < XLEN)
517 r |= ((RB >> pos) & mask) << i;
518 }
519 return r;
520 }
521 uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
522 {
523 uint_xlen_t r = 0;
524 uint_xlen_t sz = 1LL << sz_log2;
525 uint_xlen_t mask = (1LL << sz) - 1;
526 for (int i = 0; i < XLEN; i += sz) {
527 uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
528 if (pos < XLEN)
529 r |= ((RB >> pos) & mask) << i;
530 }
531 return r;
532 }
533 uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
534 { return xperm(RA, RB, 2); }
535 uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
536 { return xperm(RA, RB, 3); }
537 uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
538 { return xperm(RA, RB, 4); }
539 uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
540 { return xperm(RA, RB, 5); }
541 ```
542
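An executable model of the C above (a sketch, XLEN=64), with the
byte-sized variant as the worked example:

```python
XLEN = 64

def xperm(ra, rb, sz_log2):
    # ra holds packed indices; each selects an sz-bit chunk of rb
    r = 0
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    for i in range(0, XLEN, sz):
        pos = ((ra >> i) & mask) << sz_log2
        if pos < XLEN:          # out-of-range indices produce zero
            r |= ((rb >> pos) & mask) << i
    return r
```

the identity byte permutation (indices 7..0) leaves RB unchanged, and
index 0 in every byte broadcasts the low byte of RB.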
543 # bitmatrix
544
bmatflip and bmatxor are found in the Cray XMT; in x86 the equivalent
is known as GF2P8AFFINEQB. uses:
547
548 * <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
549 * SM4, Reed Solomon, RAID6
550 <https://stackoverflow.com/questions/59124720/what-are-the-avx-512-galois-field-related-instructions-for>
551 * Vector bit-reverse <https://reviews.llvm.org/D91515?id=305411>
552 * Affine Inverse <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>
553
554 | 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
555 | -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
556 | NN | RS | RA |im04 | im5| 1 1 | im67 00 110 |Rc| bmatxori | TODO |
557
558
559 ```
560 uint64_t bmatflip(uint64_t RA)
561 {
562 uint64_t x = RA;
563 x = shfl64(x, 31);
564 x = shfl64(x, 31);
565 x = shfl64(x, 31);
566 return x;
567 }
568
569 uint64_t bmatxori(uint64_t RS, uint64_t RA, uint8_t imm) {
570 // transpose of RA
571 uint64_t RAt = bmatflip(RA);
572 uint8_t u[8]; // rows of RS
573 uint8_t v[8]; // cols of RA
574 for (int i = 0; i < 8; i++) {
575 u[i] = RS >> (i*8);
576 v[i] = RAt >> (i*8);
577 }
578 uint64_t bit, x = 0;
579 for (int i = 0; i < 64; i++) {
580 bit = (imm >> (i%8)) & 1;
581 bit ^= pcnt(u[i / 8] & v[i % 8]) & 1;
582 x |= bit << i;
583 }
584 return x;
585 }
586
587 uint64_t bmatxor(uint64_t RA, uint64_t RB) {
588 return bmatxori(RA, RB, 0xff)
589 }
590
591 uint64_t bmator(uint64_t RA, uint64_t RB) {
592 // transpose of RB
593 uint64_t RBt = bmatflip(RB);
594 uint8_t u[8]; // rows of RA
595 uint8_t v[8]; // cols of RB
596 for (int i = 0; i < 8; i++) {
597 u[i] = RA >> (i*8);
598 v[i] = RBt >> (i*8);
599 }
600 uint64_t x = 0;
601 for (int i = 0; i < 64; i++) {
602 if ((u[i / 8] & v[i % 8]) != 0)
603 x |= 1LL << i;
604 }
605 return x;
606 }
607
608 uint64_t bmatand(uint64_t RA, uint64_t RB) {
609 // transpose of RB
610 uint64_t RBt = bmatflip(RB);
611 uint8_t u[8]; // rows of RA
612 uint8_t v[8]; // cols of RB
613 for (int i = 0; i < 8; i++) {
614 u[i] = RA >> (i*8);
615 v[i] = RBt >> (i*8);
616 }
617 uint64_t x = 0;
618 for (int i = 0; i < 64; i++) {
619 if ((u[i / 8] & v[i % 8]) == 0xff)
620 x |= 1LL << i;
621 }
622 return x;
623 }
624 ```
625
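These are 8x8 boolean matrix products. A small sketch of `bmator`
(using the convention that bit `r*8 + c` holds element (row r, col c),
matching the byte extraction in the C above), checked against the
identity matrix:

```python
def _rows(m):
    # row r as a byte whose bit k is element (r, k)
    return [(m >> (r * 8)) & 0xFF for r in range(8)]

def _cols(m):
    # column c as a byte whose bit k is element (k, c)
    return [sum((((m >> (k * 8 + c)) & 1) << k) for k in range(8))
            for c in range(8)]

def bmator(ra, rb):
    # result[r][c] = OR over k of RA[r][k] AND RB[k][c]
    u, v = _rows(ra), _cols(rb)
    x = 0
    for i in range(64):
        if u[i // 8] & v[i % 8]:
            x |= 1 << i
    return x

# 8x8 identity matrix: row r contains only bit r
IDENT = sum((1 << r) << (r * 8) for r in range(8))
```

multiplying by `IDENT` on either side leaves the matrix unchanged, a
quick sanity check on the bit ordering.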
626 # Introduction to Carry-less and GF arithmetic
627
628 * obligatory xkcd <https://xkcd.com/2595/>
629
630 There are three completely separate types of Galois-Field-based arithmetic
631 that we implement which are not well explained even in introductory
632 literature. A slightly oversimplified explanation is followed by more
633 accurate descriptions:
634
* `GF(2)` carry-less binary arithmetic. this is not actually arithmetic in a
  Galois Field, but is colloquially referred to as GF(2) - see below as to why.
637 * `GF(p)` modulo arithmetic with a Prime number, these are "proper"
638 Galois Fields
639 * `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
640 (2^N) and a second "reducing" polynomial (similar to a prime number), these
641 are said to be GF(2^N) arithmetic.
642
643 further detailed and more precise explanations are provided below
644
645 * **Polynomials with coefficients in `GF(2)`**
646 (aka. Carry-less arithmetic -- the `cl*` instructions).
647 This isn't actually a Galois Field, but its coefficients are. This is
648 basically binary integer addition, subtraction, and multiplication like
649 usual, except that carries aren't propagated at all, effectively turning
650 both addition and subtraction into the bitwise xor operation. Division and
651 remainder are defined to match how addition and multiplication works.
652 * **Galois Fields with a prime size**
653 (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
654 This is basically just the integers mod `p`.
655 * **Galois Fields with a power-of-a-prime size**
656 (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
657 integer `n > 0`).
658 We only implement these for `p == 2`, called Binary Galois Fields
659 (`GF(2^n)` -- the `gfb*` instructions).
660 For any prime `p`, `GF(p^n)` is implemented as polynomials with
661 coefficients in `GF(p)` and degree `< n`, where the polynomials are the
remainders of dividing by a specifically chosen polynomial in `GF(p)` called
the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
Polynomial must be an irreducible polynomial (like primes, but for
665 polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
666 and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
667 affect `GF(p^n)`'s mathematical shape, all that changes is the specific
668 polynomials used to implement `GF(p^n)`.
669
670 Many implementations and much of the literature do not make a clear
671 distinction between these three categories, which makes it confusing
672 to understand what their purpose and value is.
673
674 * carry-less multiply is extremely common and is used for the ubiquitous
675 CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
676 * GF(2^N) forms the basis of Rijndael (the current AES standard) and
677 has significant uses throughout cryptography
678 * GF(p) is the basis again of a significant quantity of algorithms
679 (TODO, list them, jacob knows what they are), even though the
680 modulo is limited to be below 64-bit (size of a scalar int)
681
682 # Instructions for Carry-less Operations
683
684 aka. Polynomials with coefficients in `GF(2)`
685
686 Carry-less addition/subtraction is simply XOR, so a `cladd`
687 instruction is not provided since the `xor[i]` instruction can be used instead.
688
689 These are operations on polynomials with coefficients in `GF(2)`, with the
690 polynomial's coefficients packed into integers with the following algorithm:
691
692 ```python
693 [[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
694 ```
695
696 ## Carry-less Multiply Instructions
697
698 based on RV bitmanip
699 see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
700 <https://www.felixcloutier.com/x86/pclmulqdq> and
701 <https://en.m.wikipedia.org/wiki/Carry-less_product>
702
703 They are worth adding as their own non-overwrite operations
704 (in the same pipeline).
705
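The inlined `gf_reference` files below are authoritative; as a quick
mental model, carry-less multiply is ordinary shift-and-add
multiplication with the adds replaced by XOR (an independent sketch at
arbitrary precision, not the spec definition):

```python
def clmul_model(a, b):
    # XOR-accumulate shifted copies of a, one per set bit of b
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        b >>= 1
    return acc
```

e.g. `(x+1)^2 = x^2 + 1` because the cross terms cancel:
`clmul_model(0b11, 0b11)` is `0b101`.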
706 ### `clmul` Carry-less Multiply
707
708 ```python
709 [[!inline pagenames="gf_reference/clmul.py" raw="yes"]]
710 ```
711
712 ### `clmulh` Carry-less Multiply High
713
714 ```python
715 [[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]
716 ```
717
718 ### `clmulr` Carry-less Multiply (Reversed)
719
720 Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
721 bit-reversed inputs.
722
723 ```python
724 [[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]
725 ```
726
727 ## `clmadd` Carry-less Multiply-Add
728
729 ```
730 clmadd RT, RA, RB, RC
731 ```
732
733 ```
734 (RT) = clmul((RA), (RB)) ^ (RC)
735 ```
736
737 ## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)
738
739 Used in combination with SV FFT REMAP to perform a full Discrete Fourier
740 Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
741 to avoid the need for a temp register. RS is written to as well as RT.
742
Note: Polynomials over GF(2) form a Ring rather than a Field. Because the
definition of the Inverse Discrete Fourier Transform involves calculating
a multiplicative inverse, which may not exist in every Ring, the Inverse
Discrete Fourier Transform may not exist either. (AFAICT the number of inputs
to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
TODO: check with someone who knows for sure if that's correct.)
749
750 ```
751 cltmadd RT, RA, RB, RC
752 ```
753
754 TODO: add link to explanation for where `RS` comes from.
755
756 ```
757 a = (RA)
758 c = (RC)
759 # read all inputs before writing to any outputs in case
760 # an input overlaps with an output register.
761 (RT) = clmul(a, (RB)) ^ c
762 (RS) = a ^ c
763 ```
764
765 ## `cldivrem` Carry-less Division and Remainder
766
767 `cldivrem` isn't an actual instruction, but is just used in the pseudo-code
768 for other instructions.
769
770 ```python
771 [[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
772 ```
773
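The included reference above is authoritative; the long-division
structure can also be sketched directly (subtraction is XOR, so a
quotient digit never borrows; helper name is for illustration):

```python
def cldivrem_model(n, d):
    # polynomial long division with coefficients in GF(2)
    assert d != 0
    q, r = 0, n
    dl = d.bit_length()
    while r.bit_length() >= dl:
        sh = r.bit_length() - dl
        r ^= d << sh       # "subtract" the shifted divisor
        q |= 1 << sh
    return q, r
```

as a round-trip check: carry-less `11 * 7` is `49`, and `49 ^ 2 = 51`,
so dividing `51` by `7` recovers quotient `11` and remainder `2`.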
774 ## `cldiv` Carry-less Division
775
776 ```
777 cldiv RT, RA, RB
778 ```
779
780 ```
781 n = (RA)
782 d = (RB)
783 q, r = cldivrem(n, d, width=XLEN)
784 (RT) = q
785 ```
786
787 ## `clrem` Carry-less Remainder
788
789 ```
790 clrem RT, RA, RB
791 ```
792
793 ```
794 n = (RA)
795 d = (RB)
796 q, r = cldivrem(n, d, width=XLEN)
797 (RT) = r
798 ```
799
800 # Instructions for Binary Galois Fields `GF(2^m)`
801
802 see:
803
804 * <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
805 * <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
806 * <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>
807
808 Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
809 instruction is not provided since the `xor[i]` instruction can be used instead.
810
811 ## `GFBREDPOLY` SPR -- Reducing Polynomial
812
813 In order to save registers and to make operations orthogonal with standard
814 arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
815 This also allows hardware to pre-compute useful parameters (such as the
816 degree, or look-up tables) based on the reducing polynomial, and store them
817 alongside the SPR in hidden registers, only recomputing them whenever the SPR
818 is written to, rather than having to recompute those values for every
819 instruction.
820
821 Because Galois Fields require the reducing polynomial to be an irreducible
822 polynomial, that guarantees that any polynomial of `degree > 1` must have
823 the LSB set, since otherwise it would be divisible by the polynomial `x`,
824 making it reducible, making whatever we're working on no longer a Field.
825 Therefore, we can reuse the LSB to indicate `degree == XLEN`.
826
827 ```python
828 [[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
829 ```
830
831 ## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`
832
833 unless this is an immediate op, `mtspr` is completely sufficient.
834
835 ```python
836 [[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]
837 ```
838
839 ## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication
840
841 ```
842 gfbmul RT, RA, RB
843 ```
844
845 ```python
846 [[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
847 ```
848
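As a worked example, independent of the reference above: in `GF(2^8)`
with the AES reducing polynomial `0x11B`, the product `0x53 * 0xCA`
reduces to `1`, i.e. the two are multiplicative inverses (sketch only;
helper names are not part of the spec):

```python
AES_RED_POLY = 0x11B  # x^8 + x^4 + x^3 + x + 1

def gfbmul_model(a, b, red_poly=AES_RED_POLY):
    # carry-less multiply ...
    prod = 0
    while b:
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    # ... then reduce modulo the reducing polynomial
    dl = red_poly.bit_length()
    while prod.bit_length() >= dl:
        prod ^= red_poly << (prod.bit_length() - dl)
    return prod
```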
849 ## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add
850
851 ```
852 gfbmadd RT, RA, RB, RC
853 ```
854
855 ```python
856 [[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
857 ```
858
859 ## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)
860
861 Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
862 Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
863 for a temp register. RS is written to as well as RT.
864
865 ```
866 gfbtmadd RT, RA, RB, RC
867 ```
868
869 TODO: add link to explanation for where `RS` comes from.
870
871 ```
872 a = (RA)
873 c = (RC)
874 # read all inputs before writing to any outputs in case
875 # an input overlaps with an output register.
876 (RT) = gfbmadd(a, (RB), c)
877 # use gfbmadd again since it reduces the result
878 (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
879 ```
880
881 ## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse
882
883 ```
884 gfbinv RT, RA
885 ```
886
887 ```python
888 [[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]
889 ```
890
891 # Instructions for Prime Galois Fields `GF(p)`
892
893 ## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions
894
895 ## `gfpadd` Prime Galois Field `GF(p)` Addition
896
897 ```
898 gfpadd RT, RA, RB
899 ```
900
901 ```python
902 [[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]
903 ```
904
905 the addition happens on infinite-precision integers
906
907 ## `gfpsub` Prime Galois Field `GF(p)` Subtraction
908
909 ```
910 gfpsub RT, RA, RB
911 ```
912
913 ```python
914 [[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]
915 ```
916
917 the subtraction happens on infinite-precision integers
918
919 ## `gfpmul` Prime Galois Field `GF(p)` Multiplication
920
921 ```
922 gfpmul RT, RA, RB
923 ```
924
925 ```python
926 [[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]
927 ```
928
929 the multiplication happens on infinite-precision integers
930
931 ## `gfpinv` Prime Galois Field `GF(p)` Invert
932
933 ```
934 gfpinv RT, RA
935 ```
936
937 Some potential hardware implementations are found in:
938 <https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>
939
940 ```python
941 [[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
942 ```
943
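The included reference is authoritative; as a mental model, for prime
`p` the inverse can be computed via Fermat's little theorem,
`a^(p-2) mod p` (a sketch, valid only for prime `p` and `a` not a
multiple of `p`):

```python
def gfpinv_model(a, p):
    # Fermat: a^(p-1) = 1 (mod p), so a^(p-2) is the inverse of a
    return pow(a, p - 2, p)
```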
## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add

```
gfpmadd RT, RA, RB, RC
```

```python
[[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]
```

the multiplication and addition happen on infinite-precision integers

## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract

```
gfpmsub RT, RA, RB, RC
```

```python
[[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]
```

the multiplication and subtraction happen on infinite-precision integers

## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed

```
gfpmsubr RT, RA, RB, RC
```

```python
[[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]
```

the multiplication and subtraction happen on infinite-precision integers

## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)

Used in combination with SV FFT REMAP to perform a full
Number-Theoretic Transform in-place. This is possible because the
instruction is 3-in 2-out: RS is written as well as RT, avoiding the
need for a temporary register.

```
gfpmaddsubr RT, RA, RB, RC
```

TODO: add link to explanation for where `RS` comes from.

```
factor1 = (RA)
factor2 = (RB)
term = (RC)
# read all inputs before writing to any outputs in case
# an input overlaps with an output register.
(RT) = gfpmadd(factor1, factor2, term)
(RS) = gfpmsubr(factor1, factor2, term)
```

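a plain-Python model of the butterfly pseudocode above (a sketch, assuming
`gfpmsubr` computes `(RC) - (RA)*(RB)` modulo the prime, with `p` standing
in for `GFPRIME`):

```python
def gfpmaddsubr(a, b, c, p):
    # returns (RT, RS): RT = a*b + c, RS = c - a*b, both mod p.
    # the product is formed once, on an infinite-precision intermediate
    prod = a * b
    return (prod + c) % p, (c - prod) % p
```

this is exactly the add/subtract pair an NTT butterfly needs, which is why
writing both RT and RS in one instruction removes the temporary register.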
# Already in POWER ISA or subsumed

Lists operations that are either covered by other bitmanip operations
or already present in the Power ISA.

## cmix

based on RV bitmanip; covered by the ternlog bitops

```
uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
    // bitwise select: where RB is 1 take RA, where RB is 0 take RC
    return (RA & RB) | (RC & ~RB);
}
```

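a quick sketch of why ternlog covers cmix: the per-bit truth table of
`(RA & RB) | (RC & ~RB)` is a fixed 8-entry LUT. The LUT index convention
below, `(a << 2) | (b << 1) | c` giving immediate `0xE2`, is an assumption
for illustration only:

```python
def cmix(ra, rb, rc):
    return (ra & rb) | (rc & ~rb)

def ternlog(ra, rb, rc, lut, width=64):
    # evaluate an arbitrary 3-input boolean function, bit by bit
    r = 0
    for i in range(width):
        a = (ra >> i) & 1
        b = (rb >> i) & 1
        c = (rc >> i) & 1
        if (lut >> ((a << 2) | (b << 1) | c)) & 1:
            r |= 1 << i
    return r
```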
## count leading/trailing zeros with mask

in v3.1 p105

```
count ← 0
do i = 0 to 63
    if ((RB)i = 1) then do
        if ((RS)i = 1) then break
        count ← count + 1
    end
RA ← EXTZ64(count)
```

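the pseudocode considers only the bit positions of RS selected by the mask RB
and counts how many such positions are zero before the first masked one-bit.
A Python sketch of that reading (function name assumed; note Power ISA bit 0
is the MSB, so the scan below runs from bit 63 of the LSB-first numbering):

```python
def cntlzdm(rs, rb):
    # count leading zeros of rs under mask rb: scan MSB-first,
    # look only at positions where rb is 1, stop at the first
    # masked 1 bit of rs
    count = 0
    for i in range(63, -1, -1):
        if (rb >> i) & 1:
            if (rs >> i) & 1:
                break
            count += 1
    return count
```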
## bit deposit

pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106

```
m ← 0
k ← 0
do while (m < 64)
    if VSR[VRB+32].dword[i].bit[63-m] = 1 then do
        result = VSR[VRA+32].dword[i].bit[63-k]
        VSR[VRT+32].dword[i].bit[63-m] = result
        k = k + 1
    m = m + 1
```

```
uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> j) & 1)
                r |= uint_xlen_t(1) << i;
            j++;
        }
    return r;
}
```

## bit extract

the other way round: `pextd`, identical to RV bext, found in v3.1 p196

```
uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
{
    uint_xlen_t r = 0;
    for (int i = 0, j = 0; i < XLEN; i++)
        if ((RB >> i) & 1) {
            if ((RA >> i) & 1)
                r |= uint_xlen_t(1) << j;
            j++;
        }
    return r;
}
```

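deposit and extract are inverses over a given mask: extracting after
depositing returns the original low-order bits. A small Python check of the
two C routines above (a sketch, with XLEN fixed at 64):

```python
def bdep(ra, rb):
    # scatter the low bits of ra into the positions where rb has ones
    r, j = 0, 0
    for i in range(64):
        if (rb >> i) & 1:
            if (ra >> j) & 1:
                r |= 1 << i
            j += 1
    return r

def bext(ra, rb):
    # gather the bits of ra at rb's one-positions into the low bits
    r, j = 0, 0
    for i in range(64):
        if (rb >> i) & 1:
            if (ra >> i) & 1:
                r |= 1 << j
            j += 1
    return r
```

for example, depositing `0b101` into mask `0b11010` gives `0b10010`, and
extracting `0b10010` with the same mask recovers `0b101`.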
## centrifuge

found in v3.1 p106 so not to be added here

```
ptr0 ← 0
ptr1 ← 0
do i = 0 to 63
    if ((RB)i = 0) then do
        result(ptr0) ← (RS)i
        ptr0 ← ptr0 + 1
    end
    if ((RB)63-i = 1) then do
        result(63-ptr1) ← (RS)63-i
        ptr1 ← ptr1 + 1
    end
RA ← result
```

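in LSB-first terms the pseudocode gathers the mask-1 bits of RS, order
preserved, at the low end of the result and the mask-0 bits at the high end.
A Python sketch of that reading (function name taken from the v3.1 mnemonic
`cfuged`; the LSB-first restatement is this author's interpretation):

```python
def cfuged(rs, rb):
    # mask-1 bits of rs gather (order preserved) at the low end,
    # mask-0 bits gather at the high end of the 64-bit result
    lo, hi = 0, 0
    nlo, nhi = 0, 0
    for i in range(64):
        if (rb >> i) & 1:
            if (rs >> i) & 1:
                lo |= 1 << nlo
            nlo += 1
        else:
            if (rs >> i) & 1:
                hi |= 1 << nhi
            nhi += 1
    return (hi << nlo) | lo
```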
## bit to byte permute

similar to the matrix permute in RV bitmanip, which has XOR and OR
variants; these perform a transpose (`bmatflip`).
TODO: this looks like VSX; is there a scalar variant in v3.0/1 already?

```
do j = 0 to 7
    do k = 0 to 7
        b = VSR[VRB+32].dword[i].byte[k].bit[j]
        VSR[VRT+32].dword[i].byte[j].bit[k] = b
```

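treating a 64-bit doubleword as an 8x8 bit matrix (8 rows of one byte each),
the loop above swaps row and column indices, i.e. a transpose. A Python sketch
(function name assumed; bit/byte numbering here is plain LSB-first):

```python
def bmat_transpose(x):
    # bit j of byte k moves to bit k of byte j
    r = 0
    for j in range(8):
        for k in range(8):
            bit = (x >> (8 * k + j)) & 1
            r |= bit << (8 * j + k)
    return r
```

transposing twice is the identity, and a solid low byte becomes a solid low
column (bit 0 of every byte).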
## grev

superseded by grevlut

based on RV bitmanip, this is also known as a butterfly network. however
where a butterfly network allows setting of every crossbar setting in
every row and every column, generalised-reverse (grev) only allows
a per-row decision: every entry in the same row must either switch or
not-switch.

<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />

```
uint64_t grev64(uint64_t RA, uint64_t RB)
{
    uint64_t x = RA;
    int shamt = RB & 63;
    if (shamt & 1) x = ((x & 0x5555555555555555LL) << 1) |
                        ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
    if (shamt & 2) x = ((x & 0x3333333333333333LL) << 2) |
                        ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
    if (shamt & 4) x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                        ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
    if (shamt & 8) x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
                        ((x & 0xFF00FF00FF00FF00LL) >> 8);
    if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
                         ((x & 0xFFFF0000FFFF0000LL) >> 16);
    if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
                         ((x & 0xFFFFFFFF00000000LL) >> 32);
    return x;
}
```

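a useful property for checking implementations: with all six stage bits set
(`shamt = 63`) grev performs a full 64-bit bit reversal, and with `shamt = 7`
it reverses the bits within each byte. A Python model of the C above (a
sketch, written as a loop over the six butterfly stages):

```python
def grev64(x, shamt):
    # each (shift, mask) pair is one butterfly stage; the mask picks
    # the low half of every 2*shift-bit group, to be swapped with
    # the high half when that stage bit of shamt is set
    stages = [
        (1, 0x5555555555555555), (2, 0x3333333333333333),
        (4, 0x0F0F0F0F0F0F0F0F), (8, 0x00FF00FF00FF00FF),
        (16, 0x0000FFFF0000FFFF), (32, 0x00000000FFFFFFFF),
    ]
    for sh, m in stages:
        if shamt & sh:
            x = ((x & m) << sh) | ((x & (m ^ 0xFFFFFFFFFFFFFFFF)) >> sh)
    return x & 0xFFFFFFFFFFFFFFFF
```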
## gorc

based on RV bitmanip, gorc is superseded by grevlut

```
uint32_t gorc32(uint32_t RA, uint32_t RB)
{
    uint32_t x = RA;
    int shamt = RB & 31;
    if (shamt & 1) x |= ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1);
    if (shamt & 2) x |= ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2);
    if (shamt & 4) x |= ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4);
    if (shamt & 8) x |= ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8);
    if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
    return x;
}
uint64_t gorc64(uint64_t RA, uint64_t RB)
{
    uint64_t x = RA;
    int shamt = RB & 63;
    if (shamt & 1) x |= ((x & 0x5555555555555555LL) << 1) |
                        ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
    if (shamt & 2) x |= ((x & 0x3333333333333333LL) << 2) |
                        ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
    if (shamt & 4) x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                        ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
    if (shamt & 8) x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
                        ((x & 0xFF00FF00FF00FF00LL) >> 8);
    if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
                         ((x & 0xFFFF0000FFFF0000LL) >> 16);
    if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
                         ((x & 0xFFFFFFFF00000000LL) >> 32);
    return x;
}
```

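gorc ORs each bit with its butterfly partners instead of swapping, so with
all stage bits set every bit becomes the OR of the entire word. A Python
model of `gorc32` above (a sketch):

```python
def gorc32(x, shamt):
    # same butterfly stages as grev, but accumulating with OR
    stages = [
        (1, 0x55555555), (2, 0x33333333), (4, 0x0F0F0F0F),
        (8, 0x00FF00FF), (16, 0x0000FFFF),
    ]
    for sh, m in stages:
        if shamt & sh:
            x |= ((x & m) << sh) | ((x & (m ^ 0xFFFFFFFF)) >> sh)
    return x & 0xFFFFFFFF
```

with `shamt = 31`, any nonzero input therefore saturates to `0xFFFFFFFF`.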

# Appendix

see [[bitmanip/appendix]]