* ternlogi <https://bugs.libre-soc.org/show_bug.cgi?id=745>
* grev <https://bugs.libre-soc.org/show_bug.cgi?id=755>
* GF2^M <https://bugs.libre-soc.org/show_bug.cgi?id=782>
* binutils <https://bugs.libre-soc.org/show_bug.cgi?id=836>
* shift-and-add <https://bugs.libre-soc.org/show_bug.cgi?id=968>

pseudocode: [[openpower/isa/bitmanip]]

this extension amalgamates bitmanipulation primitives from many sources,
including RISC-V bitmanip, Packed SIMD, AVX-512 and OpenPOWER VSX.
Also included are DSP/Multimedia operations suitable for Audio/Video.
Vectorisation and SIMD are removed: these are straight scalar (element)
operations, making them suitable for embedded applications. Vectorisation
Context is provided by [[openpower/sv]].
When combined with SV, scalar variants of bitmanip operations found in
VSX are added so that the Packed SIMD aspects of VSX may be retired as
"legacy" in the far future (10 to 20 years). VSX is also hundreds of
opcodes, requires 128-bit pathways, and is wholly unsuited to low-power
or embedded scenarios.
ternlogv is experimental and is the only operation that may be considered
a "Packed SIMD". It is added as a variant of the already well-justified
ternlog operation (done in AVX512 as an immediate only) "because it
looks fun". As it is based on the LUT4 concept it will allow accelerated
emulation of FPGAs. Other vendors of ISAs are buying FPGA companies to
achieve similar objectives.
general-purpose Galois Field 2^M operations are added so as to avoid
huge custom opcode proliferation across many areas of Computer Science.
however, for convenience and also to avoid setup costs, some of the more
common operations (clmul, crc32) are also added. The expectation is
that these operations would all be covered by the same pipeline.

note that there are brownfield spaces below that could incorporate
some of the set-before-first and other scalar operations listed in
[[sv/vector_ops]], [[sv/int_fp_mv]] and the [[sv/av_opcodes]] as well as
[[sv/setvl]], [[sv/svstep]], [[sv/remap]]

* <https://en.wikiversity.org/wiki/Reed%E2%80%93Solomon_codes_for_coders>
* <https://maths-people.anu.edu.au/~brent/pd/rpb232tr.pdf>
* <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
* <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>

[[!inline pages="openpower/sv/draft_opcode_tables" quick="yes" raw="yes" ]]
# binary and ternary bitops

Similar to FPGA LUTs: for two (binary) or three (ternary) inputs take
bits from each input, concatenate them and perform a lookup into a
table using an 8-bit immediate (for the ternary instructions), or in
another register (4-bit for the binary instructions). The binary lookup
instructions have CR Field lookup variants due to CR Fields being 4 bit.

[vpternlogd/vpternlogq](https://www.felixcloutier.com/x86/vpternlogd:vpternlogq)

| 0.5|6.10|11.15|16.20| 21..28|29.30|31|
| -- | -- | --- | --- | ----- | --- |--|
| NN | RT | RA | RB | im0-7 | 00 |Rc|
    lut3(imm, a, b, c):
        idx = c << 2 | b << 1 | a
        return imm[idx] # idx by LSB0 order

    for i in range(64):
        RT[i] = lut3(imm, RB[i], RA[i], RT[i])
Binary lookup is a dynamic LUT2 version of ternlogi. Firstly, the
lookup table is 4 bits wide not 8 bits, and secondly the lookup
table comes from a register not an immediate.

| 0.5|6.10|11.15|16.20| 21..25|26..31 | Form |
| -- | -- | --- | --- | ----- |--------|---------|
| NN | RT | RA | RB | RC |nh 00001| VA-Form |
| NN | RT | RA | RB | /BFA/ |0 01001| VA-Form |

For binlut, the 4-bit LUT may be selected from either the high nibble
or the low nibble of the first byte of RC:
    lut2(imm, a, b):
        idx = b << 1 | a
        return imm[idx] # idx by LSB0 order

    imm = (RC >> (nh*4)) & 0b1111
    for i in range(64):
        RT[i] = lut2(imm, RB[i], RA[i])
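As with ternlogi, a small Python model makes the nibble-selection behaviour concrete (XLEN and names are illustrative assumptions):

```python
# Python model of binlut: a dynamic per-bit LUT2, with the 4-bit
# lookup table taken from the high (nh=1) or low (nh=0) nibble of
# the first byte of RC.
XLEN = 64

def lut2(imm, a, b):
    idx = (b << 1) | a  # idx by LSB0 order
    return (imm >> idx) & 1

def binlut(RA, RB, RC, nh):
    imm = (RC >> (nh * 4)) & 0b1111
    result = 0
    for i in range(XLEN):
        result |= lut2(imm, (RB >> i) & 1, (RA >> i) & 1) << i
    return result
```

A lookup table of `0b0110` gives per-bit XOR, `0b1000` gives AND, and so on: all 16 two-input boolean functions are expressible.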
For bincrlut, `BFA` selects the 4-bit CR Field as the LUT2:

    for i in range(64):
        RT[i] = lut2(CRs{BFA}, RB[i], RA[i])
When Vectorised with SVP64, as usual both source and destination may be
Scalar or Vector.
*Programmer's note: a dynamic ternary lookup may be synthesised from
a pair of `binlut` instructions followed by a `ternlogi` to select which
to merge. Use `nh` to select which nibble to use as the lookup table
from the RC source register (`nh=1` nibble high), i.e. keeping
an 8-bit LUT3 in RC, the first `binlut` instruction may set nh=0 and
the second nh=1, the `ternlogi` merging the two results.*

another mode selection would be CRs not Ints.
| 0.5|6.8 |9.10|11.13|14.15|16.18|19.25|26.30| 31|
|----|----|----|-----|-----|-----|-----|-----|---|
| NN | BF | msk|BFA | msk | BFB | TLI | XO |TLI|

    for i in range(4):
        a, b, c = CRs[BF][i], CRs[BFA][i], CRs[BFB][i]
        if msk[i]: CRs[BF][i] = lut3(imm, a, b, c)
This instruction is remarkably similar to the existing crops, `crand` etc.,
which have been noted to be a 4-bit (binary) LUT. In effect `crternlogi`
is the ternary LUT version of crops, having an 8-bit LUT. However it
is an overwrite instruction in order to save on register file ports,
due to the mask requiring the contents of the BF to be both read and
written.

Programmer's note: This instruction is useful when combined with Matrix REMAP
in "Inner Product" Mode, creating Warshall Transitive Closure that has many
applications in Computer Science.
With ternary (LUT3) dynamic instructions being very costly,
and CR Fields being only 4 bit, a binary (LUT2) variant is better

| 0.5|6.8 |9.10|11.13|14.15|16.18|19.25|26.30| 31|
|----|----|----|-----|-----|-----|-----|-----|---|
| NN | BF | msk|BFA | msk | BFB | // | XO | //|

    for i in range(4):
        a, b = CRs[BF][i], CRs[BFA][i]
        if msk[i]: CRs[BF][i] = lut2(CRs[BFB], a, b)
When SVP64 Vectorised any of the 4 operands may be Scalar or
Vector, including `BFB`, meaning that multiple different dynamic
lookups may be performed with a single instruction. Note that
this instruction is deliberately an overwrite in order to reduce
the number of register file ports required: like `crternlogi`
the contents of `BF` **must** be read due to the mask only
writing back to non-masked-out bits of `BF`.
*Programmer's note: just as with binlut and ternlogi, a pair
of crbinlog instructions followed by a merging crternlogi may
be deployed to synthesise dynamic ternary (LUT3) CR Field
lookups.*
required for the [[sv/av_opcodes]]

signed and unsigned min/max for integer.

signed/unsigned min/max gives more flexibility.

\[un]signed min/max instructions are specifically needed for vector
reduce min/max operations, which are pretty common.

* PO=19, XO=----000011 `minmax RT, RA, RB, MMM`
* PO=19, XO=----000011 `minmax. RT, RA, RB, MMM`

see [[openpower/sv/rfc/ls013]] for `MMM` definition and pseudo-code.
implements all of (and more):

    uint_xlen_t mins(uint_xlen_t rs1, uint_xlen_t rs2)
    { return (int_xlen_t)rs1 < (int_xlen_t)rs2 ? rs1 : rs2;
    }
    uint_xlen_t maxs(uint_xlen_t rs1, uint_xlen_t rs2)
    { return (int_xlen_t)rs1 > (int_xlen_t)rs2 ? rs1 : rs2;
    }
    uint_xlen_t minu(uint_xlen_t rs1, uint_xlen_t rs2)
    { return rs1 < rs2 ? rs1 : rs2;
    }
    uint_xlen_t maxu(uint_xlen_t rs1, uint_xlen_t rs2)
    { return rs1 > rs2 ? rs1 : rs2;
    }
required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)

    uint_xlen_t intavg(uint_xlen_t rs1, uint_xlen_t rs2) {
        // conceptually an XLEN+1-bit intermediate sum
        return (rs1 + rs2 + 1) >> 1;
    }
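The `+ 1` makes the average round halves upwards, and the conceptual intermediate is one bit wider than XLEN so the sum cannot wrap. A quick Python check of both properties (the masking models the wider intermediate; `xlen` is an illustrative assumption):

```python
# Python check of the averaging-add semantics above: the +1 rounds
# x.5 upwards, and unbounded Python integers model the XLEN+1-bit
# intermediate that prevents wrap-around.
def intavg(rs1, rs2, xlen=64):
    mask = (1 << xlen) - 1
    return (((rs1 & mask) + (rs2 & mask) + 1) >> 1) & mask
```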
required for the [[sv/av_opcodes]], these exist in Packed SIMD (VSX)

    uint_xlen_t absdu(uint_xlen_t rs1, uint_xlen_t rs2) {
        return (rs1 > rs2) ? (rs1 - rs2) : (rs2 - rs1);
    }
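A minimal Python equivalent of the C sketch, for cross-checking (the `xlen` masking is an illustrative assumption):

```python
# Python model of absdu (unsigned absolute difference).
def absdu(rs1, rs2, xlen=64):
    mask = (1 << xlen) - 1
    a, b = rs1 & mask, rs2 & mask
    return a - b if a > b else b - a
```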
required for the [[sv/av_opcodes]], these are needed for motion estimation.
both are overwrite on RS.

    uint_xlen_t uintabsacc(uint_xlen_t rs, uint_xlen_t ra, uint_xlen_t rb) {
        return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
    }
    uint_xlen_t intabsacc(uint_xlen_t rs, int_xlen_t ra, int_xlen_t rb) {
        return rs + ((ra > rb) ? (ra - rb) : (rb - ra));
    }
For SVP64, the twin Elwidths allow e.g. a 16 bit accumulator for 8 bit
differences. Form is `RM-1P-3S1D` where RS-as-source has a separate
SVP64 designation from RS-as-dest. This gives a limited range of
non-overwrite capability.
# shift-and-add <a name="shift-add"> </a>

Power ISA is missing LD/ST with shift, which is present in both ARM and x86.
Since adding more LD/ST opcodes is too complex, a compromise is to add
shift-and-add, which replaces a pair of explicit instructions in hot loops.
| 0  | 6  | 11 | 16 | 21 | 23 | 31 |
| -- | -- | -- | -- | -- | -- | -- |
| PO | RT | RA | RB | sm | XO | Rc |
Pseudo-code (shadd):

    m <- sm + 1                    # Shift is between 1-4
    n <- (RB)
    RT <- (n[m:XLEN-1] || [0]*m) + (RA)

Pseudo-code (shaddw):

    shift <- sm + 1                # Shift is between 1-4
    n <- EXTS((RB)[XLEN/2:XLEN-1]) # Only use lower XLEN/2-bits of RB
    RT <- (n << shift) + (RA)      # Shift n, add RA

Pseudo-code (shadduw):

    m <- sm + 1
    n <- ([0]*(XLEN/2)) || (RB)[XLEN/2:XLEN-1]
    RT <- (n[m:XLEN-1] || [0]*m) + (RA)
    uint_xlen_t shadd(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
        sm = sm & 0x3;
        return (RB << (sm+1)) + RA;
    }

    uint_xlen_t shaddw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
        // sign-extend the lower XLEN/2 bits of RB
        uint_xlen_t n = (int_xlen_t)(RB << XLEN / 2) >> XLEN / 2;
        sm = sm & 0x3;
        return (n << (sm+1)) + RA;
    }

    uint_xlen_t shadduw(uint_xlen_t RA, uint_xlen_t RB, uint8_t sm) {
        uint_xlen_t n = RB & 0xFFFFFFFF;
        sm = sm & 0x3;
        return (n << (sm+1)) + RA;
    }
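The hot-loop case is address arithmetic: indexing an array of 8-byte elements needs `base + (index << 3)`, which is one `shadd` instead of a shift followed by an add. A Python model of the sketch above, with that use-case (names, XLEN and masking are illustrative assumptions):

```python
# Python model of shadd/shadduw, plus the address calculation that a
# single shift-and-add replaces.
XLEN = 64
MASK = (1 << XLEN) - 1

def shadd(RA, RB, sm):
    # sm is 2 bits; effective shift amount is sm+1 (1..4)
    return (((RB & MASK) << ((sm & 3) + 1)) + RA) & MASK

def shadduw(RA, RB, sm):
    # zero-extended lower 32 bits of RB
    return (((RB & 0xFFFFFFFF) << ((sm & 3) + 1)) + RA) & MASK

# address of element[idx] in an array of uint64_t: base + (idx << 3)
addr = shadd(0x10000, 5, 2)
```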
based on RV bitmanip singlebit set, instruction format similar to shift
[[isa/fixedshift]]. bmext is actually covered already (shift-with-mask
rldicl, but only the immediate version). however bitmask-invert is not,
and set/clr are not covered, although they can use the same Shift ALU.

the bmext (RB) version is not the same as rldicl because bmext is a right
shift by RC, where rldicl is a left rotate. for the immediate version
this does not matter, so a bmexti is not required. for bmrev however there
is no direct equivalent and consequently a bmrevi is required.

bmset (register for mask amount) is particularly useful for creating
predicate masks where the length is a dynamic runtime quantity.
bmset(RA=0, RB=0, RC=mask) will produce a run of ones of length "mask"
in a single instruction without needing to initialise or depend on any
other registers.
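That predicate-mask use-case can be modelled directly. This sketch follows the immediate-form C pseudocode later in this section (mask length from `sh`, shift amount from RB); the register form substitutes RC for `sh`. Names and `xlen` are illustrative assumptions:

```python
# Python model of bmset: OR a run of `sh` ones, shifted by RB, into RS.
# With RS=0 and RB=0 this drops a dynamic-length run of ones straight
# into a register, ready for use as a predicate mask.
def bmset(RS, RB, sh, xlen=64):
    shamt = RB & (xlen - 1)
    mask = (1 << sh) - 1
    return (RS | (mask << shamt)) & ((1 << xlen) - 1)
```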
| 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name |
| -- | -- | --- | --- | --- | ------- |--| ----- |
| NN | RS | RA | RB | RC | mode 010 |Rc| bm\* |

Immediate-variant is an overwrite form:

| 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name |
| -- | -- | --- | --- | -- | ----- | -------- |--| ---- |
| NN | RS | RB | sh | SH | itype | 1000 110 |Rc| bm\*i |

    mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
    mask_b = ((1 << y) - 1) & ((1 << 64) - 1)

    mask_a = ((1 << x) - 1) & ((1 << 64) - 1)
    mask_b = (~((1 << y) - 1)) & ((1 << 64) - 1)
    return mask_a ^ mask_b
    uint_xlen_t bmset(RS, RB, sh)
    {
        int shamt = RB & (XLEN - 1);
        mask = (1 << sh) - 1;
        return RS | (mask << shamt);
    }

    uint_xlen_t bmclr(RS, RB, sh)
    {
        int shamt = RB & (XLEN - 1);
        mask = (1 << sh) - 1;
        return RS & ~(mask << shamt);
    }

    uint_xlen_t bminv(RS, RB, sh)
    {
        int shamt = RB & (XLEN - 1);
        mask = (1 << sh) - 1;
        return RS ^ (mask << shamt);
    }

    uint_xlen_t bmext(RS, RB, sh)
    {
        int shamt = RB & (XLEN - 1);
        mask = (1 << sh) - 1;
        return mask & (RS >> shamt);
    }
bitmask extract with reverse. can be done by bit-order-inverting all
of RB and getting bits of RB from the opposite end.

when RA is zero, no shift occurs. this makes bmextrev useful for
simply reversing all bits of a register.

    rev[0:msb] = rb[msb:0];
    uint_xlen_t bmrevi(RA, RB, sh)
    {
        int shamt = XLEN - 1;
        if (RA != 0) shamt = (GPR(RA) & (XLEN - 1));
        shamt = (XLEN-1)-shamt;       # shift other end
        brb = bitreverse(GPR(RB))     # swap LSB-MSB
        mask = (1 << sh) - 1;
        return mask & (brb >> shamt);
    }

    uint_xlen_t bmrev(RA, RB, RC) {
        return bmrevi(RA, RB, GPR(RC) & 0b111111);
    }
| 0.5|6.10|11.15|16.20|21.26| 27..30 |31| name | Form |
| -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
| NN | RT | RA | RB | sh | 1111 |Rc| bmrevi | MDS-Form |

| 0.5|6.10|11.15|16.20|21.25| 26..30 |31| name | Form |
| -- | -- | --- | --- | --- | ------- |--| ------ | -------- |
| NN | RT | RA | RB | RC | 11110 |Rc| bmrev | VA2-Form |
# grevlut <a name="grevlut"> </a>

generalised reverse combined with a pair of LUT2s and allowing
a constant `0b0101...0101` when RA=0, and an option to invert
(including when RA=0, giving a constant `0b1010...1010` as the
initial value) provides a wide range of instructions
and a means to set hundreds of regular 64 bit patterns with one
single 32 bit instruction.
the two LUT2s are applied left-half (when not swapping)
and right-half (when swapping) so as to allow a wider
range of transforms.

<img src="/openpower/sv/grevlut2x2.jpg" width=700 />
* A value of `0b11001010` for the immediate provides
  the functionality of a standard "grev".
* `0b11101110` provides gorc

grevlut should be arranged so as to produce the constants
needed to put into bext (bitextract) so as in turn to
be able to emulate x86 pmovmask instructions
<https://www.felixcloutier.com/x86/pmovmskb>.
This only requires 2 instructions (grevlut, bext).
Note that if the mask is required to be placed
directly into CR Fields (for use as CR Predicate
masks rather than an integer mask) then sv.cmpi or sv.ori
may be used instead, bearing in mind that sv.ori
is a 64-bit instruction, and `VL` must have been
set to the required length:

    sv.ori./elwid=8 r10.v, r10.v, 0
The following settings provide the required mask constants:

| RA=0    | RB     | imm        | iv | result        |
| ------- | ------ | ---------- | -- | ------------- |
| 0x555.. | 0b10   | 0b01101100 | 0  | 0x111111...   |
| 0x555.. | 0b110  | 0b01101100 | 0  | 0x010101...   |
| 0x555.. | 0b1110 | 0b01101100 | 0  | 0x00010001... |
| 0x555.. | 0b10   | 0b11000110 | 1  | 0x88888...    |
| 0x555.. | 0b110  | 0b11000110 | 1  | 0x808080...   |
| 0x555.. | 0b1110 | 0b11000110 | 1  | 0x80008000... |

Better diagram showing the correct ordering of shamt (RB). A LUT2
is applied to all locations marked in red using the first 4
bits of the immediate, and a separate LUT2 applied to all
locations in green using the upper 4 bits of the immediate.

<img src="/openpower/sv/grevlut.png" width=700 />

demo code [[openpower/sv/grevlut.py]]
    def lut2(imm, a, b):
        idx = b << 1 | a
        return (imm>>idx) & 1

    def dorow(imm8, step_i, chunk_size):
        step_o = 0
        for j in range(64):
            if (j&chunk_size) == 0:
                imm = (imm8 & 0b1111)
            else:
                imm = (imm8>>4) & 0b1111
            a = (step_i>>j)&1
            b = (step_i>>(j ^ chunk_size))&1
            res = lut2(imm, a, b)
            step_o |= res<<j
            #print(j, bin(imm), a, b, res)
        #print ("  ", chunk_size, bin(step_o))
        return step_o

    def grevlut64(RA, RB, imm, iv):
        x = RA
        if RA is None: # RA=0
            x = 0x5555555555555555
        if iv:
            x = ~x
        shamt = RB & 63
        for i in range(6):
            step = 1 << i
            if (shamt & step) != 0:
                x = dorow(imm, x, step)
        return x & ((1<<64)-1)
A variant may specify different LUT-pairs per row,
using one byte of RB for each. If it is desired that
a particular row-crossover shall not be applied it is
a simple matter to set the appropriate LUT-pair in RB
to effect an identity transform for that row (`0b11001010`).
    uint64_t grevlutr(uint64_t RA, uint64_t RB, bool iv, bool is32b)
    {
        uint64_t x = 0x5555_5555_5555_5555;
        if (RA != 0) x = GPR(RA);
        if (iv) x = ~x;
        for i in 0 to (6-is32b)
        {
            step = 1 << i
            imm = (RB>>(i*8))&0xff
            x = dorow(imm, x, step, is32b)
        }
        return x;
    }
| 0.5|6.10|11.15|16.20 |21..28 | 29.30|31| name | Form |
| -- | -- | --- | --- | ----- | -----|--| ------ | ----- |
| NN | RT | RA | s0-4 | im0-7 | 1 iv |s5| grevlogi | |
| NN | RT | RA | RB | im0-7 | 01 |0 | grevlog | |

An equivalent to `grevlogw` may be synthesised by setting the
appropriate bits in RB to set the top half of RT to zero.
Thus an explicit grevlogw instruction is not necessary.
based on RV bitmanip.

RA contains a vector of indices to select parts of RB to be
copied to RT. The immediate-variant allows up to an 8 bit
pattern (repeated) to be targeted at different parts of RT.

xperm shares some similarity with one of the uses of bmator,
in that xperm indices are binary addressing where bmator
may be considered to be unary addressing.
    uint_xlen_t xpermi(uint8_t imm8, uint_xlen_t RB, int sz_log2)
    {
        uint_xlen_t r = 0;
        uint_xlen_t sz = 1LL << sz_log2;
        uint_xlen_t mask = (1LL << sz) - 1;
        uint_xlen_t RA = imm8 | imm8<<8 | ... | imm8<<56;
        for (int i = 0; i < XLEN; i += sz) {
            uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
            if (pos < XLEN)
                r |= ((RB >> pos) & mask) << i;
        }
        return r;
    }
    uint_xlen_t xperm(uint_xlen_t RA, uint_xlen_t RB, int sz_log2)
    {
        uint_xlen_t r = 0;
        uint_xlen_t sz = 1LL << sz_log2;
        uint_xlen_t mask = (1LL << sz) - 1;
        for (int i = 0; i < XLEN; i += sz) {
            uint_xlen_t pos = ((RA >> i) & mask) << sz_log2;
            if (pos < XLEN)
                r |= ((RB >> pos) & mask) << i;
        }
        return r;
    }
    uint_xlen_t xperm_n (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 2); }
    uint_xlen_t xperm_b (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 3); }
    uint_xlen_t xperm_h (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 4); }
    uint_xlen_t xperm_w (uint_xlen_t RA, uint_xlen_t RB)
    { return xperm(RA, RB, 5); }
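A Python cross-check of the byte variant (`xperm_b`, sz_log2=3): each byte of RA is an index selecting a byte of RB, and indices past the end of the register select zero. XLEN and names are illustrative assumptions:

```python
# Python model of xperm: RA holds packed indices, each selecting an
# sz-bit field of RB; out-of-range indices produce zero.
XLEN = 64

def xperm(RA, RB, sz_log2):
    sz = 1 << sz_log2
    mask = (1 << sz) - 1
    r = 0
    for i in range(0, XLEN, sz):
        pos = ((RA >> i) & mask) << sz_log2
        if pos < XLEN:
            r |= ((RB >> pos) & mask) << i
    return r
```

For example a full byte-reverse is simply the index vector 7,6,5,4,3,2,1,0 packed into RA.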
bmatflip and bmatxor are found in the Cray XMT, and in x86 are known
as GF2P8AFFINEQB. uses:

* <https://gist.github.com/animetosho/d3ca95da2131b5813e16b5bb1b137ca0>
* SM4, Reed Solomon, RAID6
  <https://stackoverflow.com/questions/59124720/what-are-the-avx-512-galois-field-related-instructions-for>
* Vector bit-reverse <https://reviews.llvm.org/D91515?id=305411>
* Affine Inverse <https://github.com/HJLebbink/asm-dude/wiki/GF2P8AFFINEINVQB>

| 0.5|6.10|11.15|16.20| 21 | 22.23 | 24....30 |31| name | Form |
| -- | -- | --- | --- | -- | ----- | -------- |--| ---- | ------- |
| NN | RS | RA |im04 | im5| 1 1 | im67 00 110 |Rc| bmatxori | TODO |
    uint64_t bmatflip(uint64_t RA)

    uint64_t bmatxori(uint64_t RS, uint64_t RA, uint8_t imm) {
        uint64_t RAt = bmatflip(RA);
        uint8_t u[8]; // rows of RS
        uint8_t v[8]; // cols of RA
        for (int i = 0; i < 8; i++) {

        for (int i = 0; i < 64; i++) {
            bit = (imm >> (i%8)) & 1;
            bit ^= pcnt(u[i / 8] & v[i % 8]) & 1;

    uint64_t bmatxor(uint64_t RA, uint64_t RB) {
        return bmatxori(RA, RB, 0xff)
    }

    uint64_t bmator(uint64_t RA, uint64_t RB) {
        uint64_t RBt = bmatflip(RB);
        uint8_t u[8]; // rows of RA
        uint8_t v[8]; // cols of RB
        for (int i = 0; i < 8; i++) {

        for (int i = 0; i < 64; i++) {
            if ((u[i / 8] & v[i % 8]) != 0)

    uint64_t bmatand(uint64_t RA, uint64_t RB) {
        uint64_t RBt = bmatflip(RB);
        uint8_t u[8]; // rows of RA
        uint8_t v[8]; // cols of RB
        for (int i = 0; i < 8; i++) {

        for (int i = 0; i < 64; i++) {
            if ((u[i / 8] & v[i % 8]) == 0xff)
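The bit-matrix operations can be modelled compactly in Python. This sketch assumes LSB0 bit numbering with one byte per matrix row; `bmatxor` here is the plain GF(2) matrix product (i.e. without the `imm` term of `bmatxori`):

```python
def bmatflip(x):
    # transpose an 8x8 bit matrix: bit (i*8 + j) -> bit (j*8 + i)
    r = 0
    for i in range(8):
        for j in range(8):
            r |= ((x >> (i * 8 + j)) & 1) << (j * 8 + i)
    return r

def bmatxor(RA, RB):
    # GF(2) matrix multiply: result bit (i,j) is the parity of
    # (row i of RA) AND (column j of RB)
    RBt = bmatflip(RB)  # row j of RBt is column j of RB
    r = 0
    for i in range(8):
        u = (RA >> (i * 8)) & 0xFF
        for j in range(8):
            v = (RBt >> (j * 8)) & 0xFF
            r |= (bin(u & v).count("1") & 1) << (i * 8 + j)
    return r
```

The identity bit-matrix under this layout is `0x8040201008040201`, which is its own transpose and the neutral element of `bmatxor`.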
# Introduction to Carry-less and GF arithmetic

* obligatory xkcd <https://xkcd.com/2595/>

There are three completely separate types of Galois-Field-based arithmetic
that we implement which are not well explained even in introductory
literature. A slightly oversimplified explanation is followed by more
accurate descriptions:

* `GF(2)` carry-less binary arithmetic. this is not actually a Galois Field,
  but is accidentally referred to as GF(2) - see below as to why.
* `GF(p)` modulo arithmetic with a Prime number, these are "proper"
  Galois Fields.
* `GF(2^N)` carry-less binary arithmetic with two limits: modulo a power-of-2
  (2^N) and a second "reducing" polynomial (similar to a prime number), these
  are said to be GF(2^N) arithmetic.

further detailed and more precise explanations are provided below
* **Polynomials with coefficients in `GF(2)`**
  (aka. Carry-less arithmetic -- the `cl*` instructions).
  This isn't actually a Galois Field, but its coefficients are. This is
  basically binary integer addition, subtraction, and multiplication like
  usual, except that carries aren't propagated at all, effectively turning
  both addition and subtraction into the bitwise xor operation. Division and
  remainder are defined to match how addition and multiplication work.
* **Galois Fields with a prime size**
  (aka. `GF(p)` or Prime Galois Fields -- the `gfp*` instructions).
  This is basically just the integers mod `p`.
* **Galois Fields with a power-of-a-prime size**
  (aka. `GF(p^n)` or `GF(q)` where `q == p^n` for prime `p` and
  integer `n`).
  We only implement these for `p == 2`, called Binary Galois Fields
  (`GF(2^n)` -- the `gfb*` instructions).
  For any prime `p`, `GF(p^n)` is implemented as polynomials with
  coefficients in `GF(p)` and degree `< n`, where the polynomials are the
  remainders of dividing by a specifically chosen polynomial in `GF(p)` called
  the Reducing Polynomial (we will denote that by `red_poly`). The Reducing
  Polynomial must be an irreducible polynomial (like primes, but for
  polynomials), as well as have degree `n`. All `GF(p^n)` for the same `p`
  and `n` are isomorphic to each other -- the choice of `red_poly` doesn't
  affect `GF(p^n)`'s mathematical shape, all that changes is the specific
  polynomials used to implement `GF(p^n)`.
Many implementations and much of the literature do not make a clear
distinction between these three categories, which makes it confusing
to understand what their purpose and value is.

* carry-less multiply is extremely common and is used for the ubiquitous
  CRC32 algorithm. [TODO add many others, helps justify to ISA WG]
* GF(2^N) forms the basis of Rijndael (the current AES standard) and
  has significant uses throughout cryptography
* GF(p) is the basis again of a significant quantity of algorithms
  (TODO, list them, jacob knows what they are), even though the
  modulo is limited to be below 64-bit (size of a scalar int)

# Instructions for Carry-less Operations

aka. Polynomials with coefficients in `GF(2)`

Carry-less addition/subtraction is simply XOR, so a `cladd`
instruction is not provided since the `xor[i]` instruction can be used instead.

These are operations on polynomials with coefficients in `GF(2)`, with the
polynomial's coefficients packed into integers with the following algorithm:

[[!inline pagenames="gf_reference/pack_poly.py" raw="yes"]]
## Carry-less Multiply Instructions

see <https://en.wikipedia.org/wiki/CLMUL_instruction_set> and
<https://www.felixcloutier.com/x86/pclmulqdq> and
<https://en.m.wikipedia.org/wiki/Carry-less_product>

They are worth adding as their own non-overwrite operations
(in the same pipeline).
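The inlined reference code below is authoritative; as a hedged illustration of what carry-less multiply does, here is schoolbook shift-and-add multiplication with XOR substituted for addition:

```python
# Illustrative carry-less multiply: partial products are combined
# with XOR instead of addition, so no carries propagate.
def clmul(a, b, xlen=64):
    r = 0
    for i in range(xlen):
        if (b >> i) & 1:
            r ^= a << i
    return r & ((1 << xlen) - 1)

def clmulh(a, b, xlen=64):
    # high xlen bits of the 2*xlen-bit carry-less product
    r = 0
    for i in range(xlen):
        if (b >> i) & 1:
            r ^= a << i
    return r >> xlen
```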
### `clmul` Carry-less Multiply

[[!inline pagenames="gf_reference/clmul.py" raw="yes"]]

### `clmulh` Carry-less Multiply High

[[!inline pagenames="gf_reference/clmulh.py" raw="yes"]]

### `clmulr` Carry-less Multiply (Reversed)

Useful for CRCs. Equivalent to bit-reversing the result of `clmul` on
bit-reversed inputs.

[[!inline pagenames="gf_reference/clmulr.py" raw="yes"]]

## `clmadd` Carry-less Multiply-Add

    clmadd RT, RA, RB, RC

    (RT) = clmul((RA), (RB)) ^ (RC)
## `cltmadd` Twin Carry-less Multiply-Add (for FFTs)

Used in combination with SV FFT REMAP to perform a full Discrete Fourier
Transform of Polynomials over GF(2) in-place. Possible by having 3-in 2-out,
to avoid the need for a temp register. RS is written to as well as RT.

Note: Polynomials over GF(2) are a Ring rather than a Field. Because the
definition of the Inverse Discrete Fourier Transform involves calculating a
multiplicative inverse, which may not exist in every Ring, the
Inverse Discrete Fourier Transform may not exist. (AFAICT the number of inputs
to the IDFT must be odd for the IDFT to be defined for Polynomials over GF(2).
TODO: check with someone who knows for sure if that's correct.)

    cltmadd RT, RA, RB, RC

TODO: add link to explanation for where `RS` comes from.

    a = (RA)
    c = (RC)
    # read all inputs before writing to any outputs in case
    # an input overlaps with an output register.
    (RT) = clmul(a, (RB)) ^ c
    (RS) = a ^ c
## `cldivrem` Carry-less Division and Remainder

`cldivrem` isn't an actual instruction, but is just used in the pseudo-code
for other instructions.

[[!inline pagenames="gf_reference/cldivrem.py" raw="yes"]]
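The inlined reference above is authoritative; as a hedged illustration, carry-less division is GF(2) polynomial long division, XOR-subtracting the aligned divisor while the remainder's degree is at least the divisor's:

```python
# Illustrative carry-less (GF(2) polynomial) long division.
def cldivrem(n, d, width=64):
    assert d != 0, "carry-less division by zero"
    q, r = 0, n
    while r.bit_length() >= d.bit_length():
        shift = r.bit_length() - d.bit_length()
        q ^= 1 << shift       # set the quotient coefficient
        r ^= d << shift       # XOR-subtract the aligned divisor
    return q, r
```

For example, over GF(2), `x^3+x^2+x+1` divided by `x+1` gives quotient `x^2+1` and remainder 0.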
## `cldiv` Carry-less Division

    n = (RA)
    d = (RB)
    q, r = cldivrem(n, d, width=XLEN)
    (RT) = q

## `clrem` Carry-less Remainder

    n = (RA)
    d = (RB)
    q, r = cldivrem(n, d, width=XLEN)
    (RT) = r
# Instructions for Binary Galois Fields `GF(2^m)`

* <https://courses.csail.mit.edu/6.857/2016/files/ffield.py>
* <https://engineering.purdue.edu/kak/compsec/NewLectures/Lecture7.pdf>
* <https://foss.heptapod.net/math/libgf2/-/blob/branch/default/src/libgf2/gf2.py>

Binary Galois Field addition/subtraction is simply XOR, so a `gfbadd`
instruction is not provided since the `xor[i]` instruction can be used instead.
## `GFBREDPOLY` SPR -- Reducing Polynomial

In order to save registers and to make operations orthogonal with standard
arithmetic, the reducing polynomial is stored in a dedicated SPR `GFBREDPOLY`.
This also allows hardware to pre-compute useful parameters (such as the
degree, or look-up tables) based on the reducing polynomial, and store them
alongside the SPR in hidden registers, only recomputing them whenever the SPR
is written to, rather than having to recompute those values for every
instruction.
Because Galois Fields require the reducing polynomial to be an irreducible
polynomial, that guarantees that any polynomial of `degree > 1` must have
the LSB set, since otherwise it would be divisible by the polynomial `x`,
making it reducible, making whatever we're working on no longer a Field.
Therefore, we can reuse the LSB to indicate `degree == XLEN`.

[[!inline pagenames="gf_reference/decode_reducing_polynomial.py" raw="yes"]]
## `gfbredpoly` -- Set the Reducing Polynomial SPR `GFBREDPOLY`

unless this is an immediate op, `mtspr` is completely sufficient.

[[!inline pagenames="gf_reference/gfbredpoly.py" raw="yes"]]

## `gfbmul` -- Binary Galois Field `GF(2^m)` Multiplication

[[!inline pagenames="gf_reference/gfbmul.py" raw="yes"]]
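The inlined gfbmul.py is authoritative; as a hedged illustration, `GF(2^m)` multiplication is double-and-add over GF(2), reducing by the reducing polynomial whenever the degree reaches `m`. Here `red_poly` includes the `x^m` term (e.g. 0x11B for AES's `GF(2^8)`):

```python
# Illustrative GF(2^m) multiply ("Russian peasant" style): shift,
# conditionally XOR, and reduce modulo red_poly on overflow of bit m.
def gfbmul(a, b, red_poly):
    m = red_poly.bit_length() - 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:
            a ^= red_poly
    return r
```

With the AES polynomial 0x11B this reproduces the well-known multiplicative-inverse pair `0x53 * 0xCA = 1`.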
## `gfbmadd` -- Binary Galois Field `GF(2^m)` Multiply-Add

    gfbmadd RT, RA, RB, RC

[[!inline pagenames="gf_reference/gfbmadd.py" raw="yes"]]
## `gfbtmadd` -- Binary Galois Field `GF(2^m)` Twin Multiply-Add (for FFT)

Used in combination with SV FFT REMAP to perform a full `GF(2^m)` Discrete
Fourier Transform in-place. Possible by having 3-in 2-out, to avoid the need
for a temp register. RS is written to as well as RT.

    gfbtmadd RT, RA, RB, RC

TODO: add link to explanation for where `RS` comes from.

    a = (RA)
    c = (RC)
    # read all inputs before writing to any outputs in case
    # an input overlaps with an output register.
    (RT) = gfbmadd(a, (RB), c)
    # use gfbmadd again since it reduces the result
    (RS) = gfbmadd(a, 1, c) # "a * 1 + c"
## `gfbinv` -- Binary Galois Field `GF(2^m)` Inverse

[[!inline pagenames="gf_reference/gfbinv.py" raw="yes"]]

# Instructions for Prime Galois Fields `GF(p)`

## `GFPRIME` SPR -- Prime Modulus For `gfp*` Instructions

## `gfpadd` Prime Galois Field `GF(p)` Addition

[[!inline pagenames="gf_reference/gfpadd.py" raw="yes"]]

the addition happens on infinite-precision integers
## `gfpsub` Prime Galois Field `GF(p)` Subtraction

[[!inline pagenames="gf_reference/gfpsub.py" raw="yes"]]

the subtraction happens on infinite-precision integers

## `gfpmul` Prime Galois Field `GF(p)` Multiplication

[[!inline pagenames="gf_reference/gfpmul.py" raw="yes"]]

the multiplication happens on infinite-precision integers
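The `gfp*` pattern above is simply exact integer arithmetic followed by a single reduction modulo the prime held in `GFPRIME`; the inlined reference code is authoritative, but the shape is just:

```python
# Illustrative GF(p) operations: exact arithmetic, then reduce mod p.
def gfpadd(a, b, prime):
    return (a + b) % prime

def gfpmul(a, b, prime):
    return (a * b) % prime
```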
## `gfpinv` Prime Galois Field `GF(p)` Invert

Some potential hardware implementations are found in:
<https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.90.5233&rep=rep1&type=pdf>

[[!inline pagenames="gf_reference/gfpinv.py" raw="yes"]]
## `gfpmadd` Prime Galois Field `GF(p)` Multiply-Add

    gfpmadd RT, RA, RB, RC

[[!inline pagenames="gf_reference/gfpmadd.py" raw="yes"]]

the multiplication and addition happen on infinite-precision integers

## `gfpmsub` Prime Galois Field `GF(p)` Multiply-Subtract

    gfpmsub RT, RA, RB, RC

[[!inline pagenames="gf_reference/gfpmsub.py" raw="yes"]]

the multiplication and subtraction happen on infinite-precision integers

## `gfpmsubr` Prime Galois Field `GF(p)` Multiply-Subtract-Reversed

    gfpmsubr RT, RA, RB, RC

[[!inline pagenames="gf_reference/gfpmsubr.py" raw="yes"]]

the multiplication and subtraction happen on infinite-precision integers
## `gfpmaddsubr` Prime Galois Field `GF(p)` Multiply-Add and Multiply-Sub-Reversed (for FFT)

Used in combination with SV FFT REMAP to perform
a full Number-Theoretic-Transform in-place. Possible by having 3-in 2-out,
to avoid the need for a temp register. RS is written
to as well as RT.

    gfpmaddsubr RT, RA, RB, RC

TODO: add link to explanation for where `RS` comes from.

    factor1 = (RA)
    factor2 = (RB)
    term = (RC)
    # read all inputs before writing to any outputs in case
    # an input overlaps with an output register.
    (RT) = gfpmadd(factor1, factor2, term)
    (RS) = gfpmsubr(factor1, factor2, term)
# Already in POWER ISA or subsumed

Lists operations either included as part of
other bitmanip operations, or are already in
the Power ISA.

based on RV bitmanip, covered by ternlog bitops

    uint_xlen_t cmix(uint_xlen_t RA, uint_xlen_t RB, uint_xlen_t RC) {
        return (RA & RB) | (RC & ~RB);
    }
## count leading/trailing zeros with mask

    do i = 0 to 63
        if ((RB)[i] = 1) then do
            if ((RS)[i] = 1) then
                break
        end
        count ← count + 1
pdepd VRT,VRA,VRB, identical to RV bitmanip bdep, found already in v3.1 p106

    do while (m < 64)
        if VSR[VRB+32].dword[i].bit[63-m]=1 then do
            result = VSR[VRA+32].dword[i].bit[63-k]
            VSR[VRT+32].dword[i].bit[63-m] = result
            k = k + 1
        m = m + 1

    uint_xlen_t bdep(uint_xlen_t RA, uint_xlen_t RB)
    {
        uint_xlen_t r = 0;
        for (int i = 0, j = 0; i < XLEN; i++)
            if ((RB >> i) & 1) {
                if ((RA >> j) & 1)
                    r |= uint_xlen_t(1) << i;
                j++;
            }
        return r;
    }
the other way round: identical to RV bext: pextd, found in v3.1 p196

    uint_xlen_t bext(uint_xlen_t RA, uint_xlen_t RB)
    {
        uint_xlen_t r = 0;
        for (int i = 0, j = 0; i < XLEN; i++)
            if ((RB >> i) & 1) {
                if ((RA >> i) & 1)
                    r |= uint_xlen_t(1) << j;
                j++;
            }
        return r;
    }
found in v3.1 p106 so not to be added here

    if ((RB)[63-i] = 1) then do
        result[63-ptr1] = (RS)[63-i]
## bit to byte permute

similar to matrix permute in RV bitmanip, which has XOR and OR variants,
these perform a transpose (bmatflip).
TODO: this looks like VSX; is there a scalar variant?

    do j = 0 to 7
        do k = 0 to 7
            b = VSR[VRB+32].dword[i].byte[k].bit[j]
            VSR[VRT+32].dword[i].byte[j].bit[k] = b
superseded by grevlut

based on RV bitmanip, this is also known as a butterfly network. however
where a butterfly network allows setting of every crossbar setting in
every row and every column, generalised-reverse (grev) only allows
a per-row decision: every entry in the same row must either switch or
not-switch.

<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Butterfly_Network.jpg/474px-Butterfly_Network.jpg" />
    uint64_t grev64(uint64_t RA, uint64_t RB)
    {
        uint64_t x = RA;
        int shamt = RB & 63;
        if (shamt & 1)  x = ((x & 0x5555555555555555LL) << 1) |
                            ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
        if (shamt & 2)  x = ((x & 0x3333333333333333LL) << 2) |
                            ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
        if (shamt & 4)  x = ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                            ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
        if (shamt & 8)  x = ((x & 0x00FF00FF00FF00FFLL) << 8) |
                            ((x & 0xFF00FF00FF00FF00LL) >> 8);
        if (shamt & 16) x = ((x & 0x0000FFFF0000FFFFLL) << 16) |
                            ((x & 0xFFFF0000FFFF0000LL) >> 16);
        if (shamt & 32) x = ((x & 0x00000000FFFFFFFFLL) << 32) |
                            ((x & 0xFFFFFFFF00000000LL) >> 32);
        return x;
    }
based on RV bitmanip, gorc is superseded by grevlut

    uint32_t gorc32(uint32_t RA, uint32_t RB)
    {
        uint32_t x = RA;
        int shamt = RB & 31;
        if (shamt & 1)  x |= ((x & 0x55555555) << 1)  | ((x & 0xAAAAAAAA) >> 1);
        if (shamt & 2)  x |= ((x & 0x33333333) << 2)  | ((x & 0xCCCCCCCC) >> 2);
        if (shamt & 4)  x |= ((x & 0x0F0F0F0F) << 4)  | ((x & 0xF0F0F0F0) >> 4);
        if (shamt & 8)  x |= ((x & 0x00FF00FF) << 8)  | ((x & 0xFF00FF00) >> 8);
        if (shamt & 16) x |= ((x & 0x0000FFFF) << 16) | ((x & 0xFFFF0000) >> 16);
        return x;
    }
    uint64_t gorc64(uint64_t RA, uint64_t RB)
    {
        uint64_t x = RA;
        int shamt = RB & 63;
        if (shamt & 1)  x |= ((x & 0x5555555555555555LL) << 1) |
                             ((x & 0xAAAAAAAAAAAAAAAALL) >> 1);
        if (shamt & 2)  x |= ((x & 0x3333333333333333LL) << 2) |
                             ((x & 0xCCCCCCCCCCCCCCCCLL) >> 2);
        if (shamt & 4)  x |= ((x & 0x0F0F0F0F0F0F0F0FLL) << 4) |
                             ((x & 0xF0F0F0F0F0F0F0F0LL) >> 4);
        if (shamt & 8)  x |= ((x & 0x00FF00FF00FF00FFLL) << 8) |
                             ((x & 0xFF00FF00FF00FF00LL) >> 8);
        if (shamt & 16) x |= ((x & 0x0000FFFF0000FFFFLL) << 16) |
                             ((x & 0xFFFF0000FFFF0000LL) >> 16);
        if (shamt & 32) x |= ((x & 0x00000000FFFFFFFFLL) << 32) |
                             ((x & 0xFFFFFFFF00000000LL) >> 32);
        return x;
    }
see [[bitmanip/appendix]]