1 # Zftrans - transcendental operations
2
3 With thanks to:
4
5 * Jacob Lifshay
6 * Dan Petroski
7 * Mitch Alsup
8 * Allen Baum
9 * Andrew Waterman
10 * Luis Vitorio Cargnini
11
12 [[!toc levels=2]]
13
14 See:
15
16 * <http://bugs.libre-riscv.org/show_bug.cgi?id=127>
17 * <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
18 * Discussion: <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002342.html>
19 * [[rv_major_opcode_1010011]] for opcode listing.
20 * [[zfpacc_proposal]] for accuracy settings proposal
21
22 Extension subsets:
23
24 * **Zftrans**: standard transcendentals (best suited to 3D)
* **ZftransExt**: extra functions (useful, though not generally needed for 3D;
  can be synthesised using Zftrans)
* **Ztrigpi**: trigonometric xxx-pi variants: sinpi, cospi, tanpi
* **Ztrignpi**: trigonometric non-xxx-pi variants: sin, cos, tan
* **Zarctrigpi**: arc-trigonometric a-xxx-pi variants: atan2pi, asinpi, acospi
* **Zarctrignpi**: arc-trigonometric non-a-xxx-pi variants: atan2, asin, acos
31 * **Zfhyp**: hyperbolic/inverse-hyperbolic. sinh, cosh, tanh, asinh,
32 acosh, atanh (can be synthesised - see below)
33 * **ZftransAdv**: much more complex to implement in hardware
34 * **Zfrsqrt**: Reciprocal square-root.
35
36 Minimum recommended requirements for 3D: Zftrans, Ztrignpi,
37 Zarctrignpi, with Ztrigpi and Zarctrigpi as augmentations.
38
39 Minimum recommended requirements for Mobile-Embedded 3D: Ztrignpi, Zftrans, with Ztrigpi as an augmentation.
40
41 # TODO:
42
43 * Decision on accuracy, moved to [[zfpacc_proposal]]
44 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002355.html>
45 * Errors **MUST** be repeatable.
46 * How about four Platform Specifications? 3DUNIX, UNIX, 3DEmbedded and Embedded?
47 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002361.html>
48 Accuracy requirements for dual (triple) purpose implementations must
49 meet the higher standard.
50 * Reciprocal Square-root is in its own separate extension (Zfrsqrt) as
51 it is desirable on its own by other implementors. This to be evaluated.
52
53 # Requirements <a name="requirements"></a>
54
55 This proposal is designed to meet a wide range of extremely diverse needs,
56 allowing implementors from all of them to benefit from the tools and hardware
57 cost reductions associated with common standards adoption.
58
**There are *four* different, disparate platforms' needs (two new)**:
60
61 * 3D Embedded Platform (new)
62 * Embedded Platform
63 * 3D UNIX Platform (new)
64 * UNIX Platform
65
66 **The use-cases are**:
67
68 * 3D GPUs
69 * Numerical Computation
70 * (Potentially) A.I. / Machine-learning (1)
71
(1) although approximations suffice in this field, making it more likely
to use a custom extension. High-end ML would almost certainly be
excluded.
75
76 **The power and die-area requirements vary from**:
77
78 * Ultra-low-power (smartwatches where GPU power budgets are in milliwatts)
79 * Mobile-Embedded (good performance with high efficiency for battery life)
80 * Desktop Computing
81 * Server / HPC (2)
82
83 (2) Supercomputing is left out of the requirements as it is traditionally
84 covered by Supercomputer Vectorisation Standards (such as RVV).
85
86 **The software requirements are**:
87
88 * Full public integration into GNU math libraries (libm)
89 * Full public integration into well-known Numerical Computation systems (numpy)
90 * Full public integration into upstream GNU and LLVM Compiler toolchains
91 * Full public integration into Khronos OpenCL SPIR-V compatible Compilers
92 seeking public Certification and Endorsement from the Khronos Group
93 under their Trademarked Certification Programme.
94
95 **The "contra"-requirements are**:
96
97 * NOT for use with RVV (RISC-V Vector Extension). These are *scalar* opcodes.
98 Ultra Low Power Embedded platforms (smart watches) are sufficiently
99 resource constrained that Vectorisation (of any kind) is likely to be
100 unnecessary and inappropriate.
101 * The requirements are **not** for the purposes of developing a full custom
102 proprietary GPU with proprietary firmware driven by *hardware* centric
103 optimised design decisions as a priority over collaboration.
* A full custom proprietary GPU ASIC Manufacturer *may* benefit from
this proposal; however, such manufacturers typically develop proprietary
software that is not shared with the rest of the community likely to
use this proposal, and therefore have completely different needs.
* This proposal is for *sharing* of effort in reducing development costs.
109
110 # Requirements Analysis <a name="requirements_analysis"></a>
111
112 **Platforms**:
113
114 3D Embedded will require significantly less accuracy and will need to make
115 power budget and die area compromises that other platforms (including Embedded)
116 will not need to make.
117
3D UNIX Platform has to be performance-price-competitive: subtly-reduced
accuracy in FP32 is acceptable there, whereas in the UNIX Platform
IEEE754 compliance is a hard requirement; imposing that requirement on a
3D UNIX Platform would compromise its power and efficiency.
122
Even in the Embedded platform, IEEE754 interoperability is beneficial;
were it a hard requirement, however, the 3D Embedded platform would be
severely compromised in its ability to meet the demanding power budgets
of that market.
126
127 Thus, learning from the lessons of
128 [SIMD considered harmful](https://www.sigarch.org/simd-instructions-considered-harmful/)
129 this proposal works in conjunction with the [[zfpacc_proposal]], so as
130 not to overburden the OP32 ISA space with extra "reduced-accuracy" opcodes.
131
132 **Use-cases**:
133
134 There really is little else in the way of suitable markets. 3D GPUs
135 have extremely competitive power-efficiency and power-budget requirements
136 that are completely at odds with the other market at the other end of
137 the spectrum: Numerical Computation.
138
139 Interoperability in Numerical Computation is absolutely critical: it
140 implies (correlates directly with) IEEE754 compliance. However full
141 IEEE754 compliance automatically and inherently penalises a GPU on
performance and die area, where such accuracy is simply not necessary.
143
144 To meet the needs of both markets, the two new platforms have to be created,
145 and [[zfpacc_proposal]] is a critical dependency. Runtime selection of
146 FP accuracy allows an implementation to be "Hybrid" - cover UNIX IEEE754
147 compliance *and* 3D performance in a single ASIC.
148
149 **Power and die-area requirements**:
150
151 This is where the conflicts really start to hit home.
152
153 A "Numerical High performance only" proposal (suitable for Server / HPC
154 only) would customise and target the Extension based on a quantitative
155 analysis of the value of certain opcodes *for HPC only*. It would
156 conclude, reasonably and rationally, that it is worthwhile adding opcodes
157 to RVV as parallel Vector operations, and that further discussion of
158 the matter is pointless.
159
160 A "Proprietary GPU effort" (even one that was intended for publication
161 of its API through, for example, a public libre-licensed Vulkan SPIR-V
162 Compiler) would conclude, reasonably and rationally, that, likewise, the
163 opcodes were best suited to be added to RVV, and, further, that their
164 requirements conflict with the HPC world, due to the reduced accuracy.
165 This on the basis that the silicon die area required for IEEE754 is far
166 greater than that needed for reduced-accuracy, and thus their product
167 would be completely unacceptable in the market if it had to meet IEEE754,
168 unnecessarily.
169
170 An "Embedded 3D" GPU has radically different performance, power
171 and die-area requirements (and may even target SoftCores in FPGA).
172 Sharing of the silicon to cover multi-function uses (CORDIC for example)
173 is absolutely essential in order to keep cost and power down, and high
174 performance simply is not. Multi-cycle FSMs instead of pipelines may
175 be considered acceptable, and so on. Subsets of functionality are
176 also essential.
177
178 An "Embedded Numerical" platform has requirements that are separate and
179 distinct from all of the above!
180
181 Mobile Computing needs (tablets, smartphones) again pull in a different
182 direction: high performance, reasonable accuracy, but efficiency is
183 critical. Screen sizes are not at the 4K range: they are within the
184 800x600 range at the low end (320x240 at the extreme budget end), and
185 only the high-performance smartphones and tablets provide 1080p (1920x1080).
186 With lower resolution, accuracy compromises are possible which the Desktop
187 market (4k and soon to be above) would find unacceptable.
188
189 Meeting these disparate markets may be achieved, again, through
190 [[zfpacc_proposal]], by subdividing into four platforms, yet, in addition
191 to that, subdividing the extension into subsets that best suit the different
192 market areas.
193
194 **Software requirements**:
195
196 A "custom" extension is developed in near-complete isolation from the
197 rest of the RISC-V Community. Cost savings to the Corporation are
198 large, with no direct beneficial feedback to (or impact on) the rest
199 of the RISC-V ecosystem.
200
201 However given that 3D revolves around Standards - DirectX, Vulkan, OpenGL,
202 OpenCL - users have much more influence than first appears. Compliance
203 with these standards is critical as the userbase (Games writers,
204 scientific applications) expects not to have to rewrite extremely large
205 and costly codebases to conform with *non-standards-compliant* hardware.
206
207 Therefore, compliance with public APIs (Vulkan, OpenCL, OpenGL, DirectX)
208 is paramount, and compliance with Trademarked Standards is critical.
Any deviation from Trademarked Standards means that an implementation
may not be sold whilst claiming to be, for example, "Vulkan
compatible".
212
213 For 3D, this in turn reinforces and makes a hard requirement a need for public
214 compliance with such standards, over-and-above what would otherwise be
215 set by a RISC-V Standards Development Process, including both the
216 software compliance and the knock-on implications that has for hardware.
217
For libraries such as libm and numpy, accuracy is paramount for software interoperability across multiple platforms. Some algorithms critically rely on correct IEEE754 behaviour, for example.
219 The conflicting accuracy requirements can be met through the zfpacc extension.
220
221 **Collaboration**:
222
223 The case for collaboration on any Extension is already well-known.
224 In this particular case, the precedent for inclusion of Transcendentals
225 in other ISAs, both from Graphics and High-performance Computing, has
226 these primitives well-established in high-profile software libraries and
227 compilers in both GPU and HPC Computer Science divisions. Collaboration
228 and shared public compliance with those standards brooks no argument.
229
230 The combined requirements of collaboration and multi accuracy requirements
231 mean that *overall this proposal is categorically and wholly unsuited
to relegation to "custom" status*.
233
234 # Quantitative Analysis <a name="analysis"></a>
235
236 This is extremely challenging. Normally, an Extension would require full,
237 comprehensive and detailed analysis of every single instruction, for every
238 single possible use-case, in every single market. The amount of silicon
239 area required would be balanced against the benefits of introducing extra
240 opcodes, as well as a full market analysis performed to see which divisions
241 of Computer Science benefit from the introduction of the instruction,
242 in each and every case.
243
With 34 instructions, four possible Platforms, and sub-categories of
implementations even within each Platform, performing over 136 separate
and distinct analyses is simply not a practical proposition.
247
248 A little more intelligence has to be applied to the problem space,
249 to reduce it down to manageable levels.
250
251 Fortunately, the subdivision by Platform, in combination with the
252 identification of only two primary markets (Numerical Computation and
253 3D), means that the logical reasoning applies *uniformly* and broadly
254 across *groups* of instructions rather than individually, making it a primarily
255 hardware-centric and accuracy-centric decision-making process.
256
257 In addition, hardware algorithms such as CORDIC can cover such a wide
258 range of operations (simply by changing the input parameters) that the
259 normal argument of compromising and excluding certain opcodes because they
260 would significantly increase the silicon area is knocked down.
261
262 However, CORDIC, whilst space-efficient, and thus well-suited to
263 Embedded, is an old iterative algorithm not well-suited to High-Performance
264 Computing or Mid to High-end GPUs, where commercially-competitive
265 FP32 pipeline lengths are only around 5 stages.
266
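To make the area-versus-latency trade-off concrete, here is a toy floating-point sketch of CORDIC in rotation mode (illustrative only, not part of the proposal: real hardware uses fixed-point shift-and-add, one iteration per bit of accuracy):

```python
import math

def cordic_sincos(theta, iterations=32):
    """Toy CORDIC rotation mode: returns (sin(theta), cos(theta)).

    Converges for |theta| below roughly 1.74 radians.  Each iteration
    is one add/subtract and shift per coordinate - cheap in area, but
    32 *serial* iterations versus a ~5-stage FP32 pipeline.
    """
    # Loop-invariant tables: micro-rotation angles and the CORDIC gain.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    K = 1.0
    for i in range(iterations):
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

    x, y, z = K, 0.0, theta        # start pre-scaled by 1/gain
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x
```

Note how a different choice of initial vector and angle table covers other functions with the same datapath, which is precisely the multi-function sharing argument made above.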
267 Not only that, but some operations such as LOG1P, which would normally
268 be excluded from one market (due to there being an alternative macro-op
269 fused sequence replacing it) are required for other markets due to
270 the higher accuracy obtainable at the lower range of input values when
271 compared to LOG(1+P).
272
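The LOG1P accuracy argument can be demonstrated in any IEEE754 double-precision environment (Python floats used here purely to illustrate the numerical effect):

```python
import math

x = 1e-10
naive = math.log(1.0 + x)   # synthesised: rounding in (1.0 + x) discards digits
fused = math.log1p(x)       # dedicated operation keeps them

# log(1+x) ~ x - x*x/2 for tiny x, so the true value is close to x itself
rel_err_naive = abs(naive - fused) / fused
print(naive, fused, rel_err_naive)
```

The naive form is wrong by many orders of magnitude more than one ULP for very small inputs, exactly the "lower range of input values" penalty described above.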
273 (Thus we start to see why "proprietary" markets are excluded from this
274 proposal, because "proprietary" markets would make *hardware*-driven
275 optimisation decisions that would be completely inappropriate for a
276 common standard).
277
278 ATAN and ATAN2 is another example area in which one market's needs
279 conflict directly with another: the only viable solution, without compromising
280 one market to the detriment of the other, is to provide both opcodes
281 and let implementors make the call as to which (or both) to optimise,
282 at the *hardware* level.
283
284 Likewise it is well-known that loops involving "0 to 2 times pi", often
285 done in subdivisions of powers of two, are costly to do because they
286 involve floating-point multiplication by PI in each and every loop.
287 3D GPUs solved this by providing SINPI variants which range from 0 to 1
288 and perform the multiply *inside* the hardware itself. In the case of
289 CORDIC, it turns out that the multiply by PI is not even needed (is a
290 loop invariant magic constant).
291
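A SINPI operation has the semantics sin(pi * x), with the pi-multiply performed inside the hardware. A behavioural sketch (illustrative only; a Python model cannot show the accuracy gain, since the multiply by an inexact pi still happens here, whereas hardware folds pi into its internal constants):

```python
import math

def fsinpi(x):
    # Behavioural model of SINPI: argument is in units of half-turns,
    # so fsinpi(0.5) is sin(pi/2) = 1.0 and fsinpi(1.0) is sin(pi) = 0.
    return math.sin(math.pi * x)

# Typical 3D usage: power-of-two subdivisions of a full rotation,
# with no per-iteration FP multiply by pi in the loop body.
N = 256
samples = [fsinpi(2.0 * k / N) for k in range(N)]
```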
292 However, some markets may not wish to *use* CORDIC, for reasons mentioned
293 above, and, again, one market would be penalised if SINPI was prioritised
294 over SIN, or vice-versa.
295
296 In essence, then, even when only the two primary markets (3D and
297 Numerical Computation) have been identified, this still leaves two
298 (three) diametrically-opposed *accuracy* sub-markets as the prime
299 conflict drivers:
300
301 * Embedded Ultra Low Power
302 * IEEE754 compliance
303 * Khronos Vulkan compliance
304
305 Thus the best that can be done is to use Quantitative Analysis to work
306 out which "subsets" - sub-Extensions - to include, provide an additional
307 "accuracy" extension, be as "inclusive" as possible, and thus allow
308 implementors to decide what to add to their implementation, and how best
309 to optimise them.
310
311 This approach *only* works due to the uniformity of the function space,
312 and is **not** an appropriate methodology for use in other Extensions
313 with huge (non-uniform) market diversity even with similarly large
314 numbers of potential opcodes. BitManip is the perfect counter-example.
315
316 # Proposed Opcodes vs Khronos OpenCL vs IEEE754-2019<a name="khronos_equiv"></a>
317
318 This list shows the (direct) equivalence between proposed opcodes,
319 their Khronos OpenCL equivalents, and their IEEE754-2019 equivalents.
320 98% of the opcodes in this proposal that are in the IEEE754-2019 standard
321 are present in the Khronos Extended Instruction Set.
322
323 For RISCV opcode encodings see
324 [[rv_major_opcode_1010011]]
325
326 See
327 <https://www.khronos.org/registry/spir-v/specs/unified1/OpenCL.ExtendedInstructionSet.100.html>
328 and <https://ieeexplore.ieee.org/document/8766229>
329
330 * Special FP16 opcodes are *not* being proposed, except by indirect / inherent
331 use of the "fmt" field that is already present in the RISC-V Specification.
332 * "Native" opcodes are *not* being proposed: implementors will be expected
333 to use the (equivalent) proposed opcode covering the same function.
334 * "Fast" opcodes are *not* being proposed, because the Khronos Specification
335 fast\_length, fast\_normalise and fast\_distance OpenCL opcodes require
336 vectors (or can be done as scalar operations using other RISC-V instructions).
337
338 The OpenCL FP32 opcodes are **direct** equivalents to the proposed opcodes.
339 Deviation from conformance with the Khronos Specification - including the
340 Khronos Specification accuracy requirements - is not an option, as it
341 results in non-compliance, and the vendor may not use the Trademarked words
342 "Vulkan" etc. in conjunction with their product.
343
344 IEEE754-2019 Table 9.1 lists "additional mathematical operations".
Interestingly, the only functions missing when compared to OpenCL are
compound, exp2m1, exp10m1, log2p1 and log10p1.
347
348 [[!table data="""
349 opcode | OpenCL FP32 | OpenCL FP16 | OpenCL native | OpenCL fast | IEEE754 |
350 FSIN | sin | half\_sin | native\_sin | NONE | sin |
351 FCOS | cos | half\_cos | native\_cos | NONE | cos |
352 FTAN | tan | half\_tan | native\_tan | NONE | tan |
353 NONE (1) | sincos | NONE | NONE | NONE | NONE |
354 FASIN | asin | NONE | NONE | NONE | asin |
355 FACOS | acos | NONE | NONE | NONE | acos |
356 FATAN | atan | NONE | NONE | NONE | atan |
357 FSINPI | sinpi | NONE | NONE | NONE | sinPi |
358 FCOSPI | cospi | NONE | NONE | NONE | cosPi |
359 FTANPI | tanpi | NONE | NONE | NONE | tanPi |
360 FASINPI | asinpi | NONE | NONE | NONE | asinPi |
361 FACOSPI | acospi | NONE | NONE | NONE | acosPi |
362 FATANPI | atanpi | NONE | NONE | NONE | atanPi |
363 FSINH | sinh | NONE | NONE | NONE | sinh |
364 FCOSH | cosh | NONE | NONE | NONE | cosh |
365 FTANH | tanh | NONE | NONE | NONE | tanh |
366 FASINH | asinh | NONE | NONE | NONE | asinh |
367 FACOSH | acosh | NONE | NONE | NONE | acosh |
368 FATANH | atanh | NONE | NONE | NONE | atanh |
369 FATAN2 | atan2 | NONE | NONE | NONE | atan2 |
370 FATAN2PI | atan2pi | NONE | NONE | NONE | atan2pi |
371 FRSQRT | rsqrt | half\_rsqrt | native\_rsqrt | NONE | rSqrt |
372 FCBRT | cbrt | NONE | NONE | NONE | NONE (2) |
373 FEXP2 | exp2 | half\_exp2 | native\_exp2 | NONE | exp2 |
374 FLOG2 | log2 | half\_log2 | native\_log2 | NONE | log2 |
375 FEXPM1 | expm1 | NONE | NONE | NONE | expm1 |
376 FLOG1P | log1p | NONE | NONE | NONE | logp1 |
377 FEXP | exp | half\_exp | native\_exp | NONE | exp |
378 FLOG | log | half\_log | native\_log | NONE | log |
379 FEXP10 | exp10 | half\_exp10 | native\_exp10 | NONE | exp10 |
380 FLOG10 | log10 | half\_log10 | native\_log10 | NONE | log10 |
381 FPOW | pow | NONE | NONE | NONE | pow |
382 FPOWN | pown | NONE | NONE | NONE | pown |
383 FPOWR | powr | NONE | NONE | NONE | powr |
384 FROOTN | rootn | NONE | NONE | NONE | rootn |
385 FHYPOT | hypot | NONE | NONE | NONE | hypot |
386 FRECIP | NONE | half\_recip | native\_recip | NONE | NONE (3) |
387 NONE | NONE | NONE | NONE | NONE | compound |
388 NONE | NONE | NONE | NONE | NONE | exp2m1 |
389 NONE | NONE | NONE | NONE | NONE | exp10m1 |
390 NONE | NONE | NONE | NONE | NONE | log2p1 |
391 NONE | NONE | NONE | NONE | NONE | log10p1 |
392 """]]
393
394 Note (1) FSINCOS is macro-op fused (see below).
395
Note (2) synthesised in IEEE754-2019 as "rootn(x, 3)"
397
398 Note (3) synthesised in IEEE754-2019 using "1.0 / x"
399
400 ## List of 2-arg opcodes
401
402 [[!table data="""
403 opcode | Description | pseudocode | Extension |
404 FATAN2 | atan2 arc tangent | rd = atan2(rs2, rs1) | Zarctrignpi |
405 FATAN2PI | atan2 arc tangent / pi | rd = atan2(rs2, rs1) / pi | Zarctrigpi |
406 FPOW | x power of y | rd = pow(rs1, rs2) | ZftransAdv |
407 FPOWN | x power of n (n int) | rd = pow(rs1, rs2) | ZftransAdv |
FPOWR | x power of y (x +ve) | rd = exp(rs2 * log(rs1)) | ZftransAdv |
409 FROOTN | x power 1/n (n integer)| rd = pow(rs1, 1/rs2) | ZftransAdv |
410 FHYPOT | hypotenuse | rd = sqrt(rs1^2 + rs2^2) | ZftransAdv |
411 """]]
412
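The pseudocode above is definitional only: a hardware (or libm) FHYPOT must avoid the intermediate overflow and underflow that the naive sqrt(x^2 + y^2) formula suffers, which is part of why it sits in ZftransAdv. The effect is easy to demonstrate in double precision (illustrative sketch):

```python
import math

x = y = 1e200                      # well within double range
naive = math.sqrt(x * x + y * y)   # x*x overflows to +inf first
robust = math.hypot(x, y)          # scales internally, stays finite

print(naive, robust)
```

The robust result is approximately 1.414e200, while the naive synthesis returns infinity.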
413 ## List of 1-arg transcendental opcodes
414
415 [[!table data="""
416 opcode | Description | pseudocode | Extension |
FRSQRT | Reciprocal Square-root | rd = 1.0 / sqrt(rs1) | Zfrsqrt |
418 FCBRT | Cube Root | rd = pow(rs1, 1.0 / 3) | ZftransAdv |
419 FRECIP | Reciprocal | rd = 1.0 / rs1 | Zftrans |
420 FEXP2 | power-of-2 | rd = pow(2, rs1) | Zftrans |
FLOG2 | log base 2 | rd = log(2, rs1) | Zftrans |
422 FEXPM1 | exponential minus 1 | rd = pow(e, rs1) - 1.0 | ZftransExt |
FLOG1P | log of (1 + x) | rd = log(e, 1 + rs1) | ZftransExt |
424 FEXP | exponential | rd = pow(e, rs1) | ZftransExt |
425 FLOG | natural log (base e) | rd = log(e, rs1) | ZftransExt |
426 FEXP10 | power-of-10 | rd = pow(10, rs1) | ZftransExt |
427 FLOG10 | log base 10 | rd = log(10, rs1) | ZftransExt |
428 """]]
429
430 ## List of 1-arg trigonometric opcodes
431
432 [[!table data="""
433 opcode | Description | pseudo-code | Extension |
434 FSIN | sin (radians) | rd = sin(rs1) | Ztrignpi |
435 FCOS | cos (radians) | rd = cos(rs1) | Ztrignpi |
436 FTAN | tan (radians) | rd = tan(rs1) | Ztrignpi |
437 FASIN | arcsin (radians) | rd = asin(rs1) | Zarctrignpi |
438 FACOS | arccos (radians) | rd = acos(rs1) | Zarctrignpi |
439 FATAN | arctan (radians) | rd = atan(rs1) | Zarctrignpi |
FSINPI | sin of pi times x | rd = sin(pi * rs1) | Ztrigpi |
FCOSPI | cos of pi times x | rd = cos(pi * rs1) | Ztrigpi |
FTANPI | tan of pi times x | rd = tan(pi * rs1) | Ztrigpi |
443 FASINPI | arcsin / pi | rd = asin(rs1) / pi | Zarctrigpi |
444 FACOSPI | arccos / pi | rd = acos(rs1) / pi | Zarctrigpi |
445 FATANPI | arctan / pi | rd = atan(rs1) / pi | Zarctrigpi |
FSINH | hyperbolic sin | rd = sinh(rs1) | Zfhyp |
FCOSH | hyperbolic cos | rd = cosh(rs1) | Zfhyp |
FTANH | hyperbolic tan | rd = tanh(rs1) | Zfhyp |
449 FASINH | inverse hyperbolic sin | rd = asinh(rs1) | Zfhyp |
450 FACOSH | inverse hyperbolic cos | rd = acosh(rs1) | Zfhyp |
451 FATANH | inverse hyperbolic tan | rd = atanh(rs1) | Zfhyp |
452 """]]
453
454 # Subsets
455
456 The full set is based on the Khronos OpenCL opcodes. If implemented
entirely it would be too much for both Embedded and 3D.
458
The subsets are organised by hardware complexity and need (3D, HPC);
however, because synthesis produces inaccurate results at the range
limits, the less common subsets are still required for IEEE754 HPC.
462
MALI Midgard, an embedded/mobile 3D GPU, for example has only the
following opcodes:
465
466 E8 - fatan_pt2
467 F0 - frcp (reciprocal)
468 F2 - frsqrt (inverse square root, 1/sqrt(x))
469 F3 - fsqrt (square root)
470 F4 - fexp2 (2^x)
471 F5 - flog2
472 F6 - fsin1pi
473 F7 - fcos1pi
474 F9 - fatan_pt1
475
These in FP32 and FP16 only: no FP64 hardware, at all.
477
478 Vivante Embedded/Mobile 3D (etnaviv <https://github.com/laanwj/etna_viv/blob/master/rnndb/isa.xml>) only has the following:
479
480 sin, cos2pi
481 cos, sin2pi
482 log2, exp
483 sqrt and rsqrt
484 recip.
485
486 It also has fast variants of some of these, as a CSR Mode.
487
488 AMD's R600 GPU (R600\_Instruction\_Set\_Architecture.pdf) and the
489 RDNA ISA (RDNA\_Shader\_ISA\_5August2019.pdf, Table 22, Section 6.3) have:
490
491 COS2PI (appx)
492 EXP2
493 LOG (IEEE754)
494 RECIP
495 RSQRT
496 SQRT
497 SIN2PI (appx)
498
499 AMD RDNA has F16 and F32 variants of all the above, and also has F64
500 variants of SQRT, RSQRT and RECIP. It is interesting that even the
501 modern high-end AMD GPU does not have TAN or ATAN, where MALI Midgard
502 does.
503
As a general point: customised, optimised hardware targeting FP32 3D
with less accuracy can be used neither for IEEE754 nor for FP64
(except as a starting point for hardware- or software-driven
Newton-Raphson or other iterative methods).
508
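The Newton-Raphson refinement mentioned above uses only multiplies and subtracts, so a low-accuracy 3D FRSQRT result can seed a higher-accuracy one. A sketch (function name and starting estimate are illustrative):

```python
import math

def rsqrt_refine(x, y):
    # One Newton-Raphson step for y ~ 1/sqrt(x):
    # roughly doubles the number of correct bits per step.
    return y * (1.5 - 0.5 * x * y * y)

x = 2.0
y = 0.7                  # stand-in for a low-accuracy hardware estimate
for _ in range(4):       # ~1% error -> near machine precision
    y = rsqrt_refine(x, y)

print(y, 1.0 / math.sqrt(2.0))
```

This is the standard justification for shipping a fast approximate FRSQRT even on platforms that also need accurate results.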
509 Also in cost/area sensitive applications even the extra ROM lookup tables
510 for certain algorithms may be too costly.
511
512 These wildly differing and incompatible driving factors lead to the
513 subset subdivisions, below.
514
515 ## Transcendental Subsets
516
517 ### Zftrans
518
519 LOG2 EXP2 RECIP RSQRT
520
521 Zftrans contains the minimum standard transcendentals best suited to
522 3D. They are also the minimum subset for synthesising log10, exp10,
expm1, log1p, the hyperbolic trigonometric functions sinh and so on.
524
525 They are therefore considered "base" (essential) transcendentals.
526
527 ### ZftransExt
528
LOG, EXP, EXP10, LOG10, LOG1P, EXPM1
530
531 These are extra transcendental functions that are useful, not generally
532 needed for 3D, however for Numerical Computation they may be useful.
533
Although they can be synthesised using Zftrans (LOG2 multiplied
535 by a constant), there is both a performance penalty as well as an
536 accuracy penalty towards the limits, which for IEEE754 compliance is
537 unacceptable. In particular, LOG(1+rs1) in hardware may give much better
538 accuracy at the lower end (very small rs1) than LOG(rs1).
539
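The constant-multiply synthesis and its last-ULP penalty can be illustrated in double precision (Python floats as a stand-in; illustrative only):

```python
import math

LN2 = math.log(2.0)   # the stored constant is itself rounded

def log_synth(x):
    # LOG synthesised from LOG2: two rounded operations composed,
    # so the final result is not always correctly rounded.
    return math.log2(x) * LN2

xs = [1.0 + k / 997.0 for k in range(1, 2000)]
mismatches = sum(1 for x in xs if log_synth(x) != math.log(x))
worst = max(abs(log_synth(x) - math.log(x)) / math.log(x) for x in xs)
print(mismatches, worst)
```

The mismatch count is nonzero and the worst relative error is a few ULP: acceptable for 3D, unacceptable where IEEE754 correct rounding is demanded.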
540 Their forced inclusion would be inappropriate as it would penalise
541 embedded systems with tight power and area budgets. However if they
542 were completely excluded the HPC applications would be penalised on
543 performance and accuracy.
544
545 Therefore they are their own subset extension.
546
547 ### Zfhyp
548
549 SINH, COSH, TANH, ASINH, ACOSH, ATANH
550
551 These are the hyperbolic/inverse-hyperbolic functions. Their use in 3D is limited.
552
553 They can all be synthesised using LOG, SQRT and so on, so depend
554 on Zftrans. However, once again, at the limits of the range, IEEE754
555 compliance becomes impossible, and thus a hardware implementation may
556 be required.
557
558 HPC and high-end GPUs are likely markets for these.
559
560 ### ZftransAdv
561
562 CBRT, POW, POWN, POWR, ROOTN
563
564 These are simply much more complex to implement in hardware, and typically
565 will only be put into HPC applications.
566
567 * **Zfrsqrt**: Reciprocal square-root.
568
569 ## Trigonometric subsets
570
571 ### Ztrigpi vs Ztrignpi
572
573 * **Ztrigpi**: SINPI COSPI TANPI
574 * **Ztrignpi**: SIN COS TAN
575
576 Ztrignpi are the basic trigonometric functions through which all others
577 could be synthesised, and they are typically the base trigonometrics
578 provided by GPUs for 3D, warranting their own subset.
579
580 In the case of the Ztrigpi subset, these are commonly used in for loops
581 with a power of two number of subdivisions, and the cost of multiplying
582 by PI inside each loop (or cumulative addition, resulting in cumulative
583 errors) is not acceptable.
584
585 In for example CORDIC the multiplication by PI may be moved outside of
586 the hardware algorithm as a loop invariant, with no power or area penalty.
587
Again, therefore, if SINPI (etc.) were excluded, programmers would be penalised by being forced to divide by PI in some circumstances. Likewise, if SIN were excluded, programmers would be penalised by being forced to *multiply* by PI in some circumstances.
589
590 Thus again, a slightly different application of the same general argument applies to give Ztrignpi and
591 Ztrigpi as subsets. 3D GPUs will almost certainly provide both.
592
593 ### Zarctrigpi and Zarctrignpi
594
595 * **Zarctrigpi**: ATAN2PI ASINPI ACOSPI
596 * **Zarctrignpi**: ATAN2 ACOS ASIN
597
598 These are extra trigonometric functions that are useful in some
599 applications, but even for 3D GPUs, particularly embedded and mobile class
GPUs, they are not so common, and so are typically synthesised there.
601
602 Although they can be synthesised using Ztrigpi and Ztrignpi, there is,
603 once again, both a performance penalty as well as an accuracy penalty
604 towards the limits, which for IEEE754 compliance is unacceptable, yet
605 is acceptable for 3D.
606
607 Therefore they are their own subset extensions.
608
609 # Synthesis, Pseudo-code ops and macro-ops
610
611 The pseudo-ops are best left up to the compiler rather than being actual
612 pseudo-ops, by allocating one scalar FP register for use as a constant
613 (loop invariant) set to "1.0" at the beginning of a function or other
614 suitable code block.
615
616 * FSINCOS - fused macro-op between FSIN and FCOS (issued in that order).
617 * FSINCOSPI - fused macro-op between FSINPI and FCOSPI (issued in that order).
618
619 FATANPI example pseudo-code:
620
621 lui t0, 0x3F800 // upper bits of f32 1.0
    fmv.w.x ft0, t0 // move the integer bit pattern into an FP register
623 fatan2pi.s rd, rs1, ft0
624
625 Hyperbolic function example (obviates need for Zfhyp except for
high-performance or correct rounding):
627
628 ASINH( x ) = ln( x + SQRT(x**2+1))
629
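This synthesis is exactly where the range-limit accuracy problem appears: for very small x the expression x + SQRT(x**2 + 1) collapses to 1 + x, and the final log then loses precision. An illustrative double-precision check (function name is illustrative):

```python
import math

def asinh_synth(x):
    # direct transcription of the identity above
    return math.log(x + math.sqrt(x * x + 1.0))

x = 1e-12
print(asinh_synth(x), math.asinh(x))
```

For x = 1e-12 the synthesised value is wrong by roughly one part in 10^4 relative, far outside any correctly-rounded requirement, while remaining perfectly adequate for 3D: hence Zfhyp as an optional subset.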
630 # Evaluation and commentary
631
632 This section will move later to discussion.
633
634 ## Reciprocal
635
FRECIP used to be an alias. Some implementors may wish to implement
divide as y times recip(x).
638
639 Others may have shared hardware for recip and divide, others may not.
640
641 To avoid penalising one implementor over another, recip stays.
642
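The divide-as-recip-multiply trade-off can be quantified in double precision (Python floats, purely illustrative): the composed form differs from correctly-rounded division only occasionally, and only in the last ULP or so.

```python
xs = [float(i) for i in range(1, 50)]
ys = [float(j) for j in range(1, 50)]

mismatches = 0
worst = 0.0
for x in xs:
    for y in ys:
        exact = x / y             # correctly-rounded IEEE754 divide
        recip = x * (1.0 / y)     # two correctly-rounded ops composed
        if recip != exact:
            mismatches += 1
        worst = max(worst, abs(recip - exact) / exact)

print(mismatches, worst)
```

This is why an implementor with a fast pipelined FRECIP may reasonably build divide on top of it for 3D, while an IEEE754-strict implementation cannot.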
643 ## To evaluate: should LOG be replaced with LOG1P (and EXP with EXPM1)?
644
645 RISC principle says "exclude LOG because it's covered by LOGP1 plus an ADD".
646 Research needed to ensure that implementors are not compromised by such
647 a decision
648 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002358.html>
649
650 > > correctly-rounded LOG will return different results than LOGP1 and ADD.
651 > > Likewise for EXP and EXPM1
652
653 > ok, they stay in as real opcodes, then.
654
655 ## ATAN / ATAN2 commentary
656
657 Discussion starts here:
658 <http://lists.libre-riscv.org/pipermail/libre-riscv-dev/2019-August/002470.html>
659
660 from Mitch Alsup:
661
662 would like to point out that the general implementations of ATAN2 do a
663 bunch of special case checks and then simply call ATAN.
664
665 double ATAN2( double y, double x )
666 { // IEEE 754-2008 quality ATAN2
667
668 // deal with NANs
669 if( ISNAN( x ) ) return x;
670 if( ISNAN( y ) ) return y;
671
672 // deal with infinities
673 if( x == +∞ && |y|== +∞ ) return copysign( π/4, y );
674 if( x == +∞ ) return copysign( 0.0, y );
675 if( x == -∞ && |y|== +∞ ) return copysign( 3π/4, y );
676 if( x == -∞ ) return copysign( π, y );
677 if( |y|== +∞ ) return copysign( π/2, y );
678
679 // deal with signed zeros
680 if( x == 0.0 && y != 0.0 ) return copysign( π/2, y );
681 if( x >=+0.0 && y == 0.0 ) return copysign( 0.0, y );
682 if( x <=-0.0 && y == 0.0 ) return copysign( π, y );
683
684 // calculate ATAN2 textbook style
685 if( x > 0.0 ) return ATAN( |y / x| );
686 if( x < 0.0 ) return π - ATAN( |y / x| );
687 }
688
689
690 Yet the proposed encoding makes ATAN2 the primitive and has ATAN invent
691 a constant and then call/use ATAN2.
692
693 When one considers an implementation of ATAN, one must consider several
694 ranges of evaluation::
695
696 x [ -∞, -1.0]:: ATAN( x ) = -π/2 + ATAN( 1/x );
697 x (-1.0, +1.0]:: ATAN( x ) = + ATAN( x );
698 x [ 1.0, +∞]:: ATAN( x ) = +π/2 - ATAN( 1/x );
699
700 I should point out that the add/sub of π/2 can not lose significance
701 since the result of ATAN(1/x) is bounded 0..π/2
702
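The range-reduction identities above are exact in real arithmetic and numerically benign in floating point, which is easy to confirm (illustrative check only; signs written out for both half-planes):

```python
import math

def atan_via_recip(x):
    # Reduce |x| >= 1 to an argument in [-1, 1], where a
    # polynomial or CORDIC core is accurate; the pi/2 add/sub
    # cannot lose significance since atan(1/x) is bounded.
    if x >= 1.0:
        return math.pi / 2.0 - math.atan(1.0 / x)
    if x <= -1.0:
        return -math.pi / 2.0 - math.atan(1.0 / x)
    return math.atan(x)
```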
703 The bottom line is that I think you are choosing to make too many of
704 these into OpCodes, making the hardware function/calculation unit (and
sequencer) more complicated than necessary.
706
707 --------------------------------------------------------
708
We therefore, I think, have a case for bringing back ATAN and including ATAN2.
710
711 The reason is that whilst a microcode-like GPU-centric platform would do ATAN2 in terms of ATAN, a UNIX-centric platform would do it the other way round.
712
713 (that is the hypothesis, to be evaluated for correctness. feedback requested).
714
This is because we cannot compromise or prioritise one platform's
speed/accuracy over another: it is neither reasonable nor desirable to
penalise one implementor over another.
718
719 Thus, all implementors, to keep interoperability, must both have both
720 opcodes and may choose, at the architectural and routing level, which
721 one to implement in terms of the other.
722
723 Allowing implementors to choose to add either opcode and let traps sort it
724 out leaves an uncertainty in the software developer's mind: they cannot
725 trust the hardware, available from many vendors, to be performant right
726 across the board.
727
728 Standards are a pig.
729
730 ---
731
732 I might suggest that if there were a way for a calculation to be performed
733 and the result of that calculation chained to a subsequent calculation
734 such that the precision of the result-becomes-operand is wider than
735 what will fit in a register, then you can dramatically reduce the count
736 of instructions in this category while retaining
737
738 acceptable accuracy:
739
740 z = x / y
741
742 can be calculated as::
743
744 z = x * (1/y)
745
746 Where 1/y has about 26-to-32 bits of fraction. No, it's not IEEE 754-2008
747 accurate, but GPUs want speed and
748
749 1/y is fully pipelined (F32) while x/y cannot be (at reasonable area). It
750 is also not "that inaccurate" displaying 0.625-to-0.52 ULP.
751
752 Given that one has the ability to carry (and process) more fraction bits,
753 one can then do high precision multiplies of π or other transcendental
754 radixes.
755
756 And GPUs have been doing this almost since the dawn of 3D.
757
758 // calculate ATAN2 high performance style
759 // Note: at this point x != y
760 //
761 if( x > 0.0 )
762 {
763 if( y < 0.0 && |y| < |x| ) return - π/2 - ATAN( x / y );
764 if( y < 0.0 && |y| > |x| ) return + ATAN( y / x );
765 if( y > 0.0 && |y| < |x| ) return + ATAN( y / x );
766 if( y > 0.0 && |y| > |x| ) return + π/2 - ATAN( x / y );
767 }
768 if( x < 0.0 )
769 {
770 if( y < 0.0 && |y| < |x| ) return + π/2 + ATAN( x / y );
771 if( y < 0.0 && |y| > |x| ) return + π - ATAN( y / x );
772 if( y > 0.0 && |y| < |x| ) return + π - ATAN( y / x );
773 if( y > 0.0 && |y| > |x| ) return +3π/2 + ATAN( x / y );
774 }
775
776 This way the adds and subtracts from the constant are not in a precision
777 precarious position.